
Consciousness


helene_t


And there seems to be no plausible reason why the puppet master has to have free will. Occam's Razor suggests, to me anyway, that the puppet master is more plausibly reacting through a combination of randomness and hard-wired responses that have evolved in our life-forms over time... the startle response, for one... i.e. instinct.

I agree with this.

 

I don't need the concept of a free will. Then again, in principle I don't need the concept of sentience either, since for the purpose of understanding behavior in terms of neurophysiology it doesn't matter to me if a computer that passed the Turing test would be sentient, or if the question about it being sentient is even meaningful.

 

And yet I "believe" in sentience, partly because I "feel" myself being sentient, partly because my moral sense relies on sentience in other humans and higher (whatever that means) animals. Besides, I have the idea that sentience is a better candidate for one day becoming a scientific concept than is free will, which to me sounds metaphysical.

 

This may be completely subjective. In fact I haven't seen any useful account of sentience (as distinct from the illusion of sentience). If I read and thought a lot more on this subject, and some milestone progress were made in consciousness research, I might change my view on sentience (and/or on free will) completely.



I have never seen any proof of this statement. Hence your argument is based on a premise without any foundation. Why exactly can't a Turing machine prove Gödel's theorem? Or some other interesting statement starting from the same axioms? As a mathematician, with some expertise in what constitutes a proof, I would dispute that a sequence of statements that is not translatable into something that can be described in a formal language, and proven sequentially, was a proof to begin with. So almost by definition, a proof is a sequence of statements that a computer can develop.

That was one of my problems with Penrose's argument as well. But we have had this discussion in another thread.


What is the evolutionary benefit to sentience in a deterministic universe? It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything. Sentience combined with illusory free will might convince you that you had some control when you didn't, but I still don't see the evolutionary benefit of it. Is sentience just an accident of increased brain power?

I'm saying that computer scientists are qualified. Cognitive scientists are not qualified to refute an argument based on computability. I think you miss the point here. You can't say a Turing machine can prove Gödel's theorem just because you did it, because the supposition here is that humans can do things that Turing machines can't.

OK, so you are

1. clearly saying that Penrose is NOT qualified, since he is not a computer scientist. It's very interesting that you are praising his argument, and then saying other people with a similar background do not have the background to make an argument about the subject.

 

2. Further, you are saying that someone who studies computers is capable of making a statement that compares computers and brains, but someone who studies brains is not qualified. Fascinating.

 

3. What supposition? You are making a supposition that humans can do things that Turing machines can't. I have never seen any proof of this statement. Hence your argument is based on a premise without any foundation. Why exactly can't a Turing machine prove Gödel's theorem? Or some other interesting statement starting from the same axioms? As a mathematician, with some expertise in what constitutes a proof, I would dispute that a sequence of statements that is not translatable into something that can be described in a formal language, and proven sequentially, was a proof to begin with. So almost by definition, a proof is a sequence of statements that a computer can develop. So the main question is "does the brain have to use something from outside the formal system to realize what the correct sequence of steps was?" Further, is that extra thing the brain uses from outside the formal system itself just part of a slightly larger formal system (and hence can be part of a Turing machine), or does the key ingredient come magically from someplace other than the formal physics rules that govern the neurons in the brain?
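The idea that a proof is a machine-checkable sequence of statements can be shown with a toy checker. This is purely illustrative (nothing here is from Penrose or a real proof assistant): the axioms and the single modus ponens rule are invented for the example.

```python
# Toy proof checker: a "proof" is a sequence of statements, each either an
# axiom or derived from earlier lines -- exactly the kind of object a
# computer can verify step by step. Formulas are strings or nested tuples;
# ("->", A, B) means "A implies B". All axioms are invented for this sketch.

def check_proof(lines, axioms):
    """Return True iff every line is an axiom or follows by modus ponens."""
    proved = []
    for stmt in lines:
        if stmt in axioms:
            proved.append(stmt)
        elif any(("->", a, stmt) in proved for a in proved):
            # modus ponens: some earlier line A and earlier line (A -> stmt)
            proved.append(stmt)
        else:
            return False
    return True

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
print(check_proof(["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"], axioms))  # True
print(check_proof(["p", "r"], axioms))  # False: "r" has no justification yet
```

Whether brains do anything beyond this kind of rule-following is exactly the open question in the post above.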

Sheesh... you're a stickler for subtlety. Let me try again. Here is what I'm saying. If Penrose says the brain is more than a Turing machine and offers a proof from computer science, then it makes no sense to respond by saying that the brain is wholly deterministic with randomness, based on some theory from cognitive science. I'm just saying you have to address the argument directly and not ignore it and counter with "nuh-uh, because my psychology book said so." I'd venture to say Penrose understands computability better than I do, and I took graduate-level courses in it. So just because his degree is not in computer science does not mean he doesn't understand computability. In this sense, I was calling him a computer scientist.

 

If someone is told "a brain is just a high-powered computer," then yes, only someone who understands computability is in a position to make an argument that the brain can compute things that are not computable by a Turing machine. They are looking strictly at brain input/output and using that to infer something about function. If the argument is sound, then I don't care how much those cognitive scientists or philosophers disagree: they are wrong. Philosophers certainly can't decide on monism/dualism, since those debates have been around for thousands of years. Monist cognitive scientists would certainly have a very difficult time proving beyond doubt that there aren't quantum effects in the brain. They may say it would be hard to imagine, but if a proof from a different perspective is offered that something interesting is going on, they should listen.

 

It has been a while since I read Penrose's argument... I'll try to reread it soon... definitely by the weekend. I think you are asking the right questions, but obviously nobody agrees on the answers. The size of the formal system doesn't matter: there are always true statements that no formal system can prove true. If a human could somehow do it, then that would seem to be proof of magic.


What is the evolutionary benefit to sentience in a deterministic universe?

Tough one. The (partial) answers I have encountered so far are:

1: Sentience is beneficial to itself, not to humans. It's a mental parasite that invaded our minds.

2: Sentience is not real.

3: Sentience arises automatically as a spin-off whenever a mind acquires particular faculties (say, self-awareness).

4: Sentience is an unlikely evolutionary accident, but as long as its probability is non-zero (think of a gene with clearly selective disadvantages that survives for millions of years because the non-holders of the gene tend to get hit by meteorites), if one believes in the multiverse theory it follows from the anthropic principle that WE must be sentient. Thinking about it, maybe the Infinite Improbability Drive from The Hitchhiker's Guide to the Galaxy must be invented for the same reason.

 

1: may in some sense be more plausible than a theory based on fitness of the mind, but as I see it, it just replaces one mystery with another that is equally perplexing. Maybe it will one day form the basis of a cool theory, but so far I don't see the light.

 

2: would certainly make things easier but as said, it doesn't appeal to me.

 

3: seems to be the best candidate so far but I'm not sure if it solves anything, see 1:.

 

4: is scientifically unsound, and besides, the hard problem seems to be how this probability could be non-zero at all. If it could be 10^(-10000000), we might as well speculate that it could be 50%, or 98%, or whatever.

 

Consider the two models I proposed for Gerben's cat that sometimes wants to get petted and sometimes not:

1) a mechanistic model, taking the behavior of the human and the emotional state of the cat as input, and a decision as output.

2) the cat creates mental images of scenarios involving each of its alternatives, evaluates the emotions induced by those images, and selects the alternative that induces the most positive emotions.

 

Now 2) may be most efficiently (or most evolvably) implemented by recycling the emotion-inducing, emotion-evaluating and imagination faculties that already evolved to serve more basic functions.

 

This is all speculative; I do not argue that it's plausible, it's just one of many possible models. But if something like that model gained support, I would say that we had pinpointed some biological phenomenon that seems somehow to be related to sentience and/or free will (whatever those things really are). And the evolution of what model 2) describes is something that can be addressed scientifically.
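Model 2) can be caricatured in a few lines of code. This is purely illustrative: every function name, scenario and score below is invented, standing in for whatever emotional machinery a real cat might have.

```python
# Sketch of model 2): imagine each alternative, score the emotion each
# imagined scenario induces, pick the alternative that feels best.
# All scenarios and valences are hypothetical values for illustration.

def imagine(action, context):
    """Hypothetical: build a mental image of taking `action` in `context`."""
    return (action, context["human_mood"])

def felt_valence(scenario):
    """Hypothetical emotional evaluation of an imagined scenario."""
    action, mood = scenario
    scores = {("approach", "calm"): +2, ("approach", "agitated"): -1,
              ("walk_away", "calm"): 0, ("walk_away", "agitated"): +1}
    return scores[(action, mood)]

def decide(alternatives, context):
    # model 2): select the alternative whose imagined outcome feels best
    return max(alternatives, key=lambda a: felt_valence(imagine(a, context)))

print(decide(["approach", "walk_away"], {"human_mood": "calm"}))      # approach
print(decide(["approach", "walk_away"], {"human_mood": "agitated"}))  # walk_away
```

Note that the mechanistic model 1) could compute the same decisions; what distinguishes model 2) is the intermediate step of imagined scenarios and felt emotions.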


What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

 

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

 

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

 

Talk about raison d'être... :)


What is the evolutionary benefit to sentience in a deterministic universe?

A fawn and her mother are drinking at a stream. A mountain lion appears and chases the fawn.

 

If the fawn moves deterministically, she'll get caught... if all fawns move right, the mountain lion will have learned that, and will anticipate the move to the right.

 

If the fawn moves randomly, she and her mom will never get together again, and the fawn will starve to death.

 

If the fawn moves sentiently, then the fawn will move towards something she 'likes'. Since the doe also knows what the fawn 'likes', the doe will know where to go. Since the mountain lion doesn't have that information, the mountain lion cannot anticipate where the fawn is going.

 

To me, the primary purpose of sentience is 'irrational' likes and dislikes. Having preferences which can be predicted by your friends but cannot be predicted by your enemies is a powerful survival tool.

 

With enough observation....

A Turing machine can predict the actions of another Turing machine.

A human can predict the actions of a Turing machine.

A Turing machine cannot predict the actions of a human.

 

That's how all the Turing machines made so far have been found out. With enough time talking to them, humans figure out that they're Turing machines because they become predictable.


What is the evolutionary benefit to sentience in a deterministic universe? It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything. Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it. Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?


What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

 

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

 

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

 

Talk about raison d'être... :blink:

Creativity cannot exist in a deterministic universe.


What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."


What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

 

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

 

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

 

Talk about raison d'être... :blink:

Creativity cannot exist in a deterministic universe.

I can appreciate the intent, but that statement, deterministically, is creative. B)


The statement didn't exist before I typed it... that is true, but that doesn't mean it is creative. I think you are confusing 'new' with 'creative.'

 

I couldn't find my copy of Shadows of the Mind, but the previous link contains a summary of Penrose's suggested proof. Some commentary on the proof is below and a response to the commentary by Penrose is available.


What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."

that seems a reasonable conclusion... how reasonable is this one? God created you in his image, said image including (of necessity) sentience... he preordained all things having to do with his creatures (you included)... would you possess free will or simply the perception of free will? if perception, how does that differ from reality?

 

i don't really want to get a religious conversation going, so feel free to ignore this if you want..


The statement didn't exist before I typed it...that is true but that doesn't mean it is creative.  I think you are confusing new and creative. 

 

I couldn't find my copy of Shadows of the Mind, but the previous link contains a summary of Penrose's suggested proof. Some commentary on the proof is below and a response to the commentary by Penrose is available.

I can hear the sound of hairs splitting in the distance..... :)

 

Creativity is what you make (of it) :)

 

As for religious arguments... I have no problem with pre-ordination, but my creative spark will ALWAYS be able to "remake" what I was able to make "initially," which is why it is important to realize (make real) this. B)


There is no way that so many sentient posters would have devoted so much time to such a meaningless thread if any of us possessed a shred of free will. The mere fact that we have written and read so much with respect to a subject that, if it exists, cannot ever be proved (free will) suggests that we had no choice... no sentient being with free will would waste this much time. Therefore, free will does not exist, at least in BBF.

 

QED

 

Or maybe we ain't that sentient? :)


What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."

that seems a reasonable conclusion... how reasonable is this one? God created you in his image, said image including (of necessity) sentience... he preordained all things having to do with his creatures (you included)... would you possess free will or simply the perception of free will? if perception, how does that differ from reality?

 

i don't really want to get a religious conversation going, so feel free to ignore this if you want..

I can, and do, accept a God who would create beings with free will and sentience. What I could not accept is a God who created beings with sentience but not free will, especially if he gave them immortal souls, some of which will be tormented for deeds they were powerless not to do.

 

At this point, I see no reason to be pessimistic about the likelihood that this issue will be resolved scientifically someday. Is this discussion a waste of time? Probably, but what else could we be doing and why is that any more meaningful than debating this topic? Meaning has no meaning in a deterministic universe so perhaps this is the first thing we should decide. Does anything have meaning? If no, become a hedonist. If yes, then do something meaningful.


I still liked his book "The Emperor's New Mind" a lot, though. Natural scientists can sometimes write refreshing stuff about problems that usually belong to the humanities. I like what he writes about the role of language in consciousness:

I too agree with your comments about Penrose. I really liked his first book, but can't really say I care too much about "Shadows of the Mind". Besides, the whole book sounded like a tirade against AI and seemed bent on proving that (silicon) machines are incapable of becoming conscious. I went to one of his lectures, and while it was very good for the most part, it did in the end get derailed by a prolonged discussion of how "quantum processes" in the neurons in our brains give rise to an "emergent phenomenon" like consciousness.

 

Granted, it's plausible, but how is this different from a conjecture that, say, claims that the quantum processes in our heart (or pick an organ of choice) give rise to the "soul"? Of course, the concept of consciousness is slightly more tangible and secular than the soul, but once we start blending physics with metaphysics, it's a slippery slope.

 

So, do I think that we are nothing but carbon machines and that the silicon ones will become sentient at some point in the future? Frankly, I don't know, and I doubt I will find out in my lifetime either. Anyway, enough of this and back to working on Skynet in due earnestness -- after all, we need those androids from the future who will help answer some of these questions :D...


"....too agree w/ your comments about Penrose. I really liked his first book, but can't really say I care too much about "Shadows of the Mind". Besides, the whole book sounded like a tirade against AI and seemed to be bent on proving that (silicon) machines are incapable of becoming conscious. I went to one of his lectures and while it..."

 

 

I repeat: even if we assume an AI cannot be conscious (that consciousness is against the laws of known science, however you define it), can it still be 100 million times more "intelligent" than the entire human race by 2050? If so, does it matter?

 

Just sidestep such terms as conscious, alive, or free will.

 

Even if we assume that the AI does not possess or cannot possess any of the above, it does not follow that we can assert 100% control over it or understand it fully.


assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

Strange assumptions, IMHO. There are plenty of theories about the evolutionary advantages of perception, empathy, memory and mental images. I'm not aware of any that involve free will. Whether those psychological phenomena constitute sentience may be a matter of semantics or metaphysics, but they do seem related to sentience in some way. Note that while jtfanclub's scenario involves what I would call "the illusion of free will", he talks about "sentience" rather than "free will". That is quite typical, I think. (Btw, I'm sceptical as to jtfanclub's final remarks about Turing machines, which seem to be based on Penrose's interpretation of consciousness. Then again, what do I know about computability.)

 

Also, if you talk about evolutionary advantage you should try to specify: advantage to whom? There are plenty of genetic traits that evolved not because of the advantage to the individual or species that possesses the gene, but to something else, such as to the gene itself, or to a parasite that induced the selective advantage of the gene.

 

Or they may have no advantage to anyone. Aging, for example, probably gives net selective disadvantages to the genes that "cause" aging, in the sense that alternative alleles would have given the individual a longer life span. This doesn't make aging an evolutionary mystery, of course. The same genes may protect the individual against cancer by shortening the telomeres, thereby ultimately letting the individual succumb to aging. Or the selective pressure against aging may be too weak to overcome random decay. After all, a healthy living body decays much more slowly than a dead body, so the "trait" of aging is something like the "trait" of not being able to jump to Cassiopeia.

 

All this notwithstanding, one might be tempted to draw the conclusion that under your assumptions, sentience does not seem to have a genetic basis. That would be interesting, but not shocking. Some see sentience as a cultural trait.

 

Meaning has no meaning in a deterministic universe
Sic! You must have a completely different notion of those concepts than I have, since to me this is utterly absurd. You said the same about creativity. I would say that the issue of "determinism" should be confined to the ivory tower of theoretical physics, or maybe even to that of metaphysics, while "creativity" and "meaning" are down-to-Earth concepts that exist in a cultural context and can be discussed without reference to neurophysiology, let alone physics. I can barely think of concepts more distant from each other than "determinism" versus "creativity" and "meaning".

 

That's just me; I'm sure your notion is as coherent as mine. But I do find it difficult to empathize with that notion. I can empathize a little bit with the belief in "free will" because I have been brought up in a culture where many people seem to believe in free will.


Even if we assume that the AI does not posses or cannot posses any of the above, it does not follow we can assert 100% control over it or understand it fully.

Good point. Mikeh said that if we really possessed free will we would not be discussing this topic. Maybe some day someone will invent a computer that really possesses free will, and that computer would not worry about sentience and free will but would spend its time philosophizing about control and understanding instead.


Even if we assume that the AI does not posses or cannot posses any of the above, it does not follow we can assert 100% control over it or understand it fully.

Good point. Mikeh said that if we really possessed free will we would not be discussing this topic. Maybe some day someone will invent a computer that really possesses free will, and that computer would not worry about sentience and free will but would spend its time philosophizing about control and understanding instead.

I remain convinced that some, a few, highly improbable events will occur between now and 2051. Events of such improbable importance that we will be unprepared. :o

 

They will be extremely important events in human history and they will be highly improbable. :D

 

On the scale of, if not greater than, a few, very few guys with box cutters in 2001.


Note that while jtfanclub's scenario involves what I would call "the illusion of free will", he talks about "sentience" rather than "free will". That is quite typical I think. (Btw, I'm sceptical as to jtfanclub's final remarks about Turing machines which seems to based on Penrose's interpretation of consciousness. Then again, what do I know about computability).

To me, the definition of sentience is irrational (or maybe I should say non-rational) likes and dislikes. Preferences based on neither logic nor randomness.

 

A Turing machine is a very simple model... you have an input, and a state (or table or whatever you want to call it). All computers can be replicated by input and state. You can even make a Turing machine that will mimic a particular person. However, in a Turing machine, if you give me the state and the input, I can tell you the output. But with a person, even if you make the state the entire universe and the input everything the person is getting, you cannot perfectly predict what the output will be.
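The "state plus input determines output" point can be sketched with a toy finite-state transducer. This is only a sketch (the transition table is invented, and a full Turing machine would also carry a tape), but the determinism argument is the same: identical state and input can never yield different outputs.

```python
# Toy deterministic transducer: output is a pure function of (state, input),
# so two runs from the same state on the same input always agree.
# The transition table is invented for this illustration.

TABLE = {  # (state, symbol) -> (next_state, output_symbol)
    ("s0", "0"): ("s0", "a"),
    ("s0", "1"): ("s1", "b"),
    ("s1", "0"): ("s0", "b"),
    ("s1", "1"): ("s1", "a"),
}

def run(state, tape):
    """Feed `tape` through the machine, collecting the output symbols."""
    out = []
    for sym in tape:
        state, o = TABLE[(state, sym)]
        out.append(o)
    return "".join(out)

print(run("s0", "0110"))                       # abab
print(run("s0", "0110") == run("s0", "0110"))  # True: determinism, by construction
```

The poster's claim is precisely that no table of this kind, however large, reproduces a human.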

 

It is that unpredictability, which I claim is inherent in the mind, that makes it non-Turing. However, that doesn't mean that we have free will. And while I believe that no computer can perfectly predict a human (even if the computer is infinitely large), that doesn't mean that we cannot create a computer that is also impossible to perfectly predict.

 

Would such a computer be sentient? I don't know- I guess it would be.


A Turing machine is a very simple model... you have an input, and a state (or table or whatever you want to call it). All computers can be replicated by input and state. You can even make a Turing machine that will mimic a particular person. However, in a Turing machine, if you give me the state and the input, I can tell you the output. But with a person, even if you make the state the entire universe and the input everything the person is getting, you cannot perfectly predict what the output will be.

 

It is that unpredictability, which I claim is inherent in the mind, that makes it non-Turing. However, that doesn't mean that we have free will. And while I believe that no computer can perfectly predict a human (even if the computer is infinitely large), that doesn't mean that we cannot create a computer that is also impossible to perfectly predict.

 

Would such a computer be sentient? I don't know- I guess it would be.

For very large systems, you can get "unpredictability" through chaos theory even though everything is completely deterministic. Obviously, you can also get unpredictability through sources of randomness, such as at the quantum level. If you think the brain does not use quantum effects, then I think you are forced to believe it could be simulated by a computer of finite size. It is also very likely a chaotic system, such that knowing the initial state perfectly is terribly important and nearly impossible. Those who believe the brain does not have quantum effects must, I think, also believe that a computer performing a full simulation of a brain would be sentient.
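The chaos point can be seen in a few lines using the logistic map, a standard textbook example of a deterministic system with sensitive dependence on initial conditions (the choice of starting points here is arbitrary):

```python
# Logistic map x -> r*x*(1-x) at r = 4: fully deterministic, yet two
# trajectories whose starting points differ by only 1e-10 diverge completely
# after a few dozen iterations.

def trajectories(x0, y0, r=4.0, steps=60):
    """Iterate the map from two nearby starting points, recording both paths."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
        ys.append(r * ys[-1] * (1.0 - ys[-1]))
    return xs, ys

xs, ys = trajectories(0.3, 0.3 + 1e-10)
gap = max(abs(x - y) for x, y in zip(xs, ys))
print(gap)  # a large gap: the 1e-10 difference has been enormously amplified
```

This is why "deterministic" and "predictable in practice" come apart: to forecast the system you would need the initial state to impossible precision.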


Thank you, Dr Todd -- I was just thinking about Chaos Theory myself in this regard. It's the reason why deterministic doesn't mean predictable, and it allows for the illusion of free will. Another example is weather -- it's deterministic, since it's just the result of fluid dynamics and energy transfer; but there are so many components to the system that it's not predictable to any fine degree. Terms like "brain storm" may be more meaningful than the coiners imagined.

 

Don't feel bad that free will and consciousness may just be illusions. Life and society work just fine with this illusion, so go with it. Many aspects of our physiology and psychology evolved as a result of the macroscopic nature of physics. We see colors, not wavelengths, because evolution discovered that this was a useful and efficient way to categorize objects in the world. We expect continuity in objects, because that's the way things work at the macro level; quantum processes are confusing precisely because nothing in our experience, or that of all our ancestors, is like them. And we evolved to believe in free will because it's a useful approximation of what happens.


Thank you, Dr Todd -- I was just thinking about Chaos Theory myself in this regard.  It's the reason why deterministic doesn't mean predictable, and it allows for the illusion of free will.  Another example is weather -- it's deterministic, since it's just the result of fluid dynamics and energy transfer; but there are so many components to the system that it's not predictable to any fine degree.  Terms like "brain storm" may be more meaningful than the coiners imagined.

 

This may help.

 

Solomonoff Induction

June 25th, 2007 – Nick Hay

 

 

The problem of prediction is: given a series of past observations, what future observations do you expect? When we are rigorous about expectations we assign probabilities to the different possibilities. For example, given the weather today we assign 50% probability to a rainy day tomorrow, 30% probability to a cloudy day, and 20% probability to a sunny one.

 

How can we determine the probability of a future given the past? Solomonoff induction is a solution to this problem. Solomonoff induction has a strong performance guarantee: any other method assigns at most a constant factor larger probability to the actual future. This constant is equal to the complexity of that predictor.

 

Solomonoff induction itself is uncomputable, but there are computable analogs. It serves as a simple way of specifying a device which accurately predicts a series of observations. Were such a device to exist, we would think it highly intelligent, as it would correctly predict any patternful sequence we entered with little error.

 

Below the fold I describe some of the machinery behind Solomonoff induction. I describe a computable approximation which can be exactly and efficiently solved. Although this computable predictor is not particularly intelligent, it shares the same structure as Solomonoff induction.

 

Suppose we are predicting a series of observations, and that we assign:

 

probability 0.2 (i.e. 20%) to the series beginning with 0 (we denote this p(0) = 0.2),

probability 0.4 to the series beginning with 1 (we denote this p(1) = 0.4).

This means we think the series is twice as likely to begin with a 1 as a 0, but there is a 40% chance (1-p(0)-p(1) = 1-0.2-0.4 = 0.4) that it begins with nothing at all i.e. there are no observations. We also have:

 

p(00) = 0.1, i.e. probability 0.1 that the series begins with 00,

p(01) = 0.1,

p(01000) = 0.05.

The first two entries mean that if the series begins with 0, it is equally likely to be followed by either a 0 or a 1. We say the probability of a 0 given a 0 is 0.5 (p(0|0) = p(00)/p(0) = 0.1/0.2 = 0.5) and similarly the probability of a 1 given a 0 is 0.5 (p(1|0) = p(01)/p(0) = 0.1/0.2 = 0.5). Finally, given that the series begins with 01 there is a 50% chance the sequence 000 follows (p(000|01) = p(01000)/p(01) = 0.05/0.1 = 0.5) and a 50% chance that something else happens.
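The arithmetic above is easy to check mechanically. Here is a minimal sketch in Python using exact fractions; the dictionary p just restates the example's prefix probabilities, and the helper name cond is my own choice, not from the post:

```python
from fractions import Fraction

# Prefix probabilities from the running example above.
p = {
    "0": Fraction(1, 5),       # 0.2
    "1": Fraction(2, 5),       # 0.4
    "00": Fraction(1, 10),
    "01": Fraction(1, 10),
    "01000": Fraction(1, 20),
}

def cond(continuation, prefix):
    """p(continuation | prefix) = p(prefix + continuation) / p(prefix)."""
    return p[prefix + continuation] / p[prefix]

assert cond("0", "0") == Fraction(1, 2)     # p(0|0)
assert cond("1", "0") == Fraction(1, 2)     # p(1|0)
assert cond("000", "01") == Fraction(1, 2)  # p(000|01)
```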

 

Determining the probability that a series begins with a certain sequence is enough to predict everything else.

 

Underlying Solomonoff induction is a programming language for describing sequences. Consider the following simplified language. Programs are sequences of the 4 commands: {0,1,L,E}. Examples:

 

00110101E: outputs the finite sequence 00110101.

L01E: outputs the infinite sequence 01010101….

111L0E: outputs the infinite sequence 1110000….

The program executes from left to right. The commands 0 and 1 output 0 and 1 respectively. If it reaches an L it records the start of a loop and continues. Upon reading a second L it jumps back to the start of the loop. If it reads an E it ends the sequence, unless it is inside a loop, in which case it jumps back to the start of the loop.
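These semantics fit in a few lines of code. Below is a minimal sketch of an interpreter for the toy language in Python; the function name run and the output limit are my own choices, not from the post:

```python
def run(program, limit=20):
    """Interpret a {0,1,L,E} program, returning at most `limit` output
    symbols: 0 and 1 emit themselves, the first L marks a loop start,
    and a second L (or an E inside a loop) jumps back to the loop start;
    an E outside a loop halts."""
    out = []
    i, loop_start = 0, None
    while i < len(program) and len(out) < limit:
        c = program[i]
        if c in "01":
            out.append(c)
            i += 1
        elif c == "L":
            if loop_start is None:   # open the loop
                loop_start = i + 1
                i += 1
            else:                    # close the loop: jump back
                i = loop_start
        else:  # c == "E"
            if loop_start is None:   # not in a loop: halt
                break
            i = loop_start           # in a loop: jump back
    return "".join(out)

print(run("00110101E"))  # 00110101
print(run("L01E", 8))    # 01010101
print(run("111L0E", 7))  # 1110000
```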

 

Solomonoff induction predicts sequences by assuming they are produced by a random program. The program is generated by selecting each character randomly until we reach the end of the program. In the above example, there are 4 different characters so each is chosen with probability 1/4. The program L0E is 3 characters long so is generated with probability (1/4)*(1/4)*(1/4) = 1/64.
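This generation probability is just (1/4) raised to the program's length. A trivial sketch with exact fractions (the name program_prob is mine):

```python
from fractions import Fraction

def program_prob(program):
    # Each character of {0,1,L,E} is drawn with probability 1/4,
    # so a k-character program is generated with probability (1/4)^k.
    return Fraction(1, 4) ** len(program)

assert program_prob("L0E") == Fraction(1, 64)
```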

 

The probability the series begins with a given sequence is the probability the random program’s output begins with that sequence. This means if a sequence is generated by a short program it is likely, and if it is only generated by long programs it is unlikely. This is a form of Occam’s razor: simpler sequences (i.e. those described by short programs) are given higher prior probability than complex (i.e. long description) ones.

 

To compute the probability the series begins with 0, consider all the programs whose output begins with 0. These are exactly the programs which begin with either 0 or L0: if a program outputs 0, its first character cannot be either 1 or E, and if its first character is L its second must be 0. The probability the series begins with 0 is therefore (1/4) + (1/4)*(1/4) = 5/16. Similarly, the probability it begins with 1 is 5/16. This doesn’t sum to 1 because the programs E, LE, and LL output nothing. They together have probability 1/4 + 2/16 = 6/16, and reassuringly 6/16 + 5/16 + 5/16 = 1 i.e. either the series is empty, or it begins with either 0 or 1.
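These sums can be verified with exact fractions. A sanity check of the paragraph above, not part of the original post:

```python
from fractions import Fraction

q = Fraction(1, 4)  # probability of generating any one character

p_starts_0 = q + q**2      # programs beginning "0" or "L0"
p_starts_1 = q + q**2      # by symmetry, programs beginning "1" or "L1"
p_empty = q + q**2 + q**2  # the empty-output programs E, LE, LL

assert p_starts_0 == Fraction(5, 16)
assert p_empty == Fraction(6, 16)
assert p_starts_0 + p_starts_1 + p_empty == 1
```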

 

For this simple language we can compute the probability the series begins with a sequence by studying that sequence’s structure. For example, if the series starts with 1010 then its program must begin with either:

 

1010: probability 1/256,

101L0, 10L10, 1L010, L1010: probability 4/1024,

L10E, L10L, 1L01E, 1L01L: probability 2*(1/256) + 2*(1/1024) = 10/1024.

So the probability the sequence begins with 1010 is 1/256 + 4/1024 + 10/1024 = 18/1024. The first two lines of programs are routine: every sequence has a description of this form. The last line only holds because of the pattern: 1010 is 10 repeated twice.
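Again the sum is easy to check mechanically with exact fractions, grouping the terms by the three lines of programs above:

```python
from fractions import Fraction

q = Fraction(1, 4)
p_1010 = (q**4                    # the prefix 1010 itself
          + 4 * q**5              # 101L0, 10L10, 1L010, L1010
          + 2 * q**4 + 2 * q**5)  # L10E, L10L, 1L01E, 1L01L
assert p_1010 == Fraction(18, 1024)
```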

 

Solomonoff induction has this same structure. For any given sequence, there is a finite set of programs which generate it (actually, program prefixes e.g. 1010 is not a complete program and can be completed in different ways). The probability the series begins with this sequence is the probability any of these programs are generated, formed by the kinds of sums we have above.

 

To be continued….

 

Comment by Nick Tarleton

Jun 25, 2007 2:42 pm

What programming language is used for “real” Solomonoff induction, and why that one?

 


Comment by Nick Hay

Jun 25, 2007 3:47 pm

Real Solomonoff induction uses any Turing-complete language. Turing-completeness is required to prove that it assigns at most a constant factor less probability to the actual series than any other predictor.

 

Turing-complete isn’t quite enough. You need a language L with the following property: for any other language, you can translate any of its programs into L with only a constant increase in length. Intuitively, you can write an interpreter in L for any other language, and you can quote that language’s programs in constant space.

 

Machine code (if you allowed unbounded memory) would be a suitable language. You can write an interpreter for any language in machine code, and you can directly embed that language’s programs into its data area.

 


Comment by Nick Tarleton

Jun 26, 2007 4:57 am

So does Solomonoff induction give different probabilities for different choices of language? (Say you were using machine code with a primitive instruction for one particular sequence with ridiculously high information content.) Or do they all converge to the One True Probability in the uncomputable limit?


Comment by Nick Hay

Jun 26, 2007 2:20 pm

It will give different probabilities, but the difference is bounded.

 

Roughly, if your magic instruction contains N bits of information relative to the original language (i.e. requires a program of length N to implement), and this is useful information about the world, then magic-Solomonoff can perform at most N bits better than regular Solomonoff i.e. assign probability at most 2^N higher to the true sequence.

 

You can make deliberately pathological languages where Solomonoff induction isn’t very powerful for all practical lengths of time. Just as you can make pathological Turing-complete languages where it’s really hard to write useful programs.

 

 


 

 

 

Comment by Sebastian Hagen

Jun 27, 2007 1:37 pm

“and reassuringly 1/4 + 5/16 + 5/16 = 1”

 

Actually, 1/4 + 5/16 + 5/16 = 14/16 = 0.875 != 1.

 

At least some of the missing 1/8 of probability goes to programs that are not “E” and don’t output anything.

Obviously LE gets a probability of 1/16, and there are infinitely many more like it: LLE, LLLE, LL0E, LL1E, etc.

 


 

Comment by Nick Hay

Jun 27, 2007 4:56 pm

Thanks! I forgot the infinite empty loops LE and LL. LLE, LLLE etc. aren’t proper programs, since the decompressor stops reading symbols after the first two L’s. They are included under LL since they begin with it: we don’t want to double count our probability.

 

So, (1/4 + 2/16) + 5/16 + 5/16 = 1.

 

 

© 2007 Singularity Institute for Artificial Intelligence, Inc.
