I read I am a Strange Loop by Douglas Hofstadter several weeks ago, and it contains an extremely good explanation of something called Gödel's Incompleteness Theorem. This is a result in the foundations of mathematics that says, essentially, that there are true mathematical facts that cannot be proven within a given formal system. The heart of the proof is a method for reading a second encoded meaning into arithmetical statements, one which is consistent with the axioms of arithmetic, but which actually means something else to us. (Similar to the way a string of zeroes and ones underlies the entire operation of your computer, and yet you are reading a different meaning into it.) Hofstadter then points out that this is the way consciousness works. There are the laws of physics which underlie the operation of our brain (kind of like the zeroes and ones), but then there's our conscious experience which arises from symbols interacting at a higher level.
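The encoding trick can be played with directly. Here is a minimal toy sketch of Gödel numbering in Python: each symbol of a formula gets a small code number, and the whole formula becomes a single integer by raising successive primes to those codes. The symbol table is my own assumption for illustration, not Gödel's or Hofstadter's actual scheme.

```python
def primes(n):
    """Return the first n primes by simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Toy symbol codes (an assumption for illustration only).
CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def godel_number(formula):
    """Encode a formula string as p1^c1 * p2^c2 * ... for successive primes."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** CODES[sym]
    return n

def decode(n):
    """Recover the formula from its Gödel number by factoring."""
    inverse = {v: k for k, v in CODES.items()}
    out = []
    candidate = 2
    while n > 1:
        exp = 0
        while n % candidate == 0:
            n //= candidate
            exp += 1
        if exp:
            out.append(inverse[exp])
        candidate += 1
    return "".join(out)
```

The single integer `godel_number("S0=S0")` is a perfectly ordinary number that arithmetic can talk about, yet `decode` reads the formula back out of it, which is the "second meaning" the post describes.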
So far, I have to say "Right on!" However, here's where things get sticky. Hofstadter argues that once you have a sufficiently complicated set of symbols in any substrate, whether the human brain, a computer (not too hard to imagine), or anywhere else (hmm), then there lies consciousness. John Searle argues against that point of view. He says that computers will never be conscious, even if they act like it, because they won't understand what they're doing. He doesn't explain what makes humans different, though.
I personally believe that the truth lies somewhere in the middle. I don't see any reason why computers can't be conscious. Maybe they can, maybe they can't. There definitely seems to be something special about conscious awareness. But here's a question. Could I act like myself, going about my usual daily activities, in the same way I always do, with my conscious awareness shut off? When I phrase it that way, it seems more intuitive that I could not, and that actual conscious awareness is concomitant with behavior of a certain kind.
If a computer were made that acted conscious, would we ever be able to know that it is conscious? It seems unlikely that that could ever be settled scientifically.
And what about the Buddhist perspective? After all, in Buddhism we seek to understand the nature of mind. Well, I don't see any reason why, within the Buddhist framework, computers can't be conscious. There are all types of sentient beings, in all different types of states. (In fact, in one of the hell realms, mention is made of metallic beings.) But, just because I don't see a reason why not, doesn't mean I see a reason why. :)
And now, I must go eat lunch...
7 comments:
What about when a drunk goes into a blackout, and a Dr. Jekyll persona takes over? The usual self has been shut off and some lower being, or program, is conscious, or anyway, in control for a while.--S29
Yes, when one goes into an alcoholic stupor, the light of consciousness seems to switch off. I don't think another program becomes conscious, or even in control, though. I think instead we lose what ability we had to sift through the confusion of appearances, and instead act according to our deep-rooted instincts of selfishness.
Experiments with brain-injured individuals -- left-brain/right-brain lesion -- have demonstrated that two distinct personalities can co-exist in the same brain. If one personality is induced to do something and then the second personality is asked how that happened, the second personality will remember a false history to account for the act.
I don't recall the details of the experiments at this point.--S29
And here's something to consider. Split-brain patients whose corpus callosums have been severed often have the two sides of their brain at cross purposes. In such cases, would we say there are two personalities? If an experienced meditator underwent such an operation, would they be able to maintain a unified awareness of the two separate streams of input?
It is often said that consciousness is a defining trait of humans. It is said that animals lack it, or lack self-awareness and thus the type of consciousness possessed by humans. But what is a satisfactory definition of consciousness? I know that what consciousness is is a debated philosophical conundrum. But how does one proceed to the question of whether machines can possess consciousness without a precise definition of consciousness itself?
And how do we know we have consciousness apart from the mechanical operations of our brains? What if it is true that our rational decision-making thought process is actually a fantasy: a running commentary to justify decisions already made (a concept supported by some evidence, according to a NY Times article)?
What I find perplexing about Hofstadter and his camp is that they do not explain the point: How does an "emergent property" "recognize" that "it" is "conscious"? Implicit in this problem is the aspect of identity. A conscious entity examining itself "purposefully" and declaring that it is just an emergent property and nothing more is one of the most ridiculous thoughts! It is as if some unknown chance interactions in Hofstadter's atoms produced the book and Hofstadter did not will it consciously. He also mixes up concepts like "soul", "mind", "intellect", "consciousness", and "awareness" without a moment's thought that they could all be different. It appears as if he is trying to hoodwink the readers and take them where he wants to, rather than engaging with them to persuade them to accept his viewpoint. I am posting a link to Martin Gardner's criticism of Hofstadter (and camp) and this book here: http://www.ams.org/notices/200707/tx070700852p.pdf
Another aspect is that opinion in physics has also veered toward admitting that an "observer" (or consciousness) is an ineluctable part of reality and has a vital role in creating the reality it experiences.