I read I Am a Strange Loop by Douglas Hofstadter several weeks ago, and it contains an extremely good explanation of something called Gödel's Incompleteness Theorem. This is a result in the foundations of mathematics that says, essentially, that there are true mathematical statements that cannot be proven within a given formal system. The heart of the proof is a method for reading a second, encoded meaning into arithmetical statements: one that is consistent with the axioms of arithmetic, but that actually means something else to us. (Similar to the way a string of zeroes and ones underlies the entire operation of your computer, and yet you read a different meaning into it.) Hofstadter then points out that this is the way consciousness works. There are the laws of physics, which underlie the operation of our brains (kind of like the zeroes and ones), and then there is our conscious experience, which arises from symbols interacting at a higher level.
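To make that encoding trick concrete, here is a toy sketch of my own (not from the book, and much simpler than Gödel's actual construction): a sequence of symbols, represented as small numbers, gets packed into a single integer using prime powers. The one number is a perfectly ordinary object of arithmetic, yet it also "carries" the original sequence as a second meaning that we can read back out.

```python
def godel_encode(symbols):
    """Pack a short list of positive integers into one number:
    2^s1 * 3^s2 * 5^s3 * ... (a toy Goedel-style numbering)."""
    primes = [2, 3, 5, 7, 11, 13]
    n = 1
    for p, s in zip(primes, symbols):
        n *= p ** s
    return n

def godel_decode(n, length):
    """Recover the hidden sequence by counting how many times
    each prime divides n (unique by prime factorization)."""
    primes = [2, 3, 5, 7, 11, 13]
    out = []
    for p in primes[:length]:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

code = godel_encode([1, 3, 2])   # 2^1 * 3^3 * 5^2 = 1350
assert godel_decode(code, 3) == [1, 3, 2]
```

The number 1350 is just a number as far as arithmetic is concerned, but with the decoding convention in hand it is also the sequence [1, 3, 2]; Gödel's real construction does the same kind of thing to statements about arithmetic themselves.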
So far, I have to say "Right on!" However, here's where things get sticky. Hofstadter argues that once you have a sufficiently complicated set of symbols in whatever substrate, be it the human brain, a computer (not too hard to imagine), or anywhere else (hmm), then there lies consciousness. John Searle argues against that point of view. He says that computers will never be conscious, even if they act like it, because they won't understand what they're doing. He doesn't explain what makes humans different, though.
I personally believe that the truth lies somewhere in the middle. I don't see any reason why computers can't be conscious, but I can't rule it out either way. There definitely seems to be something special about conscious awareness. But here's a question: could I act like myself, going about my usual daily activities in the same way I always do, with my conscious awareness shut off? When I phrase it that way, it seems more intuitive that I could not, and that actual conscious awareness is concomitant with behavior of a certain kind.
If a computer were made that acted conscious, would we ever be able to know that it is conscious? It seems unlikely that that could ever be settled scientifically.
And what about the Buddhist perspective? After all, in Buddhism we seek to understand the nature of mind. Well, I don't see any reason why, within the Buddhist framework, computers can't be conscious. There are all types of sentient beings, in all different types of states. (In fact, in one of the hell realms, mention is made of metallic beings.) But just because I don't see a reason why not doesn't mean I see a reason why. :)
And now, I must go eat lunch...