Tuesday, February 13, 2007

Free Will?

When I was growing up, I came to the conclusion, heavily influenced by various things I read, that the universe was effectively deterministic, and that therefore free will was an illusion. A thought experiment by Douglas Hofstadter made the case even more convincing to me. "If you really have free will," says one person to another, "are you free to choose to kill me right now?" On one level you might say he is free to kill the questioner, but on the other hand, it's pretty convincing that, since he is so strongly predisposed not to kill him, doing so is in fact beyond the realm of possibility, and not a genuine choice. An interesting wrinkle in the debate just came to my attention in an essay by John R. Searle, where he says the following:


Well, what's wrong with epiphenomenalism [the idea that consciousness is not causally relevant]? As we come to understand better how the brain works, it may turn out to be true. In the present state of our knowledge, the main objection to accepting epiphenomenalism is that it goes against everything we know about evolution. The processes of conscious rationality are such an important part of our lives, and above all such a biologically expensive part of our lives, that it would be unlike anything we know in evolution if a phenotype of this magnitude played no functional role at all in the life and survival of the organism. In humans and higher animals an enormous biological price is paid for conscious decision making, including everything from how the young are raised to the amount of blood flowing to the brain. To suppose this plays no role in inclusive fitness is not like supposing the human appendix plays no role. It would be more like supposing that vision or digestion played no evolutionary role.


I find this argument simultaneously compelling and suspect. Clearly the brain evolved to "make decisions," but the question is whether there are genuine choices (the decision might turn out differently under the same conditions) or whether the decision is algorithmically determined. Now Searle would probably ask: then why is there consciousness at all? The brain could do all of that without being conscious, more like a computer. Well, I know many computer scientists would argue that consciousness is an "emergent phenomenon" that just happens when the brain gets sufficiently complex. On that view, the brain didn't evolve to have consciousness; it evolved to make decisions, and consciousness is an accidental byproduct. However, I can't say I like that formulation, because it doesn't really explain what consciousness is, or how it emerges from complexity. So Searle's argument may carry some force. If consciousness doesn't necessarily go hand-in-hand with brain complexity, then consciousness must have evolved because it has some survival value, and the only way that would seem to make sense is if it is actually able to influence the body; i.e., we have free will.
