So, we’ve established in part 1 that, among other things, the zombie argument can only have any weight if the conceivability of zombies can be justified. Of course, there is a justification offered. The argument goes roughly like so:
1. You cannot refute the proposition that zombies are conceivable
2. Zombies intuitively seem to be conceivable
3. Therefore, zombies are conceivable
The reason this argument is taken seriously is that most philosophers accept that if a proposition “intuitively seems” to be true, and it has not been disproven by reason or evidence, then that proposition is true, or at least probable.
Philosophers’ willingness to go from “we intuitively feel X to be true” to “X is true” or “X is probably true,” even on matters as complicated, misunderstood, and bizarre as philosophical zombies, is a completely unjustified arrogance. One thing you have probably already noticed if you’ve read my other pieces is that I am very arrogant, at least philosophically speaking – and yet I still would never even DREAM of proposing that my intuition is some magical window into the way things really are; that it can determine deep metaphysical truths on matters which (like zombies) my conscious, reasoning mind cannot even begin to understand. Yet in philosophy this “magic window into fairyland” model of intuition (that is the accepted nomenclature) seems to be accepted by basically the entire discipline – if a philosopher can find any way at all of making an idea “intuitively plausible,” then the argument is considered compelling, no matter how tortuous or absurd the method of achieving intuitive plausibility is. And if a philosopher can find some way of making an idea “counterintuitive,” then this is considered a compelling objection. This blind faith in our own intuition is both arrogant and completely unjustified.
The first reason this faith in our intuition is unjustified is that our intuition cannot somehow generate information that we do not already have. It follows that if our intuition does not have the right information to work with then any conclusion it gives will necessarily be based on bad information (what else would it be based on?) and therefore be unreliable. There is still very little understanding of the brain and how it produces (or does not produce) consciousness – and yet the zombie argument asks you to picture a working brain and decide whether it is sufficient to produce consciousness, and philosophers happily close their eyes and say “yes, I am picturing a working brain right now,” when in fact what they are picturing more likely resembles an opaque, grey, and squishy box, rather than an actual working brain.
Put another way: if you can accurately picture a zombie, complete with a working brain with all the massively complicated processes and subprocesses that the brain consists of, then you and your miracle of an intuition have surpassed all of humanity’s work on trying to understand the brain. On the other hand, if your picture of a working brain is massively incomplete, inconsistent, and on many counts just plain wrong – as it inevitably will be, due to humanity’s extremely limited understanding of the brain – then why would you expect your intuitive conclusions from such a picture to mean anything? How can your intuition give a good conclusion when it is working off of incomplete, inconsistent, and inaccurate information? The only way you could believe that your intuition can give a reliable answer in a situation where reason and evidence have no claim is if you believe that intuition has supernatural properties.
Perhaps you will object to my assertion that there isn’t enough information about the brain to come to a conclusion regarding the zombie example. You may say that you don’t need a completely precise picture to draw conclusions – you don’t need to conceive of a universe in order to conceive of baking an apple pie; in fact, you don’t even have to know how an oven works beyond the fact that it’s that hot thing the pie goes in. So why should you need a precise understanding of the brain to decide whether or not it is sufficient for consciousness? Isn’t our everyday common sense regarding the brain enough?
The problem with this is that I’m not asking you to invent a universe, or even know how an oven works. I’m just asking you to have some vague idea what the hell you are talking about before you start talking – and if this is impossible then DON’T TALK. If we had a broad, schematic understanding of what the brain does then we would be in good shape to talk about whether it is sufficient for consciousness, even if we didn’t know the physical details; but we don’t, and so we can’t. To return to the pie example, if someone really had no clue how to bake a pie beyond “well you mix some stuff together then put it in a hot thing,” would you really trust that person’s “intuitive feelings” regarding whether a specific ingredient is included? Of course not. Similarly, why would you trust your “intuitive feelings” about whether the brain is sufficient for consciousness when you don’t know what the brain does?
Anyway, relying on our intuition isn’t bad just because we don’t properly understand the situation. Even setting aside the fact that humans don’t understand the brain well enough to draw conclusions about it, philosophers are still wrong to assume that an intuitive feeling is indicative of truth. Even if we had a good understanding of the brain, it would be lazy and unreliable to say “well, intuitively I think this is sufficient for consciousness” or “intuitively it seems like this isn’t sufficient for consciousness” – we could instead use that understanding, in combination with a thing called “reason,” to reliably determine whether or not the brain is sufficient for consciousness.
This intuition, which we are expecting to determine whether the human brain is sufficient for consciousness, is the same intuition that can’t figure out the Monty Hall problem: nearly everyone who encounters the Monty Hall problem finds it intuitively obvious that the odds are the same whether you switch or not, whereas reason and evidence both inarguably show that it is best to switch. I’m curious: why haven’t philosophers used this counterintuitive finding to refute mathematics yet? “You mathematicians’ ‘axioms’ lead to a counterintuitive result; if your axioms are true then it is best to switch in the Monty Hall problem, but it is intuitively obvious that it doesn’t matter whether or not you switch.” Of course, the reason no philosopher has done this is that they know their intuition is in the wrong here; philosophers readily recognize that their intuition is wrong when it can be demonstrated to be so, but they seem to think that if they can just find an area where you can’t prove their intuition wrong beyond all possible doubt, their intuition shifts from being a quick, useful bundle of heuristics into being some sort of magical revealer of truth.
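The “evidence” half of that claim is something you can produce yourself in a few minutes. Here is a minimal Python simulation of the game (a sketch of my own – the `play` helper and its structure are illustrative, not taken from anywhere in particular):

```python
import random

def play(switch: bool) -> bool:
    """Simulate one round of the Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    choice = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay:   {stay_wins:.3f}")    # close to 1/3
print(f"switch: {switch_wins:.3f}")  # close to 2/3
```

Run it and your intuition loses by a two-to-one margin, every single time.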
This intuition that philosophers seem to think allows them to know the truth on matters as bizarre, alien, and massively complex as the situations given in thought experiments is the same intuition that proves itself to be extremely unreliable whenever it can be put to the test. Even in everyday use it is susceptible to confirmation bias, it commits the gambler’s fallacy, and it makes any number of other errors: there is nothing magic about intuition that gives it power where evidence and reason are not present. If we can’t figure something out through reason and evidence, then we can’t figure it out with intuition either. Any argument that relies on “intuitive plausibility” relies on something that is fundamentally unreliable, and is thus fatally flawed.
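The gambler’s fallacy is just as easy to put to the test. A quick sketch of my own, assuming a fair coin: intuition insists that after five heads in a row, tails is “due” – but the flip after such a streak still comes up heads about half the time:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect the flip that immediately follows each run of five heads.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]
rate = sum(after_streak) / len(after_streak)
print(f"P(heads after 5 heads) = {rate:.3f}")  # close to 0.5
```

The coin has no memory; only our intuition does.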