Dropped out of university, ending the blog

I “withdrew” from university this semester, and don’t plan on going back. Although it is unfortunate to quit before I can figure out whether (the vast majority of) philosophers are actually as staggeringly incompetent as they appear to be, or are simply playing an elaborate practical joke, I simply couldn’t stand it either way.

It’s technically possible that I’ll post more here, but it’s unlikely. In any case, thanks to everyone who was supportive. No thanks to this discipline for turning what should be a worthwhile pursuit into a massive waste of time.

Bye!


What’s the point of Naming and Necessity?

I’ve got some questions about Kripke’s Naming and Necessity. Basically, I’ve only heard good things about this book, so I’ve read it and tried as best I can to understand it, but it seems like complete fluff to me. So here I’ll lay out Naming and Necessity as I understand it, and why I think that “Naming and Necessity as I understand it” is so unnoteworthy. Hopefully someone can correct me if I go wrong somewhere, and shower me with praise if I don’t.

So Kripke is trying to argue that proper names are rigid designators. A rigid designator is something that designates the same object in every possible world.

Isn’t he arguing for a completely circular definition of proper names?

“In these lectures, I will argue, intuitively, that proper names are rigid designators, for although the man (Nixon) might not have been the President, it is not the case that he might not have been Nixon (though he might not have been called ‘Nixon’).”

So he is arguing that “Nixon” rigidly designates Nixon. The problem should be immediately obvious: he is arguing for a definition of the word “Nixon,” and that definition includes the word “Nixon.” He is arguing that proper names refer to what they refer to.

When he says that proper names refer to “the same object in all possible worlds,” as best I can tell, this is still a regress, as it is a stand-in for saying that “proper names refer to what they refer to.” This can be seen whenever he talks about a specific case: he cannot explain how we are supposed to determine what this “same object” we are referring to is, and instead resorts to saying that the name refers to what it refers to:

“Don’t ask: how can I identify this table in another possible world, except by its properties? I have the table in my hands, I can point to it, and when I ask whether it might have been in another room, I am talking, by definition, about it, I don’t have to identify it after seeing it through a telescope. If I am talking about it, I am talking about it.”

I don’t believe that arguing for a conception of names that begs the question, and then saying “don’t ask that question,” amounts to a philosophically sound argument.

Two asides: one, even this unambitious point – that proper names refer to what they refer to – seems to be false, because it forces a binary kind of “it either refers or it doesn’t” mentality onto proper names that doesn’t really exist. I can guarantee you that there will be borderline cases where some people will say that the proper name refers to that object, and others will say it doesn’t (and my intuitive reaction to a lot of these borderline cases is neither affirmative nor negative, further proving the fuzziness of so-called ‘rigid’ designators). And in these borderline cases, it doesn’t make sense to say “well, that either IS Nixon or it ISN’T.” It would be more accurate to say that it’s “kind of Nixon.”

Two, it’s true that Kripke makes a halfhearted attempt at presenting an alternative ‘picture’ of how names work, but it is not the main point of Kripke’s lecture (and Kripke himself had reservations about the picture he presented), so I won’t be talking about it.

So anyway, how is this supposed to be a revolutionary piece? As best I can make out, its salient point is that proper names refer to what they refer to. Is there something more to his definition of proper names that I’m missing?

The standard line about why Naming and Necessity was so important is that Kripke’s amazing idea that proper names are rigid designators (i.e. that proper names refer to the same object in all possible worlds, i.e. that proper names refer to what they refer to) has ‘surprising implications’. But I haven’t seen any implication of rigid designators that impressed me.

For example, one thing I’m supposed to be excited about is that this discovery implies that there is necessary a posteriori knowledge.

According to the Stanford Encyclopedia article on rigid designators, this implication is exciting because “prior to discussion about rigid designation, the necessary a posteriori was generally thought to be an empty category.”

Really? People thought that was an empty category? Necessary a posteriori knowledge is extremely easy to construct. You can turn every contingent a posteriori piece of knowledge you discover into necessary a posteriori knowledge with ease, because it is true in all possible worlds that in this possible world, at this time, that contingent piece of knowledge holds (e.g., it is necessarily true that “in this world at this time there was a tree standing over there”). Clearly, just being “necessary” does not make a piece of knowledge any more important than contingent knowledge.
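
In standard modal notation the recipe is a one-liner. Here is a sketch using the “actually” operator @ (as in Crossley and Humberstone’s logic of “actually”) – my gloss, not anything Kripke or the SEP article spells out:

    % "@" is the actuality operator: @p holds at any world w
    % iff p holds at the designated actual world w_0.
    % Take any contingent a posteriori truth p (true at w_0, false elsewhere):
    \[
      p \text{ true at } w_0
        \;\Longrightarrow\; @p \text{ true at every world}
        \;\Longrightarrow\; \vDash \Box\,@p
    \]
    % So @p is necessary, yet knowable only a posteriori, since
    % establishing that p holds at w_0 required observation in the first place.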

One more example of how easy it is to make necessary a posteriori knowledge: suppose I flip a coin and do not look at what face it landed on. I then decide that the word “morps” will refer to black swans if the coin landed heads, and white swans if it landed tails. Then I look down and see that the coin landed heads. I just discovered the necessary a posteriori fact that morps are black. Again, necessary and a posteriori, but philosophically worthless.

So, for Kripke’s “discovery” to have worth, it has to imply something beyond that there can be necessary a posteriori knowledge. But I don’t see how the discovery that there are rigid designators can imply anything meaningful.

To try to illustrate why I don’t think this discovery can imply anything meaningful, let’s stipulate a possible world which is similar to ours, except that proper names aren’t rigid designators. Instead they are, let’s say, description clusters. So, the people there still use names, but instead of using them like we do, they use them as description clusters which don’t designate the same object in all possible worlds (although of course, from their own perspective, they would take themselves to be designating the same object in all possible worlds).

In this possible world, is the set of possible worlds going to be any different? Of course not. So, discovering that there are rigid designators cannot amount to discovering anything about what is or is not possible, as what is or is not possible is not determined by the idiosyncrasies of our language.

He gave a brief discussion of a possible application at the end, but it seemed to assume a knowledge of identity theory that I don’t have. He never really defined the terms he used, so there isn’t much I can say without further research there. But, again, it is logically impossible for the way proper names work to have implications about what is or is not possible, so it’s not like his ‘discovery’ can e.g. suddenly imply that epiphenomenalism is correct.

So what is supposed to be useful about proper names referring to what they refer to? Can someone fill me in here?


Intuition Pandering vs Actual Moral Philosophy

Moral philosophy has come to rely on moral intuition as an arbiter of truth. Ethical theories live and die by their ability to match our moral intuitions. The result of this strange worship of intuition is that the moral theories that survive are nothing more than restatements of what philosophers find intuitively plausible, with no justification for their foundation besides their intuitive plausibility.

The problem with this approach is shown most clearly in the case of utilitarianism versus deontological philosophies. Utilitarianism, at least in its simplest form, is a perfect example of moral philosophy done the right way – its premise (that happiness is good and suffering is bad) is observably true, rather than justified “intuitively” (which is no justification at all). Deontological philosophies are the opposite – their premises rely on things like “rights” and “duties” which can only be justified intuitively.

Since philosophers see moral intuition as the sole determinant of what is right and wrong, utilitarianism is of course criticized on the grounds that it can give counterintuitive conclusions. So, for example, critics will proudly point out that utilitarianism cannot guarantee against slavery. Of course, it isn’t actually possible to construct a realistic situation where utilitarianism recommends slavery – it’s pretty damn difficult to construct even an UNrealistic one – but that’s beside the point. Slavery is always wrong, because our moral intuitions say so. So if utilitarianism says it’s wrong 99.999% of the time then that is not enough, because our moral intuition says that it is wrong 100% of the time. And since it’s impossible that moral intuition could be wrong 0.001% of the time, utilitarianism must be false.

After having so thoroughly proven utilitarianism false, the critic will then go on to argue for his own deontological moral philosophy which, he will proudly note, CAN guarantee against slavery in ALL cases. Deontological philosophy matches our moral intuition that humans should never be enslaved because one of its premises is that humans have a right to freedom, i.e. that humans should never be enslaved. This premise is justified because intuitively it seems obvious that humans should never be enslaved.

Utilitarianism’s ability to go against our moral intuition is not a weakness. It is a strength – no, not even that – it’s a prerequisite for an ethical theory to be able to make any progress at all. If an ethical theory is lauded for its inability to violate our intuitions then it is being lauded for being a reflection of our beliefs rather than for being able to improve our beliefs. It is congratulated for being unable to produce progress.

The progressive nature of utilitarianism versus the sycophantic nature of deontology is borne out in philosophical history. Bentham and Mill, the Big Two utilitarians, both worked against racism and sexism at a time when these were dominant ideologies. Why? Because utilitarianism is an actual ethical theory which can make progress; it helps us move beyond our unjustified prejudices and see what is actually true. It is capable of coming to counterintuitive conclusions and of challenging our beliefs.

Deontology – rights and duty-based ethics – is supposed to be superior to utilitarianism because it guarantees against oppression and prejudice. And yet Kant, the great deontologist, was extremely racist and inexcusably sexist. The greatest deontological philosopher of all time had ethical views that were simply unconscionable. This is because the forces deontological philosophies appeal to – ‘rights’ or ‘duties’ – are invisible (and nonexistent), and so what our rights and duties consist of is ultimately determined by intuition, which of course is where our biases and prejudices show themselves most prominently. As such, the only thing a deontological philosophy can truly guarantee is not that it will protect against oppression but that it will conform to your prejudices and biases.

Let’s look at an example from the 20th century. Now that intellectuals have reached an agreement that sexism and racism are definitely bad, a deontologist comes out of the woodwork and condemns utilitarianism for not guaranteeing against slavery… even though utilitarianism is the only ethical system that consistently opposed it. This massively successful new deontologist is John Rawls, with his principles of justice.

But John Rawls’ philosophy is not an improvement. It is again simply an iteration of his generation’s status quo, carrying his generation’s biases and preconceptions along with it. His philosophy does indeed guarantee that it will frown upon the kinds of oppression philosophers were already frowning upon. What it fails to do is fight an oppression that is still considered acceptable – in fact, Rawls’ principles of justice actually work to reinforce that oppression. Specifically, he leaves animals entirely out of his original position thought experiment. This is inexcusable. Bentham, who lived in the eighteenth century, was an animal rights activist. Yet Rawls – a modern 20th century philosopher, who criticizes utilitarianism for failing to protect against oppression – cheerfully leaves animals entirely out of his principles of justice. As far as his principles of justice are concerned there is absolutely nothing wrong with factory farms, dog fighting, or any other kind of horrifying mistreatment of animals.

Once animal rights have been established – through utilitarian reasoning – perhaps some fashionable deontologist will condemn utilitarianism for not guaranteeing strongly enough against the oppression of animals: for allowing the oppression of animals to occur in some obscure thought experiment that she will fail to define very well. And perhaps she will have dictated this condemnation to an artificially intelligent computer which she cheerfully keeps bound as her personal assistant, because, after all, it just doesn’t seem intuitively plausible that computers can have rights.


Originality in Philosophy Courses is Punished Instead of Encouraged

Markers (at least, markers of undergraduate philosophy papers, though this is probably applicable elsewhere) punish originality, rather than encourage it as they should. This is a difficult claim to establish as it’s mostly based on my own experience, but I’ll give two examples of this happening.

First off, I’ll give the only example I have that’s short enough to be reproduced entirely in a blog post. This was the second philosophy assignment I ever did, and the absolute silliness of the comments pissed me off enough that I ended up typing the entire assignment and commentary out in an email to my parents explaining why the comments were silly (which is why I still have it). Little did I realize this strange commentary would not be an isolated incident; it would just be one case of a larger trend. Now, I’m sure some markers are better about this than others, but it is a startlingly consistent finding: the more original my arguments are, the more likely I am to find stupid objections scrawled in the margins when I get the essay back.

For the assignment, we were supposed to make a (step-form) argument that rebuts a “friend” who criticizes utilitarianism on the grounds that it recommends disposing of members of society who are not “useful,” e.g. cripples and homeless people. The idea was that we would make the easy, boring argument that utilitarianism would not recommend this. Instead I decided to argue that we shouldn’t reject a philosophy solely because it is counterintuitive; this was (and still is) a pet peeve of mine. Here’s the argument I handed in:

  1. The moral intuition of the majority of people in the past strongly supported racism and slavery
  2. The moral intuition of the majority of the people in the present strongly opposes racism and slavery
  3. So, the moral intuition of the majority is not always correct
  4. The friend’s objection to utilitarianism assumes that dispensing with unproductive members of society is wrong
  5. The friend assumes this because the majority of people intuitively agree that dispensing with unproductive members of society is wrong
  6. So, the friend’s objection to utilitarianism assumes the moral intuition of the majority is always correct
  7. So, the friend’s argument assumes something that isn’t true
  8. An argument that assumes something that isn’t true is invalid
  9. So, the friend’s argument is invalid

Again, I don’t want to single out this TA – this is just the only example short enough to fit in a blog post. Now, on to the comments. On steps 1-3 he wrote:

“Confusing. What do you mean by ‘correct’ here? This can defend racism in present just as much as racism in the past is incorrect [sic]. Both are majority”

A very nice example of a startlingly stupid comment that you simply will not find attached to unoriginal assignments. What do I mean by ‘correct’? What I meant by ‘correct’ is what every other person who speaks English means by ‘correct’ when they say ‘correct.’ The moral intuition of the majority could defend racism in the present just as much as it could attack racism in the past? Why on earth does he say that as if it’s somehow a counterpoint that I hadn’t considered? That was my point.

When I complained to him about the comments/mark, he admitted that the argument I’d made was in fact logically sound, but – like all graders I’ve complained to – refused to admit that he may have made a mistake. Instead he said that his mark was fair because, to make my argument more clear, I should have added a premise that states that if something contradicts itself then it is not always correct. Should I also add a premise stating that if all x’s are y and all y’s are z, then all x’s are z? Should I add a premise explaining that one plus one is TWO, rather than THREE?

Anyway, for the next comment he highlighted steps 6 & 7 and said:

“Do not have two conclusions back to back. You need premises that show conclusions.”

Another startling case of marker blindness that never seems to occur except when I am making original arguments. Perhaps there was a better way of structuring the argument (although, since both of step 7’s premises need to be supported, I can’t think of one barring using a diagram instead of plain text), but he is somehow under the impression that step 7 doesn’t have supporting premises, even though it quite clearly does – steps 3 & 6. This should be evident to anyone who can remember what they read 30 seconds ago. One symptom of original assignments is that instead of receiving the quite charitable reading that unoriginal assignments do, you receive the kind of confused reading that you would normally only get from five-year-olds. If I’d been arguing something unoriginal he would have said “Oh, I know what he means,” but because the argument is original it is held to a bizarrely high standard that demands everything be made impossibly obvious.
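
To make the structure explicit (a schematic rendering of my own, quantifying over propositions, so a sketch rather than strict first-order logic): let C be the claim “the moral intuition of the majority is always correct” and F be the friend’s argument.

    \begin{align*}
      &\text{Step 3: } \neg C && \text{(from steps 1, 2)}\\
      &\text{Step 6: } \mathrm{Assumes}(F, C) && \text{(from steps 4, 5)}\\
      &\text{Step 7: } \exists X\,(\mathrm{Assumes}(F, X) \wedge \neg X) && \text{(from steps 3, 6)}
    \end{align*}

Step 7 is an ordinary two-premise inference, and its premises are right there in the numbering.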

His overall comment was:

“This is a creative effort. However, you need to tighten up the argument. Do not discuss + back up position with concepts that have nothing else to do w/ your argument (i.e. racism + slavery). You need to focus on making a more straight-forward, clear, convincing argument. Come by my office hours w/ your third argument if you want. 78 / 100”

My previous argument had got 84/100 with a minimum of effort, because I did it in a boring and conventional manner. In this one I tried something original and ambitious, put more time in than I had for the first, and overall liked the argument much more – and instead of getting any credit for this originality (besides a reprimand that I should argue in a more “straight-forward, clear, convincing” fashion, which is code for “stop making creative efforts”) I got marked down for his own logical failings.

The next example of punishment of originality is one from my philosophy of the 21st century class. Basically, around the time I first wanted to write this post, I decided to put my money where my mouth was. At the time I was still sore over the comments I had received for an essay in that class, where my professor claimed I misunderstood Kripke when in fact I had anticipated and addressed his primary objection RIGHT IN MY ESSAY; he made the counterpoint anyway, completely ignoring my… countercounterpoint. It was surreal. But of course it was a 3000 word essay, so I knew there was no way I could really explain this sufficiently to convince anyone; I would essentially be putting a biased undergraduate’s word against a not-too-biased professor’s, and so the natural and understandable reaction people would have would be to side with the professor.

I had one more essay left to hand in for this philosophy of the 21st century course, so I decided that for the next essay I’d argue something that I didn’t believe at all, in an unoriginal fashion (i.e. relying on arguments used in the course), and see the results.

For the second essay, the question I chose asked me to compare and contrast Mark Johnston’s account of religion with Alvin Plantinga’s: describe them and say how they differed, how they were similar, and which was more persuasive. I found both of their “accounts” to be pathetic – I’ll write about why later. In any case, I realized that arguing “both these philosophers suck” would be a good way to guarantee some confusing, irrational notes scrawled in the margins. So instead, I decided to make what my TA would call a “more straight-forward, clear, convincing argument” – in other words, I said what my marker wanted to hear instead of what I believed, and relied on recombining the flawed arguments taught in class rather than on using my own good ones. I decided to argue the most “straight-forward” thing possible: one of them would be bad and the other would be good. Plantinga (or, as I misspelled him throughout the entire essay, “Platinga” – not intentional, I just hadn’t read the Plantinga section of “What Philosophers Know” very carefully) would be BAD and Johnston would be AMAZING. Although I hate Johnston’s account of religion with a passion, my conclusion in the essay was that “Johnston’s account of religion is an elegant, compelling one that sidesteps many of the great problems that face religion today.” Note I never said that it was right – “right” seems to be a bad word in philosophy.

Needless to say, I got a higher mark on the essay on Johnston and “Platinga” than I got on my first essay – in fact, it got a higher grade than any philosophy assignment I’d gotten up to that point. (For what it’s worth, the first essay got 83 and the second got 89.)

For the first essay I spent a good amount of time constructing the essay and I read the relevant sections of Gutting’s book multiple times to ensure I was understanding the claims correctly. I was informed that I had misunderstood Kripke. For the second essay I spent less time overall, I made arguments I knew were wrong, and I hadn’t even done all the reading relevant to the topic – in fact I’d done less than half of the reading assigned for Johnston. The final comment was that it was a “Very good discussion of these ideas, and effective well-argued comparison of Plantinga and Johnston.”

If I had to say why original essays provoke bizarre, uncharitable grading, I’d guess that the unfamiliarity of the arguments creates a number of problems. One is that when reading familiar arguments the marker can easily fill in missing gaps in the arguments, and may well do this automatically and subconsciously; but when reading unfamiliar arguments this is impossible, and so unfamiliar arguments will naturally be harder to understand and will naturally seem less clear. Another problem is defensiveness and bias, which everyone suffers from to a greater or lesser extent. Another is that unoriginal arguments are to a certain extent immune to criticism; for my second essay I relied on rearranging and recombining the arguments of Gutting, Plantinga, and Johnston, and so he could not really criticize anything I said, as it was what I had been taught in class. Original arguments invite the marker to make whatever random objection occurs to them, whether that objection makes sense or not. Lastly, original arguments are, of course, harder to create than ones based on what you’ve already been taught, and yet credit is not given for this even though it really should be – a philosopher who is unable to think for themselves is not a philosopher at all. Philosophy shouldn’t be about arbitrarily recombining arguments you’ve already heard, but this is what is encouraged.


Are Philosophers Giving Up on Reason?

I recently read What Philosophers Know, by Gary Gutting, for one of my philosophy courses. The book is, in essence, an attempt at rebutting skeptics of philosophy who say that philosophers have not established any disciplinary knowledge; the idea is that he will rebut these skeptics by showing what knowledge philosophy has gained. Unfortunately, the book ends up justifying skeptics rather than rebutting them – the entire book consists of Gutting describing a philosophical dispute, reassuring everyone involved that they are entitled to their opinion, and then moving on. His “philosophical knowledge” is simply that “everyone is entitled to their opinion” repeated over, and over, and over.

His reason for this relativistic stance is that he believes it is necessary in order to get any interesting results. He argues that since a strictly foundationalist approach (basically, only accepting as premises points that are either extremely obvious or extremely well justified) fails to produce interesting results, we must loosen the criteria for accepting premises; and Gutting loosens his criteria enough that on every issue he discusses he ends up simply telling each side of the argument that they are “warranted” in holding their position, and that neither one should give way to the other. Attacking premises, or asking for them to be justified, is outlawed in the name of “anti-foundationalism”.

There is a continuum of how stringent the criteria for accepting premises can be. On one end of the spectrum we have radical skepticism, whose criteria are so stringent that no knowledge can be established, and foundationalism, whose criteria are stringent enough that establishing useful or interesting knowledge is extremely difficult; on the far other end we have relativism, where nearly any belief can count as “warranted,” because its only objective criterion is that a premise be logically coherent with itself and with the other premises of the argument. Gutting’s book indicates that out of fear of radical skepticism and the sterility of foundationalism, philosophers have moved further and further away from a harsh foundationalist viewpoint, and have ended up on the complete other side of the spectrum. The result is that no true philosophical progress can be made, because the premises of arguments are considered virtually immune to criticism – and so arriving at any definitive answer to a philosophical question is impossible, and the use of reason is relegated to the sidelines in favour of subjective arguments based on pathos and “intuitive plausibility.”

Think I’m being too hard on Gutting? That philosophy really isn’t all that relativistic? Let’s look at what philosophers know. Gutting on “what philosophers know” about analytic knowledge: “Knowing that analyticity cannot be defined in essentially different terms means that we must either accept it (or another term in its immediate family) as basic or else reject it.” (p73) Ah, thanks to our reasoned analysis of analytic knowledge, we have established that we must either accept it or reject it. But, which one? Up to you! Take your pick!

Gutting on “what philosophers know” about Kripke’s claims that there is such a thing as necessary a posteriori knowledge: “[Kripke] did not establish that his essentialist theses were true, but that the picture they presented was worthy of attention.” (p78) Do philosophers know whether Kripke’s right? Nope.

Gutting on “what philosophers know” regarding free will, determinism, and so forth: philosophers haven’t established anything for sure, but “we’re going to [hold people responsible for their actions] no matter what. Perhaps we can take this very result as an important piece of philosophical knowledge. Couldn’t it be plausibly claimed that one outcome of philosophical anti-foundationalism, applied to the case of freedom, has been that the practice of holding responsible is in order even without philosophical justification?” (p148) Yes, because that’s knowledge that non-philosophers will find useful. “Hey, laypeople, you know that thing you’re already doing? You can keep doing that.” Thanks, philosophy!

Gutting’s complete unwillingness to favour one view over another is highlighted when he considers Alvin Plantinga’s book “Warranted Christian Belief.” He mentions a possible counterargument to Plantinga’s defense of the warrantedness of Christian belief: that “to the extent that Plantinga’s book supports the warrant of Christianity, [one could use these same arguments to] support the warrant of, for example, conservative Islamic views on the status of women, Catholic views on papal infallibility, and perhaps even Jehovah’s Witnesses’ views on blood transfusions and Aztec views on the need for human sacrifice.” (p117) It seems like he is finally grasping the flaws of this relativistic approach to philosophy; but no! Gutting is so committed to avoiding telling anyone that they are wrong that he asks, “But why should any of this bother Plantinga or other Christians who rely on his defense?” (p117) Hmm. Yes, why should this bother Plantinga? Maybe because “Aztec views on the need for human sacrifice” are clearly wrong, and so if they can be defended by the same arguments Plantinga uses to defend Christianity, this constitutes a reductio ad absurdum of Plantinga’s defense of Christianity? Or maybe this should just bother Plantinga for the same reason that it should bother any other philosopher: it means that if Plantinga’s arguments (which Gutting endorses) are successful, then philosophy is doomed to accomplishing basically nothing at all!

Gutting’s idea of a conclusive, useful piece of philosophical knowledge, that he uses in the conclusion to his entire book as an example of what philosophy can do for non-philosophers, is: “no standard popular version of a theistic or atheistic argument makes an adequate case for its conclusion.” (p232) Congratulations, the grand message of philosophers to nonphilosophers is: “Laypeople! Keep doing what you’re doing, but with the understanding that no one is really wrong or right about anything.” And to think that those silly skeptics thought that philosophers might not have accomplished anything worthwhile!

Gutting sets out to show “what philosophers know.” But all that he ends up “showing” is that philosophers have become too afraid of radical skepticism to exercise any skepticism at all, too afraid of having their own false beliefs exposed to expose the false beliefs of others, and too distrustful of their own reason to accept it when it leads them counter to their intuition. Now, maybe Gutting is wrong about the state of philosophy and of philosophers. But if his take on the state of philosophy is accurate, then the majority of the philosophical community is using an overly lax approach that fundamentally undermines the philosophical enterprise.


Why I didn’t fill out the Student Evaluations

So, I realize that student evaluations of their professors are, in principle, great. Still, I didn’t fill out any, for a number of reasons. Most of these can’t be avoided by those handing out the evaluations – laziness, exams and papers due, and so on – but one reason stuck out, because it was a stupid, easily fixable problem.

This evaluation consisted entirely of rating our professor’s performance in one aspect or another (e.g. “How would you rate your professor’s preparedness for class?”), except for one space at the end where we could write any additional comments. Not necessarily bad, except that these were the 5 ratings we had to choose from: Excellent, Very Good, Good, Fair, and Poor.

What you’ll notice is that 4 out of the 5 options are good ones. In other words, these questions are designed to encourage students to lie about their professors’ performance, presumably in order to make the professors feel good. This is one of my pet peeves in education – I hate the “everyone is wonderful at everything” mentality that sometimes crops up in evaluations. Sure, only give criticism that is constructive, and sure, try to point out what was done well in addition to what was done badly; but if you’re going to flat-out lie (say something was good when it was mediocre, for example) then you might as well not evaluate at all.

What’s more, because of this lopsidedness, the questions introduce a huge amount of ambiguity – if I answer honestly, and circle “fair” when the professor did a fair job at something, will this be interpreted as actually meaning he did a poor job? It’s the second-worst option available, after all. If I answer honestly while most other people answer under the assumption that “fair” really means “poor,” will I be punishing my professors solely for having me in their class? (These evaluations are, according to an email from my psych professor, “the major component in the evaluation of teaching for decisions regarding promotion, tenure, salary increases, and teaching awards” – yikes!)

If I answer honestly, my answers will predominantly be in the bottom three – Good, Fair, and Poor should cover something like 70-90% of anyone’s honest answers unless the professor was just outstanding. This isn’t a huge problem, but it’s just so easily fixable (change the options to “Very Good, Good, Moderate, Poor, and Very Poor” and the problem is solved) that it really frustrates me to find it in a university, which is supposed to be a place where people exercise things like “judgement” and “critical thinking.” I don’t know how common this kind of lopsided evaluation is in universities, but wherever a lopsided evaluation is present it will make results unreliable and make thoughtful students not want to respond.


Refuting the Zombie Argument, part II

So, we’ve established in part 1 that, among other things, the zombie argument can only have any weight if the conceivability of zombies can be justified. Of course, there is a justification offered. The argument goes roughly like so:

  1. You cannot refute the proposition that zombies are conceivable
  2. Zombies intuitively seem to be conceivable
  3. Therefore, zombies are conceivable

The reason this argument is considered a semi-valid one is that most philosophers accept that if a proposition “intuitively seems” to be true, and it has not been disproven by reason or evidence, then that proposition is true, or at least probable.

Philosophers’ willingness to go from “we intuitively feel X to be true” to “X is true” or “X is probably true,” even on matters as complicated, misunderstood, and bizarre as philosophical zombies, is a completely unjustified arrogance. One thing you have probably already noticed if you’ve read my other pieces is that I am very arrogant, at least philosophically speaking – and yet I still would never even DREAM of proposing that my intuition is some magical window into the way things really are; that it can determine deep metaphysical truths on matters which (like zombies) my conscious, reasoning mind cannot even begin to understand. Yet in philosophy this “magic window into fairyland” model of intuition (that is the accepted nomenclature) seems to be accepted by basically the entire discipline – if a philosopher can find any way at all of making an idea “intuitively plausible,” then the argument is considered compelling, no matter how tortuous or absurd the method of achieving intuitive plausibility is. And if a philosopher can find some way of making an idea “counterintuitive,” then this is considered a compelling objection. This blind faith in our own intuition is both arrogant and completely unjustified.

The first reason this faith in our intuition is unjustified is that our intuition cannot somehow generate information that we do not already have. It follows that if our intuition does not have the right information to work with then any conclusion it gives will necessarily be based on bad information (what else would it be based on?) and therefore be unreliable. There is still very little understanding of the brain and how it produces (or does not produce) consciousness – and yet the zombie argument asks you to picture a working brain and decide whether it is sufficient to produce consciousness, and philosophers happily close their eyes and say “yes, I am picturing a working brain right now,” when in fact what they are picturing more likely resembles an opaque, grey, and squishy box, rather than an actual working brain.

Put another way: if you can accurately picture a zombie, complete with a working brain with all the massively complicated processes and subprocesses that the brain consists of, then you and your miracle of an intuition have surpassed all of humanity’s work on trying to understand the brain. On the other hand, if your picture of a working brain is massively incomplete, inconsistent, and on many counts just plain wrong – as it inevitably will be, due to humanity’s extremely limited understanding of the brain – then why would you expect your intuitive conclusions from such a picture to mean anything? How can your intuition give a good conclusion when it is working from incomplete, inconsistent, and inaccurate information? The only way you could believe that your intuition can give a reliable answer in a situation where reason and evidence have no claim is if you believe that intuition has supernatural properties.

Perhaps you will object to my assertion that there isn’t enough information about the brain to come to a conclusion regarding the zombie example. You may say that you don’t need a completely precise picture to draw conclusions – you don’t need to conceive of a universe in order to conceive of baking an apple pie; in fact, you don’t even have to know how an oven works beyond the fact that it’s that hot thing the pie goes in. So why should you need to have a precise understanding of the brain to decide whether or not it is sufficient for consciousness? Isn’t just our everyday commonsense regarding the brain enough?

The problem with this is that I’m not asking you to invent a universe, or even know how an oven works. I’m just asking you to have some vague idea what the hell you are talking about before you start talking – and if this is impossible then DON’T TALK. If we had a broad, schematic understanding of what the brain does then we would be in good shape to talk about whether it is sufficient for consciousness, even if we didn’t know the physical details; but we don’t, and so we can’t. To return to the pie example, if someone really had no clue how to bake a pie beyond “well you mix some stuff together then put it in a hot thing,” would you really trust that person’s “intuitive feelings” regarding whether a specific ingredient is included? Of course not. Similarly, why would you trust your “intuitive feelings” about whether the brain is sufficient for consciousness when you don’t know what the brain does?

Anyway, relying on our intuition isn’t just bad because we don’t properly understand the situation. Even setting aside the fact that humans don’t understand the brain well enough to draw conclusions about it, philosophers are still wrong in assuming that their intuitive feeling is indicative of truth. Even if we had a good understanding of the brain, it would be lazy and unreliable to say “well, intuitively I think this is sufficient for consciousness” or “intuitively it seems like this isn’t sufficient for consciousness” – we could instead use that understanding, in combination with a thing called “reason,” to reliably determine whether or not the brain is sufficient for consciousness.

This intuition, that we are expecting to determine whether the human brain is sufficient for consciousness, is the same intuition that can’t figure out the Monty Hall problem: almost everyone who encounters the Monty Hall problem finds it intuitively obvious that the odds are the same whether you switch or not, whereas reason and evidence both inarguably show that it is best to switch. I’m curious: why haven’t philosophers used this counterintuitive finding to refute mathematics yet? “You mathematicians’ ‘axioms’ lead to a counterintuitive result; if your axioms are true then it is best to switch in the Monty Hall problem, but it is intuitively obvious that it doesn’t matter whether or not you switch.” Of course, the reason no philosopher has done this is that they know that here their intuition is in the wrong; philosophers easily recognize that their intuition is wrong when it can be demonstrated so, but they seem to think that if they can just find an area where you can’t prove their intuition wrong beyond all possible doubt, their intuition shifts from being a quick, useful bundle of heuristics into being some sort of magical revealer of truth.
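
For the “evidence” half of that claim, here is a minimal simulation – a sketch in Python, with names of my own choosing – that plays the game many times under each strategy:

    import random

    def monty_hall_trial(switch: bool) -> bool:
        """Play one round; return True if the player ends up with the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        host_opens = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in doors if d != pick and d != host_opens)
        return pick == car

    trials = 100_000
    stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials))
    switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
    print(f"stay:   {stay_wins / trials:.3f}")    # ~0.333
    print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

Staying wins exactly when the initial pick was the car (probability 1/3), so switching wins the remaining 2/3 of the time – which is what the simulation reports, and what intuition flatly denies.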

This intuition that philosophers seem to think allows them to know the truth on matters as bizarre, alien, and massively complex as the situations given in thought experiments is the same intuition that proves itself to be extremely unreliable whenever it can be put to the test. Even in everyday use it is susceptible to confirmation bias, commits the gambler’s fallacy, and makes any number of other errors: there is nothing magical about intuition that gives it power where evidence and reason are not present. If we can’t figure something out through reason and evidence then we can’t figure it out with intuition either. Any argument that relies on “intuitive plausibility” relies on something that is fundamentally unreliable, and thus is a fatally flawed argument.
