Brief update

Hi guys,

It’s been about 3 years since my last post. Since this blog is still getting some views I thought I’d give a brief update on how I feel now about what I wrote.

I dislike that I often slipped into a sarcastic, mocking, or insulting tone. In a discussion, sarcasm, mockery, and insults really have no place at all, but I did it pretty often. It lowers the quality of discussion on both sides. The person who wrote in an insulting tone now feels committed to their view and will be much less willing to admit that they were incorrect. The person whose view was mocked will obviously also feel much less sympathetic to the argument being made. All it does is make it so that no one’s going to learn anything from the discussion, at which point it stops being a discussion and starts being a rant or heated argument. Also, sarcasm, mockery, etc. can serve as a crutch to avoid explaining yourself properly – if a person gets stuck in a spot where it’s hard to explain what they mean, they can slip into saying something along the lines of “obviously X is ridiculous, and Y is correct”, where X often ends up being a strawman.

That said, the nastiness mostly wasn’t as bad as I feared, except for my last post, announcing that I was dropping philosophy, which I wrote when I was very frustrated and upset. Not surprisingly I got some bitterness in return on that post (although most people were much more well-mannered than I was). Obviously it was a mistake – as frustrating as it was for me to feel like there’s no place for me in philosophy unless I become a sycophant, taking it out on people and ranting doesn’t help anyone.

Even so, I’m happier than ever with my decision to drop out of philosophy. I was someone who felt that I was right and everyone else was wrong. Regardless of whether I was right or wrong about that, what’s best for someone with that attitude is to go into a field where right and wrong, good and bad, are measured objectively. It removes a big source of stress & frustration. As it turns out the new area I’m pursuing is going well for me, which is pretty vindicating, although obviously it doesn’t prove anything regarding philosophy.

To be honest I still basically feel that the field of philosophy is pretty far off-base, and so in that sense I still think that “I was right and they were wrong”. People usually equate this to thinking that I’m better than everyone at philosophy, but it’s not quite that arrogant a statement. I think that the way philosophy is practiced now will mostly drive out people who don’t take a certain approach to philosophy, and that approach is IMO quite flawed. So I still think there will be some people who are better at thinking about these kinds of issues than me, I just think most of them will avoid philosophy. Still pretty arrogant, I know.

Anyway, sorry about the rude exit post, best of luck to everyone!

Dropped out of university, ending the blog

I “withdrew” from university this semester, and don’t plan on going back. Although it is unfortunate to quit before I can figure out whether (the vast majority of) philosophers are actually as staggeringly incompetent as they appear to be, or are simply playing an elaborate practical joke, I simply couldn’t stand it either way.

It’s technically possible that I’ll post more to here, but it’s unlikely. In any case, thanks to everyone who was supportive. No thanks to this discipline for turning what should be a worthwhile pursuit into a massive waste of time.

Bye!

What’s the point of Naming and Necessity?

I’ve got some questions about Kripke’s Naming and Necessity. Basically, I’ve only heard good things about this book, so I’ve read it and tried as best I can to understand it, but it seems like complete fluff to me. So here I’ll lay out Naming and Necessity as I understand it, and why I think that “Naming and Necessity as I understand it” is so unnoteworthy. Hopefully someone can correct me if I go wrong somewhere, and shower me with praise if I don’t.

So Kripke is trying to argue that proper names are rigid designators. A rigid designator is something that designates the same object in every possible world.

Isn’t he arguing for a completely circular definition of proper names?

“In these lectures, I will argue, intuitively, that proper names are rigid designators, for although the man (Nixon) might not have been the President, it is not the case that he might not have been Nixon (though he might not have been called ‘Nixon’).”

So he is arguing that Nixon rigidly designates Nixon. The problem should be immediately obvious: he is arguing for a definition of the word Nixon, and in that definition, includes the word Nixon. He is arguing that proper names refer to what they refer to.

When he says that proper names refer to “the same object in all possible worlds,” as best I can tell, this is still a regress, as it is a stand-in for saying that “proper names refer to what they refer to.” This can be seen by how, whenever he talks about a specific case, he cannot explain how we are supposed to determine what this “same object” we are referring to is, and instead resorts to saying that the name refers to what it refers to:

“Don’t ask: how can I identify this table in another possible world, except by its properties? I have the table in my hands, I can point to it, and when I ask whether it might have been in another room, I am talking, by definition, about it, I don’t have to identify it after seeing it through a telescope. If I am talking about it, I am talking about it.”

I don’t believe that arguing for a conception of names that begs the question, then saying “don’t ask that question,” is a philosophically sound argument.

Two asides: one, even this unambitious point – that proper names refer to what they refer to – seems to be false, because it tries to force a binary kind of “it either refers or it doesn’t” mentality onto proper names that doesn’t really exist. I can guarantee you that there will be borderline cases where some people will say that the proper name refers to that object, and others will say it doesn’t (and my intuitive reaction to a lot of these borderline cases is neither affirmative nor negative, further proving the fuzziness of so-called ‘rigid’ designators). And in these borderline cases, it doesn’t make sense to say “well that either IS Nixon or it ISN’T.” It would be more accurate to say that it’s “kind of Nixon.”

Two, it’s true that Kripke makes a halfhearted attempt at presenting an alternative ‘picture’ of how names work, but it is not the main point of Kripke’s lecture (and Kripke himself had reservations about the picture he presented), so I won’t be talking about it.

So anyway, how is this supposed to be a revolutionary piece? As best I can make out, its salient point is that proper names refer to what they refer to. Is there something more to his definition of proper names that I’m missing?

The standard line about why Naming and Necessity was so important is that Kripke’s amazing idea that proper names are rigid designators (i.e. that proper names refer to the same object in all possible worlds, i.e. that proper names refer to what they refer to) has ‘surprising implications’. But I haven’t seen any implication of rigid designators that impressed me.

For example, one thing I’m supposed to be excited about is that this discovery implies that there is necessary a posteriori knowledge.

According to the Stanford Encyclopedia article on rigid designators, this implication is exciting because “prior to discussion about rigid designation, the necessary a posteriori was generally thought to be an empty category.”

Really? People thought that was an empty category? Necessary a posteriori knowledge is extremely easy to construct. You can turn every contingent a posteriori piece of knowledge you discover into necessary a posteriori knowledge with ease. This is because it is true in all possible worlds that in this possible world at this time, that contingent piece of knowledge is true. (e.g., it is necessarily true that “in this world at this time there was a tree standing over there”) Clearly just being “necessary” does not imply that it is any more important than contingent knowledge.

One more example of how easy it is to make necessary a posteriori knowledge: suppose I flip a coin and do not look at what face it landed on. I then decide that the word “morps” will refer to black swans if the coin landed heads, and white swans if it landed tails. Then I look down and see that the coin landed heads. I just discovered the necessary a posteriori fact that morps are black. Again, necessary and a posteriori, but philosophically worthless.

So, for Kripke’s “discovery” to have worth, it has to imply something beyond that there can be necessary a posteriori knowledge. But I don’t see how the discovery that there are rigid designators can imply anything meaningful.

To try to illustrate why I don’t think this discovery can imply anything meaningful, let’s stipulate a possible world which is similar to ours, except that proper names aren’t rigid designators. Instead they are, let’s say, description clusters. So, the people there still use names, but instead of using them like we do they use them as description clusters which don’t designate the same object in all possible worlds (although of course, to them they would be designating the same object in all possible worlds).

In this possible world, is the set of possible worlds going to be any different? Of course not. So, discovering that there are rigid designators cannot equal discovering anything about what is or is not possible, as what is or is not possible is not determined by the idiosyncrasies of our language.

He gave a brief discussion of a possible application at the end, but it seemed to assume a knowledge of identity theory that I don’t have. He never really defined the terms he used, so there isn’t much I can say without further research there. But, again, it is logically impossible for the way proper names work to have implications about what is or is not possible, so it’s not like his ‘discovery’ can e.g. suddenly imply that epiphenomenalism is correct.

So what is supposed to be useful about proper names referring to what they refer to? Can someone fill me in here?

Intuition Pandering vs Actual Moral Philosophy

Moral philosophy has come to rely on moral intuition as an arbiter of truth. Ethical theories live and die by their ability to match our moral intuitions. The result of this strange worship of intuition is that the moral theories that survive are nothing more than restatements of what philosophers find intuitively plausible, with no justification for their foundation besides their intuitive plausibility.

The problem with this approach is shown most clearly in the case of utilitarianism versus deontological philosophies. Utilitarianism, at least in its simplest form, is a perfect example of moral philosophy done the right way – its premise (that happiness is good and suffering is bad) is observably true, rather than justified “intuitively” (which is not any justification at all). Deontological philosophies are the opposite – their premises rely on things like “rights” and “duties” which can only be justified intuitively.

Since philosophers see moral intuition as the sole determinant of what is right and wrong, utilitarianism is of course criticized on the grounds that it can give counterintuitive conclusions. So for example critics will proudly point out that utilitarianism cannot guarantee against slavery. Of course, it isn’t actually possible to construct a realistic situation where utilitarianism recommends slavery – it’s pretty damn difficult to construct an UNrealistic one – but that’s beside the point. Slavery is always wrong, because our moral intuitions say so. So if utilitarianism says it’s wrong 99.999% of the time then that is not enough, because our moral intuition says that it is wrong 100% of the time. And since it’s impossible that moral intuition could be wrong 0.0001% of the time, utilitarianism must be false.

After having so thoroughly proven utilitarianism false, the critic will then go on to argue for his own deontological moral philosophy which, he will proudly note, CAN guarantee against slavery in ALL cases. Deontological philosophy matches our moral intuition that humans should never be enslaved because one of its premises is that humans have a right to freedom, i.e. that humans should never be enslaved. This premise is justified because intuitively it seems obvious that humans should never be enslaved.

Utilitarianism’s ability to go against our moral intuition is not a weakness. It is a strength – no, not even that – it’s a prerequisite for an ethical theory to be able to make any progress at all. If an ethical theory is lauded for its inability to violate our intuitions then it is being lauded for being a reflection of our beliefs rather than for being able to improve our beliefs. It is congratulated for being unable to produce progress.

The progressive nature of utilitarianism versus the sycophantic nature of deontology is borne out in philosophical history. Bentham and Mill, the Big Two utilitarians, both worked against racism and sexism at a time when these were dominant ideologies. Why? Because utilitarianism is an actual ethical theory which can make progress; it helps us move beyond our unjustified prejudices and see what is actually true. It is capable of coming to counterintuitive conclusions and of challenging our beliefs.

Deontology – rights and duty-based ethics – is supposed to be superior to utilitarianism because it guarantees against oppression and prejudice. And yet Kant, the great deontologist, was extremely racist and inexcusably sexist. The greatest deontological philosopher of all time had ethical views that were simply unconscionable. This is because the forces deontological philosophies appeal to – ‘rights’ or ‘duties’  – are invisible (and nonexistent), and so what our rights and duties consist of is ultimately determined by intuition, which of course is where our biases and prejudices show themselves most prominently. As such the only thing a deontological philosophy can truly guarantee is not that it will protect against oppression but that it will conform to your prejudices and biases.

Let’s look at an example from the 20th century. Now that intellectuals have reached an agreement that sexism and racism are definitely bad, a deontologist comes out of the woodwork and condemns utilitarianism for not guaranteeing against slavery… even though utilitarian reasoning is the only ethical system that consistently opposes it. This new deontologist is massively successful – this is John Rawls and his principles of justice.

But John Rawls’ philosophy is not an improvement. It is again simply an iteration of his generation’s status quo, carrying his generation’s biases and preconceptions along with it. His philosophy does indeed guarantee that it will frown upon the kind of oppressions philosophers were already frowning upon. What it fails to do is fight an oppression that is still considered acceptable – in fact, Rawls’ principles of justice actually work to reinforce that oppression. Specifically, he leaves animals entirely out of his original position thought experiment. This is inexcusable. Bentham, writing in the eighteenth century, was already an advocate for animal welfare. Yet Rawls – a modern 20th century philosopher, who criticizes utilitarianism for failing to protect against oppression – cheerfully leaves animals entirely out of his principles of justice. As far as his principles of justice are concerned there is absolutely nothing wrong with factory farms, dog fighting, or any other kind of horrifying mistreatment of animals.

Once animal rights have been established – through utilitarian reasoning – perhaps some fashionable deontologist will condemn utilitarianism for not guaranteeing strongly enough against the oppression of animals; that it allows oppression of animals to occur in an obscure thought experiment that she will fail to define very well. And perhaps she will have dictated this condemnation to an artificially intelligent computer which she cheerfully keeps bound as her personal assistant, because after all, it just doesn’t seem intuitively plausible that computers can have rights.

Originality in Philosophy Courses is Punished Instead of Encouraged

Markers (at least, markers of undergraduate philosophy papers, though this is probably applicable elsewhere) punish originality, rather than encourage it as they should. This is a difficult claim to establish as it’s mostly based on my own experience, but I’ll give two examples of this happening.

First off, I’ll give the only example I have that’s short enough to be reproduced entirely in a blog post. This was the second philosophy assignment I ever did, and the absolute silliness of the comments pissed me off enough that I ended up typing the entire assignment and commentary out in an email to my parents explaining why the comments were silly (which is why I still have it). Little did I realize this strange commentary would not be an isolated incident; it would just be one case of a larger trend. Now, I’m sure some markers are better about this than others, but it is a startlingly consistent finding: the more original my arguments are, the more likely I am to find stupid objections scrawled in the margins when I get the essay back.

For the assignment, we were supposed to make a (step form) argument that rebuts a “friend” who criticizes utilitarianism on the grounds that it recommends disposing of members of society who are not “useful,” e.g. cripples and homeless people. The idea was that we make the easy, boring argument that utilitarianism would not recommend this. Instead I decided to argue that we shouldn’t reject a philosophy solely because it is counterintuitive; this was (and still is) a pet peeve of mine. Here’s the argument I handed in:

  1. The moral intuition of the majority of people in the past strongly supported racism and slavery
  2. The moral intuition of the majority of the people in the present strongly opposes racism and slavery
  3. So, the moral intuition of the majority is not always correct
  4. The friend’s objection to utilitarianism assumes that dispensing with unproductive members of society is wrong
  5. The friend assumes this because the majority of people intuitively agree that dispensing with unproductive members of society is wrong
  6. So, the friend’s objection to utilitarianism assumes the moral intuition of the majority is always correct
  7. So, the friend’s argument assumes something that isn’t true
  8. An argument that assumes something that isn’t true is invalid
  9. So, the friend’s argument is invalid

Again, I don’t want to single out this TA – this is just the only example short enough to fit in a blog post. Now, on to the comments. On steps 1-3 he wrote:

“Confusing. What do you mean by ‘correct’ here? This can defend racism in present just as much as racism in the past is incorrect [sic]. Both are majority”

A very nice example of a startlingly stupid comment that you simply will not find attached to unoriginal assignments. What do I mean by ‘correct’? What I meant by ‘correct’ is what every other person who speaks English means by ‘correct’ when they say ‘correct.’ The moral intuition of the majority could defend racism in the present just as much as it could attack racism in the past? Why on earth does he say that as if it’s somehow a counterpoint that I hadn’t considered? That was my point.

When I complained to him about the comments/mark, he admitted that the argument I’d made was in fact logically sound, but – like all graders I’ve complained to – refused to admit that he may have made a mistake. Instead he said that his mark was fair because, to make my argument more clear, I should have added a premise that states that if something contradicts itself then it is not always correct. Should I also add a premise stating that if all x’s are y and all y’s are z, then all x’s are z? Should I add a premise explaining that one plus one is TWO, rather than THREE?

Anyway, for the next comment he highlighted steps 6 & 7 and said:

“Do not have two conclusions back to back. You need premises that show conclusions.”

Another startling case of marker blindness that never seems to occur except when I am making original arguments. Perhaps there was a better way of structuring the argument (although, since both of 7’s premises need to be supported, I can’t think of one barring using a diagram instead of plain text), but he is somehow under the impression that number 7 doesn’t have supporting premises, even though it quite clearly does – steps 3 & 6. This should be evident to anyone who can remember what they read 30 seconds ago. One symptom of original assignments is that instead of receiving the quite charitable reading that unoriginal assignments do, you receive the kind of confused reading that you would normally only get from five-year-olds. If I’d been arguing something unoriginal he would have said “Oh, I know what he means,” but because the argument is original it is held to a bizarrely high standard that demands everything be made impossibly obvious.

His overall comment was:

“This is a creative effort. However, you need to tighten up the argument. Do not discuss + back up position with concepts that have nothing else to do w/ your argument (i.e. racism + slavery). You need to focus on making a more straight-forward, clear, convincing argument. Come by my office hours w/ your third argument if you want. 78 / 100”

My previous argument had got 84/100 with a minimum of effort, because I did it in a boring and conventional manner. In this one I tried something original and ambitious, put more time in than I had for the first, and overall liked the argument much more than the first – and instead of getting any credit for this originality (besides a reprimand that I should argue in a more “straight-forward, clear, convincing” fashion, which is code for “stop making creative efforts”) I got taken to task for his own logical failings.

The next example of punishment of originality is one from my philosophy of the 21st century class. Basically, around the time I first wanted to write this post, I decided to put my money where my mouth is. At the time I was still sore over the comments I received for an essay in my philosophy of the 21st century class, where my professor claimed I misunderstood Kripke when in fact I had anticipated and addressed his primary objection RIGHT IN MY ESSAY; and yet he made the counterpoint anyway, completely ignoring my… counter-counterpoint. It was surreal. But of course it was a 3000 word essay, so I knew there was no way I could really explain this sufficiently to convince anyone; I would essentially be putting a biased undergraduate’s word against a not-too-biased professor’s, and so the natural and understandable reaction people would have would be to side with the professor.

I had one more essay left to hand in in this philosophy of the 21st century course, so I decided that for the next essay I’d argue something that I didn’t believe at all, in an unoriginal fashion (i.e. relying on arguments used in the course), and see the results.

The second essay question I chose asked me to compare and contrast Mark Johnston’s account of religion with Alvin Plantinga’s; describe them and say how they differed, were similar, and which was more persuasive. I found both of their “accounts” to be pathetic – I’ll write about why later. In any case, I realized that arguing “both these philosophers suck” would be a good way to guarantee some confusing, irrational notes scrawled in the margins. So instead, I decided to make what my TA would call a “more straight-forward, clear, convincing argument” – in other words, I said what my marker wanted to hear instead of what I believed, and relied on recombining the flawed arguments taught in class rather than on using my own good ones. I decided to argue the most “straight-forward” thing possible: one of them would be bad and the other would be good. Plantinga (or, as I misspelled him throughout the entire essay, “Platinga” – not intentional, I just hadn’t read the Plantinga section of “What Philosophers Know” very carefully) would be BAD and Johnston would be AMAZING. Although I hate Johnston’s account of religion with a passion, my conclusion in the essay was that “Johnston’s account of religion is an elegant, compelling one that sidesteps many of the great problems that face religion today.” Note I never said that it was right – “right” seems to be a bad word in philosophy.

Needless to say, I got a higher mark on the essay on Johnston and “Platinga” than I got on my first essay – in fact, it got a higher grade than any philosophy assignment I’ve gotten up to this point. (for what it’s worth, the first essay got 83 and the second got 89)

For the first essay I spent a good amount of time constructing the essay and I read the relevant sections of Gutting’s book multiple times to ensure I was understanding the claims correctly. I was informed that I had misunderstood Kripke. For the second essay I spent less time overall, I made arguments I knew were wrong, and I hadn’t even done all the reading relevant to the topic – in fact I’d done less than half of the reading assigned for Johnston. The final comment was that it was a “Very good discussion of these ideas, and effective well-argued comparison of Plantinga and Johnston.”

If I had to say why original essays provoke bizarre, uncharitable grading, I’d guess that the unfamiliarity of the arguments creates a number of problems. One is that when reading familiar arguments the marker can easily fill in missing gaps in the arguments, and may well do this automatically and subconsciously; but when reading unfamiliar arguments this is impossible, and so unfamiliar arguments will naturally be harder to understand and will naturally seem less clear. Another problem is defensiveness and bias, which again everyone will suffer from to a greater or lesser extent. Another is that unoriginal arguments are to a certain extent immune to criticism; for my second essay I relied on rearranging and recombining the arguments of Gutting, Plantinga, and Johnston, and so he could not really criticize anything I said, as it was what I had been taught in class. Original arguments invite the marker to make whatever random objection occurs to them, whether that objection makes sense or not. Lastly, original arguments are, of course, harder to create than ones that are based on what you’ve already been taught, and yet credit is not given for this even though it really should be – a philosopher who is unable to think for themselves is not a philosopher at all. Philosophy shouldn’t be about arbitrarily recombining arguments you’ve already heard, but this is what is encouraged.

Are Philosophers Giving Up on Reason?

I recently read What Philosophers Know, by Gary Gutting, for one of my philosophy courses. The book is, in essence, an attempt at rebutting skeptics of philosophy who say that philosophers have not established any disciplinary knowledge; the idea is that he will rebut these skeptics by showing what knowledge philosophy has gained. Unfortunately, the book ends up justifying skeptics rather than rebutting them – the entire book consists of Gutting describing a philosophical dispute, reassuring everyone involved that they are entitled to their opinion, and then moving on. His “philosophical knowledge” is simply that “everyone is entitled to their opinion” repeated over, and over, and over.

His reason for this relativistic assertion is that he believes that this is necessary in order to get any interesting results. He argues that since a strictly foundationalist approach (basically, only accepting as premises for arguments points that are either extremely obvious or extremely well justified) fails to produce interesting results, we must loosen these criteria for accepting premises; and Gutting loosens his criteria enough that on every issue he discusses he ends up simply telling each side of the argument that they are “warranted” in holding their position, and that neither one should give way to the other. Attacking premises, or asking for them to be justified, is outlawed in the name of “anti-foundationalism”.

There is a continuum of how stringent the criteria for accepting premises can be. At one end of the spectrum we have radical skepticism, which has criteria so stringent that no knowledge can be established, and foundationalism, which has criteria stringent enough that establishing useful or interesting knowledge is extremely difficult; at the far other end of the spectrum we have relativism, where nearly any belief can be seen as “warranted” because its only objective criterion is that the premise is logically coherent with itself and with the other premises of the argument. Gutting’s book indicates that, out of fear of foundationalism and radical skepticism, philosophers have moved further and further away from a harsh foundationalist viewpoint, and have ended up on the complete other side of the spectrum. The result is that no true philosophical progress can be made, because the premises of arguments are considered virtually immune to criticism – and so arriving at any definitive answer to a philosophical question is impossible, and the use of reason is relegated to the sidelines in favour of subjective arguments based on pathos and “intuitive plausibility.”

Think I’m being too hard on Gutting? That philosophy really isn’t all that relativistic? Let’s look at what philosophers know. Gutting on “what philosophers know” about analytic knowledge: “Knowing that analyticity cannot be defined in essentially different terms means that we must either accept it (or another term in its immediate family) as basic or else reject it.” (p73) Ah, thanks to our reasoned analysis of analytic knowledge, we have established that we must either accept it or reject it. But, which one? Up to you! Take your pick!

Gutting on “what philosophers know” about Kripke’s claims that there is such a thing as necessary a posteriori knowledge: “[Kripke] did not establish that his essentialist theses were true, but that the picture they presented was worthy of attention.” (p78) Do philosophers know whether Kripke’s right? Nope.

Gutting on “what philosophers know” regarding free will, determinism, and so forth: philosophers haven’t established anything for sure, but “we’re going to [hold people responsible for their actions] no matter what. Perhaps we can take this very result as an important piece of philosophical knowledge. Couldn’t it be plausibly claimed that one outcome of philosophical anti-foundationalism, applied to the case of freedom, has been that the practice of holding responsible is in order even without philosophical justification?” (p148) Yes, because that’s knowledge that non-philosophers will find useful. “Hey, laypeople, you know that thing you’re already doing? You can keep doing that.” Thanks, philosophy!

Gutting’s complete unwillingness to favour one view over another is highlighted when he considers Alvin Plantinga’s book “Warranted Christian Belief.” He mentions a possible counterargument to Plantinga’s defense of the warrantedness of Christian belief; that “to the extent that Plantinga’s book supports the warrant of Christianity, [one could use these same arguments to] support the warrant of, for example, conservative Islamic views on the status of women, Catholic views on papal infallibility, and perhaps even Jehovah’s Witnesses’ views on blood transfusions and Aztec views on the need for human sacrifice.” (p117) It seems like he is finally recognizing the flaws of this relativistic approach to philosophy; but no! Gutting is so committed to avoiding telling anyone that they are wrong that he asks, “But why should any of this bother Plantinga or other Christians who rely on his defense?” (p117) Hmm. Yes, why should this bother Plantinga? Maybe because “Aztec views on the need for human sacrifice” are clearly wrong, and so if they can be defended by the same arguments Plantinga uses to defend Christianity, this constitutes a reductio ad absurdum refutation of Plantinga’s defense of Christianity? Or maybe this should just bother Plantinga for the same reason that it should bother any other philosopher, i.e. that it means that if Plantinga’s arguments (which Gutting endorses) are successful then philosophy is doomed to accomplishing basically nothing at all!

Gutting’s idea of a conclusive, useful piece of philosophical knowledge, that he uses in the conclusion to his entire book as an example of what philosophy can do for non-philosophers, is: “no standard popular version of a theistic or atheistic argument makes an adequate case for its conclusion.” (p232) Congratulations, the grand message of philosophers to nonphilosophers is: “Laypeople! Keep doing what you’re doing, but with the understanding that no one is really wrong or right about anything.” And to think that those silly skeptics thought that philosophers might not have accomplished anything worthwhile!

Gutting sets out to show “what philosophers know.” But all that he ends up “showing” is that philosophers have become too afraid of radical skepticism to exercise any skepticism at all, too afraid of having their own false beliefs exposed to expose the false beliefs of others, and too distrustful of their own reason to accept it when it leads them counter to their intuition. Now, maybe Gutting is wrong about the state of philosophy and of philosophers. But if his take on the state of philosophy is accurate then the majority of the philosophical community is using an overly lax approach that fundamentally undermines the philosophical enterprise.

Why I didn’t fill out the Student Evaluations

So, I realize that student evaluations of their professors are, in principle, great. Still, I didn’t fill out any, for a number of reasons. Most of these were ones that can’t be avoided by those handing out the evaluations – for example, laziness, exams & papers due, and so on – but one reason stuck out, as it was a stupid, easily fixable problem.

This evaluation consisted entirely of rating our professor’s performance in one aspect or another (e.g. “How would you rate your professor’s preparedness for class?”) except for one space at the end where you could write any additional comments. Not necessarily bad, except that these were the 5 ratings we had to choose from: Excellent, Very Good, Good, Fair, and Poor.

What you’ll notice is that 4 out of the 5 options are good ones. In other words these questions are designed to encourage students to lie about our professors’ performance, presumably in order to make the professors feel good. This is one of my pet peeves in education – I hate this “everyone is wonderful at everything” mentality that sometimes crops up in evaluation. Sure, only give criticism that is constructive, and sure, try to point out what was done well in addition to what was done badly; but if you’re going to flat-out lie (say something was good when it was mediocre, for example) then you might as well not evaluate at all.

What’s more, because of this lopsidedness, the questions introduce a huge amount of ambiguity – if I answer honestly, and circle “fair” when the professor did a fair job at something, will this be interpreted as actually meaning he did a poor job? It’s the second worst option available, after all. If I answer honestly while most other people answer under the assumption that “fair” really means “poor,” will I be punishing my professors solely for having me in their class? (these evaluations are, according to an email from my psych professor, “the  major component in the evaluation of teaching for decisions regarding promotion, tenure, salary increases, and teaching awards” – yikes!)

If I answer honestly my answers will predominantly be in the bottom 3 – Good, Fair, and Poor should cover something like 70-90% of my answers unless the professor was just outstanding. This isn’t a huge problem, but it’s just so easily fixable (change the options to “Very Good, Good, Moderate, Poor, and Very Poor” and the problem is solved) that it really frustrates me to find it in a university, which is supposed to be a place where people exercise things like “judgement” and “critical thinking.” I don’t know how common this kind of lopsided evaluation is in universities, but wherever a lopsided evaluation is present it will make results unreliable and make thoughtful students not want to respond.

Refuting the Zombie Argument, part II

So, we’ve established in part 1 that, among other things, the zombie argument can only have any weight if the conceivability of zombies can be justified. Of course, there is a justification offered. The argument goes roughly like so:

  1. You cannot refute the proposition that zombies are conceivable
  2. Zombies intuitively seem to be conceivable
  3. Therefore, zombies are conceivable

The reason this argument is considered a semi-valid one is that most philosophers accept that if a proposition “intuitively seems” to be true, and it has not been disproven by reason or evidence, this means that that proposition is true or at least probable.

Philosophers’ willingness to go from “we intuitively feel X to be true” to “X is true” or “X is probably true,” even on matters as complicated, misunderstood, and bizarre as philosophical zombies, is a completely unjustified arrogance. One thing you have probably already noticed if you’ve read my other pieces is that I am very arrogant, at least philosophically speaking – and yet I still would never even DREAM of proposing that my intuition is some magical window into the way things really are; that it can determine deep metaphysical truths on matters which (like zombies) my conscious, reasoning mind cannot even begin to understand. Yet in philosophy this “magic window into fairyland” model of intuition (that is the accepted nomenclature) seems to be accepted by basically the entire discipline – if a philosopher can find any way at all of making an idea “intuitively plausible,” then the argument is considered compelling, no matter how tortuous or absurd the method of achieving intuitive plausibility is. And if a philosopher can find some way of making an idea “counterintuitive” then this is considered a compelling objection. This blind faith in our own intuition is both arrogant and completely unjustified.

The first reason this faith in our intuition is unjustified is that our intuition cannot somehow generate information that we do not already have. It follows that if our intuition does not have the right information to work with then any conclusion it gives will necessarily be based on bad information (what else would it be based on?) and therefore be unreliable. There is still very little understanding of the brain and how it produces (or does not produce) consciousness – and yet the zombie argument asks you to picture a working brain and decide whether it is sufficient to produce consciousness, and philosophers happily close their eyes and say “yes, I am picturing a working brain right now,” when in fact what they are picturing more likely resembles an opaque, grey, and squishy box, rather than an actual working brain.

Put another way: if you can accurately picture a zombie, complete with a working brain with all the massively complicated processes and subprocesses that the brain consists of, then you and your miracle of an intuition have surpassed all of humanity’s work on trying to understand the brain. On the other hand, if your picture of a working brain is massively incomplete, inconsistent, and on many counts just plain wrong – as it inevitably will be, due to humanity’s extremely limited understanding of the brain – then why would you expect your intuitive conclusions from such a picture to mean anything? How can your intuition give a good conclusion when it is working off of incomplete, inconsistent, and inaccurate information? The only way you could believe that your intuition can give a reliable answer in a situation where reason and evidence have no claim is if you believe that intuition has supernatural properties.

Perhaps you will object to my assertion that there isn’t enough information about the brain to come to a conclusion regarding the zombie example. You may say that you don’t need a completely precise picture to draw conclusions – you don’t need to conceive of a universe in order to conceive of baking an apple pie; in fact, you don’t even have to know how an oven works beyond the fact that it’s that hot thing the pie goes in. So why should you need to have a precise understanding of the brain to decide whether or not it is sufficient for consciousness? Isn’t just our everyday commonsense regarding the brain enough?

The problem with this is that I’m not asking you to invent a universe, or even know how an oven works. I’m just asking you to have some vague idea what the hell you are talking about before you start talking – and if this is impossible then DON’T TALK. If we had a broad, schematic understanding of what the brain does then we would be in good shape to talk about whether it is sufficient for consciousness, even if we didn’t know the physical details; but we don’t, and so we can’t. To return to the pie example, if someone really had no clue how to bake a pie beyond “well you mix some stuff together then put it in a hot thing,” would you really trust that person’s “intuitive feelings” regarding whether a specific ingredient is included? Of course not. Similarly, why would you trust your “intuitive feelings” about whether the brain is sufficient for consciousness when you don’t know what the brain does?

Anyway, relying on our intuition isn’t just bad because we don’t properly understand the situation. Even setting aside the fact that humans don’t understand the brain well enough to draw conclusions about it, philosophers are still wrong in assuming that their intuitive feeling is indicative of truth. Even if we had a good understanding of the brain, it would be lazy and unreliable to say “well intuitively I think this is sufficient for consciousness” or “intuitively it seems like this isn’t sufficient for consciousness” – we could use that understanding, in combination with a thing called “reason,” to actually determine reliably whether or not the brain was sufficient for consciousness.

This intuition that we expect to determine whether the human brain is sufficient for consciousness is the same intuition that can’t figure out the Monty Hall problem: nearly everyone who first encounters the Monty Hall problem finds it intuitively obvious that the odds are the same whether you switch or not, whereas reason and evidence both inarguably show that it is best to switch. I’m curious; why haven’t philosophers used this counterintuitive finding to refute mathematics yet? “You mathematicians’ ‘axioms’ lead to a counterintuitive result; if your axioms are true then it is best to switch in the Monty Hall problem, but it is intuitively obvious that it doesn’t matter whether or not you switch.” Of course, the reason no philosopher has done this is because they know that here their intuition is in the wrong; philosophers easily recognize that their intuition is wrong when it can be demonstrated so, but they seem to think that if they can just find an area where you can’t prove their intuition wrong beyond all possible doubt, their intuition shifts from being a quick, useful bundle of heuristics into being some sort of magical revealer of truth.
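
If you don’t want to take the switching claim on intuition or on authority, it’s easy to check empirically. Below is a minimal Python simulation (my own illustrative sketch, not anything from the original argument) that plays the game many times, both staying and switching:

    import random

    def monty_hall_trial(switch):
        """Play one round of Monty Hall; return True if the player wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        first_pick = random.choice(doors)
        # The host opens a door that hides a goat and is not the player's pick.
        host_opens = random.choice([d for d in doors if d != first_pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            final_pick = next(d for d in doors if d != first_pick and d != host_opens)
        else:
            final_pick = first_pick
        return final_pick == car

    trials = 100_000
    stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
    swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
    print(f"win rate if you stay:   {stay:.3f}")   # comes out around 0.333
    print(f"win rate if you switch: {swap:.3f}")   # comes out around 0.667

Running it gives a win rate of roughly one in three for staying and two in three for switching, which is exactly the counterintuitive result that reason predicts.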

This intuition that philosophers seem to think allows them to know the truth on matters as bizarre, alien, and massively complex as the situations given in thought experiments is the same intuition that proves itself to be extremely unreliable whenever it can be put to the test. Even in everyday use it is susceptible to confirmation bias, commits the gambler’s fallacy, and makes any number of other errors: there is nothing magic about intuition that gives it power where evidence and reason are not present. If we can’t figure something out through reason and evidence then we can’t figure it out with intuition either. Any argument that relies on “intuitive plausibility” relies on something that is fundamentally unreliable, and thus is a fatally flawed argument.

Refuting the Zombie Argument, part I

Recently while I was reading Philosophy, etc., I noticed that one of his featured posts was on “Understanding (zombie) conceivability arguments,” and so I took a look at his summation & two-part defense of (zombie) conceivability arguments. He argues that dismissive reactions to the zombie argument are generally due to misunderstanding it, and that if you accurately understand the zombie argument you will realize that it forces materialists to, at the very least, make serious concessions or commit themselves to ad hoc arguments. Now, I’d always been extremely dismissive of the zombie argument, and so I was curious to hear why I shouldn’t be; but after reading his pieces, I remain as dismissive as ever.

He never addressed the central problem of the zombie argument, which is what makes the zombie argument representative of one of the most embarrassing tendencies in philosophy: it uses the assumption that intuitive attractiveness indicates truth (or “probability” or “plausibility,” which are just weaker forms of the same error). Here that assumption comes into play as the basic argument for accepting the premise that zombies are conceivable; it roughly goes, “it intuitively seems like zombies are conceivable, and you can’t prove that they aren’t conceivable, therefore they are conceivable.”

Unfortunately, before I can explain why this assumption that intuitive attractiveness always indicates truth is so wrong there are some preliminary matters to settle; so this first post will preempt a few possible objections and clear up a few possible misconceptions.

First, when I say “conceivable” I mean it in the sense that Richard Chappell describes in his summary – that by saying something is conceivable I do not just mean something we can imagine or something that might be possible “for all we know” (as we can conceive, in either of these senses, of countless things that are in fact logically impossible – so saying zombies are conceivable in either of these senses is irrelevant), but instead, something that is conceivable is a coherent, logically possible concept.

A quick note on the intuitive attractiveness of zombies – personally I don’t find the conceivability of zombies intuitively obvious at all; in fact it seems very counterintuitive to me. But I don’t need to press that point, and so for the purposes of these posts I will assume that zombies intuitively seem like a plausible idea.

A possible objection I wish to preempt is that even if intuition is somewhat unreliable, it is sometimes “all we’ve got” – that it’s needed to determine what are good premises for arguments and whatnot. The idea is that foundationalism – the attempt to use only self-evident or similarly obvious and easily agreed upon premises – has failed to produce any useful or interesting results, and because of this failure philosophers need to use more controversial premises, based more on intuitive plausibility (“zombies seem conceivable”) rather than extreme obviousness (“if x = y and y = z then x = z,” or, “the sun will rise tomorrow morning”). Now, I agree that there’s a balance to strike – if you only accept the most obvious, uncontroversial premises imaginable, you’re not going to get very far. But philosophy seems to have gone very far towards the other end of the spectrum, to the point where the results may be interesting (although often they aren’t), but they are also completely wrong. What’s more, I think that much more can be done with extremely obvious premises and sound reasoning than philosophers seem to think; but that’s a matter for another post.

The second possible objection is that no good justification needs to be given for the zombie premise because, as David Chalmers says, “in general, a certain burden of proof lies on those who claim that a given description is logically impossible.” (Chalmers, The Conscious Mind: In Search of a Fundamental Theory, page 96) In other words, Chalmers thinks that because he is arguing simply that zombies are “logically possible” or “ideally conceivable” the burden of proof is no longer on him; that the physicalists have to disprove or cast doubt on the idea of zombies or else physicalism is somehow refuted by default. This is not the case. The fact that there is a proposition (“zombies are conceivable”) which, if true, would render dualism true, is no more evidence for dualism than the fact that there is another proposition (“zombies are inconceivable”) which, if true, would render dualism false, is evidence against dualism. The only way either proposition can be used as evidence is if a convincing argument can be made for accepting it.

Chalmers seems to mistakenly think that conceivability is somehow “more likely” than inconceivability, as he thinks that the burden of proof is on the one arguing for inconceivability. This is obviously not the case; the negative (here, inconceivability) always outnumbers the positive (here, conceivability). Inconceivable ideas are by definition less restricted than conceivable ones; conceivable ideas have to conform to the rules of logic while inconceivable ones do not. There are more irrational numbers than rational, more possibilities than actualities, and there are more inconceivable ideas than conceivable ones.

Since it is easier for a proposition to be inconceivable than conceivable, if we are to accept that a given proposition is conceivable then we need a reason for doing so. The burden of proof is on the proponents of the zombie argument, not on the materialists. Of course, Chalmers (and other zombie proponents) do offer a justification: an intuitive one. I will explain why this justification is a bad one in part 2.

Problems With How Philosophy is Taught

As I’ve hinted before, I have a lot of problems with how philosophy is taught. The biggest one is that students are taught a vast amount of philosophy that is just wrong. I know it is difficult in philosophy to prove something objectively correct, but what we are taught should at the very least not be demonstrably wrong and/or worthless… and yet much of what I’m taught can’t stand up to even this very easy criterion.

For example, in an Ethics class we were taught Kant’s views on morality. He argues we should act only on maxims that we could will to be universal laws. Does anyone take this seriously? Well, I know some people do, but why? I just hate how completely irrational the idea of “it’s only right to do something if it’s right for everyone to do it all the time” is – circumstances play a huge role in everything else we do, so why the hell shouldn’t they in ethics? Face it, consequentialism is the only reasonable ethical system… If we aren’t actually doing good then how are we being good? Furthermore, any moral system that recommends helping murderers and Nazis has got some serious issues.

Another, much less controversial example: in my Contemporary British & American Philosophy class we were taught about Russell’s thoughts on “mnemic causation.” Mnemic causation is the idea of an event being caused by a past event; for example, if a kid acts scared of fire now because he got burned by fire 2 months ago, then that is mnemic causation (as opposed to a person throwing a bowling ball, where the movement of the bowling ball is caused by something directly preceding it). Russell tries to argue that mnemic causation may be “fundamental”; that when a kid acts scared of fire, this may not be caused by his brain state (which was altered when he got burnt) but may somehow be caused directly by the past event; that his getting burnt by fire is somehow reaching through to the future and making him act scared of it. He says he leans against the possibility himself, and stresses that it is only a possibility, but he nonetheless devotes a whole lecture to it and tells us about how it could make psychology fundamentally different from physics.

Why should I have to learn about this? Mnemic causation is absurd; there’s no more reason to posit it than there is to posit any other kind of supernatural influence on our behaviour, and it’s been basically completely disproven by now.

Another example: In a Philosophy of the Environment class, we learned about “Deep Ecology” which argues that our sense experience (e.g. walking into a forest and feeling that it’s a beautiful, interconnected thing) is “more fundamental” than science, and because of this our sense experience should be trusted over science. We should not have to learn this. I don’t care if there are philosophers who really uphold this view: it’s still ridiculous and obviously wrong. Even if we concede that sense experiences are “more fundamental” than science, that has nothing to do with which is more accurate.

And of course we’re still taught Aristotle and Plato for some reason I can’t fathom.

So, why are we taught wrong philosophy? Usually because professors insist on teaching by philosopher, rather than by philosophy. I realize this is necessary to some extent because a philosopher’s ideas will tend to rely on and assume knowledge of their other ideas, but I refuse to accept that because of this we need to hear every single part of their philosophical picture; some parts of it will inevitably just be wrong (nobody’s perfect) and the rest of their picture will stand up just fine with the wrong parts cropped out; and if it doesn’t, then the entire picture is flawed anyway in which case it shouldn’t be taught at all.

There are three primary arguments in defense of teaching philosophers, wrong bits and all: that these philosophers should be taught because they are historically important, that the philosophers may be wrong on some points but they can still help “teach us how to think”, and that we don’t read philosophers for their arguments per se.

Given how often I hear the first reply, I feel I have to address it – that, for example, Plato should be taught because he was one of the earliest philosophers whose works survive and he dealt with many of the problems philosophy deals with today, or that Kant should be taught because of the profound influence he has had on philosophical thought. I am disappointed that I need to reply to this at all; it should be obvious that being first or being influential does not make what someone has said any better or worse than it actually is. It may be important from a historical perspective, but if their sole importance is historical then that is no reason to spend years learning their ideas or reading their texts. Physics students do not spend years studying Aristotle’s account of physics. I’m sure Kant was influential, but this doesn’t make his ethics any less silly – and if there is a modification or development of some of his ethical ideas that isn’t silly then sure, teach me that, but you can safely leave out the bits where Kant claims that circumstances don’t matter, or where Kant says that lying is always wrong, and so on.

The second reply is that learning these wrong philosophers can help us learn how to think. This is wrong. You get good at something primarily by learning from or engaging with people who are already very good. This is universally true. From violin to chess to soccer to physics, the teachers should always be ones who have a good understanding of the subject (even if they aren’t able to put that into practice well themselves, e.g. a coach who isn’t as fit as his players). How could learning and discussing bad philosophy possibly be a better teaching tool than learning and discussing good philosophy? I don’t improve at chess by playing against someone significantly worse than me, and similarly I don’t feel like I improve at philosophy by being repeatedly confronted with crappy ideas, especially not when I’m forced to learn all the intricate details that stem from the initial, obviously crap idea. Maybe if we just learned the bad idea and then learned why it’s wrong the exercise could have some small use, since it could help us avoid common philosophical mistakes; this would be a lot faster too, since we’d get to skip all the intricacies that are based on a wrong foundation.

Another problem with teaching philosophers instead of philosophy is that students are forced to read from the original text of philosophers even though they were usually not good writers (by modern standards). For example, pretty much any philosopher born before the 19th century goes on, and on, and on, and on without getting to the point (or simply repeats the point about 8 times). Obviously it isn’t their fault; they are a product of their time. But since we can massively condense their work without losing any content, please, let’s do so. Forcing students to read 500-year-old English is no more excusable than forcing them to read Hegel in the original German. Plato can take ten pages to make an argument that should take five sentences; if we are keeping up the pretense that we are actually reading Plato for his philosophy then there is no good reason not to simply read summaries of his important arguments and ideas, rather than strict translations which preserve his excessive wordiness. If there are differing interpretations that are both good arguments then give a summary of each interpretation; not hard. And if you think that it’s important for philosophy students to be good at interpreting ancient, unclear writing then you have lost sight of what philosophy is supposed to be about.

And this leads us to the next problem with the approach to teaching philosophy, which is that we don’t actually read Plato for his arguments, do we? We read him like literature (in fact, my Ancient Greek Philosophy professor has even said this in class; although I’m sure he’d disagree with what I’ll say next). We don’t read him for the individual arguments he makes, which tend to be quite bad, but instead, because Plato is vague and extremely prolific in the ideas and frameworks he proposes, we see what we can read into Plato, like a game. I find it extremely troubling when I hear that there are disagreements over how to interpret some philosopher or other. The focus should not be on what the philosopher thought, nor should finding a unique interpretation of a work be considered worthwhile in itself. The point of philosophy is to come up with useful ways of understanding life, the good, and the world we live in, not to waste time wondering whether Hegel was right wing or left wing.

Ultimately it just isn’t worth it to put gargantuan effort into finding a way of interpreting Plato that seems pretty. The proper format for arguing philosophical ideas is broadly like this: you tell people your idea and why you think it’s good, then critics tell you why they think it’s bad, then you respond to that. Is there going to be any of this in an interpretation of Plato? No. It suddenly becomes more about finding interpretations of what Plato was saying than about whether what Plato was saying is right, which is when we stop being philosophers and start being literary critics. As long as an interpretation isn’t obviously wrong, objecting to it seems to be considered unsporting. Ultimately the ideas that arise out of this kind of approach will be largely ineffectual, more aesthetic than practical.

So if I had to sum this up: Stop teaching me wrong, obsolete philosophy, and stop teaching it from the original text, which in addition to being wrong and obsolete will also be about five times as long as it needs to be and not nearly as clear as it could be.
