Thursday, July 1, 2010

The Most Basic Philosophy Question

I'm not sure if this actually would be the most basic philosophy question (a more basic one might be "Why do philosophy in the first place?") but it is certainly a fundamental one: namely, do we even ask our questions the right way?

If you read a bit further in the link to the trolley car problems that I put up, then you'll notice that there is a lot of discussion on exactly why the moral answer changes through apparently irrelevant changes in the question. For example, quoting wikipedia:
Unger also considers cases which are more complex than the original trolley problem, involving more than just two results. In one such case, it is possible to do something which will (a) save the five and kill four (passengers of one or more trolleys and/or the hammock-sleeper), (b) save the five and kill three, (c) save the five and kill two, (d) save the five and kill one, or (e) do nothing and let five die. Most naïve subjects presented with this sort of case, claims Unger, will choose (d), to save the five by killing one, even if this course of action involves doing something very similar to killing the fat man, as in Thomson's case above.
What's interesting about this difference is that most people will avoid choosing to kill the fat man above when the choices are phrased as being between killing the fat man or letting 5 die. But when you phrase the question this way, with a continuum of answers, people seem to choose killing the fat man (or a situation analogous to killing the fat man). Does the moral calculus change? Not from what I can see. But the answers do.

As the wikipedia article states, Unger draws the conclusion that these questions don't enlighten moral questions, but rather just explore psychology. I don't think we need to abandon the philosophical merit of these discussions (after all, there's an argument for the idea that morality is derived from the way human beings are inclined to treat each other and wish to be treated by others), but Unger's problem is illuminating. When we do philosophy, we often try to get at deep truths. We do this by trying to discover a pertinent question and drive towards it, isolating it and discovering why it's important. Philosophy is packed full of such thought games, ranging from the trolley car problem to discussions about what it would mean to copy one's self and how that relates to the identity of the mind.* But all these thought games, while allowing us to drive deep at one aspect of a problem, have another aspect which we need to keep in mind as philosophical inquirers: the limitation of perspective.

When we use a thought game, we explicitly throw out variables and questions that we don't want to think about in a given experiment. One example (the one brought up by my interlocutor) is this: "Why push the fat man? Why not jump off the bridge yourself?" My immediate response was "Well, pretend you can't jump off; we're trying to focus on the difference between choosing to throw the fat man off and letting the workers die." But in doing that, I was shutting down a path of inquiry. In deciding not to explore that other option, I was getting closer to discussing what I found interesting about the thought game, but I was also distancing myself from the reality of the choice. Another example of where this has happened is when I was discussing a problem of voting (do you cast your votes for the policies which would best benefit you, or do you cast them for the policies which you feel would best benefit society?). Talking with my brother, I asked him that question and he responded with "Well, you cast your votes for yourself, since most people are ill-equipped to know what's best for all of society but better equipped to understand what's best for themselves. If everyone votes for their own interest, then it will aggregate out to be best for society." This didn't address the problem I wanted to talk about, so I responded with "Well, just ignore that for a moment. As a voter, are you supposed to use your vote as a shareholder of society's welfare, or as a representative of yourself?" To which my brother replied, "What's the point of the distinction?" Obviously, if you don't accept as a premise the idea that your vote could differ depending on whether you vote for yourself or for society, then the distinction will seem pointless. My brother's first answer showed that he didn't accept my premise, and thus he couldn't see the point of the rest of my inquiry. The restrictions I wanted to put on it seemed too artificial.

It frustrated me at the time, but I realized that his question, and my reader's question about the trolley problem, are both very important. What they serve to do is remind the philosopher that an inquiry can be made to look at a problem from any perspective the inquirer chooses, given arbitrary parameters. I can certainly eliminate possible perspectives as much as I please, but each time I do that I distance myself further and further from a broader understanding. Moral choices are always more complicated than "kill five men or kill the fat man." We as philosophers need to be careful that we don't lose sight of that complication in our search for the fascinating nuggets of philosophical dilemma.

*This is a little bit complicated, but the idea is to refute Locke's theory of identity, which is based on the idea that a person is identical to the person they remember being. The idea for Locke is that a person's identity is not a matter of their body being the same (as person X grows older and the body changes, does person X not share a continuous identity with himself? While some would argue for a radical identity in which person X is a different person every time anything changes about person X, most believe that there is some kind of continuity which transcends physical change). If it's not a bodily matter, then perhaps it has to do with memory? Locke argued that since you remember being you in the past, you're the same person (allowing for changes). However, one way to argue against Locke here is to propose the thought game of cloning. If I were to be put in a replicator machine and a clone of me were to be made, having all my memories and physical characteristics, we would both be identical to the me before cloning under Locke's rules, but we couldn't be identical to each other, since we would be having different, unshared experiences now that we were clones. This is a logical impossibility, as can be shown using variables. Call me before the replication X. Call the two me's after replication Y and Z. If both Y and Z are identical to X, that's represented as Y = X and Z = X. But then Y must be identical to Z, which can't be true if we're separate clones having separate, unshared experiences. Since Y ≠ Z, the contradiction disproves Locke's memory criterion.
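The variable argument above is really just an appeal to the symmetry and transitivity of identity. As a sketch (treating personal identity as ordinary logical equality, which is itself an assumption the clone case is meant to stress-test), the contradiction can even be checked mechanically, for instance in the Lean proof assistant:

```
-- Assumption: personal identity behaves like ordinary equality (=),
-- which is symmetric and transitive. X is me before replication;
-- Y and Z are the two post-replication people.
example (Person : Type) (X Y Z : Person)
    (hY : Y = X)      -- Locke: Y remembers being X, so Y = X
    (hZ : Z = X)      -- Locke: Z remembers being X, so Z = X
    (hneq : Y ≠ Z) :  -- but the clones are distinct
    False :=
  -- From Y = X and X = Z (symmetry), transitivity gives Y = Z,
  -- contradicting Y ≠ Z.
  hneq (hY.trans hZ.symm)
```

The proof goes through only because equality is transitive; a Lockean reply would be to deny that "is the same person as" is a transitive relation in this sense.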


  1. Moral Choices are more complicated than killing 5 vs 1. However, I am not convinced that a moral theory should (has to?) be more complicated. If we are looking for a Normative Moral rule (i.e., An Act is moral iff and because blah blah blah), then that theory should apply and give guidance in all situations, even those with severely limited options like the trolley car. Thought experiments, while not directly providing us with the three blah's, can provide damning evidence against our proposed moral rules. Part of the appeal of Utilitarianism is the absolute simplicity of it--"How can the best possible action be immoral?" However, I think the trolley thought experiment does contradict any hope of moral theory being as simple as Utilitarianism (although there is an argument to be made that Utilitarianism is right and our commonsense is just dead wrong). But simplicity is still needed; any moral theory that is as complicated as moral reality can't hope to give guidance. Limiting of perspective in thought experiments, then, is exactly what we want while looking for a moral theory which we assume has to apply in all cases. By picking the most extreme examples, we test the limits of a theory, while still hoping that the correct normative moral theory has no such limits.

  2. I should note that I did not mean that thought experiments were useless. Quite the contrary, they're probably the most useful tool in philosophy for giving theories the smell test. Like you say, you need to simplify the situation in order to see what you're testing. It is also clearly true, I think, that you need to be keenly aware that the parameters you set, and the way you set them, affect your experiment. Giving parameters to a thought experiment is like adding premises to an argument: you absolutely need to do it in order to make your argumentation work, but you also need to recognize that the more things you take for granted, the harder it will be to relate what you find back to the real world.

    With the trolley car example, it's clear that the choices you offer give clear clues as to what your answer should be. While you might feel like you're simplifying your question by only offering two options (kill the fat man or let the workers die), you might very well be hiding your true answer (which the other options of killing five and saving four, and so on, would reveal). Limiting your options is the only way to keep things simple enough to understand, but that understanding isn't going to be worth much if what you're understanding is merely your own experiment. It should be able to relate to some kind of real-world truth. Hence, a philosopher needs to be very careful when designing a thought experiment. They're dead useful, but they can also be misleading.

  3. Also, for another post, there are ways that Utilitarians deal with the trolley car problem aside from just biting the bullet, such as Rule Utilitarianism, which accepts Utilitarianism in general but has certain rules that aren't subject to cost-benefit analysis (like you can't ever kill anyone without their permission).