Morality and Probability September 30, 2012 | 08:33 pm

I’ve been noodling around with a response to Robert about god-less morality for a while now, and I’ve come to the conclusion that a large part of my problem is that the subject is so large that condensing the whole thing down into a single blog post is impossible. So I’ve decided to start splitting it up into multiple blog posts (as the muse moves me), each dealing with a small corner. And I’ve decided to tackle the “gotcha” question for utilitarian morality: whether you would kill someone if you knew that you’d make $1 doing so, and that there were no other consequences of said action.

The idea here is that, to make a choice like this, you’d draw up a little table:




Action                 | Result
Perform action         | B1 – C1
Don’t perform action   | B2 – C2

Here, “perform action” means “kill this person”, Bn is the benefit of performing (or not performing) the action, and Cn is the cost. Simplistic utilitarian philosophy says that if B1 – C1 > B2 – C2, you should perform the action. The trick question sets B1 – C1 = +1 dollar and B2 – C2 = 0, and thus purports to show that those with utilitarian morality are evil people who’d kill someone for a single lousy buck.
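
To make the bookkeeping explicit, here’s a minimal sketch of that rule (Python, with the hypothetical “gotcha” numbers plugged in; nothing here is a real estimate):

    # Simplistic utilitarian rule: act if the net result of acting
    # beats the net result of not acting.
    def should_act(net_if_act, net_if_pass):
        return net_if_act > net_if_pass

    # The "gotcha" numbers: B1 - C1 = $1 for acting, B2 - C2 = $0 for passing.
    print(should_act(net_if_act=1.0, net_if_pass=0.0))  # True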

The first thing I’d like to point out is that religious moralists are also utilitarian moralists; they just have some additional potential costs and benefits added to the equation, based on the reactions of their deity, that atheistic utilitarian moralists don’t have. The argument is that no earthly reward can compensate for an eternity of punishment, so $1, $1 million, $1 trillion, it doesn’t matter what the reward is, it’s not worth it.

But deity-based costs can change as well. One can fairly ask religious people whether they would kill someone for $1 if they knew that God wouldn’t punish them for doing so. That’s equivalent to the question first posed in this post. Indeed, you can go much further along this spectrum, and ask whether they would kill someone, even in the face of extreme corporal cost, if they knew God would reward them for doing so. What earthly, temporary punishment isn’t worth suffering for an eternal reward? Abraham proved his faith by being willing to kill even his own son for eternal reward. And, more recently, this is exactly the logic that suicide terrorists use. If God wanted you to blow yourself up in a crowded marketplace, and would reward you for all eternity, would you do it?

Of course, there’s a key word, with everything it implies, that I’ve been throwing around with impunity: “know”. We’ve been assuming that the benefits and the consequences of both committing and not committing the action are known with absolute certainty. Once we open up the possibility that we might be wrong, the situation becomes a little more complex. Our table above now becomes:




Action                 | We’re right | We’re wrong
Perform action         | B1 – C1     | B3 – C3
Don’t perform action   | B2 – C2     | B4 – C4

And if the probability that we’re right is P (as a fraction), and thus the probability that we’re wrong is (1-P), the equation to determine whether we should perform the action becomes: P*(B1 – C1) + (1-P)*(B3 – C3) > P*(B2 – C2) + (1-P)*(B4 – C4). We’ve been implicitly assuming that P = 1, and thus that (1-P) = 0, in which case this equation simplifies to the one above. But once we accept the possibility that P can be less than 1, that there is a possibility that we’re wrong, the equation literally changes.
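
Here’s the same sketch extended with that equation (Python again; the probabilities and the size of the downside are illustrative stand-ins, not real estimates):

    # Expected-value version: weight each column of the table by the
    # probability of being right (P) or wrong (1 - P), then compare rows.
    def expected_net(p_right, net_if_right, net_if_wrong):
        return p_right * net_if_right + (1 - p_right) * net_if_wrong

    act = expected_net(p_right=0.99, net_if_right=1.0, net_if_wrong=-1_000_000.0)
    dont = expected_net(p_right=0.99, net_if_right=0.0, net_if_wrong=0.0)
    print(act > dont)  # False: even at 99% confidence, the downside swamps the $1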

So let’s take a look at the gotcha question a second time, using the full equation with the assumption that we could be wrong. If we’re right, things remain the same: B1 – C1 = 1 dollar, and B2 – C2 = 0. Furthermore, we assume that B4 – C4 = 0 as well; if I pass on killing the person and I’m wrong, there are still no consequences. Now, let’s consider B3 – C3, the result if I kill the person and am wrong about there being no repercussions. In this case, even absent any divine retribution, I’m looking at serious negative consequences: a trial for certain, followed probably by either a lengthy and unpleasant prison stay, or possibly even the death penalty. B3 – C3 is a very large negative number.

So it all comes down to P. If P is close enough to 1, if I have enough confidence that I’m right, then the utilitarian argument is in favor of committing murder. Note that I would argue that it is impossible for P to be equal to 1. I can’t know for certain that, within my remaining life expectancy, we won’t suddenly develop remote time-viewing technology, and that once the historians and paparazzi have had their field day, the cops won’t decide to go through their backlog of unsolved crimes and disappearances to determine what actually happened, and suddenly I’m up for murder one after all. Current physics says such a capability is impossible, but current physics doesn’t have a workable theory of quantum gravity, and has absolutely no idea what 96% of the universe is made of (the dark matter and dark energy). So neither you nor anyone else can rule out such a possibility. And such technological leaps are happening: there has been a spate of rape and murder convictions recently in very old cases, based on newly discovered DNA evidence; evidence that, at the time the crime was committed, wasn’t known to exist. So people who thought they had literally gotten away with murder are now discovering that they were mistaken.

The next thing to understand is that we humans suck at probability. We suck badly enough in abstract, more or less purely mathematical situations where P can be calculated quite accurately: ask any serious poker player how often suckers draw to an inside straight. But we suck even worse in amorphous, real-world situations where P cannot be calculated exactly, like the probability that we won’t leave any incriminating evidence at the scene of the crime. We wildly over- and underestimate probabilities all the time. This is because the heuristics our brains use to calculate probabilities, which served us well on the Serengeti, fail spectacularly in the modern world. Witness how many people are terrified of flying, when it’s much more likely you’ll be killed driving to the airport.

Given that P cannot be mathematically deduced, and given that our intuitions about probability are prone to wild inaccuracies, the only logical course of action is to assume that P, the probability that we’re right, is much lower than we think. This will tend to drive our decisions towards choices that avoid catastrophic downsides if we’re wrong, even to the point of missing potential opportunities: passing up the chance to make a quick buck, in order to avoid the possibility of being hauled up on murder charges. A rule of thumb might be that any P value greater than about 0.83 (5/6) should be treated as 5/6; that you can’t be 99% sure about anything. At which point, the gotcha question has a simple, obvious answer: any downside of being wrong worth more than about $5 already makes killing for that dollar a losing bet, and the actual downside is worth far, far more than $5.
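
For the record, that $5 figure comes straight from the equation above, with the reward fixed at $1 and P capped at 5/6. Acting only makes sense if:

    P*(B1 – C1) + (1-P)*(B3 – C3) > 0
    (5/6)*(1) + (1/6)*(B3 – C3) > 0
    B3 – C3 > -5

That is, any potential downside worse than losing about five dollars already makes the expected value of pulling the trigger negative.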

This does raise the issue of why crimes happen at all, given that, by the logic I just gave, they’re not rational. And the answer is that people are often irrational. We not only suck at probability, we suck at math in general (especially when our emotions are involved).

This allows me to raise another point. I have, in other debates, called Communism a religion. Part of this is that it shares the trappings of religion: it has its own ten commandments, it hates all other religions (“You shall not make for yourself any carved image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth; you shall not bow down to them nor serve them. For I, the Lord your God, am a jealous God”), you go to a special building or room every weekend where you listen to a specially elected person who reads passages from the special books and gives a lecture on their meaning, interspersed with chants and songs, and so on.

But the key commonality between Communism and religion is just this: the illusion of certainty. Communism’s illusion of certainty came from a misunderstanding, or deliberate perversion, of both the theory and practice of science. Religion’s illusion of certainty comes from the claimed communication with the omniscient, omnipresent, omnipotent creator of all, who (by definition) cannot be wrong. And this is the great danger in both religion and Communism: all of the great crimes, all of them, throughout history, all the wars and genocides, were committed by people who firmly believed that it was inconceivable that they were wrong. Inconceivable, I say!

In this sense, the current atheist/skeptic philosophy is diametrically opposed to both Communism and religion. Its response is literally “I do not think that word means what you think it means.” Not only is it conceivable, history has shown time and again that we humans are never more likely to be wrong than when we are certain we are right. As Oliver Cromwell said, “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”

  • Marc

    Where does he talk about godless morality?

  • Sean

    I’m certain you are right, therefore you are never more likely to be wrong.

  • http://robertcfischer.com/ Robert Fischer

    We got into it a number of times back in the day. It came up at one point in a conversation about why Freemasonry (at least in the Grand Lodge that I joined) required you to believe in a higher power. Working backwards, the idea comes up big in the following posts and (more frequently) their comments:

    http://blog.enfranchisedmind.com/2012/01/faithiness/
    http://blog.enfranchisedmind.com/2009/04/cooperation-and-morality-without-god/

    http://blog.enfranchisedmind.com/2008/08/real-meaning/

    http://blog.enfranchisedmind.com/2009/01/experience-of-a-freemason

    My basic stance — which I hold to — is that if you’re an atheist, you should be a Nietzschean who views empathy as a weakness, if not an outright LaVeyan Satanist.

  • http://twitter.com/alexey_r Alexey Romanov

    Surely you should start with saying that the question is incoherent: there certainly _are_ other consequences, such as losing all utility you could obtain interacting with your victim in the future.

  • salient1

    As an atheist, I find your stance remarkably offensive. Please explain to me why I should believe that empathy is a weakness? And don’t throw a philosophical maze at me. I want a straight up, simple explanation.

  • http://robertcfischer.com/ Robert Fischer

    It’s not my view — it’s Nietzsche’s.

    Basically, the argument runs like this. Given that there is no extrinsic meaning, the only thing that matters is your own experience of reality, which we presume we want to enjoy. Your only fundamental goal in life is therefore maximizing your own pleasure and minimizing your own pain. Empathy, however, causes you to do things which are not advancing your own pleasure, as well as bringing pain on yourself that you otherwise would not experience. Therefore, empathy is counter to the fundamental goal in life.

    There’s a more pop and vitriolic version of the argument buried in LaVey’s “Satanic Bible”.

  • http://robertcfischer.com/ Robert Fischer

    The presumption in the core hypothetical is that the benefits outweigh the potential utility. You can assume that you would never interact with the victim ever again.

  • http://robertcfischer.com/ Robert Fischer

    There is a lot wrong with this post, but at the end of it, I’m not sure what you’re actually disagreeing with me about.

    Yes, we agree that if the risk outweighs the benefits, then the utilitarian ethics say that you should not do it. On the other hand, there’s plenty of bad stuff that you can get away with in your life — and the utilitarian ethics say that you should. The exact case — killing another (innocent) human being without consequence — is, for instance, what happened with the killing of the reporters in the airstrike dubbed “Collateral Murder”. The military reviewed the case and gave it an A-OK. https://en.wikipedia.org/wiki/July_12,_2007_Baghdad_airstrike#Military_legal_review (Bradley Manning, meanwhile, has spent over 900 days in prison without trial.) For the gunner in the helicopter, the benefits of killing these reporters outweighed any penalty. Therefore, killing the reporters was ethical.

    Similarly, you get cases of shady salespeople (including evangelists/missionaries if you presume an atheist worldview), the bankers who collapsed our economy and walked away with huge profits, short-changing cashiers, Presidential candidates deploying calculated lies and flip-flops… There are all kinds of cases where the utilitarian benefit outweighs the risk/cost. That’s why people who are intentionally doing cost/benefit analyses engage in this kind of behavior: it works. And, in your system, they’re ethical.

    Adding probability and indeterminacy simply raises the required benefits necessary to justify the behavior. But that threshold is demonstrably reached in the real world. Once that threshold is reached, your system declares those actions ethical.

    And now, to make sure we get to Nazis early — the ethical system you’re advocating is also the ethical system of the Loyal German. The cost to any given German of resisting the Reich was enormous, so resisting the Reich was not the ethical choice. If you aren’t going to resist the Reich, then the maximum benefit came from supporting it and being the Loyal German.

    So nothing has really changed in our conversation, and I still think that the utilitarian ethical system is horrific.

  • http://robertcfischer.com/ Robert Fischer

    There are a number of assertions in your post that are just plain wrong, and they deserve a distinct thread, so here goes.

    The biggest problem is that your assumption that all human beings do this cost-benefit analysis is just plain wrong. The idea that we’re just bad at heuristics is just wrong. Despite its popularity, the idea that we are energy-miserly optimizers and naive scientists has been empirically disproven. The book to read on this is White’s “Psychological Metaphysics”. He’s got the actual experimental citations for you in there, along with some underlying philosophical work and a proposal for an alternative paradigm that is more coherent with the evidence. More directly, you assert that religious people are engaged in this same project, yet I do not recall any conversation about morality among Quakers boiling down to this kind of cost-benefit analysis, which brings me to my second point…

    I do not participate in a religion of certainty. The kind of certainty you’re asserting as definitive of religion is totally foreign to my religious practice and tradition. And it’s not just Quakerism that shies away from this certainty — there are plenty of religions throughout the world where that kind of certainty isn’t a requisite part of the religion. You seem to be working from a position that religion is somehow identical to or simple variations on conservative protestantism, which is a common failing of atheists. (Atheists who are, BTW, also just as certain in their stance. Otherwise, they’d be agnostics.) (Don’t feel too bad about this, though — it’s also a common failing of conservative protestants.)

    Finally and least significantly, I don’t believe that a benefit metric exists. I’ve never seen a way to compare the relative benefits of scenarios except in the most hand-wavey sense. (Note the hand-wavey-ness of benefits vs. costs in my other post.) If this benefit metric does not exist, however, this kind of calculation (and, hence, this entire ethical system) is just an academic flight of fancy, and cannot be applied to the real world. There may be some other kind of approximation to it applied, but this is certainly not it.

  • http://twitter.com/bhurt42 Brian Hurt

    Any time you say “We should not do X, because God will punish us”, you are *explicitly* doing cost-benefit analysis. That God will punish you for doing X (for any X) is saying that there is a very large (infinite) cost to doing X, which gets plugged into the cost-benefit analysis. Likewise saying “God will reward you for doing Y” is giving a large (infinite) benefit to doing Y. You’re still deciding what to do based upon what’s best for you; there are just a few new terms in the equation. I believe I stated this in the original article. I also believe I addressed the issue of humans not always being rational.

    But this does raise the question: if people are not (as you claim) using cost-benefit analysis (however poorly applied), what algorithm are they using to make moral decisions? Flipping coins?

  • http://twitter.com/bhurt42 Brian Hurt

    This leaves open the question of what other method for determining what is right is there. But you haven’t had the opportunity to answer that question yet.

    Here’s the problem: the consequences of the action, and therefore the morality of it, are determined by the reactions of others. WE have decided not to punish the soldiers who killed the reporters, WE have decided to not punish those who torture or lie us into war, WE have decided to not punish those who punished Bradley Manning even before his trial, WE have decided that these things are moral. These are a consequence of OUR actions or inactions. If WE changed our minds, and decided to punish these actions, then their morality would change too. If there is a problem here, it lies not in their decision making process, but in ours.

    And note, the costs and benefits of performing an action (or not performing it) have to be summed up over all time. It’s still possible to punish the wrongdoers. Note also that this works in both directions: there are actions that, even if they lead to (for example) a 20-year prison term, are still worthwhile. If you don’t believe me, ask Nelson Mandela.

  • salient1

    Ok, you said you “should be a Nietzschean” if you’re an atheist for some reason. I just don’t understand that. I think everything you know about atheism came out of some book somewhere.

  • http://twitter.com/bhurt42 Brian Hurt

    The flaw in the logic here is that I’m not a sociopath. Pain in others causes pain in me: a much lesser pain, granted, but still pain. Even without a direct causal link. The next time some guy in a movie or TV show gets kicked in the nuts, watch how every guy in the theater winces. Ask yourself why that is, given that it’s not the watcher getting kicked in the nuts, and even the guy on screen is probably a stuntman with a special steel jock strap who isn’t in any real pain.

    But even ignoring that, way more often than not there are direct causal links. Reducing pain and suffering in others causes rewards to accrue to me, via a million different potential channels. The problem most people have with enlightened self interest is that the people claiming they’re practicing it have forgotten the “enlightened” part.

  • http://twitter.com/bhurt42 Brian Hurt

    Except that what I’m explicitly saying is that no, you can’t assume that.

  • http://robertcfischer.com/ Robert Fischer

    Do you have a basis for morality aside from your own experience? If not, where’s the flaw in the logic?

    What I know about atheism came from being an atheist (of the LaVeyan Satanist flavor) and spending a lot of time figuring this stuff out.

  • http://robertcfischer.com/ Robert Fischer

    Yes, you’re not a sociopath. Few people are. That’s the Original Sin in the Nietzschean worldview, and it’s something to be overcome, at least insofar as it limits your ability to maximize your pleasure.

    (As an aside, I’ve always been curious to see a comparison of Buddhism’s detachment and the Nietzschean Will to Power individualism, and where — if anywhere — they differ. I’m assuming Buddhists would want to distance themselves from Nietzschean individualism, but I’m not sure how/where they can.)

    And don’t be confused about what I am saying — if there is a gain to be had in being nice to other people, then by all means: be nice to them! The leader of Anton LaVey’s Church of Satan is an eloquent, nice, charming individual. (You can meet him here: http://outthereradio.net/episode-25-church-of-satan/ ) There’s a reason for that: being eloquent, nice, and charming is a great way to get people to sacrifice themselves for your benefit. They’ll be happy to do it!

    The fact that we pay to go see people hurt and killed speaks contrary to the idea that empathy is some kind of ubiquitous state, even in the most pure physical version you’re laying out. The success of the UFC is an interesting case to consider.

  • http://robertcfischer.com/ Robert Fischer

    I have never heard a Quaker say “We should not do X, because God will punish us”, nor “We should do X, because God will reward us”.

    There are a variety of systems for making moral decisions which aren’t utilitarian cost-benefit analyses: that is, they aren’t optimizing for the best case for someone’s own well-being (even if you expand “well-being” to include some kind of post-mortal “well-being”-ness). You’ve got legalistic modes of morality, where you do X because That’s What You Do, and there’s no real analysis of benefits or costs beyond that point. You’ve got other-centric/group-centric cost-benefit analyses, where you do what’s best for the group. You’ve got hierarchical modes of morality, where you do what someone above you tells you to do. And you’ve got wishy-washy affective modes of decision making, where you do what feels right at the time, without actually performing a utilitarian analysis to see if it is advantageous to your well-being. And, beyond all of this, there is the straight up arbitrary mode of decision making, which is rationally incoherent with regards to moral decision-making.

  • http://robertcfischer.com/ Robert Fischer

    Sure. “*IF* there is a problem here…”. The point is that in your system, *there is no problem.* There is no way that you can say the bankers or the soldiers or the Loyal German is bad. They’re acting totally in alignment with your ethical system. You have left yourself no way to get outside of your ethical system to critique those decisions, so you have to affirm them. Your ethical system backs the Loyal German, backs the bankers, and backs the soldiers who kill the innocent reporters. And I have a problem with that.

  • http://robertcfischer.com/ Robert Fischer

    To answer more explicitly what was probably the underlying question — namely, “What algorithm are *you* using to make moral decisions?” — let me say this: I’m not using one. I reject the whole project of having an algorithm to decide between right and wrong as fundamentally irrelevant and unnecessary for me. (The anti-nominalism is strong in this one: http://blog.enfranchisedmind.com/2012/01/faithiness/ )

    I have a sense of right and wrong which is ultimately arbitrary and innate. I have a community of people who encourage me to become what I identify (non-algorithmically) as better than I am, and they are my Friends, my friends, and my family. I have the story of Jesus and Israel which provides a way for me to challenge myself and reflect on my own actions and decisions.

    In some cases (such as getting involved in Quaker House of Fayetteville), I feel compelled by my sense of right to perform actions and join into communion with a group. In some cases (such as when I was editing some writing for Hugh Hollowell), I feel convicted, and I strive to improve myself based on those convictions.

    But there’s ultimately no algorithm for it. It’s something written on the heart, as per Jeremiah 31:31-34: http://www.biblegateway.com/passage/?search=jeremiah%2031:31-34&version=TNIV

  • salient1

    My basis for morality is human born just like the religious kind. I believe in the golden rule. I believe that you create the world that you live in. I don’t want to live in a world filled with selfish assholes so I don’t act like a selfish asshole. If that’s somehow inferior to a morality derived from believing in a supreme being, please explain it to me.

    Finally, LaVey and Rand were selfish assholes.

  • Shane Stephens

    Sorry to butt in.

    Typically utilitarian ethics looks at overall happiness, not just the happiness of the individual deciding to take (or avoid) an action. Hence, in order for the benefit of killing someone to be positive, that person would have to actually want to die, and have nobody else around who feels that person would be better off alive than dead, as well as there being no consequences imposed on the killer. This consideration, I think, brings the central question of the post much more in line with the debate on euthanasia.

    Although it’s not strongly supported by common usages of the two words, I tend to feel that “ethics” and “morals” should be distinguished in the following way:
    – a system of ethics is a systematic approach towards determining what actions a society would consider to be morally appropriate in various situations
    – a moral decision is a choice made by an individual based on their sense of what is right or wrong

    At any rate I think these are two distinct concepts. In this particular discussion it seems they’re being used interchangeably – for example Brian’s grandparent post to this response suggests that we (society) decide which things are moral; and that a failure here leads to a failure in the morality of the individuals making the decisions. But why do (how can) we feel that it’s wrong to imprison Manning or kill reporters, even though society seems to disagree?

  • http://robertcfischer.com/ Robert Fischer

    I’m using “utilitarian” here simply to mean “most beneficial”, and leaving it to context to specify who receives the benefits. This isn’t big-U Utilitarianism, which I’m very well aware of.

    However, big-U Utilitarianism isn’t a rationally defensible position for an atheist to take—what justifies an atheist to sacrifice themselves for someone else?

    The immediacy of the atheist’s experience of themselves and their own life provides the justification for the kind of Enlightened Hedonism that Brian advocates, and which I actually advocate for atheists, too—I just follow Nietzsche’s logic to its rational conclusion, and note that this does mean that an atheist, given a sense of more benefit than harm to screwing you over, should go ahead and screw you over. And an atheist should also be a Loyal German.

    I distinguish morality as being a personal sense of right and wrong, and ethics as being a systematic approach to right and wrong. I don’t necessarily add an additional layer of “individual” vs. “society” on it. I think I’ve been using the terms consistent with those definitions. If not, sorry for the confusion.

  • http://robertcfischer.com/ Robert Fischer

    What makes you think the world will be filled with selfish assholes if you’re acting selfishly?

    Note that—as I said before—you can be selfish and also be polite, charming, and charismatic. Just because you would ultimately screw someone if it came to your benefit doesn’t mean you’re an apparent asshole—in fact, you’re pretty bad at maximizing your benefit from other people if you are an apparent asshole.

    I don’t think that the morality you’re laying out is inferior to one derived from believing in a supreme being. To say it’s “human born”, though, is nonsense and betrays a lack of self-reflection. You’re a product of your culture and your surroundings—you even describe your morality in Christian terms: “the golden rule”. So there’s no primacy to your morality except that it’s the one you happened to be born into. The fact that you cling to the Golden Rule is ultimately irrational, un-empirical, and generally all those things which atheists deride believers for being.

  • salient1

    I don’t see what being an “apparent asshole” has to do with it since I never used that word. You’re re-framing my argument to suit your purposes. And I don’t see how me declaring my morality as “human born” is nonsense and lacks self-reflection. All morality is human born. If I’m wrong about that, please explain. As far as I can tell, humans have been crafting morality throughout recorded history and this “morality” changes over time. If you have some objective form of morality, please clue me in because I have yet to hear of it.

    Also, please explain how clinging to the Golden Rule is irrational and unempirical? In my experience, the way you get treated is largely dictated by how you have treated others. I’ve even experimented with this in my younger years to see how people respond to my behavior. The Golden Rule may be simplistic, and you can find exceptions to almost every rule, but it works the vast majority of the time. In my book, anything that’s simple and works the vast majority of the time is a HUGE win. I also believe that we humans are largely pretty simple in spite of our seemingly boundless hubris and self-importance, so the simplest solution that works is usually the best.

    Finally, I would be interested in hearing a better plan that is somehow more rational and less nonsensical. I go with the Golden Rule because it’s the best option I know of right now but I’m not married to it. Or is that too rational?

  • Bartosz Milewski

    Your observation about Communism is spot on. I was born and raised in Communist Poland. At some point I had to prepare a presentation about Marxist philosophy for a class and I decided to compare it to religion. Less than a minute into my presentation the professor stopped me abruptly. If that had happened in the Soviet Union or East Germany, I probably wouldn’t have gotten away with it. Yes, in Communist countries Marxism was treated as dogma.