What Kind of Robots are We? - Morality and volition

Actually written JUNE 6, 2005 -

Debates over human nature often employ criticisms of the form “If humans are that way, then we are just robots!” The implication is that someone's theory has stripped us of some characteristic vital to our humanity.

Here I want to focus on a particular human-nature debate: determinism versus volitionalism (“free choice”). Determinists say we have no volition, no choice. Volitionalists say we do.

We can well imagine how determinism regards us as “robots”.  Robots just follow programs, making no decisions for themselves. No choice at all.

But there is a kind of robot-ism associated with volitionalism too. The notion of free choice implies that we can decide to act in complete disregard of our motivations (our desires and aversions). In this state of pure choice, we act in complete apathy – as robots called “apathoids”.

In short, we are stuck with two kinds of robots to represent humanity:

Determinism removes our volition and makes us “automaton robots”.

Volitionalism removes (disconnects) our motivation and makes us “apathoid robots”.

Well, this certainly is an odd way to look at the debate. I mean, does volitionalism really imply we are just apathoid robots? Does free choice really mean we don't care what happens?

I suppose the real “meat” of this writing is to make clear that, yes, volition does mean we act as apathoids who literally don't care what happens. So let me get into that.

Volition means the power to act/think/decide independently of antecedent facts. That is, our actions/thoughts/decisions are not determined by conditions preceding them. No matter what forces have influenced or changed us, we can still decide our actions in complete defiance of those forces.  This means we can decide in complete defiance of our desires and aversions. Desire and aversion are among the antecedent facts that volition empowers us to defy.

Before I go on, let me make clear that I have no argument for or against the actuality of volition. My position is more precisely this: If volition exists (and it indeed may), it is not the fundamental motor of our behavior. Pure volition, by itself, is ultimately useless. There’s literally no point in deciding by means of volition if no motivations are consequently gratified. Literally.

If we have volition, what’s so great about it? Given that happiness consists of gratified desires, such happiness by gratification is like a target at which volition never aims except by rare, random accident. Volition isn’t governed by desire. It’s just useless randomness. Sarcastically put: “Oh, goodie! Now I can make choices that may or may not satisfy me by complete accident! Yay!”

Perhaps volition can be handy, if used as a tool to gratify some overarching motivation. But then, it becomes quite the gray area as we try to decipher whether or not volition was actually used. Was it really a selective use of volition or was it just a hierarchy of motivations in which lesser gratifications were abandoned? Hmmm. It may just be that volition at the service of motivation is a contradiction in terms.

Anyway, I’m just saying that volitionalism reduces us to yet another kind of robot, perhaps a robot less human than determinism's automaton. Perhaps the issue can now be explored by asking ourselves: “Which would you care to be – a (potentially) happy automaton or a randomly-choosing apathoid?” Or, to re-phrase the question in terms of choice: “Which would you choose to be – a (potentially) happy automaton or a randomly-choosing apathoid?” So now of course, if you choose which robot you’d be according to which you'd care to be, then your decision is determined by your care, in which case you’re already the potentially happy automaton. On the other hand, if you are going to choose which to be without being governed by your motivations, then you really don’t care which you choose. You’re already the apathoid. (Well, you’re either the apathoid or you’re a desirous person trapped in a mind and/or body that chooses random things by its free choice. Oi!)

Personally, I’d rather be a potentially happy automaton, and I sense that I already am. (Snicker snicker.)

It may seem somehow limiting to be an automaton. But consider this: being “programmed” by our motivations is something quite different from being “programmed” by something external, like “divine command”. In a very real sense, you are your motivations. Considered negatively: try imagining who or what you’d be without your motivations. You simply would not care who or what you were at that point. Against that, try to realize that such programming makes you care whether you’re programmed in the first place. If you object to being helpless against your motivations, consider that your objection is in fact another of your motivations. To be interested in whether you are programmed by your motivations is just another aspect of that programming. To be free of that programming is like having everything you care about removed from all memory. You’d be left with a world of things that mean nothing to you.

Even if your motivations are in turn programmed by antecedent facts, your motivations are your only means of giving a rat’s ass about that very fact. If antecedent facts make you want something, your disdain for those antecedent facts can only be the product of some other antecedent facts. And many of us want to re-author our own programming (and do so successfully). But there is a core part of our programming making us so want. At some point we just relax and let at least some of our motivations be what they are for whatever reasons they are. We generally accept this and get on with life – as partially re-programmable automatons whose programming is to care – sometimes about the programming itself.

-------------------

I will here add some supplementary yet related thought. Many moralists will recognize the problem of randomness in volition. That’s why they claim we need morality in the first place. Morality, they say, simply is the set of goals to which volition is directed to prevent randomness. (Ayn Rand, for example, was particularly clear on this.) But here the loaded term is “goal”. What is a “goal”? Dig deep enough, and one discovers that a “goal” is a matter of motivation – a desire or aversion. A goal is something wanted. Now we’re back to the problem of deciphering whether a choice at the service of motivation is a choice freely made. Most moralists evade the issue by more fundamentally evading the fact that a goal is wanted. (Ayn Rand, for example, was evasively and ambiguously unclear on this.) They just stick to their desire-evasive guns and claim that moral goals are to be pursued regardless of desire, as apathoids. Well, there ya go. Be an apathoid, but let us moralists program your choices for you. Again, Oi!
