Free Will

The idea of “Free Will” can be a difficult one to pin down, and it seems like there’s an awful lot of disagreement over a term that’s so vaguely defined. What, exactly, does “Free Will” mean? What does it mean to say that something has free will?

Generally, when we say someone has free will, we mean they have the ability to choose. And not just choose, but choose in defiance of all outside influence. Often this is posed as a great moral dilemma, but to keep things simple, I’ll use the example of choosing between ice cream flavors. You could have been raised all your life to think that chocolate is the greatest flavor, but still choose vanilla in the end – this is what I like to call “social” free will: the idea that no matter how deep the social conditioning runs, you can still choose to defy it. Or you could have a very strong craving for strawberry, but still, in the end, choose chocolate. This is “biological” free will: the idea that there is no biological urge that sheer willpower cannot overcome.

The problem with both of these ideas is that the reality of social conditioning and biological urges tends to be simplified in a way that downplays just how pervasive they are. Someone once gave me, as a demonstration of “free will”, the fact that a friend of theirs had been pushed by her parents to become an engineer, but had chosen to become an artist instead. Defying her parents’ wishes, they said, proved her capacity for free will.

But the reality of social conditioning is that it’s much, much more complex than simply “your parents tell you what to do.” Her parents may have pressured her to become an engineer, but what about her friends? A friend who’s an artist and extols the virtues of her craft is social conditioning too. So is the broader culture, which may romanticize the career of an artist, or glorify the idea of rebelling against authority figures, nudging people to defy their parents. All of this is social conditioning as well.

And then there’s the biological aspect as well. We’ve all probably felt that primitive flash of anger and imagined getting violent with someone who’s been frustrating us, particularly as children, and because we’ve been able to resist that urge and calm ourselves down, we may be sympathetic to the idea that there is nothing about our biology that we cannot defy. But biology works in a million subtler ways than that. For example, the student who defied her parents to become an artist instead of an engineer may have inherited a number of biological traits that made engineering difficult for her. Let’s say that, for genetic reasons, she struggled to focus or to grasp abstract mathematical concepts. She might rationalize her decision to become an artist as a product of free will, but is it really? If she failed for biological reasons and was drawn to art as an alternative for social-conditioning reasons, can we really call that “free will”?

I think the idea of free will is something humans came up with to describe a sort of happy, healthy balance. We are all essentially programmed, like a computer, by various biological and social factors, all interacting with each other. But we recognize it as unhealthy if someone has one overwhelming biological urge that controls everything they do, or if one social institution holds such a hegemony over society that it is the sole programmer of a person. “Free will” describes a happy state of balance between the various programming inputs on a human, so that no one factor sticks out, and all the background input sort of blends together into what you might consider your personality or identity. “Choice” is just a process of weighing all the different social and biological factors that might be influencing you at the moment. To the person in this state of happy balance, it feels like they are making a choice independent of all outside influence, because there is no one outside influence so overwhelming that it can be noticed among all the others. The outside influence is multifaceted, subtle, maybe even counter-intuitive, but it is there.

Because, ultimately, it’s difficult to see how we can reconcile the traditional idea of free will with what we know about the universe. We’re hardly masters of all knowledge, and there’s a lot we don’t know. But if traditional free will exists, that means certain arrangements of matter (that is, you and me) have the ability to act in defiance of all outside influence, which is a little like saying a rock can suddenly decide to stop obeying the laws of gravity. There’s a lot we don’t know about consciousness, but nothing about it suggests that we have the power to defy physics. If the universe is as mechanical as it seems to be, every action is predetermined, although it may never be possible to factor in all the preceding actions that influence the current action. Unless, that is, there is something about human consciousness that transforms entirely deterministic matter into something non-deterministic, which would be a very impressive trick.

Still, it seems very strange in some ways. When I go to pick out a book, it doesn’t feel like I’m making a predetermined choice. It feels like it’s very up in the air, like the idea is mine alone. Sometimes, maybe, I can point to biological or social factors behind my choice, but very often I can’t. It feels like the action is entirely mine, not something predetermined. I don’t ever go around saying “Well, all my actions are entirely the product of all preceding inputs”; everything feels like my choice. I might recognize that free will seems unlikely in the abstract, but I am unable to act as if I truly believe it doesn’t exist. It seems that something innately tied to our sense of identity leans heavily on the idea of free will. Which makes sense; without it, we’d be forced to admit that all our actions are pretty much just an extension of the rest of the universe. But it still seems very odd that certain bits of matter in a deterministic universe would eventually arrange themselves in such a way that they would be capable of claiming that, collectively, they were now non-deterministic.


Morality Engine

This post by Slate Star Codex raises an interesting notion. I suggest you read the whole thing to follow the line of thought, but in brief, it asks:

What if ideologies (or, at least, certain ideologies) aren’t necessarily the product of intellectual introspection, but are rather some sort of mechanism that encourages group cooperation? What if ideologies – as well thought-out and interesting as they may seem – are largely a mechanistic process?

To put things in simple, abstract terms:

Say that group A stands to benefit if it undertakes action X – if action X is pulled off, each member will have a chance to receive five thousand dollars.

However, for action X to be pulled off successfully, it needs the cooperation and involvement of most members of group A – let’s say each member will have to donate, on average, five hundred dollars. The problem is that, without centralized control, good communication, and direct order-giving, individual members of group A actually have very little motivation to pull their weight. The best bet, for an individual member, is to not contribute to action X at all, and just hope others pull some extra weight, so that you’ll get the five thousand dollars for essentially doing nothing. Contributing actually becomes a bad bet: you are five hundred dollars in the hole, and, not knowing how many free riders there are, you may well have wasted that money to accomplish nothing. So if every member of group A acts in naked self-interest, action X may never be pulled off at all – naked self-interest is simply not enough to benefit your group.

Wouldn’t it be interesting, though, if a system of morality sprang up that convinced most members of group A not only that action X was in their self-interest, but that it was morally wrong to fail to accomplish it? Suddenly you find that most members of group A will coordinate with each other in an entirely organic manner, with little need for centralized control or communication. Not only that, but because you now have a reason to accomplish action X that is decoupled from the self-interest of group A’s members, you may be able to pull off the impressive trick of getting members of other groups to fight for your own self-interest. Members of groups B, C, and D may suddenly begin contributing to the cause of group A, despite the fact that action X won’t benefit them at all. Viewed from this angle, ideological morality is very powerful indeed.
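To make the arithmetic concrete, here is a minimal sketch in Python of the payoff structure just described. The five-thousand-dollar payout and five-hundred-dollar donation come from the example above; the group size of twenty, the fifteen-contributor threshold, and treating the payout as certain once the threshold is met are assumptions added purely for illustration.

    # A toy model of the free-rider problem described above.
    # The 5,000 payout and 500 donation come from the example in the text;
    # the group size of 20 and the 15-contributor threshold are assumptions.

    GROUP_SIZE = 20
    PAYOUT = 5000          # each member's gain if action X succeeds
    COST = 500             # what a contributor donates toward action X
    MIN_CONTRIBUTORS = 15  # "most members of group A", assumed to mean 15 of 20

    def payoff(i_contribute: bool, other_contributors: int) -> int:
        """Net payoff for one member, given how many OTHER members contribute."""
        total = other_contributors + (1 if i_contribute else 0)
        gain = PAYOUT if total >= MIN_CONTRIBUTORS else 0
        return gain - (COST if i_contribute else 0)

    for others in range(GROUP_SIZE):
        # Unless your donation happens to be the pivotal 15th one, free riding
        # pays 500 more than contributing -- and you can't know in advance how
        # many free riders there will be.
        print(f"{others:2d} others contribute | "
              f"contribute: {payoff(True, others):5d} | "
              f"free ride: {payoff(False, others):5d}")

    # If everyone reasons this way, nobody contributes, action X fails, and each
    # member gets 0 instead of the 4,500 they would each net by cooperating.
    # A shared belief that contributing is morally obligatory removes the
    # calculation entirely and pushes the group past the threshold.

This is essentially what game theorists call a threshold public goods game; the point is just that the individual calculation and the group outcome pull in opposite directions, and a shared sense of moral obligation is one way of pulling them back together.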

And in fact, this mechanism lays the groundwork for more complex, intricate systems of morality. Groups whose self-interests do not necessarily harm each other could be bound together in ideology to the benefit of both. For example, group A might convince group B to commit to its cause of action X on moral grounds, which benefits group A, and group B might convince group A to commit to action Y on moral grounds, which benefits group B. As long as they’re both capable of this trick of converting group interest into morality, and as long as both their populations are capable of moral reasoning, action X and action Y might merge into a new, more complex moral ideology.

Of course, this dynamic could introduce new problems. Which sort of group stands to benefit the most from this system? It would have to be a group that is:

  1. Good at rationalizing how its own naked self-interest is actually morality-based.
  2. Good at converting members of other groups to its morality system.
  3. Not itself susceptible to morality systems, so its members won’t invest, and will instead free-ride on the efforts of converted members of other groups.

Any group – call it group Z – that met these three requirements would invest nothing, and instead reap the benefits of other groups being converted to the moral standards that benefit group Z. It would be a pretty impressive feat, however hypocritical such characteristics might strike us as being. Let’s think. What might such a group look like in real life?

Disconcertingly, such a group appears to be similar to a leadership caste in many respects. Political leaders often mask their naked self-interest behind moral justification. They are often very skilled at converting people to this morality system. And they are often hypocritical, not subscribing to that moral system behind closed doors. It’s true that not all leaders are like this, but a significant portion of them certainly seem to be. Could a certain type of leadership personality be something that exists as a sort of parasite on the advantages of morality-based group coordination?
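Using the same toy numbers, the asymmetry such a group enjoys is easy to sketch. The group sizes here are, again, made-up assumptions; only the payout and donation figures carry over from the earlier example.

    # A sketch of the "group Z" strategy, extending the toy model above.
    # The group sizes are illustrative assumptions.

    Z_SIZE = 5             # members of group Z, none of whom contribute (point 3)
    CONVERTS = 15          # outsiders converted to a morality that serves Z
    PAYOUT = 5000          # each Z member's gain if Z's pet action succeeds
    COST = 500             # what each contributor donates
    MIN_CONTRIBUTORS = 15  # the action still needs this many contributors

    success = CONVERTS >= MIN_CONTRIBUTORS   # the converts carry the action alone

    z_member_net = PAYOUT if success else 0  # 5,000 each, with zero outlay
    convert_net = -COST                      # 500 in the hole, and no benefit at all
    print("group Z member:", z_member_net,
          "| converted outsider:", convert_net,
          "| group Z total take:", Z_SIZE * z_member_net)

The entire cost of the action is borne by people outside the group that benefits, which is exactly what makes the strategy so attractive – and so hypocritical.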

We can wonder how such a system could have arisen, but the answer seems almost obvious. Morality-capable groups have a significant advantage – organic, decentralized cooperation – over groups that can only see things in terms of their self-interest. The most successful groups of humans were probably those which were capable of morality. It may well be bred into us.

But being capable of morality is only one part of the equation when it comes to taking advantage of organic cooperation for group interest. You also, vitally, need to be capable of describing your self-interest in moral terms. Slate Star Codex, linked above, gives the example of rich people tending to believe in libertarianism, or Objectivism, or the prosperity gospel. None of these ideologies nakedly espouses the idea that actions are moral because they are in rich people’s interests. Instead, they use reasoning, aesthetics, and abstract ideals to arrive at a value system in which rich people’s best interests just so happen to be the moral thing. But these value systems didn’t just spontaneously evolve. They were usually the work of many very bright thinkers.

But what was going on in the minds of these great thinkers? Were they openly saying to themselves: “Here, I will now justify why my group’s self-interest is actually a moral necessity, so my group will flourish”? I don’t think so, necessarily. If they were thinking that, I don’t think it was on a conscious level. Maybe it is some sort of grand ego-defense mechanism that the bright are capable of: creating a moral system that justifies their place in the world in the midst of an amoral, unjustifiable natural universe. I can’t really claim to know what was going on in their heads, though.

It does seem to me, though, that morality-susceptible groups hold a distinct advantage over groups whose members act in perfectly rational self-interest. But for that advantage to materialize, it seems like a group needs to meet two requirements:

  1. The majority of its members must be morality-susceptible (it can’t handle an overabundance of free riders).
  2. There needs to be an intellectual class capable of converting the group’s self-interest into a moral system.

Circles of Care

It is natural for us, as human beings, to separate the world according to how we favor it – assembling it into a sort of hierarchy of categories we care about.

For most people, the circle of human beings they care about the most will be their family. You can make distinctions – people will probably care more about their immediate nuclear family than their extended family – but for now we will just make that general point. It seems like a reasonable assumption. In general, if you were forced to choose between saving the life of your own child or a stranger’s child, you would choose to save the life of your own child. And it seems obvious why we would have developed this way – favoring your family in situations like that ensures that your genes are more likely to be passed on than the genes of strangers.

Beyond family there are friends. Again, if we look beyond sentiment and ask how that sentimental feeling may have been encouraged in our ancestors, it is not so hard to see why we developed this way. Friends, while they are not necessarily our genetic kin, are a social advantage. Friendships can bind families together in alliances, or simply serve as a sort of social insurance policy: something you can rely on in situations where you might not otherwise survive, and something your friends can rely on you for in turn. All things considered, you would save your friends from death before you would save a random stranger.

Circles of care beyond these two begin to get more and more artificial. That is not to say that they can’t offer significant advantages, but rather that humans did not necessarily develop the inclination to support them beyond abstractions. Favoring our family is genetically bred into us. Favoring those we have social ties with is bred into us. Favoring anything beyond that is, most likely, not bred into us but culturally instilled – for example, humans do not have it ingrained in them to form nations as they are. One could make the argument that, given our capacity for such political abstractions as “the nation”, we are in fact bred for this. It may be that we are bred for political abstractions of some sort, but my point is that it might not be anything as specific as a nation. We can imagine ourselves as part of large political abstractions that are not the modern nation-state. (And in fact, much of our history has been spent in large political abstractions that are not the modern nation-state.) However, one can’t really imagine not being part of a family, or socializing with someone regularly without developing some friendly feeling.

Probably the most controversial circle of care is the one that has one foot in biological grounding and one in political abstraction: race. There seems to be some biological foundation for us caring more about our own race than about others. While many may feel uncomfortable saying that they’d save the life of a member of their own race before that of a member of another, this biological favoritism manifests itself in other ways. For example, people mostly prefer to date within their own race, and while some of that may be attributed to cultural factors, the pattern holds true even in areas where there is very little pressure against miscegenation. There may have been some evolutionary pressure to give us ingrained inclinations to trust members of our own race more than members of another, or to care about them more. Perhaps, back before the advent of larger political abstractions like nations, it laid the foundations for different tribes to cooperate with each other against a larger threat. It does often seem that the first steps towards civilization were made along racial lines, and among a people with an explicit racial identity. However, the idea of race often goes far beyond this, incorporating various groups that are very genetically disparate.

Then there are the political abstractions – the circles of care completely removed from ingrained tendency, products of intellectual thought rather than feeling. Political abstractions can move from very local (a township) to less local (a county) to national (a state) to supra-national (a confederation of states). These circles often contain smaller ones within themselves (for example, township inside county inside state inside confederation), and, although they may not completely eat each other, there are various ideologies that emphasize loyalty to one level over another. Nationalism, for example, emphasizes loyalty at the state level, and in many cases has been very good at eroding the care people may feel toward their more local circles. The USA, for instance, has been very good at fostering care at the state and federation level as opposed to more local levels. Because these political abstractions are just that – abstractions – they often seek to bolster themselves by merging one circle of care with another. A nation may consider itself an ethnostate, or a religious state, or a monocultural state, in order to bolster the loyalty people have to that particular layer of care. And this can apply to circles of care at levels lower than the national – a town can build its identity around a particular religion, race, and so on.

Running parallel to political abstractions are cultural abstractions: circles of care for people who share the same history, historically instilled attitudes, and traditional ideas. There can be subcultures within a larger cultural umbrella (such as regional cultures of the same state) and broader, vaguer cultures that encompass a humongous number of people (such as the idea of a European cultural umbrella). Generally, the larger these abstractions are, the less basis they have in reality – the less actual history and tradition the people involved share. And it is important to note that cultural circles are separate from the state – German culture and the German state do not encompass the same circles of care. This may have been more difficult to perceive in the past, when each state was highly associated with a single culture, but the distinction is much clearer these days, when multiculturalism means that each state may encompass several cultural circles.

Religion is a type of cultural circle all its own. It may be used to merely reinforce the identity and strength of other circles, or it may subsume the other circles within itself. For example, while, for some people, Christianity may just be a component of their national identity (which may also be bolstered by other facets of their national culture), it is also possible to imagine people for whom being Christian comes first, and who care more for Christians in other nations than they do for non-Christians in their own.

The broadest, most vague type of cultural circle is that of a “civilization.” It is here, generally, that the utmost limit of considering someone an ‘insider’ can be found. While different peoples within the civilization might consider each other ‘outsiders’ in various ways, almost everyone within the civilization considers people outside of it to be outsiders. It’s the demarcation line between the somewhat familiar and the alien. All circles of care that encompass parts of two different civilizations are going to be inherently tenuous and utilitarian in nature.

The multi-civilizational circles of care are generally defined by the relationship between the two civilizations. First there is the relationship between friendly or sympathetic cultures that do not clash on any major issues. Further beyond that is the relationship between hostile or contradictory cultures. And finally, at the outermost limit of human care, is the true “Alien” circle – the relationship between a culture and an unknown culture. All of humanity is encompassed within these circles. Further still beyond that we can see the beginning of non-human circles of care, encompassing first pets, then domesticated animals, familiar wildlife, alien wildlife, and invisible wildlife.

There is an idea of moral progress, presented by some, which holds that moral progress is a great flattening of this hierarchy of empathy – that the general improvement in quality of life is due, at least in part, to us breaking down these circles and casting ever wider ones when it comes to the people we care about equally. Some of the more extreme say that all these circles and distinctions should be broken down, that we should care for a stranger halfway across the globe as much as we might care for our own mother (see the utilitarian Peter Singer). The more reasonable advocates of this theory might say that, well, of course we are always going to care about our family and friends above anyone else, but we should do our best to knock down all other distinctions, like race, nation, religion, or civilization. And the history of moral progress, on this view, has been just that: the knocking down of distinctions between groups.

I’m not so sure that this is the case. I think the distinctions are still there, and most likely always will be. I think the story of moral progress has been one of learning to treat the outgroup with more fairness. It is moral progress to go from wanting to murder all your neighbors to wanting to throw a barbecue with them, but that doesn’t mean those neighbors are suddenly members of your family. Trying to erase all the distinctions, trying to flatten the empathy hierarchy, strikes me as a very poor idea. The full version, where we forge ourselves to be as sympathetic to a total stranger as we are to our own family, seems cold and totalitarian. The more realistic version – where we recognize we will always care more about friends and family, but try to knock down all other distinctions – seems like a recipe for a world that will revert to tribalism. Flattening all the other steps of the hierarchy just means that people will cleave closer to their families. Trying to make us care as much about a stranger in a faraway land as about a stranger in our own nation just means we’re not going to care about either of them all that much.

Mental Traps

What makes totalitarianism?

Social ideas can be powerful things, and useful for organizing the world. A social idea is any idea that has to do, primarily, with how humans relate to each other. Among social ideas, there is a certain class of idea that attempts to explain all of social reality. Call them reality-complete social ideas. (A theory about something limited – say, about how people treat each other under duress – would be a reality-incomplete social idea.)

Reality-complete social ideas, when they become popular, can become the animating force behind religions, nations, civilizations. But in order to be reality-complete, a social idea must re-interpret history through the narrative of the idea. For example, communism was a reality-complete idea: it effectively broke down all of history into different periods of development leading up to the advent of communism. Various forms of racialism are reality-complete ideas, often portraying reality as a struggle between the in-group race and other races. Modern narratives about democracy can be reality-complete ideas, portraying history as a struggle towards more freedom and autonomy for the people. Libertarianism and its attendant Objectivism are reality-complete ideas, portraying history as a long struggle between collectivism and individualism.

These reality-complete ideas hold a strong allure, especially for the intelligent. A romantic notion of intelligence is that we have it in order to pursue truth and rationality. In reality, that (limited) ability to pursue truth and rationality is a side-effect of what intelligence was truly meant for, which is social maneuvering. Intelligence is not meant for rationality but for rationalization – the after-the-fact justification of beliefs and opinions that you arrived at, often by non-cognitive means. Reality-complete ideas can find ground among the less intelligent, but by and large the less intelligent are more skeptical of them, because they lack the intelligence to consistently defend a particular reality-complete idea in the face of criticism. Reality-complete ideas find their most fertile ground in the more intelligent, who are more capable of defending their positions and justifying their beliefs to themselves. The less intelligent will often not be so committed, especially if they are exposed to many competing reality-complete ideas.

Once a reality-complete idea has taken root in a number of intelligent minds, it can become a mental trap. What is a mental trap? A mental trap is when a reality-complete idea, within a social group, creates conditions that reinforce it so strongly that it becomes nearly impossible to break out of. Normally, a reality-complete idea can be tempered by exposure to competing reality-complete ideas, or to reality-incomplete ideas that contradict its narrative. This leaves the intelligent person whose mind is occupied by the reality-complete idea with room for subtlety, nuance, and the capacity to consider things from the point of view of others. A mental trap overrides these tempering factors.

For example, let’s consider a somewhat intelligent young man, Michael, who believes in a reality-complete idea called A. Reality-complete idea A describes history as a struggle between Michael’s country – let’s call it Funland – and its allies on one side, and an international cabal of diamond smugglers on the other. Now, as it so happens, Michael’s country does suffer a lot of political meddling on the part of fabulously wealthy diamond merchants. It would be a poor reality-complete idea if it didn’t reflect reality a little bit. Michael joins an online discussion group based around reality-complete idea A. (Maybe it’s not explicitly based around A, but it serves the same purpose if it’s dominated by A-believers.) From here, the trap closes in on him. You can gain status in the community by thinking up clever arguments that justify A. Moreover, the cleverest, most effective arguments on behalf of A are quickly distributed. Rarely does a member of this community encounter an advocate of a competing reality-complete idea and not have an effective retort.

Enough time in this community, and Michael will become incapable of understanding why EVERYONE doesn’t believe in reality-complete idea A. What’s more, he’ll see advocates of competing ideas only in terms of how they can serve reality-complete idea A. And he’ll see people who advocate contradictory reality-complete ideas as complete monsters.

If you think I’m describing a process that happens only to an unfortunate few who stumble onto the path of becoming zealots, I’m not. It happens to almost everyone in the modern age. The internet makes it far too simple for somewhat intelligent people to fall into the mental traps of various reality-complete ideas. It makes it too easy for effective arguments on behalf of reality-complete ideas to propagate quickly. And the open nature of the internet makes it too easy for people to whip themselves up into a siege mentality by viewing the off-hand comments of people who hold contradicting reality-complete ideas. For example, consider idea A again. Let’s say that B is an idea that directly contradicts reality-complete idea A – say it’s sympathetic to the diamond smugglers. Advocates of idea A can easily go and see the public communications of the advocates of idea B. Imagine that advocates of idea B have a forum, where they are accustomed to talking among themselves about how great idea B is. In one discussion, they talk about how idiotic A-idea advocates are. One cheeky rogue suggests they shouldn’t be allowed to vote. A-advocates can take this public communication back to their OWN community, hold it up as an example of how dastardly B-advocates are, and use it to whip people into a frenzy of doubling down on the obvious correctness of idea A.

People in the midst of a mental trap see their reality-complete idea in everything. They are capable of spinning almost any event so that it fits the narrative of their idea. What’s more, they often operate under a siege mentality, seeing themselves surrounded by bloodthirsty opponents. This is what makes totalitarianism: for totalitarianism to work, you need an idea that reaches into every aspect of social relations, and you need a strongly motivated group of people who are capable – independent of instruction – of seeing and rationalizing the idea’s narrative in any given situation. And most of all, these people need power. (Which most of them do not have.)

Reality-complete ideas can be dangerous, but I believe they are also necessary for the operation of any type of leadership caste or government. I plan on writing more on this thought in the future.