Cassandra and I have witnessed people within our social group listen to an opposing (and not always truthful) side of a dispute concerning us, and choose to believe that version without approaching us to hear our side of the story.
This subject has also been raised by our clients during consultations, so these occurrences inspired me to do further research.
The human brain naturally looks for order and sense in the way it normally sees the world. The reason people can so easily see faces in shapes, or even in pieces of toast, is that the brain is attuned to look for faces. In our heads, each sense affects the others, so the sound of eating something, or its colour, affects taste (think of a crunchy food, or a lemon dessert that is coloured red). Smell, interestingly, is the least affected by other senses, as it connects more directly to basic instincts such as fear or danger.
Making evaluations uses mental energy, and there are an awful lot of evaluations to make, so the brain's natural tendency is to take shortcuts and go with what it knows. That isn't good for critical thinking, which takes real mental effort and means adopting behaviours that disrupt our normal thinking patterns.
Core to all of this is a basic concept called cognitive dissonance. In short, if a piece of information is dissonant with our existing map of how things are, the brain is placed in a dilemma, an ambiguous situation it does not like. Imagine I told you caffeine was more dangerous than tobacco (this is just an example to make a point). This statement creates cognitive dissonance with the view that tobacco is much worse.
So the brain has two choices: 1. Reject the new information. 2. Change the existing map or model in its head. The second option is of course possible, and who knows, the new information may even be true, but it is very hard for anyone (even the best critical thinkers) to do.
There are also a number of well-researched psychological factors that contribute to this difficulty in seeing both sides of an argument. These include, but are not limited to:
Confirmation bias. We tend to seek out and assess information that confirms our existing beliefs. Research has shown, for example, that if you believe the death penalty is justified and someone shows you arguments against it, rather than changing your view you will look for points within those arguments that confirm the view you already had.
Authority effect. We tend to trust and believe authority figures: governments (though this trust is declining), police and scientists. The Milgram experiment showed that when people were told to shock others in a fake experiment for answering questions incorrectly, simply having the experimenter wear a white coat made them do frightening things.
Loss aversion. In economics and decision theory, loss aversion refers to people’s tendency to strongly prefer avoiding losses over acquiring gains. Most studies suggest that losses are about twice as powerful psychologically as gains. This leads to risk aversion: when people evaluate an outcome comprising similar gains and losses, they weigh avoiding the loss more heavily than securing the gain.
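The "losses count roughly double" idea can be made concrete with a small sketch. This is a simplified, piecewise-linear version of the value function from prospect theory; the coefficient of 2.0 is an assumption drawn from the "twice as powerful" figure above, not a universal constant.

```python
LOSS_AVERSION = 2.0  # assumed coefficient: losses feel ~twice as bad as equal gains

def subjective_value(outcome: float) -> float:
    """Perceived value of an outcome relative to the status quo:
    gains count at face value, losses are amplified by the coefficient."""
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome

def gamble_value(gain: float, loss: float, p_gain: float = 0.5) -> float:
    """Subjective expected value of a gamble: win `gain` with probability
    p_gain, otherwise lose `loss`."""
    return p_gain * subjective_value(gain) + (1 - p_gain) * subjective_value(-loss)

# A fair coin flip for +100 / -100 has an objective expected value of zero,
# but subjectively it feels like a losing bet, so people decline it:
print(gamble_value(100, 100))  # 0.5 * 100 + 0.5 * (-200) = -50.0
```

This is why people routinely turn down fair 50/50 bets: the prospective loss looms larger than the identical prospective gain.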
Peer pressure, or herd mentality. People tend to go with the crowd. Even if they originally disagree, they will often adapt their view and then justify it. In ambiguous situations people are particularly susceptible to following the herd and more likely to go with the status quo; being told that others have already done something makes it easier to do the same yourself. Take towel recycling in hotel bedrooms: in one experiment, simply adding the words “80% of people who stayed in this hotel recycled” to the message increased recycling by over 20%.
When caught in the middle of a heated argument between people you know, your first instinct might be to stay out of it. But a paper recently published in the Journal of Experimental Social Psychology suggests that declaring neutrality comes with consequences.
In three studies, participants were surveyed about hypothetical bar-room scenarios: they were locked in a verbal dispute with someone else, and a close friend either backed them up or stayed out of it. Remaining neutral wasn’t considered a problem if the friend stuck in the crossfire was said to be equally close to both arguers. But if the friend was closer to the participant than to the other disputant, then a decision not to get involved was typically treated as a betrayal: participants rated it as nearly as offensive as taking the other person’s side.
Stepping back from a friend’s fight may be perceived as a dereliction of duty and can send a worrying message, according to Alex Shaw, a psychologist at the University of Chicago and one of the paper’s authors. “If I don’t take a close friend’s side over an acquaintance, then to some extent, the friend is getting a signal that I sort of think of them the way I think of an acquaintance,” he says. If you must remain neutral, Shaw proposes hearing out both sides, explaining your stance, and playing the part of mediator. All are ways to communicate that you still have your friend’s back.
Humans have a habit of inserting themselves into the disputes of other people. We often care deeply about what other people do to each other and will occasionally even involve ourselves in disputes that previously had nothing to do with us, at least not directly. Though there are many examples of this kind of behaviour, one of the most recent concerned the fatal shooting of a teen in Ferguson, Missouri, by a police officer. People from all over the country and, in some cases, other countries, were quick to weigh in on the issue, noting who they thought was wrong, what they thought happened, and what punishment, if any, should be doled out. Phenomena like that one are so commonplace in human interactions that the strangeness of the behaviour likely goes almost entirely unappreciated.
What makes the behaviour strange? The fact that intervening in other people’s affairs, attempting to control their behaviour, or inflicting costs on them for what they did tends to be costly. As it turns out, people aren’t exactly keen on having their behaviour controlled by others and will, in many cases, aggressively resist those attempts.
Let’s say, for instance, that you have a keen interest in victimizing someone. One day, you decide to translate that interest into action, attacking your target. If I were to attempt to intervene in that dispute to help the target, there’s a very real possibility that some portion of the aggression would become directed at me instead. It seems I would be altogether safer if I minded my own business and let you get on with yours. For there to be selection for any psychological mechanisms that predispose me to become involved in other people’s disputes, then, there need to be fitness benefits that outweigh the potential costs I might suffer. Alternatively, there might also be costs to me for not becoming involved. If the costs of non-involvement are greater than the costs of involvement, then there can also be selection for my side-taking mechanisms even if they are costly. So what might some of those benefits or costs be?
One obvious candidate is mutual self-interest. Though that term could cover a broad swath of meanings, I intend it in the proximate sense of the word for the moment. If you and I both desire that outcome X occur, and someone else will prevent that outcome if either of us attempts to achieve it alone, then it would be in our interests to join forces, at least temporarily, to remove the obstacle in both of our paths. To translate this into a concrete example: you and I might be faced with an enemy who wishes to victimize both of us, so by working together to get them first, we can both achieve an end we desire.
In another, less direct case, if my friend became involved in a bar fight, it would be in my best interests to avoid seeing my friend harmed, as an injured (or dead) friend is less effective at providing me benefits than a healthy one. In such cases, I might preferentially side with my friend so as to avoid seeing costs inflicted on him. In both cases, both the other party and I share a vested interest in the same outcome obtaining (in this case, the removal of a mutual threat).
Related to that last example is another candidate explanation: kin selection. As it is adaptive for copies of my genes to reproduce themselves regardless of which bodies they happen to be located in, assisting genetic relatives in disputes could similarly prove to be useful. A partially overlapping set of genetic interests, then, could (and likely does) account for a certain degree of side-taking behaviour, just as overlapping proximate interests might. By helping my kin, we are achieving a mutually beneficial (ultimate-level) goal: the propagation of common genes.
A third possible explanation could be grounded in reciprocal altruism, or long-term alliances. If I take your side today to help you achieve your goals, this might prove beneficial in the long term to the extent that it encourages you to take my side in the future. This explanation works even in the absence of overlapping proximate or genetic interests: maybe I want to build my house where others would prefer I did not, and maybe you want warning labels attached to ketchup bottles. You don’t really care about my problem and I don’t really care about yours, but so long as you’re willing to scratch my back on my problem, I might also be willing to scratch yours.
There is, however, another prominent reason we might take the side of another individual in a dispute: moral concerns. That is, people could take sides on the basis of whether they perceive someone did something “wrong”. This strategy, then, relies on using people’s behaviour to take sides. In that domain, locating the benefits to involvement or the costs to non-involvement becomes a little trickier. Using behaviour to pick sides can carry some costs: you will occasionally side against your interests, friends, and family by doing so. Nevertheless, the relative upsides to involvement in disputes on the basis of morality need to exist in some form for the mechanisms generating that behaviour to have been selected for. As moral psychology likely serves the function of picking sides in disputes, we could consider how well the previous explanations for side taking fare for explaining moral side taking.
A mutualistic account of morality could certainly explain some of the variance we see in moral side-taking. If both you and I want to see a cost inflicted on an individual or group of people because their existence presents us with costs, then we might side against people who engage in behaviours that benefit them, representing such behaviour as immoral. This type of argument has been leveraged to understand why people often oppose recreational drug use: the opposition might help people with long-term strategies inflict costs on other members of a population. The complication that mutualism runs into, though, is that certain behaviours might be evaluated inconsistently in that respect. As an example, victimization might be in my interests when in the service of removing my enemies or the enemies of my allies; however, it is not in my interests when used against me or my allies. If you side against those who victimize people, you might also end up siding against people who share your interests.
So let’s say one day I see you being attacked by someone who intends to murder you. If I come to your aid and prevent you from being killed, I have not necessarily achieved my goal (“I don’t want to be murdered”); I’ve just helped you achieve yours (“You don’t want to be murdered”). To use an even simpler example: if both you and I are hungry, we both share an interest in obtaining food; that doesn’t mean that my helping you get food fills my interests or my stomach. Thus, the interest in the above example is not necessarily a mutual one. As I noted previously, in the case of friends or kin it can be a mutual interest; it just doesn’t seem to be when thinking about the behaviour per se. My preventing your murder is only useful (in the fitness sense of the word) to the extent that doing so helps me in some way in the future.
Another account of morality, which differs from the above, posits that side-taking on the basis of behaviour could help reduce the costs of becoming involved in the disputes of others. Specifically, if all (or at least a sizable majority of) third parties took the same side in a dispute, one side would back down without fights needing to escalate to determine the winner (more evenly matched fights may require increased fighting costs to settle, whereas lopsided ones often do not). This is something of a cost-reduction model. While the idea that morality functions as a coordination device, in the same way a traffic light does, raises an interesting possibility, it too comes with a number of complications.
Chief among those complications is that coordination need not require a focus on the behaviour of the disputants. In much the same way that the colour of a traffic light bears no intrinsic relationship to driving behaviour but is publicly observable, coordination in the moral domain need not bear any resemblance to the behaviour of the disputants. Third parties could, for instance, coordinate around the flip of a coin rather than the disputants’ behaviour. If anything, coin flips might be better tools than disputants’ behaviour because, unlike behaviour, the outcome of a coin flip is easily observable. Some behaviour is notably not publicly observable, making coordination around it something of a hassle.
What about the alliance-building idea? At first blush, taking sides on the basis of behaviour seems like a much different type of strategy than siding on the basis of existing friendships. With some deeper consideration, though, I think there’s a lot of merit to the idea. Might behaviour work as a cue for who would make a good alliance partner for you?
After all, friendships have to start somewhere, and someone who was just stolen from might have a sudden need for partial partners that you might fill by punishing the perpetrator. Need provides a catalyst for new relationships to form. On the reverse end, that friend of yours who happens to be victimizing other people is probably going to end up racking up more than a few enemies: both the ones he directly impacted and the new ones who are trying to help his victims. If these enemies take a keen interest in harming him, he’s a riskier investment as costs are likely coming his way. The friendship itself might even become a liability to the extent that the people he put off are interested in harming you because you’re helping him, even if your help is unrelated to his acts. At such a point, his behaviour might be a good indication that his value as a friend has gone down and, accordingly, it might be time to dump your friend from your life to avoid those association costs; it might even pay to jump on the punishing bandwagon. Even though you’re seeking partial relationships, you need impartial moral mechanisms to manage that task effectively.
This could explain why strangers become involved in disputes (they’re trying to build friendships and taking advantage of a temporary state of need to do so) and why side-taking on the basis of behaviour rather than identity is useful at times (your friends might generate more hassle than they’re worth due to their behaviour, especially since all the people they’re harming look like good social investments to others). It’s certainly an idea that deserves more thought.