
What’s behind phantom cellphone buzzes?

Have you ever experienced a phantom phone call or text? You’re convinced that you felt your phone vibrate in your pocket, or that you heard your ring tone. But when you check your phone, no one actually tried to get in touch with you.

You then might plausibly wonder: “Is my phone acting up, or is it me?”

Well, it’s probably you, and it could be a sign of just how attached you’ve become to your phone.

At least you’re not alone. Over 80 percent of college students we surveyed have experienced it. However, if it’s happening a lot – more than once a day – it could be a sign that you’re psychologically dependent on your cellphone.

There’s no question that cellphones are part of the social fabric in many parts of the world, and some people spend hours each day on their phones. Our research team recently found that most people will fill their downtime by fiddling with their phones. Some even do so in the middle of a conversation. And most people will check their phones within 10 seconds of getting in line for coffee or arriving at a destination.

Clinicians and researchers still debate whether excessive use of cellphones or other technology can constitute an addiction. It wasn’t included in the DSM-5, the latest update to the American Psychiatric Association’s definitive guide for classifying and diagnosing mental disorders.

But given the ongoing debate, we decided to see if phantom buzzes and rings could shed some light on the issue.

A virtual drug?

Addictions are pathological conditions in which people compulsively seek rewarding stimuli, despite the negative consequences. We often hear reports about how cellphone use can be problematic for relationships and for developing effective social skills.

One of the features of addictions is that people become hypersensitive to cues related to the rewards they are craving. Whatever it is, they start to see it everywhere. (I had a college roommate who once thought that he saw a bee’s nest made out of cigarette butts hanging from the ceiling.)

So might people who crave the messages and notifications from their virtual social worlds do the same? Would they mistakenly interpret something they hear as a ring tone, their phone rubbing in their pocket as a vibrating alert or even think they see a notification on their phone screen – when, in reality, nothing is there?

A human malfunction

We decided to find out. From a tested survey measure of problematic cellphone use, we pulled out items assessing psychological cellphone dependency. We also created questions about the frequency of experiencing phantom ringing, vibrations and notifications. We then administered an online survey to over 750 undergraduate students.

Those who scored higher on cellphone dependency – they more often used their phones to make themselves feel better, became irritable when they couldn’t use their phones and thought about using their phone when they weren’t on it – had more frequent phantom phone experiences.

Cellphone manufacturers and phone service providers have assured us that phantom phone experiences are not a problem with the technology. As HAL 9000 might say, they are a product of “human error.”

So where, exactly, have we erred? We are in a brave new world of virtual socialization, and the psychological and social sciences can barely keep up with advances in the technology.

Phantom phone experiences may seem like a relatively small concern in our electronically connected age. But they raise the specter of how reliant we are on our phones – and how much influence phones have in our social lives.

How can we navigate the use of cellphones to maximize the benefits and minimize the hazards, whether it’s improving our own mental health or honing our live social skills? What other new technologies will change how we interact with others?

Our minds will continue to buzz with anticipation.

Daniel J. Kruger, Research Assistant Professor, University of Michigan

Photo Credit: ‘Brain’ via http://www.shutterstock.com


This article was originally published on The Conversation. Read the original article.


Does empathy have limits?

Is it possible to run out of empathy?

That’s the question many are asking in the wake of the U.S. presidential election. Thousands have marched on streets and airports to encourage others to expand their empathy for women, minorities, and refugees. Others have argued that liberals lack empathy for the plight of rural Americans.

Against this backdrop, some scholars have recently come out against empathy, saying that it is overhyped, unimportant and, worse, dangerous. They take this position because empathy appears to be limited and biased in ethically problematic ways.

As psychologists who study empathy, we disagree.

Based on advances in the science of empathy, we suggest that limits on empathy are more apparent than real. While empathy appears limited, these limits reflect our own goals, values, and choices; they do not reflect limits to empathy itself.

The ‘dark side’ of empathy

Over the past several years, a number of scholars, including psychologists and philosophers, have made arguments that empathy is morally problematic.

For example, in a recently published and thought-provoking book, “Against Empathy,” psychologist Paul Bloom highlights how empathy, so often touted for its positive outcomes, may have biases and limitations that make it a poor guide for everyday life.

What explains our feelings of empathy toward some and not others?
N i c o l a, CC BY

Bloom claims that empathy is a limited-capacity resource, like a fixed pie or fossil fuel that quickly runs out. He suggests that,

“We are not psychologically constituted to feel toward a stranger as we feel toward someone we love. We are not capable of feeling a million times worse about the suffering of a million than about the suffering of one.”

Such views are echoed by other scholars as well. For example, psychologist Paul Slovic suggests that “we are psychologically wired to help only one person at a time.”

Similarly, philosopher Jesse Prinz has argued that empathy is prejudiced and leads to “moral myopia,” making us act more favorably toward people we have empathy for, even if this is unfair.

For the same reason, psychologist Adam Waytz suggests that empathy can “erode ethics.” Slovic, in fact, suggests that “our capacity to feel sympathy for people in need appears limited, and this form of compassion fatigue can lead to apathy and inaction.”

Are there limits?

The empathy that the scholars above are arguing against is emotional: It’s known scientifically as “experience sharing,” which is defined as feeling the same emotions that other people are feeling.

This emotional empathy is thought to be limited for two main reasons: First, empathy appears to be less sensitive to large numbers of victims, as in genocides and natural disasters. Second, empathy appears to be less sensitive to the suffering of people from different racial or ideological groups than our own.

In other words, in their view, empathy seems to put the spotlight on single victims who look or think like us.

Empathy is a choice

We agree that empathy can often be weaker in response to mass suffering and to people who are dissimilar from us. But the science of empathy actually suggests a different reason for why such deficits emerge.

As a growing body of evidence shows, it’s not that we are unable to feel empathy for mass suffering or people from other groups, but rather that sometimes we “choose” not to. In other words, you choose the expanse of your empathy.

Empathy is a choice.
Riccardo Cuppini, CC BY-NC-ND

There is evidence that we choose where to set the limits of empathy. For example, whereas people usually feel less empathy for multiple victims (versus a single victim), this tendency reverses when you convince people that empathy won’t require costly donations of money or time. Similarly, people show less empathy for mass suffering when they think their helping won’t make any difference or impact, but this pattern goes away when they think they can make a difference.

This tendency also varies depending on an individual’s moral beliefs. For instance, people who live in “collectivist cultures,” such as Bedouin individuals, do not feel less empathy for mass suffering. This is perhaps because people in such cultures value the suffering of the collective.

This can also be changed temporarily, which makes it seem even more like a choice. For example, people who are primed to think about individualistic values show less empathic behavior toward mass suffering, but people who are primed to think about collectivistic values do not.

We argue that if there were truly a limit on empathy for mass suffering, it should not vary based on costs, efficacy or values. Instead, it looks like the effect shifts based on what people want to feel. We suggest that the same point applies to the tendency to feel less empathy for people different from us: Whether we extend empathy to people who are dissimilar from us depends on what we want to feel.

In other words, the scope of empathy is flexible. Even people thought to lack empathy, such as psychopaths, appear able to empathize if they want to do so.

Why seeing limits to empathy is problematic

Empathy critics usually do not talk about choice in a logically consistent manner; sometimes they say individuals choose and direct empathy willfully, yet other times say we have no control over the limits of empathy.

These are different claims with different ethical implications.

The problem is that arguments against empathy treat it as a biased emotion. In doing so, these arguments mistake the consequences of our own choices to avoid empathy as something inherently wrong with empathy itself.

We suggest that empathy only appears limited; seeming insensitivity to mass suffering and dissimilar others is not built into empathy but reflects the choices we make. These limits result from general trade-offs that people make as they balance some goals against others.

We suggest caution in using terms like “limits” and “capacity” when talking about empathy. This rhetoric can create a self-fulfilling prophecy: When people believe that empathy is a depleting resource, they exert less empathic effort and engage in more dehumanization.

So, framing empathy as a fixed pie misses the mark – scientifically and practically.

What are the alternatives?

Even if we accepted that empathy has fixed limits – which we dispute, given the scientific evidence – what other psychological processes could we rely upon to be effective decision-makers?

Is compassion less biased?
Fr Lawrence Lew, O.P., CC BY-NC

Some scholars suggest that compassion is not as costly or biased as empathy, and so should be considered more trustworthy. However, compassion can also be insensitive to mass suffering and people from other groups, just like empathy.

Another candidate is reasoning, which is considered to be free from emotional biases. Perhaps cold deliberation over costs and benefits, with an eye to long-term consequences, would be effective. Yet this view overlooks how emotions can be rational and how reasoning can be motivated to support desired conclusions.

We see this in politics: people apply utilitarian principles differently depending on their political beliefs, suggesting that principles can be biased too. For example, one study found that conservative participants were more willing to accept trade-offs involving civilian lives lost during wartime when those civilians were Iraqi rather than American. Reasoning may not be as objective and unbiased as empathy critics claim.

Whose standard of morality are we using?

Even if reasoning were objective and didn’t play favorites, is this what we want from morality? Research suggests that in many cultures, it can be considered immoral not to put first the immediate few who share your beliefs or blood.

For example, some research finds that whereas liberals extend empathy and moral rights to strangers, conservatives are more likely to reserve empathy for their families and friends. Some people think that morality should not play favorites, but others think that morality should be applied more strongly to family and friends.

So even if empathy did have fixed limits, it wouldn’t follow that this makes it morally problematic. Many view impartiality as the ideal, but many don’t. Empathy takes on a specific set of goals only once a moral standard has been chosen.

By focusing on apparent flaws in empathy and not digging deeper into how they emerge, arguments against empathy end up denouncing the wrong thing. Human reasoning is sometimes flawed and it sometimes leads us off course; this is especially the case when we have skin in the game.

In our view, it is these flaws in human reasoning that are the real culprits here, not empathy, which is a mere output of these more complex computations. Our real focus should be on how people balance competing costs and benefits when deciding whether to feel empathy.

Such an analysis makes being against empathy seem superficial. Arguments against empathy rely on an outdated dualism between biased emotion and objective reason. But the science of empathy suggests that what may matter more is our own values and choices. Empathy may be limited sometimes, but only if you want it to be that way.

C. Daryl Cameron, Assistant Professor of Psychology and Research Associate in the Rock Ethics Institute, Pennsylvania State University; Michael Inzlicht, Professor of Psychology, Management, University of Toronto, and William A. Cunningham, Professor of Psychology, University of Toronto

Photo Credit: Francisco Schmidt


This article was originally published on The Conversation. Read the original article.

Can Trump, like other politicians, resist the dark side of behavioral science?

More than two dozen governments, including the U.S., now have teams of behavioral scientists tasked with improving bureaucratic efficiency and “nudging” their citizens toward what they deem to be higher levels of well-being.

A few recent examples include a push by the socialist French government to increase the number of organ donors, a conservative UK government plan to prevent (costly) missed doctor appointments, and efforts by the Obama White House to boost voter turnout on Election Day.

While the government’s use of our psychological quirks to affect behavior rubs some people the wrong way, most of us can agree that the above examples achieve positive ends. More organ donors mean more lives saved, fewer missed doctor appointments mean the government or health industry is more efficient, and increased voting means stronger citizen engagement in democracy.

But “nudges” themselves are value-neutral. That is, they can be used to achieve altruistic ends or more malicious ones. Just as behavioral science can be used to increase voter turnout, it can also be used to suppress the votes of specific individuals likely to favor the opposing side, as reportedly happened in the recent U.S. presidential election.

The nudge, in other words, has a dark side.

My research explores how behavioral science can help people follow through on their intentions, making better or longer-term choices that increase their well-being. Because choices are influenced by the environment in which they are made, changing the environment can change decision outcomes.

This can be positive to the extent that those designing interventions have good intentions. But what happens when someone uses these insights to systematically influence others’ behavior to favor his or her own interests – even at the expense of everyone else’s?

That’s my concern with President Donald Trump, whose campaign appears to have exploited behavioral science to suppress the vote of Hillary Clinton supporters.

What’s in a nudge?

Behavioral science is a relatively young field, and governments have only recently begun using its insights to inform public policy.

The UK was the first in 2010 when it created its Behavioral Insights Team. In subsequent years, dozens of governments around the world followed, including Canada with its Behavioral Insights Unit and the U.S., which in 2015 officially launched the White House Social and Behavioral Sciences Team.

The teams’ missions are all relatively similar: to leverage insights from behavioral science to make public services more cost-effective and easier to use, to help people make better choices for themselves, and to improve well-being.

In the UK, for example, the Behavioral Insights Team was able to persuade about 100,000 more people a year to register as organ donors by tweaking a message people received when renewing their car tax. Here in the U.S., the Social and Behavioral Sciences Team helped the Department of Defense boost retirement savings among service members by 8.3 percent.

These kinds of interventions have been criticized for unjustly interfering with an individual’s autonomy. Some even compare them to mind control.

However, as I have pointed out elsewhere, our environment (and the government) is always exerting some influence on our behavior, so we’re always being nudged. The question is therefore not whether we will be nudged, but how and in what direction.

For example, when you sit down to dinner, the size of your plate can make a big difference in how much you eat. Studies show you’re likely to consume less food if you use a smaller plate. So if the government is handing out the dinnerware, and if most of us want to avoid overeating, why not set the default plate to a small one?

But now let’s consider the dark side: a restaurant might hand out a small plate if it means it can charge more for less food and thus make more money. The owner likely doesn’t care about your waist size.

Any intervention based on behavioral science is therefore neither good nor bad in itself. What matters is the intention behind it, the aim the nudge is ultimately supposed to help achieve.

Potential for abuse

Take the case of what Cambridge Analytica – a company founded in 2013 and reportedly funded by the family of billionaire conservative donor Robert Mercer – did during the election. This team of data scientists and behavioral researchers claims to have collected thousands of data points on 220 million Americans in order to “model target audience groups and predict the behavior of like-minded people.”

Essentially, all that data can be used to deduce individuals’ personality traits and then send them messages matched to their personalities, which are more likely to be persuasive. For example, highly neurotic Jane will be more receptive to a political message that promises safety, as opposed to financial gains, which may be more compelling to conscientious Joe.
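
As a rough illustration of how such matching could work, here is a toy sketch in Python. It is entirely my own construction, not Cambridge Analytica’s actual model; the trait weights, message frames and voter profiles are all hypothetical.

```python
# A toy illustration of trait-matched messaging. The weights, frames and
# profiles below are hypothetical, invented for this sketch only.

from typing import Dict

# How strongly each message frame is assumed to appeal to each trait.
FRAME_APPEAL: Dict[str, Dict[str, float]] = {
    "safety":         {"neuroticism": 0.9, "conscientiousness": 0.2},
    "financial_gain": {"neuroticism": 0.1, "conscientiousness": 0.8},
}

def best_frame(profile: Dict[str, float]) -> str:
    """Pick the frame whose appeal weights best align with a person's
    trait scores (a simple dot product over shared traits)."""
    def score(frame: str) -> float:
        return sum(FRAME_APPEAL[frame].get(trait, 0.0) * value
                   for trait, value in profile.items())
    return max(FRAME_APPEAL, key=score)

# "Highly neurotic Jane" and "conscientious Joe" from the example above.
jane = {"neuroticism": 0.9, "conscientiousness": 0.3}
joe = {"neuroticism": 0.2, "conscientiousness": 0.9}
print(best_frame(jane))  # safety
print(best_frame(joe))   # financial_gain
```

The point of the sketch is simply that once trait scores exist, choosing the most persuasive frame for each person reduces to a cheap lookup, which is what makes this kind of targeting easy to run at scale.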

So what’s the problem? In and of itself, this analysis can be a neutral tool. A government might want to use this approach to provide helpful information to at-risk populations, for example by providing suicide prevention hotlines to severely depressed individuals, as Facebook is currently doing. One might even argue that Cambridge Analytica, first hired by the Cruz campaign and later by Trump, was not acting unethically when it sent such personalized messaging to convince undecided voters to support the eventual Republican nominee. After all, this is what all marketing campaigns set out to do.

But there is a fine ethical line here that behavioral science can make easier to cross. In the same way that people can be influenced to engage in a behavior, they may also be discouraged from doing so. Bloomberg reported that Cambridge Analytica identified likely Clinton voters such as African-Americans and tried to dissuade them from going to the ballot box. The company denies discouraging any Americans from casting their vote.

Beyond hiring the company, the Trump administration has a direct tie to Cambridge Analytica through chief strategist Steve Bannon, who sits on its board.

Alexander Nix, CEO of Cambridge Analytica, talks about what his company does.

How might Trump nudge?

So far, it’s unclear whether or how the Trump administration might use behavioral science in the White House.

Trump, like most Republicans, has emphasized his desire to make government more efficient. Behavioral science is generally a low-cost intervention strategy that provides tangible, measurable gains, the kind of approach that should appeal to a business-minded president, so Trump may very well turn to its insights to accomplish this goal. After all, the UK’s Behavioral Insights Team was launched under conservative leadership.

The White House Social and Behavioral Sciences Team’s impressive interventions have led to hundreds of millions of dollars in savings across a variety of departments and at the same time increased the well-being of millions of citizens. The future of the team is now unclear. Some members are worried that Trump will use their skills in less benevolent ways.

Trump’s apparent use of Cambridge Analytica to suppress Clinton turnout, however, is not a good sign. More broadly, the president does not seem to value ethics. Despite repeated warnings from government ethics watchdogs, he refuses to seriously deal with his innumerable conflicts of interest. Without the release of his tax returns, the true extent of his conflicts remains unknown.

And as we know from behavioral science, people frequently underestimate the influence conflicts of interest have on their own behavior.

In addition, studies show that people can easily set aside moral concerns in the pursuit of efficiency or other specific goals. People are also creative in rationalizing unethical behavior. It doesn’t seem to be a stretch to imagine that Trump, given his poor track record where ethics is concerned, could cross the fine ethical line and abuse behavioral science for self-serving ends.

A virus and a cure

Behavioral science has been heralded as part of the solution to many societal ills.

Behavioral economists Richard Thaler and Cass Sunstein, co-authors of the book “Nudge,” which coined the term, have been strong advocates of using the field’s tools to improve government policy – when the intentions are transparent and in the public interest.

But might the current administration use them in ways that go against our own interests? The problem is that we may not even be aware when it happens. People are often unable to tell whether they are being nudged and, even if they are, may be unable to tell how it’s influencing their behavior.

Governments around the world have found success using the burgeoning field of behavioral science to improve the efficiency of their policies and increase citizens’ well-being. While we should continue to find new ways to do this, we also need clear guidelines from Congress on when and how to use behavioral science in policy. That would help ensure the current or a future occupant of the White House doesn’t cross the line into the dark side of nudges.


Jon M Jachimowicz, PhD Student in Management, Columbia University

Photo Credit: Keyword Suggest


This article was originally published on The Conversation. Read the original article.

Why each side of the partisan divide thinks the other is living in an alternate reality

To some liberals, Donald Trump’s inauguration portends doom for the republic; to many conservatives, it’s a crowning moment for the nation that will usher in an era of growth and optimism.

It’s as if each side is living in a different country – and a different reality.

In fact, over the last few months, a handful of liberal-leaning sites have begun fixating on what they’ve dubbed the “reality gap”: the tendency of Donald Trump’s supporters to endorse misinformation about political and economic issues. Sixty-seven percent of Trump voters, for instance, believe that unemployment has gone up under President Obama’s administration. (It hasn’t.) Up to 52 percent believe that Trump won both the Electoral College and the popular vote in the 2016 election. (He won the former but lost the latter.) And 74 percent of Trump supporters believe that fewer people are insured now than before the implementation of the Affordable Care Act. (More are.)

But this unfairly casts conservatives as being blind to reality. In fact, people across the political spectrum are susceptible. Consider that 54 percent of Democrats believe that Russia either “definitely” or “probably” changed voting tallies in the United States to get Trump elected. Although investigations are still ongoing, so far there has been no evidence of direct tampering with voting tallies.

Many are at a loss when trying to explain these findings and have blamed a combination of “fake news,” politicians and slanted media.

Certainly misleading media reports and hyper-partisan social media users play a role in promoting misinformation, and politicians who repeat outright falsehoods don’t help. But research suggests something else may be going on, and it’s no less insidious just because it can’t be blamed on our partisan enemies. It’s called information avoidance.

‘I don’t want to hear it’

Social scientists have documented that all of us have a well-stocked mental toolkit to ward off any new information that makes us feel bad, obligates us to do something we don’t want to do or challenges our worldview.

These mental gymnastics take place when we avoid looking at our bank account after paying the bills or shirk scheduling that long overdue doctor’s appointment. The same goes for our political affiliation and beliefs: If we’re confronted with news or information that challenges them, we’ll often ignore it.

One reason we avoid this sort of information is that it can make us feel bad, either about ourselves or more generally. For instance, one study found that people didn’t want to see the results of a test for implicit racial bias when they were told that they might subconsciously have racist views. Because these results challenged how they saw themselves – as not racist – they simply avoided them.

Another series of experiments suggested that we’re more likely to avoid threatening information when we feel like we don’t have the close relationships and support system in place to respond to new problems. Patients who felt like they lacked a supportive network were less likely to want to see medical test results that might reveal a bad diagnosis. Students who lacked a large friend group or strong family ties didn’t want to learn whether or not their peers disliked them. Feeling like we lack the support and resources to deal with bad things makes us retreat into our old, comforting worldviews.

No problem? No need for a solution

In other cases, people don’t want to acknowledge a problem, whether it’s gun violence or climate change, because they don’t agree with the proposed solutions.

For instance, in a series of experiments, social psychology scholars Troy Campbell and Aaron Kay found that people are politically divided over scientific evidence on climate change, environmental degradation, crime and attitudes toward guns because they dislike the potential solutions to these problems. Some don’t want to consider, say, government regulation of carbon dioxide, so they simply deny that climate change exists in the first place.

In the study, participants read a statement about climate change from experts paired with one of two policy solutions, either a market-based solution or a government regulatory scheme. Respondents were then asked how much they agreed with the scientific consensus that global temperatures are rising.

The researchers found that Republicans were more likely to agree that climate change is happening when presented with the market-based solution. Democrats tended to agree with the consensus regardless of the proposed solution. By framing the solution to climate change in terms that don’t go against Republican free-market ideology, the researchers suspect that Republicans will be more willing to accept the science.

In other words, people are more willing to accept politically polarizing information if it’s discussed in a way that doesn’t challenge how they view the world or force them to do something they don’t want to do.

Doubling down on a worldview

To return to Trump’s supporters: Many identify strongly with him and many see themselves as part of a new political movement. For this reason, they probably want to avoid new findings that suggest their movement isn’t as strong as it appears.

Remember those findings that many Trump supporters believe that he won the popular vote? Among Trump supporters, one poll suggests that 52 percent also believe that millions of votes were cast illegally in the 2016 election, a claim Trump himself made to explain his popular vote loss.

Accepting that their candidate lost the popular vote challenges deeply held beliefs that the nation has come together with a mandate for Trump’s presidency and policies. Information that conflicts with this view – that suggests a majority of Americans don’t support Trump, or that people protesting Trump are somehow either “fake” or paid agitators – poses a threat to these worldviews. As a result, his supporters avoid it.

Information avoidance alone can’t explain why different people believe different things, how misinformation spreads or what can be done about it.

But ignoring the effects of information avoidance and discussing only ignorance and stubbornness does us all a disservice by framing the problem in partisan terms. When people on the left believe that only right wingers are at risk of changing the facts to suit their opinions, they become less skeptical of their own beliefs and more vulnerable to their own side’s misconceptions and misinformation.

Research suggests there are three ways to combat information avoidance. First, before asking people to listen to threatening information, affirmation – or making people feel good about themselves – has proven effective. Next, it’s important to make people feel in control over what they get to do with that information. And lastly, people are more open to information if it’s framed in a way that resonates with how they see the world, their values and their identities.

It’s crucial to recognize the all-too-human tendency to put our fingers in our ears when we hear something we don’t like. Only then can we move away from a media and cultural environment in which everyone is entitled to not just their own opinions but also their own facts.


Lauren Griffin, Director of External Research for frank, College of Journalism and Communications, University of Florida and Annie Neimand, Research Director and Digital Strategist for frank, College of Journalism and Communications, University of Florida

Photo Credit: Clare Black

 


 

This article was originally published on The Conversation. Read the original article.

The dirty politics of scapegoating – and why victims are always the harmless, easy targets

The word “scapegoat” is being used a lot in discussions about politics in 2016. The new US president-elect, Donald Trump, appealed to some voters with rhetoric that appeared to scapegoat Mexicans and Muslims for various social and economic problems.

Campaigning ahead of the UK’s vote for Brexit also scapegoated immigrants and foreign bureaucrats for many social problems, from violent crime to funding problems for the NHS.

Since both votes were cast, hate crimes against immigrants and ethnic minorities have increased in both countries. There have also been frequent calls for harsh policies, including mass forced deportations of migrant workers and invasive medical examinations for asylum seekers.

What drives this scapegoating? Why do people, whose political grievances might be legitimate in themselves, end up targeting their anger at relatively harmless victims?

It is part of the nature of scapegoating, as the late French theorist of mythology René Girard argued, that the target is not chosen because it is in any way responsible for society’s woes. If the target does happen to be at all responsible, that is an accident. The scapegoat is instead chosen because it is easy to victimize without fear of retaliation.

Origins of the scapegoat

The name “scapegoat” comes from the Book of Leviticus. In the story it tells, all the sins of Israel are put on the head of a goat, which is then ritualistically driven out. Needless to say, the goat is not really guilty of the sins.

The Scapegoat, by William Holman Hunt. Wikimedia Commons

If we want to understand this ritual, we must first understand the nature of human violence. Girard observed how many cultures characterize violence in terms of infection and contagion. In communities without a strong legal system, justice is carried out through private vengeance. But each act of vengeance provokes another, and violence can spread like a plague. “Blood feuds” – chains of violent reprisals – have been known to wipe out entire communities.

In this kind of society, Girard argues, the real purpose of scapegoating is:

To polarize the community’s aggressive impulses and redirect them toward victims that may be actual or figurative, animate or inanimate, but that are always incapable of propagating further violence.

If the community as a whole lashes out against a victim who cannot retaliate, then the community’s resentments and frustrations can be violently vented in a way that does not run the risk of unleashing an uncontrollable plague of violence.

A safe alternative to class war

Girard’s insights can also be applied to modern society. The results of the US election and the UK referendum have been partially explained by the economic anxiety felt within former industrial regions that have been left behind by globalization.

The blame for this anxiety lies with the political classes, the elites, the Washington and London “insiders”. They put their faith in an economic model and ignored its effects on ordinary lives. They made no visible effort to create new jobs in communities that had been built around heavy industry. It was as though they hoped the people would rust away alongside the machines.

The rhetoric in both campaigns was nominally directed against these elites: against “the establishment”. But when it came to the crunch, voters in the US gave power to a plutocrat – a direct beneficiary of the new economic model. And in the UK, support remains high for a government that is pure establishment. The British home secretary, Amber Rudd, was described by the Financial Times as:

A born-to-rule Tory with a black book so impressive that she had a gig as “aristocracy co-ordinator” for the party scenes of Four Weddings and a Funeral.

So just when you might expect the economically anxious to hit out at the elites, they instead attack migrants and minorities. The elites cannot be their scapegoat since a defining feature of a scapegoat is its inability to retaliate. And the “establishment” is very capable of retaliating. To quote a 2009 piece in The Economist:

When people contemplate class war, they tend to think of hostilities flowing in only one direction – that is, upwards, from the plebs to the toffs, the poor to the rich … Less attention is given to the possibility of a different sort of rancour: when the well-heeled get angry, and take against the plebs.

The “well-heeled” are much too powerful to be scapegoats. The “plebs” might resent them, but a scapegoat is a victim that can be safely attacked. Think of a man yelling at his child because he is angry at his wife. He doesn’t have the energy for a protracted marital conflict, but if he is to resist lashing out at her he must lash out at someone.

In a social sense, scapegoating “works”: it concentrates violence on a small, powerless set of victims and prevents it from triggering a dangerous chain reaction of reprisals. Of course, this is no consolation to the scapegoats. For them, there is only the hope that society might one day have less cause for violence altogether.

-Lecturer in History of Philosophy / Philosophy of Economics, University of St Andrews


Photo credit: Gage Skidmore

 

 

 

This article was originally published on The Conversation. Read the original article.

Confirmation bias: A psychological phenomenon that helps explain why pundits got it wrong

As post-mortems of the 2016 presidential election began to roll in, fingers started pointing to what psychologists call the confirmation bias as one reason many of the polls and pundits were wrong in their predictions of which candidate would end up victorious.

Confirmation bias is usually described as a tendency to notice or search out information that confirms what one already believes, or would like to believe, and to avoid or discount information that’s contrary to one’s beliefs or preferences. It could help explain why many election-watchers got it wrong: In the run-up to the election, they saw only what they expected, or wanted, to see.

Psychologists put considerable effort into discovering how and why people sometimes reason in less than totally rational ways. The confirmation bias is one of the better-known of the biases that have been identified and studied over the past few decades. A large body of psychological literature reports how confirmation bias works and how widespread it is.

The role of motivation

Confirmation bias can appear in many forms, but for present purposes we may divide them into two major types. One is the tendency, when trying to determine whether something is true or false, to look for evidence that it is true while failing to look for evidence that it is false.

Imagine four cards on a table, each showing either a letter or a number on its visible side. Let’s say the cards show A, B, 1 and 2. Suppose you are asked which card or cards you would have to turn over to determine whether the following statement is true or false: If a card has A on its visible side, it has 1 on its other side. The correct answer is the card showing A and the one showing 2.

But when people are given this task, a large majority choose to turn over either the card showing A alone or both the card showing A and the one showing 1. Relatively few see the card showing 2 as relevant, even though finding A on its other side would prove the statement false. One possible explanation for people’s poor performance on this task is that they look for evidence that the statement is true and fail to look for evidence that it is false.
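
To make the logic concrete, here is a minimal sketch in Python, my own illustration rather than part of the original article; the hidden card faces are hypothetical. It checks which cards could ever reveal a counterexample to the rule.

```python
# A minimal sketch of the four-card task above. Rule under test:
# "If a card has A on its visible side, it has 1 on its other side."
# The hidden faces below are hypothetical; a solver sees only the visible side.

cards = [
    {"visible": "A", "hidden": "2"},  # flipping this would falsify the rule
    {"visible": "B", "hidden": "1"},
    {"visible": "1", "hidden": "B"},
    {"visible": "2", "hidden": "A"},  # flipping this would falsify the rule
]

def worth_flipping(visible: str) -> bool:
    """A card can falsify the rule only if flipping it might reveal an A
    paired with a number other than 1: a visible A (its number might not
    be 1) or a visible non-1 number (its letter might be A). B and 1 can
    never yield a counterexample."""
    return visible == "A" or (visible.isdigit() and visible != "1")

for card in cards:
    print(card["visible"], "->", "flip" if worth_flipping(card["visible"]) else "skip")
# Prints: A -> flip, B -> skip, 1 -> skip, 2 -> flip.
```

Most participants flip A and 1, the two choices that can only ever confirm the rule, and ignore 2, the one card besides A that could disconfirm it.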

Another type of confirmation bias is the tendency to seek information that supports one’s existing beliefs or preferences or to interpret data so as to support them while ignoring or discounting data that argue against them. It may involve what is best described as case building, in which one collects data to lend as much credence as possible to a conclusion one wishes to confirm.

At the risk of oversimplifying, we might call the first type of bias unmotivated, inasmuch as it doesn’t involve the assumption that people are driven to preserve or defend their existing beliefs. The second type of confirmation bias may be described as motivated because it does involve that assumption. It may go a step further than just focusing on details that support one’s existing beliefs; it may involve intentionally compiling evidence to confirm some claim.

It seems likely that both types played a role in shaping people’s election expectations.

Case building versus unbiased analysis

An example of case building and the motivated type of confirmation bias is clearly seen in the behavior of attorneys arguing a case in court. They present only evidence that they hope will increase the probability of a desired outcome. Unless obligated by law to do so, they don’t volunteer evidence that’s likely to harm their client’s chances of a favorable verdict.

Another example is a formal debate. One debater attempts to convince an audience that a proposition should be accepted, while another attempts to show that it should be rejected. Neither wittingly introduces evidence or ideas that will bolster the adversary’s position.

In these contexts, it is proper for protagonists to behave in this fashion. We generally understand the rules of engagement. Lawyers and debaters are in the business of case building. No one should be surprised if they omit information likely to weaken their own argument. But case building occurs in contexts other than courtrooms and debating halls. And often it masquerades as unbiased data collection and analysis.

Where confirmation bias becomes problematic

One sees the motivated confirmation bias in stark relief in commentary by partisans on controversial events or issues. Television and other media remind us daily that events evoke different responses from commentators depending on the positions they’ve taken on politically or socially significant issues. Politically liberal and conservative commentators often interpret the same event and its implications in diametrically opposite ways.

Anyone who followed the daily news reports and commentaries regarding the election should be keenly aware of this fact and of the importance of political orientation as a determinant of one’s interpretation of events. In this context, the operation of the motivated confirmation bias makes it easy to predict how different commentators will spin the news. It’s often possible to anticipate, before a word is spoken, what specific commentators will have to say regarding particular events.

Here the situation differs from that of the courtroom or the debating hall in one very important way: Partisan commentators attempt to convince their audience that they’re presenting a balanced factual – unbiased – view. Presumably, most commentators truly believe they are unbiased and responding to events as any reasonable person would. But the fact that different commentators present such disparate views of the same reality makes it clear that they cannot all be correct.

Selective attention

Motivated confirmation bias expresses itself in selectivity: selectivity in the data one pays attention to and selectivity with respect to how one processes those data.

When one listens only to radio stations or watches only TV channels that express opinions consistent with one’s own, one is demonstrating the motivated confirmation bias. When one interacts only with people of like mind, one is exercising the motivated confirmation bias. When one asks for critiques of one’s opinion on some issue of interest but is careful to ask only people who are likely to give a positive assessment, one is doing so as well.

This presidential election was undoubtedly the most contentious of any in the memory of most voters, including most pollsters and pundits. Extravagant claims and counterclaims were made. Hurtful things were said. Emotions were much in evidence. Civility was hard to find. Sadly, “fallings out” within families and among friends have been reported.

The atmosphere was one in which the motivated confirmation bias would find fertile soil. There is little doubt that it did just that and little evidence that arguments among partisans changed many minds. That most pollsters and pundits predicted that Clinton would win the election suggests that they were seeing in the data what they had come to expect to see – a Clinton win.

None of this is to suggest that the confirmation bias is unique to people of a particular partisan orientation. It is pervasive. I believe it to be active independently of one’s age, gender, ethnicity, level of intelligence, education, political persuasion or general outlook on life. If you think you’re immune to it, it is very likely that you’ve neglected to consider the evidence that you’re not.

-Research Professor of Psychology, Tufts University


Photo credit: mwcnews.net

 

 

This article was originally published on The Conversation. Read the original article.

2016 election reflections

By Robert J. Garrison

It’s morning in America: the morning after Donald Trump won the 2016 presidential election, and the world didn’t end. Last night Donald Trump took apart the Blue Wall of the upper Midwest brick by brick, and he will most likely use those bricks to start building the wall along the southern border. There is a lot of finger-pointing over why Donald Trump was able to break through the Blue Wall of the upper Midwest.

Of course, instead of blaming themselves, the Democrats started blaming the American voter. Pundits, elites and celebrities took to social media screaming about how America could be so misogynistic, racist, stupid and full of hate as to elect someone like Donald Trump. Once again, the Democrats failed to look in the mirror and see who is really to blame for their party’s loss.

Well, I wasn’t the only one who saw the writing on the wall in this election. In fact, the staff here at TSS was one of the few news outlets that never underestimated Donald Trump and how he was able to use psychology and sociology to connect with voters in a very powerful way. TSS Editor-in-chief Matt Johnson saw this long before anybody else did. In “Why Trump will win unless the left puts on their game face,” Matt warned Democrats not to underestimate Donald Trump or his outside-the-box campaign tactics. He told them that unless they removed themselves from their bubble, they wouldn’t be able to see what Donald Trump was doing and how he was connecting with voters.

Well, last night the Democrats didn’t remove themselves from their bubbles. Why? Because their candidate, Hillary Clinton, and her advisers kept themselves deeply embedded in a bubble that bred a blinding arrogance. It’s that arrogance that led the DNC, and the media, to rig the primary so Hillary Clinton could steal the nomination from Senator Bernie Sanders.

Maybe, just maybe, they should spend less energy blaming the voters for their loss and more energy blaming the party that rigged the primary to hand it to Hillary instead of Bernie, who would most likely have beaten Trump.

Full disclosure, for those who think I’m saying this because I voted for Trump: you’re wrong. I didn’t vote for either Hillary Clinton or Donald Trump.

However, I did predict that Donald Trump would win in a nail-biter, because we here at TSS never underestimated Donald Trump and we thought outside our bubbles. We had our game faces on, but apparently neither the mainstream media nor the Democrats did. That is why you both lost and the American people won! Chew on that for the next four years.

 

Robert J. Garrison is a political and religious writer for The Systems Scientist. You can connect with him directly in the comments section, follow him on Twitter or on Facebook, or catch up on his articles in the Archives.

 


Photo credit: theduran.com

 

 

Copyright ©2016 – The Systems Scientist