Single training interventions found to effectively reduce bias

Last Friday, the Human-Computer Interaction Institute (HCII) hosted a seminar by Carey Morewedge. From 2007 until 2013, Morewedge was an assistant and associate professor at Carnegie Mellon University in the Department of Social and Decision Sciences as well as the Marketing group at the Tepper School of Business, and is now an associate professor of marketing at the Questrom School of Business at Boston University. He spoke about the research that he has conducted, along with a team of researchers, on “debiasing” decisions. Their paper, which will be published in Policy Insights from the Behavioral and Brain Sciences on October 1, discusses the effectiveness of a single training intervention on reducing bias.
Morewedge began by explaining the importance of studying bias, which can be defined as the deviation from an objective standard. Morewedge noted that bias affects nearly every aspect of daily life, including fields such as business, policy, medicine, and law. The large presence of bias demonstrates the need for debiasing strategies.
The most challenging part of debiasing is that people often make biased decisions unintentionally. Humans tend to assume that everything they do is based on knowledge and experience, and can fail to pay attention to statistical and logical evidence. Prior attempts to use training to reduce decision biases have been largely unsuccessful, pushing scientists and policy makers to focus instead on incentives and choice architecture, which guides people toward less biased decisions through a well-designed series of choices. These approaches, however, are often ineffective and expensive, and can even backfire when used improperly. Instead of pursuing these options, Morewedge and his team chose to look further into the possible benefits of debiasing training.
Morewedge’s experiments began with the development of bias testing methods. The team spent over a year tailoring their training video game, which was eventually used in testing. In the final study, each participant was first given a bias pre-test, which measured the amount of bias present in their decisions before the training. Participants were then trained using either an educational game or a video. After the training, they were given several post-tests to measure their bias levels again, both immediately after the training and again after an extended period of time.
The research team conducted several styles of intervention to test various types of bias.
In the first experiment, the intervention training targeted three kinds of biases: bias blind spot, confirmation bias, and fundamental attribution error. Bias blind spot refers to people’s tendency to perceive themselves as less biased than their peers. Confirmation bias occurs when people seek out information and evidence that supports their own beliefs while neglecting information that contradicts them. Fundamental attribution error occurs when people explain someone’s behavior by relying heavily on personality rather than situational factors. For example, you may attribute a student’s silence in class during the semester to a shy and introverted personality, when in reality, the student may simply find the course boring.
The second type of intervention training tested three other kinds of bias: anchoring, representativeness, and social projection. Anchoring is the tendency to be swayed by an initial piece of information when making estimates. For example, if participants are asked to estimate the average time people spend on Facebook, and someone states that he uses Facebook 12 hours a week, others might be more likely to answer with a number close to 12. This experiment also examined representativeness, which causes problems when people overestimate a single example’s ability to represent a whole. The final type of bias, social projection, is the tendency to assume that others’ emotions, thoughts, and values are similar to one’s own.
After analyzing the experiments, the researchers observed a large difference between the pre-test and the post-test. Both kinds of interventions showed debiasing immediately after training, with the bias of participants who played games decreasing 31.94 percent and the bias of participants who watched videos decreasing 18.60 percent. This debiasing effect also lasted over time. Two months later, the bias of those who had played games was still 23.57 percent lower than the pre-test, and the bias of those who had watched videos remained 19.20 percent lower, an even larger reduction than immediately after training. These results showed that training interventions appear to be an effective means of improving decision-making ability over both the short term and the long term.
Another interesting conclusion from these results was that training games appeared to outperform video training. This suggests that active forms of training, such as games, are more effective than passive forms, such as videos, likely because of the personalized feedback they provide. Still, both are effective.
Overall, the team’s results suggested that game and video training methods could be used effectively alongside other debiasing methods to achieve improved societal decision making. These results could impact a variety of fields, and in general, could improve the way we make decisions as a society.