Tag Archives: GroupThink

Ethnic Diversity Reduces “Groupthink,” Economic “Bubbles”

Despite progress in raising awareness of implicit bias and stereotypes, most people remain less likely to trust recommendations and evaluations from members of other ethnic groups than from their own.

Sheen Levine

However, this bias may reduce the “herd mentality” that characterized recent price “bubbles” in U.S. housing and global financial markets, reported Columbia’s Sheen S. Levine, Evan P. Apfelbaum of MIT, Goethe University’s Mark Bernard, Texas A&M’s Valerie L. Bartelt, Edward J. Zajac of Northwestern, and University of Warwick’s David Stark.
They concluded that, “Diversity facilitates friction that enhances deliberation and upends conformity.”

Economic “bubbles” occur when the majority of traders, probably influenced by a type of “groupthink,” set inaccurate prices, leading to a mismatch between market prices and true asset values.

Irving Janis

Groupthink can occur when three conditions interact, according to Yale’s Irving Janis:

  • Group Cohesiveness
    • “Deindividuation,” when group cohesiveness becomes more important than individual dissenting views,
  • Group Structure
  • Context
    • Stressful external threats,
    • Recent failures,
    • Decision-making difficulties,
    • Moral dilemmas.
Scott E. Page

A mathematical model, developed by University of Michigan’s Scott E. Page and Lu Hong of Loyola University, demonstrated that a wider range of viewpoints leads to less groupthink and more balanced decisions.

Diverse groups ran into fewer “dead ends” when developing solutions than homogeneous groups of smart individuals, who tended to think similarly.

David A. Thomas

Likewise, additional experimental evidence by Georgetown’s David A. Thomas and Robin J. Ely of Harvard confirmed that identity-diverse groups can outperform homogeneous groups.
Group errors depended on group member ability and member diversity, expressed in the formula:

Collective Accuracy = Average Accuracy + Diversity.
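This formula is the popular restatement of Hong and Page’s “diversity prediction theorem,” which holds exactly in terms of squared error: the collective (averaged) prediction’s squared error equals the average individual squared error minus the variance of the predictions (the “diversity”). A minimal numerical check, with illustrative numbers:

```python
import numpy as np

# Individual predictions of a true value (e.g., an asset's worth)
truth = 100.0
predictions = np.array([90.0, 104.0, 111.0, 97.0])

collective = predictions.mean()                       # the crowd's prediction
collective_error = (collective - truth) ** 2          # squared crowd error
avg_individual_error = ((predictions - truth) ** 2).mean()
diversity = ((predictions - collective) ** 2).mean()  # variance of predictions

# Diversity prediction theorem: crowd error = average error - diversity
assert np.isclose(collective_error, avg_individual_error - diversity)
```

Because diversity is subtracted from the average error, a group can beat its average member only to the extent that its members disagree, which is the mechanism the studies above invoke.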

To test the impact of group diversity on market “bubbles,” Levine’s group constructed experimental markets in Singapore and Texas, USA, in which participants traded stocks to earn money.

Evan Apfelbaum

More than 175 volunteers with backgrounds in business or finance were randomly assigned to groups of six ethnically homogeneous or ethnically diverse participants.

Traders knew the ethnic composition of their groups, but they couldn’t communicate with each other.
In addition, their “trades” of dividend-paying stock during 10 rounds were anonymous.

Homogeneous groups set inflated selling prices, yet traders in those groups still bought the stock, resulting in increasing stock prices.

Mark Bernard

In contrast, traders in diverse groups refused inflated selling prices, so the stock price fell to approximately the price in an “ideal” market with “rational” traders.

When traders and other decision-makers come from similar ethnic, social, and attitudinal backgrounds, they tend to place undue confidence in others’ opinions and decisions, and tend not to subject them to rigorous analysis and scrutiny.

Valerie Bartelt

As a result, they may be more likely to accept prices and deals that deviate from actual underlying values.
Levine’s group concluded that “homogeneity…imbues people with false confidence in the judgment of coethnics, discouraging them from scrutinizing behavior.”

  • How do you mitigate “groupthink” in organizational decision-making?

RELATED POSTS:

Twitter  @kathrynwelds
Google+
Facebook Notes

©Kathryn Welds

Debiasing Decisions: Combat Confirmation Bias, Overconfidence Bias

Philip Meißner

Cognitive and behavioral biases can contribute to “blind spots” in decision-making, leading to less effective outcomes.
To improve decision outcomes, University of Marburg’s Philip Meißner, Torsten Wulf of HHL Leipzig Graduate School of Management, and HEC’s Olivier Sibony proposed a systematic checklist to identify potential decision derailment due to bias, along with rapid remedies.

Torsten Wulf

They argued that two types of bias contribute to most decisions that lead to undesirable results:

  • Confirmation bias, the unconscious tendency to believe new information that is consistent with existing beliefs and recent experiences, and to discount contradictory data,
  • Overconfidence bias, the out-of-awareness tendency to overestimate one’s skills, insights, and judgment.
    This leads to increased risk-taking based on illusory certainty about the decision and about one’s ability to mitigate adverse outcomes.
Olivier Sibony

Previously, Lovallo and Sibony articulated four related decision biases:

  • Pattern-recognition biases, countered by changing the “angle of vision,”
  • Action-oriented biases, mitigated by recognizing uncertainty,
  • Interest biases, minimized by explicitly highlighting these interests,
  • Social biases, reduced by depersonalizing debate.

Debiasing techniques, such as checklists, can limit the negative effects of biases in decision-making by offering disciplined, comprehensive analysis of downside risks and by systematically considering multiple viewpoints.

Atul Gawande

However, effectively implementing checklists requires consistent discipline, noted Harvard’s Atul Gawande, who cited examples of partial adherence leading to costly oversights and failures.

One approach, suggested by Princeton’s Daniel Kahneman and Gary Klein of McKinsey, is a “premortem.”
Decision makers imagine that the decision has failed and analyze sources and reasons for adverse outcomes, to more thoroughly assess points of failure and possible mitigation strategies.
Formal scenario-planning is another way to expose assumptions underlying a plan, as well as a competitor’s priorities and potential strategy.

Massimo Garbuio

Using a variety of debiasing techniques significantly increased the Return on Investment (ROI) in a study by University of Sydney’s Massimo Garbuio and Dan Lovallo and Olivier Sibony of HEC.
As a result, Michael Birshan, Ishaan Nangia, and Felix Wenger of McKinsey argued that debiasing techniques should be embedded in formal organizational decision-making processes, particularly for high-impact, repetitive decisions.

Michael Birshan

Decision biases may be out of awareness, or unconscious, so it’s more effective to evaluate the process of developing a proposal, rather than focusing only on the content and merits of a proposal.

Decision-making safeguards can be built into standard analysis processes by including questions to expose:

  • Multiple data sources,
  • Diverse opinions and perspectives,
  • Downside risk,
  • Potential negative outcomes for company, industry, and broader ecosystem.
Daniel Kahneman

Proposals are considered ready for a decision only when multiple perspectives are available to mitigate confirmation bias and risk analysis is available to reduce overconfidence bias.
Responses to decision checklist questions can be quantified to indicate one of four action steps, according to Daniel Kahneman:

  • Decide, based on inclusion of robust safeguards against both confirmation bias and overconfidence bias,
  • Reach out, suggesting the need to gather additional perspectives and opinions, preventing narrow assumptions and reducing confirmation bias.
    The Vanishing-Options Test, proposed by Stanford’s Chip Heath and Dan Heath of Duke University, can generate new ideas by imagining that none of the current proposals is available.
  • Stress-test, by conducting a premortem or an analysis by an external devil’s advocate or provocateur, to reduce overconfidence risk.
  • Reconsider, when both more perspectives and risk analysis are required to reduce both confirmation bias and overconfidence bias.

This screening matrix helps reduce related decision-making biases:
  1. Self-interest Bias
    -To what extent is the proposal motivated by self-interest?
Ishaan Nangia

Recommendation
-Assess for over-optimism

  2. Affect Heuristic
    -How strong is the team’s emotional attachment to a specific proposal?
    -To what extent were risks and costs fully considered for both preferred and non-preferred options?

Recommendations
-Assess for strongly-preferred outcomes
-Reintroduce analysis of all options

  3. Groupthink
    -How many dissenting opinions were analyzed?
    -How adequately were all options explored?
    -Was dissent discouraged? 
Felix Wenger

Recommendations
-Encourage substantive disagreements as a valuable part of the decision process
-Solicit dissenting views from members of the recommending team, through private meetings

  4. Saliency Bias
     -To what extent are decisions made based on a potentially incomparable, but memorable success?
     -What about the proposed analogy is comparable to the current situation?
     -What are relevant examples from less successful companies? What happened in those cases?

Recommendations
-Carefully scrutinize analogies’ similarity to the current decision situation
-Solicit additional analogies using reference class forecasting:

-Select a reference class,
-Assess the distribution of outcomes,
-Intuitively estimate the project’s position in the distribution,
-Assess the estimate’s reliability,
-Correct the intuitive estimate.
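The five steps above can be sketched as a small function. The shrinkage rule and the `reliability` weight below are illustrative simplifications of the final correction step, not Kahneman’s exact procedure:

```python
import numpy as np

def reference_class_forecast(reference_outcomes, intuitive_estimate, reliability):
    """Correct an intuitive estimate using a reference class of past outcomes.

    reliability in [0, 1]: how much weight the intuitive estimate deserves.
    (Hypothetical, simplified encoding of the five steps above.)
    """
    outcomes = np.asarray(reference_outcomes, dtype=float)
    class_mean = outcomes.mean()                          # step 2: distribution of outcomes
    percentile = (outcomes < intuitive_estimate).mean()   # step 3: position in distribution
    # Steps 4-5: shrink the intuitive estimate toward the class mean,
    # weighted by how reliable the intuition is judged to be
    corrected = reliability * intuitive_estimate + (1 - reliability) * class_mean
    return corrected, percentile

# Hypothetical past cost overruns (%) in the reference class; the intuitive
# estimate of 5% sits below every past outcome, flagging likely optimism
corrected, pct = reference_class_forecast(
    [10, 25, 40, 60, 15], intuitive_estimate=5, reliability=0.4
)
```

An estimate far below the reference-class distribution, as here, is exactly the over-optimism the technique is designed to surface.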

  5. Confirmation Bias
    -What viable alternatives were included with the preferred recommendation?
    -At what stage in the decision analysis were alternatives discarded?
    -What efforts were undertaken to seek information to disconfirm the main assumptions and hypotheses?

Recommendation
-Request two additional alternatives to the main recommendation, including analysis of benefits and drawbacks
-Acknowledge unknowns, risks

  6. Availability Bias

    Max Bazerman

    -“If you had more time to gather data, what information would you seek?” asked Harvard’s Max Bazerman.
    -How can you access similar data now?

Recommendation
-Use checklists to ensure comprehensive analysis of data required for each decision type

  7. Anchoring Bias
    -What data sources were used to analyze the decision?
    -Which data are estimates? By whom? From which data were the estimates extrapolated?
    -To what extent could there be:
  • Unsubstantiated numbers?
  • Extrapolation from non-equivalent previous situations?
  • Attraction to specific anchors?

Recommendations
-Present data from other sources, benchmarks, or models
-Request new analysis

  8. Halo Effect
    -To what extent does the analysis team expect that a person, organization, or approach previously successful in one context will be equally effective in a different situation?

Phil Rosenzweig

Recommendations
-Question potentially inaccurate inferences
-Solicit additional comparable examples
-Question attributions of success and failure to leaders’ personalities instead of chance factors, advised IMD’s Phil Rosenzweig.

  9. Sunk-Cost Fallacy, Endowment Effect
     -To what extent are recommenders attached to past decisions?

Recommendation
Disregard past expenditures when considering future costs and revenues

  10. Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect
    -To what extent is the comparison case unwarrantedly optimistic?

Recommendation
-Adopt an outside view by using relevant simulations or war games

  11. Disaster Neglect
    -To what extent is the worst case scenario realistically and sufficiently negative?
    -How was the worst case generated?
    -To what extent does the worst case consider competitors’ likely responses?
    -What other scenarios could occur?

Recommendation
-Conduct a premortem, suggested by Gary Klein of Applied Research Associates: imagine the worst-case scenario occurred, then propose likely causes and mitigations.

  12. Loss Aversion
    -To what extent is the evaluation and decision team risk averse?

Recommendation
-Realign incentives to share responsibility for the risk or to reduce risk

  13. Planning Fallacy, focusing only on the current case while ignoring similar projects’ histories and statistical generalization from related cases.
    -To what extent does the analysis rely on “top-down, outside-view” comparisons to similar projects?
    -Did the evaluators use a “bottom-up, inside-view” approach to estimate the time required for each step?

Recommendation
-Statistically analyze a broad range of similar cases to avoid over-estimates from “top-down, outside-view” approaches and underestimates from “bottom-up, inside-view”
-Differentiate accurate forecasts from ambitious targets

  14. Loss Aversion
    -To what extent are evaluators more concerned with avoiding loss than achieving gains?
    -How concerned are evaluators with being held responsible for a failed project?
    -To what extent has the organization specified acceptable risk levels?

Recommendation
-Seek risk tolerance guidelines from organizational leaders.
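Kahneman’s four action steps described above (Decide, Reach out, Stress-test, Reconsider) can be sketched as a minimal screening function. The two boolean inputs are a hypothetical encoding of whether each class of safeguard is in place; an actual checklist would score many questions per dimension:

```python
def screening_action(confirmation_safeguards: bool, overconfidence_safeguards: bool) -> str:
    """Map checklist results to one of Kahneman's four action steps.

    confirmation_safeguards: multiple perspectives and alternatives were gathered.
    overconfidence_safeguards: downside risk was rigorously analyzed.
    (Illustrative encoding of the screening matrix, not a published algorithm.)
    """
    if confirmation_safeguards and overconfidence_safeguards:
        return "Decide"          # both biases guarded against
    if overconfidence_safeguards:
        return "Reach out"       # perspectives missing -> confirmation-bias risk
    if confirmation_safeguards:
        return "Stress-test"     # risk analysis missing -> overconfidence risk
    return "Reconsider"          # both safeguards missing
```

The point of the sketch is that the two bias families are screened independently, so a proposal can pass on one dimension and still be sent back on the other.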

Decision-making tools like checklists can significantly reduce unconscious biases, provided that they are consistently and systematically applied.

  • What strategies have you found most helpful in reducing biases in decision-making?

Follow-share-like http://www.kathrynwelds.com and @kathrynwelds


©Kathryn Welds

 

Detect and Mitigate Decision Biases

Sydney Finkelstein

In their book and Harvard Business Review article, Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You, Sydney Finkelstein of Dartmouth’s Tuck School of Business, Jo Whitehead, and Andrew Campbell posited that leaders make decisions largely through two unconscious neural processes:

• Pattern recognition
• Emotional tagging.

Although these processes are usually effective “heuristics” that enable quick and often prudent decisions, pattern recognition and emotional tagging can be distorted by biases including:

• Self-interest
• Emotional attachments to a position
• Misleading memories derived from inaccurate generalizations from dissimilar previous situations

The authors articulate common-sense recommendations to detect and mitigate “red flags” for decisional bias, echoing conclusions from research on “groupthink” conducted more than four decades earlier:

• Enlist the perspective of an independent person to identify which decision makers are likely to be affected by self-interest, emotional attachments, or misleading memories

• Develop safeguards and oversight mechanisms in organizational governance processes

• Alert decision-makers to possible sources of bias

• Build in opportunities to analyze, “spar” over, and challenge decisions

  • What approaches do you use to detect and neutralize your potential biases in decision-making?


©Kathryn Welds