Tag Archives: overconfidence

Unrealistic Optimism Drives Profitability

Overconfident decision-making in financial markets led to myriad negative consequences in the past decade, when companies underestimated business risks. 

In contrast to overconfidence, unrealistically optimistic judgments can result in increased profitability and market value, according to INSEAD’s Gilles Hilary and Benjamin Segal with Charles Hsu of Hong Kong University of Science & Technology.

Hilary, Hsu, and Segal demonstrated that over-optimism differs from overconfidence, and may result in larger growth projections.

The team drew on earlier work by the University of Illinois’s Dirk Hackbarth, who showed that both overconfident and over-optimistic managers choose higher debt levels and issue more new debt.
Hackbarth did not differentiate overconfident from over-optimistic investment behaviors, and reported that both tendencies reduce manager-shareholder conflict, which can increase firm value.

“Static over-optimism” refers to an unrealistically positive view of the impact of one’s own actions on future outcomes.
In contrast, “dynamic overconfidence” refers to overvaluation of one’s skills and the accuracy of private information.
In addition, “dynamic overconfidence” is associated with underestimates of random events after several positive outcomes, according to Hackbarth.

Together, static over-optimism and dynamic overconfidence lead to “dynamic over-optimism” after successes.

The pervasiveness of this “rose-tinted glasses” view leading to over-optimistic assessments was demonstrated by Neil Weinstein of the University of Arizona.
He investigated people’s beliefs about future positive and negative health events, discussed in a previous blog post.
Weinstein reported that people tend to believe negative events are less likely to happen to them than to others, whereas they expect they are more likely than other people to experience positive events.

Hilary’s team built on Hackbarth’s concepts by comparing North American companies’ quarterly earnings forecasts with analysts’ predictions and actual performance.
Then, they calculated the number of company-issued press releases containing optimistic language.
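A simple version of such a text measure can be sketched as a keyword count; the word list, threshold, function names, and sample releases below are illustrative assumptions, not the authors’ actual methodology:

```python
# Toy sketch of flagging "optimistic" press releases via keyword matching.
# The term list and two-hit threshold are illustrative assumptions only.
OPTIMISTIC_TERMS = {"growth", "strong", "record", "exceed", "confident", "momentum"}

def is_optimistic(text: str, min_hits: int = 2) -> bool:
    """Flag a press release as optimistic if it uses enough upbeat terms."""
    words = text.lower().split()
    return sum(w.strip(".,") in OPTIMISTIC_TERMS for w in words) >= min_hits

releases = [
    "Company reports record growth and strong momentum this quarter.",
    "Quarterly results were in line with guidance.",
]
optimistic_count = sum(is_optimistic(r) for r in releases)
print(optimistic_count)  # 1
```

A production measure would use validated sentiment dictionaries rather than an ad-hoc word list, but the counting logic is the same.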

Optimistic performance forecasts were correlated with better-than-expected performance, suggesting that successes led to additional effort and positive expectations.

Hilary noted the potentiating effect of past successful performance, though it may lead to “burnout” after about four quarters due to the challenge of continually exceeding performance expectations.

The team noted that this cycle of over-optimism and burnout might be mitigated by policies that moderate overestimates or underestimates of future performance, such as rewarding executives who provide accurate forecasts.

Similarly, Temple’s Sheryl Winston Smith noted that optimistic entrepreneurs chose higher levels of debt financing relative to equity, facilitating patent-based and product-based innovation among nearly 5,000 US firms tracked by the Kauffman Firm Survey (KFS).

In contrast to these financial studies, Yonsei University’s Young-Hoon Kim, Nanyang Technological University’s Chi-yue Chiu, and Zhimin Zou of the University of Illinois reported mixed results for self-enhancing (overconfident) and self-effacing (pessimistic) biases on performance.

Kim’s team posited that either over-optimistic or pessimistic biases lead to “self-handicapping” behavior, in which people perform under disadvantageous conditions that provide an explanation for any poor performance outcomes.

Although over-optimism may drive innovation and financial results, longer-term consequences may include performance “burnout,” reduced motivation, and lower performance.

-*How do you manage the impact of optimism bias and pessimism bias on judgments and performance?

Please follow-share-like www.kathrynwelds.com and @kathrynwelds

RELATED POSTS:

Twitter:  @kathrynwelds
Blog: Kathryn Welds | Curated Research and Commentary
Google+ google.com/+KathrynWelds
LinkedIn Open Group Psychology in Human Resources (Organisational Psychology)
Facebook Notes:

©Kathryn Welds

Debiasing Decisions: Combat Confirmation Bias, Overconfidence Bias

Cognitive and behavioral biases can contribute to “blind spots” in decision-making, leading to less effective outcomes.
To improve decision outcomes, the University of Marburg’s Philip Meißner, Torsten Wulf of HHL Leipzig Graduate School of Management, and HEC’s Olivier Sibony proposed a systematic checklist to identify potential decision derailment based on bias, along with rapid remedies.

They argued that two types of bias contribute to most decisions that lead to undesirable results:

  • Confirmation bias, the unconscious tendency to believe new information that is consistent with existing beliefs and recent experiences, and to discount contradictory data,
  • Overconfidence bias, the unconscious tendency to overestimate one’s skills, insights, and judgment.
    This leads to increased risk-taking, based on illusory sureness of the decision and of one’s ability to mitigate adverse outcomes.

Previously, Lovallo and Sibony articulated four related decision biases:

  • Pattern-recognition biases, countered by changing the “angle of vision,”
  • Action-oriented biases, mitigated by recognizing uncertainty,
  • Interest biases, minimized by explicitly highlighting these interests,
  • Social biases, reduced by depersonalizing debate.

Debiasing techniques such as checklists can limit the negative effects of biases in decision-making by offering a disciplined, comprehensive analysis of downside risks and by systematically considering multiple viewpoints.

However, effectively implementing checklists requires consistent discipline, noted Harvard’s Atul Gawande, who cited examples of partial adherence leading to costly oversights and failures.

One approach, suggested by Princeton’s Daniel Kahneman and Gary Klein, is a “premortem.”
Decision makers imagine that the decision has failed and analyze sources and reasons for adverse outcomes, to more thoroughly assess points of failure and possible mitigation strategies.
Formal scenario-planning is another way to expose assumptions underlying a plan, as well as a competitor’s priorities and potential strategy.

Using a variety of debiasing techniques significantly increased the Return on Investment (ROI) in a study by University of Sydney’s Massimo Garbuio and Dan Lovallo and Olivier Sibony of HEC.
As a result, Michael Birshan, Ishaan Nangia, and Felix Wenger of McKinsey argued that debiasing techniques should be embedded in formal organizational decision-making processes, particularly for high-impact, repetitive decisions.

Decision biases may be out of awareness, or unconscious, so it’s more effective to evaluate the process of developing a proposal, rather than focusing only on the content and merits of a proposal.

Decision-making safeguards can be built into standard analysis processes by including questions to expose:

  • Multiple data sources,
  • Diverse opinions and perspectives,
  • Downside risk,
  • Potential negative outcomes for company, industry, and broader ecosystem.

Proposals are considered ready for a decision only when multiple perspectives are available to mitigate confirmation bias and risk analysis is available to reduce overconfidence bias.
Responses to decision checklist questions can be quantified to indicate one of four action steps, according to Daniel Kahneman:

  • Decide, based on inclusion of robust safeguards against both confirmation bias and overconfidence bias,
  • Reach out, suggesting the need to gather additional perspectives and opinions to counter narrow assumptions and reduce confirmation bias.
    The Vanishing-Options Test, proposed by Stanford’s Chip Heath and Dan Heath of Duke University, can generate new ideas by imagining that none of the current proposals are available.
  • Stress-test, by conducting a pre-mortem or an analysis by an external devil’s advocate or provocateur, to reduce overconfidence risk.
  • Reconsider when both more perspectives and risk analysis are required to reduce both overconfidence bias and confirmation bias.
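Read as a rule, these four action steps form a two-by-two matrix over whether a proposal includes safeguards against confirmation bias (diverse perspectives) and against overconfidence bias (risk analysis). A minimal sketch, with the function name and boolean framing my own rather than the authors’:

```python
def next_step(has_diverse_perspectives: bool, has_risk_analysis: bool) -> str:
    """Map the two safeguard checks onto one of the four action steps."""
    if has_diverse_perspectives and has_risk_analysis:
        return "Decide"        # both biases addressed
    if has_risk_analysis:
        return "Reach out"     # perspectives missing -> confirmation-bias risk
    if has_diverse_perspectives:
        return "Stress-test"   # risk analysis missing -> overconfidence risk
    return "Reconsider"        # neither safeguard present

print(next_step(True, False))  # Stress-test
```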
    This screening matrix helps reduce related decision-making biases:
  1. Self-interest Bias
    -To what extent is the proposal motivated by self-interest?

Recommendation
-Assess for over-optimism

  2. Affect Heuristic
    -How strong is the team’s emotional attachment to a specific proposal?
    -To what extent were risks and costs fully considered for both preferred and non-preferred options?

Recommendations
-Assess for strongly-preferred outcomes
-Reintroduce analysis of all options

  3. Groupthink
    -How many dissenting opinions were analyzed?
    -How adequately were all options explored?
    -Was dissent discouraged? 

Recommendations
-Encourage substantive disagreements as a valuable part of the decision process
-Solicit dissenting views from members of the recommending team, through private meetings

  4. Saliency Bias
    -To what extent are decisions made based on a potentially incomparable but memorable success?
    -What about the proposed analogy is comparable to the current situation?
    -What are relevant examples from less successful companies? What happened in those cases?

Recommendations
-Carefully scrutinize analogies’ similarity to the current decision situation
-Solicit additional analogies using reference class forecasting:

-Select a reference class,
-Assess the distribution of outcomes,
-Intuitively estimate the project’s position in the distribution,
-Assess the estimate’s reliability,
-Correct the intuitive estimate.
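These steps can be sketched numerically as a blend of an intuitive estimate with the reference-class average; the linear weighting and the sample overrun figures below are illustrative assumptions, not a prescribed formula:

```python
import statistics

def reference_class_forecast(reference_outcomes, intuitive_estimate, reliability):
    """Correct an intuitive estimate toward the reference-class mean.

    reliability in [0, 1]: 1.0 trusts the intuition fully, 0.0 uses the
    class mean alone. The linear blend is an illustrative choice.
    """
    class_mean = statistics.mean(reference_outcomes)
    return reliability * intuitive_estimate + (1 - reliability) * class_mean

# Hypothetical cost overruns (%) from comparable past projects.
past_overruns = [10, 25, 40, 55, 70]
estimate = reference_class_forecast(past_overruns, intuitive_estimate=5, reliability=0.3)
print(round(estimate, 1))  # 29.5
```

The point of the correction is visible in the numbers: an optimistic 5% intuitive estimate is pulled toward the 40% reference-class average.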

  5. Confirmation Bias
    -What viable alternatives were included with the preferred recommendation?
    -At what stage in the decision analysis were alternatives discarded?
    -What efforts were undertaken to seek information to disconfirm the main assumptions and hypotheses?

Recommendation
-Request two additional alternatives to the main recommendation, including analysis of benefits and drawbacks
-Acknowledge unknowns, risks

  6. Availability Bias

    -“If you had more time to gather data, what information would you seek?” asked Harvard’s Max Bazerman.
    -How can you access similar data now?

Recommendation
-Use checklists to ensure comprehensive analysis of data required for each decision type

  7. Anchoring Bias
    -What data sources were used to analyze the decision?
    -Which data are estimates? By whom? From which data were the estimates extrapolated?
    -To what extent could there be:
  • Unsubstantiated numbers?
  • Extrapolation from non-equivalent previous situations?
  • Attraction to specific anchors?

Recommendations
-Present data from other sources, benchmarks, or models
-Request new analysis

  8. Halo Effect
    -To what extent does the analysis team expect that a person, organization, or approach previously successful in one context will be equally effective in a different situation?

Recommendations
-Question potentially inaccurate inferences
-Solicit additional comparable examples
-Question attributions of success and failure to leaders’ personalities instead of chance factors, advised IMD’s Phil Rosenzweig.

  9. Sunk-Cost Fallacy, Endowment Effect
    -To what extent are recommenders attached to past decisions?

Recommendation
-Disregard past expenditures when considering future costs and revenues

  10. Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect
    -To what extent is the comparison case unwarrantedly optimistic?

Recommendation
-Adopt an outside view by using relevant simulations or war games

  11. Disaster Neglect
    -To what extent is the worst case scenario realistically and sufficiently negative?
    -How was the worst case generated?
    -To what extent does the worst case consider competitors’ likely responses?
    -What other scenarios could occur?

Recommendation
-Conduct a premortem, suggested by Gary Klein of Applied Research Associates: Imagine the worst-case scenario has occurred, then propose likely causes and mitigations

  12. Loss Aversion
    -To what extent is the evaluation and decision team risk averse?

Recommendation
-Realign incentives to share responsibility for the risk or to reduce risk

  13. Planning Fallacy, which focuses only on the current case while ignoring similar projects’ history and statistical generalization from related cases.
    -To what extent does the analysis rely on “top-down, outside-view” comparisons to similar projects?
    -Did the evaluators use a “bottom-up, inside-view” to estimate time required for each step?

Recommendation
-Statistically analyze a broad range of similar cases to avoid over-estimates from “top-down, outside-view” approaches and underestimates from “bottom-up, inside-view” approaches
-Differentiate accurate forecasts from ambitious targets

  14. Loss Aversion
    -To what extent are evaluators more concerned with avoiding loss than achieving gains?
    -How concerned are evaluators with being held responsible for a failed project?
    -To what extent has the organization specified acceptable risk levels?

Recommendation
-Seek risk tolerance guidelines from organizational leaders.

Decision-making tools like checklists can significantly reduce unconscious biases, provided that they are consistently and systematically applied.

-*What strategies have you found most helpful in reducing biases in decision-making?


Gender Bias in STEM Hiring Even When it Reduces Financial Returns  

Women are under-represented in Science, Technology, Engineering, and Mathematics (STEM) academic programs and professional roles, and some question whether this is a result of personal preference, implicit bias, institutional barriers, or other factors.

To investigate, Columbia University’s Ernesto Reuben, Paola Sapienza of Northwestern University, and University of Chicago’s Luigi Zingales developed an experimental job market.
Both male and female candidates demonstrated equal skill in performing an arithmetic task, yet both female and male “hiring managers” were twice as likely to hire comparable male candidates, even when the hiring managers earned less by hiring less-qualified males.
Even when participants had a financial incentive to choose the candidate with the greatest task-relevant skills, they chose less-qualified male candidates.

Reuben and team also found that when candidates were asked to report their performance on the task-related achievement test, men exaggerated their performance with “honest overconfidence.”
In contrast, women generally underreported their accomplishments, found University of Wisconsin’s Sylvia Beyer.

This gender-based bias in hiring decisions was reduced, but not eliminated, when candidates’ previous performance was provided by a third party.

Some candidates were directed to report expected future performance based on initial math task performance, then the “employer” made the hiring decision.
Other candidates provided no estimate, but Reuben’s team reported candidates’ past performance to the “hiring managers.”

In other studies, “employers” had no information on each “candidate’s” previous performance, but met each applicant in person before making a hiring decision.
After the hiring managers’ choice, candidates reported expected future performance, or Reuben’s team provided candidates’ past performance to the “hiring manager.”

Volunteers then completed the Implicit Association Test (IAT), developed by University of Washington’s Anthony Greenwald, Debbie McGhee, and Jordan Schwartz, to elicit unconscious stereotypes of gender, competencies, and occupations.

When the candidates reported their expected performance and the “hiring manager” chose a candidate with a lower score than other contenders, 90% of the selected but underperforming candidates were male.
As a result, “hiring managers” who selected less qualified male candidates sacrificed 5-7% of their own compensation for biased selections.

Reuben and colleagues, with Pedro Rey-Biel of Universitat Autònoma de Barcelona previously demonstrated that this preference for underperforming candidates was explained by the persuasive impact of men’s significantly exaggerated statements (usually by at least 30%) about past and future performance and by scores on the Implicit Association Test.

Hyperbole is apparently effective for male candidates in job interviews when the “hiring manager” scores high on the IAT.

However, this embellishment strategy is ineffective for women, as Reuben and team demonstrated:  In another study, women were still selected 33% less than expected even when they showcased their accomplishments.
Women’s overt self-promotion may provoke “backlash” against those who behave in counter-stereotypic ways.

This research suggests the prevalence of implicit biases against hiring women to perform science, technology, engineering, and mathematics (STEM) functions, and male candidates’ tendency to embellish past performance and boast about future potential accomplishments.

As a result, women are selected less frequently for roles in STEM careers, continuing their under-representation in these fields.

Even if women do not exaggerate past accomplishments and future potential, this research implies that they should ensure they communicate and reinforce their full range of skills.

“Real life” hiring managers can overcome implicit hiring biases through awareness and “proper information processing” by focusing on validated performance data and comparing candidates of the same gender with each other.

-*What strategies have you seen mitigate the influence of implicit bias in hiring decisions?


Perseverance Increases Skill Increases Luck: “The Harder I Work, The Luckier I Get”

Samuel Goldwyn recast Thomas Jefferson’s earlier observation: “I am a great believer in luck, and I find the harder I work, the more I have of it.”

Michael Mauboussin of Columbia University, previously Chief Investment Strategist at Legg Mason Capital Management, investigated this relationship between effort and luck in his book, The Success Equation.

Mauboussin, an innovator in behavioral finance, adopted Harvard biologist Stephen Jay Gould’s “paradox of skill” to analyze the interaction of effort, skills, and luck, and best strategies to optimize outcomes in investing, sports, and career performance.

He posits that as skill improves in activities where outcomes are affected by skill and luck, the standard deviation of skills narrows.
In this case, luck becomes more important in determining outcomes:

“Whenever you see an outlier in sports, it is always a combination of really good skill and really good luck… (Often) they are about one and a half or two standard deviations away from the average…not all skilled players have (winning) streaks, but all (winning) streaks are held by skillful players.”

For example, as investors become more sophisticated and gain access to advanced computational tools, as athletes benefit from targeted training and development regimens, and as students are groomed for admission to top universities, differences among these skilled performers decrease.
Chance influences can then determine outcomes.
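This “paradox of skill” can be illustrated with a small simulation, assuming outcomes are simply skill plus independent random luck (all parameters here are synthetic): as the spread of skill narrows, luck accounts for a larger share of the variance in outcomes.

```python
import random
import statistics

def luck_share(skill_sd: float, luck_sd: float = 1.0, n: int = 20_000, seed: int = 0) -> float:
    """Fraction of outcome variance attributable to luck when outcome = skill + luck."""
    rng = random.Random(seed)
    lucks = [rng.gauss(0, luck_sd) for _ in range(n)]
    skills = [rng.gauss(0, skill_sd) for _ in range(n)]
    outcomes = [s + l for s, l in zip(skills, lucks)]
    return statistics.variance(lucks) / statistics.variance(outcomes)

# Wide skill gap: luck explains little. Narrow skill gap: luck dominates.
print(luck_share(skill_sd=2.0) < luck_share(skill_sd=0.5))  # True
```

With independent components the expected shares are 1/5 and 4/5 respectively, so the simulation simply makes Gould’s point concrete.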

Mauboussin says that luck has several elements:

  • It affects an individual or organization,
  • It may be evaluated as “good” or “bad,”
  • Another outcome could have occurred,
  • The outcome is uncontrollable, but comprises several elements.

To increase luck, he advises assessing each contender’s strength in the situation and finding “…something completely different to get you on the right side of the tail of the skill distribution,” such as employing an unusual or unexpected tactic.

The stronger player has positive asymmetric resources, so the effective strategy is to simplify the game.
In contrast, the underdog should seek to complicate the game, such as through disruptive innovation, a flank strategy, or a guerrilla tactic.

Because most people have a bias toward optimism and overestimate personal capabilities, it may be difficult to assess oneself as an “underdog” in a performance situation.

Nobel Prize winner Daniel Kahneman and Amos Tversky explained that individuals who adopt an inside view gather substantial information, combine it with their own inputs, then project into the future without considering “distributional information” about a wide variety of previous instances.
This approach risks developing an idiosyncratic, overconfident perspective by underestimating costs, completion times, and risks of planned actions, while overestimating benefits.

In contrast, people who adopt the outside view consider the problem as an instance of a larger reference class and consider the entire distribution of outcomes when this type of situation occurred previously.
This approach can reduce overconfidence.
However, this approach could discourage entrepreneurs, who will realize that only a small percentage actually succeeds.

In addition, besides the bias toward overconfidence, people tend to “under-sample” instances of failure when a previously successful approach is applied in a new situation and doesn’t succeed.

Sabermetricians like Nate Silver posit that worthwhile statistics provide:

  • Persistence, or correlation from one period to the next, a strong indicator of high skill,
  • Predictive value, or high correlation with the target objective.

The Oakland A’s baseball team uncovered these principles in determining that a superior measure of athletic performance in this sport is on-base percentage rather than the traditional measure, batting average.

In this case, on-base percentage has a higher correlation from one season to the next and a higher correlation with run production than batting average, fulfilling both criteria.
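Both criteria reduce to correlation checks. A sketch with invented team statistics (the figures below are not actual MLB data):

```python
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented team statistics for two seasons (not real MLB data).
obp_year1 = [0.340, 0.325, 0.310, 0.355, 0.330]
obp_year2 = [0.345, 0.320, 0.315, 0.350, 0.335]
runs_year2 = [780, 700, 660, 810, 720]

persistence = pearson(obp_year1, obp_year2)  # stable across seasons -> skill
predictive = pearson(obp_year2, runs_year2)  # tracks the target: run production
print(persistence > 0.9 and predictive > 0.9)  # True
```

A statistic that scores high on both correlations, as on-base percentage does with real data, satisfies Silver’s two criteria.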

Daniel Kahneman also suggested that skill, expertise, and intuition render more uniform results in a predictable environment.

However, many organizational environments are unstable and non-linear, rendering experts less accurate because they cannot employ an effective predictive model.

Collective judgments through “the wisdom of crowds” may mitigate the challenges of unstable contexts because they provide more data points.

Mauboussin advocated considering the continuum of stability vs. instability in which an issue is situated to determine strategy, and warned against applying simple heuristics that are vulnerable to bias and to social or situational influences.

He suggested the guideline “think twice” to prepare for, detect, and correct common mental traps, including:

  • The Inside-only View
  • Tunnel Vision
  • Oversimplification
  • Situational Power
  • Overvaluing Expert Knowledge

-*How do you optimize your performance when chance elements can affect your outcomes?
