Tag Archives: stack ranking

Comparative Rankings May Reduce Gender Bias in Career Advancement

An “evaluation nudge” is a decision framing aid that may reduce biased judgments in hiring, promotion, and job assignments, according to Harvard’s Iris Bohnet, Alexandra van Geen, and Max H. Bazerman.

They recommended that organizations evaluate multiple employees simultaneously rather than each person independently.
This approach differs from “Stack Ranking” (“Rank and Yank”), advocated by GE’s Jack Welch and critiqued by many.

Multiple simultaneous evaluations are frequently used for hiring decisions, but less frequently when considering employee candidates for developmental job assignments and promotions.

Bazerman and Sally B. White, then of Northwestern, with George F. Loewenstein of Carnegie Mellon, demonstrated preference reversals between joint and separate evaluation.

Lack of comparison information in separate evaluation typically leads people to rely on internal referents as decision norms. These internal criteria may reflect biased preferences, according to Princeton’s Nobel laureate Daniel Kahneman and Dale T. Miller of Stanford.

Lack of comparative referents also can lead evaluators to rely on attributes that are easy to evaluate, found University of Chicago’s Christopher K. Hsee.
Both of these mental shortcuts can systematically exclude members of under-represented groups.

Another problem is the “want/should” battle of emotions and preferences, outlined by Bazerman and Ann E. Tenbrunsel of Notre Dame, with Duke’s Kimberly A. Wade-Benzoni in their provocatively titled article, “Negotiating with Yourself and Losing.”

They argue that the “want self” tends to dominate when deciding on a single option because there’s less information and less need to justify the decision.
In contrast, the more analytic “should self” is activated by the need to explain decision rationales.

Bohnet’s team asked more than 175 volunteer “employees” to perform a math task or a verbal task. Then 554 “employer” evaluators (44% male, 56% female) received information on the “employees’” past performance, gender, and the average past performance for all “employees.”

“Employers” were paid based on their “employees’” performance in future tasks, similar to managerial incentives in many organizations.
Consequently, “employers” were rewarded for selecting people they considered effective performers.
Based on information about “employee” performance, evaluators decided to:

  • “Hire” the “employees,” or
  • Recommend the “employees” to perform the task in future, or
  • Return “employees” to the pool for random assignment to an employer.

The Harvard team found that “employers” who evaluated “employees” in relation to each other’s performance were more likely to select employees based on past performance, rather than relying on irrelevant criteria like gender.

In contrast, more than 50% of “employers” who evaluated each candidate separately, without reference to other “employees,” selected under-performing people for advancement.
Only 8% of “employers” selected under-performers when comparing “employees” to each other, and multiple raters evaluating multiple candidates also tended to select the higher-performing “employees.”
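
The mechanism can be sketched in a few lines of Python. This is a hypothetical decision rule for illustration only, not the study’s actual model; the candidate scores and the biased “internal referent” prior are invented:

def separate_evaluation(candidate, internal_prior):
    # With no comparison candidate in view, the evaluator blends observed
    # performance with an internal referent (here, a biased gender prior).
    return candidate["score"] + internal_prior.get(candidate["gender"], 0.0)

def joint_evaluation(candidates):
    # Side-by-side comparison supplies the referent, so past performance
    # dominates: pick the highest past score.
    return max(candidates, key=lambda c: c["score"])

candidates = [
    {"name": "A", "gender": "F", "score": 8.0},  # stronger past performer
    {"name": "B", "gender": "M", "score": 6.5},
]
internal_prior = {"M": 2.0, "F": 0.0}  # hypothetical stereotype-based referent

separate_pick = max(candidates, key=lambda c: separate_evaluation(c, internal_prior))
joint_pick = joint_evaluation(candidates)

print("Separate evaluation selects:", separate_pick["name"])  # B: the prior flips the choice
print("Joint evaluation selects:", joint_pick["name"])        # A: past performance wins

Joint evaluation removes the need for an internal referent, which is why the comparison itself acts as the “nudge.”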

Bohnet’s team suggested that people have two distinct and situation-specific modes of thinking, “System 1” and “System 2,” described by University of Toronto’s Keith E. Stanovich and Richard F. West of James Madison University.

These cognitive patterns can lead evaluators to adopt incorrect decision norms, producing biased outcomes.

Decision tools like the “evaluation nudge” can reduce bias in hiring and promotion decisions, creating more equitable workplace opportunities across demographic groups.

-*What other evaluation procedures can reduce unconscious bias in performance appraisal and career advancement selection processes?


Do Unintended Consequences of Forced-Ranking of Employee Performance Outweigh Its Short-Term Benefits?

Forced ranking (“stack ranking” or “rank and yank”) of employee performance was one contributor to Microsoft’s loss of momentum, according to Kurt Eichenwald’s article on how Microsoft lost its mojo.

His extensive interviews with current and past Microsoft employees point to forced rankings leading to:

  •     Competitive sabotage and undermining of peers
  •     Focus on short-term results that coincide with twice-yearly rankings
  •     Undermined intrinsic motivation in the face of “impossible”-seeming odds
  •     Reduced innovation
  •     Lack of collaboration
  •     Focus on “visibility” to managers’ peers instead of improving performance
  •     Misguided decisions
  •     Mistrust of management and colleagues
  •     Unwanted attrition
  •     Stress for all.

Forced ranking, used by a substantial number of Fortune 500 companies, is the eighth most frequently used appraisal technique in the U.S.

It requires management teams to evaluate employees’ performance against other employees, rather than against pre-determined standards.
The goal is to create a meritocracy in which superior performance is recognized and under-performance is “managed.”

Steve Scullen evaluated forced distribution rating systems (FDRS) in a simulation study of 100 companies with 100 employees each over a three-year period.
He reported in Personnel Psychology that forced ranking, with hypothetical firing of the bottom 5% or 10%, resulted in a 16% productivity improvement.
Productivity gains increased when more low performers were removed.
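
Scullen’s simulation logic can be sketched roughly as follows. This is an illustrative toy, not the published model: the normal-distribution assumption, the replacement rule, and the parameter values are guesses made for the example.

import random
import statistics

def simulate_fdrs(n_employees=100, cut_fraction=0.10, years=3, seed=1):
    # Start with a workforce drawn from the general applicant pool
    # (standardized performance scores with mean 0).
    rng = random.Random(seed)
    workforce = [rng.gauss(0, 1) for _ in range(n_employees)]
    for _ in range(years):
        # Rank employees, "yank" the bottom slice, and replace them
        # with new hires drawn from the same applicant pool.
        workforce.sort()
        n_cut = int(n_employees * cut_fraction)
        workforce = workforce[n_cut:] + [rng.gauss(0, 1) for _ in range(n_cut)]
    # Mean workforce performance, relative to the applicant-pool mean of 0.
    return statistics.mean(workforce)

print("Mean performance after 10% cuts:", round(simulate_fdrs(cut_fraction=0.10), 2))
print("Mean performance after  5% cuts:", round(simulate_fdrs(cut_fraction=0.05), 2))

Because replacements come from the same applicant pool, each successive cut yields a smaller gain, consistent with Scullen’s observation below that most of the benefit arrives in the first few years.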

He acknowledged the negative consequences of forced rankings for employee morale, teamwork, collaboration, recruitment, shareholder perception, and brand image.
Nevertheless, Scullen found that the potential problems were counterbalanced by benefits.

Scullen determined that most of the benefit from forced ranking comes in the first few years of implementation: “…each time a company improves its workforce by replacing an employee with a new hire, it becomes more difficult to do so again… the better the workforce is, the more difficult it must be to hire applicants who are superior to the current employees who would be fired.”

In Forced Ranking: Making Performance Management Work, Dick Grote argues that most companies achieve the benefits of forced ranking within “a few years” and advises replacing forced ranking with other talent management initiatives once the organization has implemented a refined selection process to ensure hiring top talent.

Peter Cappelli of The Wharton School and author of Talent on Demand: Managing Talent in an Age of Uncertainty, quantified the cost of keeping low performers: this group contributes about one-fifth as much to organizations as high performers, according to his research.

In contrast, Alys Woodward of IDC challenged these arguments in her article on misunderstanding and misuse of statistics in stack ranking.

She concluded that “stack ranking assumes the statistics dictate reality, rather than reflect reality.”

Likewise, W. Edwards Deming opposed ranking because he thought that it destroys pride in workmanship, and opined that “the only way to improve a product or service is for management to improve the system that creates that product or service. Rewarding or punishing individuals trapped in the system is pointless and counterproductive.”

Robert Mathis and John Jackson pointed out potential legal challenges to stack ranking.
They noted that the practice may be difficult to defend in court because it may not satisfy the following criteria for legally defensible appraisal systems:

  •     Criteria based on job analysis
  •     Absence of disparate impact and evidence of validity
  •     Formal evaluation criteria that limit managerial discretion
  •     Rating linked to job duties and responsibilities
  •     Documentation of appraisal activities
  •     A review process that prevents a single manager from controlling an employee’s career
  •     Counseling to help poor performers improve

Though most employees do not seek out employers who use stack ranking, organizations may realize a short-term benefit in streamlining the workforce.
However, the practice may carry unintended “soft” consequences, invite legal challenges, and offer only time-limited value.

-*What positive and negative impacts have you observed related to forced-ranking appraisal systems?


Reinventing Performance Management to Reduce Bias: Strengths, Future Focus, Frequent Feedback

Most performance management systems set goals at the beginning of the year and determine variable compensation by rating accomplishment of those objectives.

These evaluations typically are considered in lengthy “consensus meetings” in which managers discuss the performance of hundreds of people in relation to their peers – sometimes called “stack ranking,” or more cynically “rank-and-yank.”

These year-end ratings don’t provide “in-the-moment” feedback about actual performance as it happens, so they may be less useful in improving performance.

Assessing skills produces inconsistent data because ratings depend on raters’ own skill in each competency and the value they attach to each performance objective, opening the way to unconscious bias.

This risk to performance-rating validity was demonstrated by Drake University’s Steven Scullen, Michael Mount of the University of Iowa, and Korn Ferry’s Maynard Goff, who analyzed 360-degree performance evaluations by two bosses, two peers, and two subordinates for nearly 4,500 managers.

They found that three times as much rating variance was explained by individual raters’ idiosyncratic evaluation tendencies as by ratees’ actual performance.
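
A toy simulation makes the imbalance concrete. The roughly 3:1 variance split is taken from the finding above; the distributions and sample sizes below are invented for illustration:

import random
import statistics

rng = random.Random(7)
n_ratees, n_raters = 200, 6

# True performance carries variance 1.0; idiosyncratic rater effects carry
# roughly three times that (sd 1.7, variance ~2.9), echoing the finding.
true_perf = [rng.gauss(0, 1.0) for _ in range(n_ratees)]
ratings = [[true_perf[j] + rng.gauss(0, 1.7) for _ in range(n_raters)]
           for j in range(n_ratees)]

# A single rating mixes ratee and rater variance; averaging each ratee's
# six ratings cancels much of the rater noise.
all_ratings = [r for row in ratings for r in row]
ratee_means = [statistics.mean(row) for row in ratings]
print("Variance of single ratings:", round(statistics.variance(all_ratings), 2))  # ~3.9
print("Variance of ratee means:   ", round(statistics.variance(ratee_means), 2))  # ~1.5

In this setup a single rating is dominated by the rater, which is the sense in which ratings reveal more about the rater than the ratee.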

Sources of bias include halo error, leniency error, and organizational perspective based on the rater’s current role, as suggested by SUNY’s Manuel London and James Smither of La Salle University, and validated by Scullen’s team.

These findings led the researchers to conclude “Most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus ratings reveal more about the rater than they do about the ratee,” replicating similar findings by University of Georgia’s Charles Lance, Julie LaPointe and Amy Stewart.

To mitigate these biases in Deloitte’s performance management system, Ashley Goodall of Deloitte Services LP engaged Marcus Buckingham, formerly of The Gallup Organization, to analyze existing practices and develop an empirically-validated approach.

Goodall and Buckingham calculated the total annual hours required to conduct performance ratings using the existing process and found that managers invested 2 million hours a year.
This finding confirmed that one goal in revising the process was to increase speed and efficiency.

In addition, Goodall and Buckingham sought to increase the meaningfulness of performance management by focusing on discussions about future performance and careers rather than on the appraisal process.

They concluded that a performance management system should be characterized by:

  • Reliable performance data, controlling for idiosyncratic rater effects,
  • Speed to administer,
  • Ability to recognize performance,
  • Personalization: “One-size-fits-one”,
  • Considering actions to take in response to data,
  • Continuous learning and improvement.

Deloitte conducted a separate controlled study of 60 high-performing teams, including almost 1,300 employees representing all parts of the organization, compared with an equal number of employees from an equivalent sample, to determine which questionnaire items differentiate high- and lower-performing teams.

They found that performance ratings and related compensation allocations could be based more accurately on managers’ statements about their intended future actions toward each employee than on ratings of team members’ skills.

Several items accounted for the vast majority of response variation between top-performing groups and others, particularly “At work, I have the opportunity to do what I do best every day.”

Business units whose employees said they “strongly agree” with this item were substantially more likely to be more productive, earn high customer satisfaction scores, and experience low employee turnover.

Other powerful predictors of performance were:

  • I have the chance to use my strengths every day,
  • My coworkers are committed to doing quality work,
  • The mission of our company inspires me.

Deloitte’s revised performance management system asks team leaders to respond to four items, either on a five-point scale from “strongly agree” to “strongly disagree” or as yes-no, at the end of every project or once a quarter (a rough data-structure sketch follows the list):

  • Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus [measures overall performance and unique value],
  • Given what I know of this person’s performance, I would always want him or her on my team [measures ability to work well with others],
  • This person is at risk for low performance [identifies problems that might harm the customer or the team],
  • This person is ready for promotion today [measures potential].
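
As a rough sketch, the snapshot can be represented as a simple record. The field names and types below are mine, not Deloitte’s; the item wording is quoted above:

from dataclasses import dataclass

@dataclass
class PerformanceSnapshot:
    # Captured by the team leader at project end or at least quarterly.
    employee_id: str
    # Five-point items ("strongly disagree" = 1 ... "strongly agree" = 5):
    award_top_compensation: int   # overall performance and unique value
    always_want_on_team: int      # ability to work well with others
    # Yes/no items:
    at_risk_low_performance: bool  # flags possible harm to customer or team
    ready_for_promotion: bool      # potential

snapshot = PerformanceSnapshot("emp-042", 4, 5, False, True)
print(snapshot)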

These responses provide a performance snapshot that informs but doesn’t completely determine compensation.
Other factors include project assignment difficulty and contributions other than formal projects, evaluated by a leader who knows each individual personally or by a group considering data across several groups.

In addition, every team leader prioritizes once-weekly “check-ins” with each employee to ensure that priorities are clear and progress toward them is consistent.

Goodall and Buckingham opined that “radically frequent check-ins are a team leader’s killer app to recognize, see, and fuel performance,” in addition to using a self-assessment tool that identifies each team member’s strengths and enables sharing them with teammates, the team leader, and the organization.

These three “interlocking rituals” of the weekly check-in, the quarterly or project-end performance snapshot, and the annual compensation decision enable a shift from a retrospective view of performance to more “real-time” coaching that supports performance planning and enhancement.

Deloitte’s approach seeks a “big data” view of each person’s organizational performance and contribution rather than the “simplicity” of a small-data view summarized in a single stack-rank number.

-*How do you develop a “Big Data” view of people’s performance?

-*How do you enable continuous, “in-the-moment” performance feedback instead of a once-a-year retrospective view?


Work with Experts – But Don’t Compete – to Improve Performance

People can improve performance on a range of tasks when performing individually but alongside an outstanding performer, according to Stanford’s Francis Flynn and the University of Texas at Austin’s Emily Amanatullah.

They attributed performance enhancement to increased mental focus and physical effort, motivated by:

  • “Social facilitation” due to the expert role model’s mere presence, described more than 50 years ago by Robert Zajonc, then of the University of Michigan,
  • “Social comparison” with “skillful coactors,” demonstrated by University of North Carolina’s John Seta.

However, performance declined when people competed directly with a strong performer, Flynn and Amanatullah reported.
They concluded that “high status coactors” help people “psych up” performance when not competing directly, but “psych out” when challenging the expert, based on their analysis of five years of Masters golf tournament statistics.

High status co-actors can achieve their influential position through demonstrated skill or their greater awareness of status dynamics due to better ability to “self-monitor,” found Flynn and Amanatullah with Ray E. Reagans of Carnegie Mellon and Daniel R. Ames of Columbia University.

People with greater self-monitoring ability tend to be more effective in managing their “exchange relationships,” and generally establish a reputation as a generous “exchange partner.”

As a result, they are typically more likely than low self-monitors to be sought out for help and to refrain from asking others for help.

“Co-action,” organizational status differences, and interpersonal “exchange” all occur in organizations when employees work independently but in close proximity to others, and when people collaborate toward shared goals.

These findings suggest that working near expert colleagues can improve co-workers’ performance, but competition for salary increases, promotions, access to special training, and other perks can undermine individual achievement by provoking anxiety.

Flynn and Amanatullah recommended that organizations showcase desired skillful performance through role models while enabling employees to earn rewards and incentives through individual effort rather than competition.
This recommendation may be impossible to implement in hierarchical organizations that identify “high potential” employees and differentiate performance through “stack ranking.”

-*How do you avoid the “psych out” effect of competing with highly skilled performers?


©Kathryn Welds