
Do the Unintended Consequences of Forced Ranking of Employee Performance Outweigh Its Short-Term Benefits?

Forced ranking (“stack ranking” or “rank and yank”) of employee performance was one contributor to Microsoft’s loss of momentum, according to Kurt Eichenwald’s article “How Microsoft Lost Its Mojo.”

His extensive interviews with current and past Microsoft employees point to forced rankings leading to:

  •     Competitive sabotage and undermining of peers
  •     Focus on short-term results that coincide with twice-yearly rankings
  •     Undermined intrinsic motivation in face of  “impossible”-seeming odds
  •     Reduced innovation
  •     Lack of collaboration
  •     Focus on “visibility” to managers’ peers instead of improving performance
  •     Misguided decisions
  •     Mistrust of management and colleagues
  •     Unwanted attrition
  •     Stress for all.

Forced ranking, used by a substantial number of Fortune 500 companies, is the eighth most frequently used appraisal technique in the U.S.

It requires management teams to evaluate employees’ performance against that of other employees, rather than against pre-determined standards.
The goal is to create a meritocracy in which superior performance is recognized and under-performance is “managed.”

Steve Scullen

Steve Scullen evaluated forced distribution rating systems (FDRS) in a simulation study of 100 companies of 100 employees each over a three-year period.
He reported in Personnel Psychology that forced ranking combined with hypothetically firing the bottom 5% or 10% resulted in a 16% productivity improvement.
Productivity gains increased as more low performers were removed.
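Scullen’s design can be approximated with a small Monte Carlo sketch. The parameters below (rating noise, applicant quality, ten simulated years) are my own illustrative assumptions, not values from his study:

```python
import random
import statistics

random.seed(42)

def simulate(years=10, n=100, fire_frac=0.10, rating_noise=0.5):
    """One company: each year, rank employees on noisy ratings of true
    potential, fire the bottom fire_frac, and hire average replacements."""
    workforce = [random.gauss(0, 1) for _ in range(n)]
    means = [statistics.mean(workforce)]
    for _ in range(years):
        # Observed rating = true potential + rater noise.
        ranked = sorted(workforce, key=lambda p: p + random.gauss(0, rating_noise))
        cut = int(n * fire_frac)
        workforce = ranked[cut:] + [random.gauss(0, 1) for _ in range(cut)]
        means.append(statistics.mean(workforce))
    return means

# Average over 100 simulated companies, as in Scullen's study design.
runs = [simulate() for _ in range(100)]
means = [statistics.mean(col) for col in zip(*runs)]
gains = [b - a for a, b in zip(means, means[1:])]
print("mean workforce potential by year:", [round(m, 2) for m in means])
print("year-over-year gains:            ", [round(g, 2) for g in gains])
```

Under these assumptions the year-over-year gains are largest in the first year or two and then shrink, consistent with the diminishing returns Scullen describes below.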

He acknowledged the negative consequences of forced rankings for employee morale, teamwork, collaboration, recruitment, shareholder perception, and brand image.
Nevertheless, Scullen found that the potential problems were counterbalanced by benefits.

Scullen determined that most of the benefit from forced ranking comes in the first few years of implementation: “…each time a company improves its workforce by replacing an employee with a new hire, it becomes more difficult to do so again… the better the workforce is, the more difficult it must be to hire applicants who are superior to the current employees who would be fired.”

In Forced Ranking: Making Performance Management Work, Dick Grote argues that most companies achieve the benefits of forced ranking within “a few years,” and advises replacing forced ranking with other talent management initiatives once the organization has implemented a refined selection process that ensures hiring top talent.

Peter Cappelli

Peter Cappelli of The Wharton School, author of Talent on Demand: Managing Talent in an Age of Uncertainty, quantified the benefit of removing low performers: according to his research, this group contributes roughly one-fifth as much to organizations as high performers do.

In contrast, Alys Woodward of IDC challenged these arguments in her article on misunderstanding and misuse of statistics in stack ranking.

She concluded that “stack ranking assumes the statistics dictate reality, rather than reflect reality.”

Likewise, W. Edwards Deming opposed ranking because he thought that it destroys pride in workmanship, and opined that “the only way to improve a product or service is for management to improve the system that creates that product or service. Rewarding or punishing individuals trapped in the system is pointless and counterproductive.”

Robert Mathis and John Jackson pointed out potential legal challenges to stack-ranking.
They note that the practice may be difficult to defend in a court test because it does not comply with the following legal criteria:

  •     Criteria based on job analysis
  •     Absence of disparate impact and evidence of validity
  •     Formal evaluation criteria that limit managerial discretion
  •     Rating linked to job duties and responsibilities
  •     Documentation of appraisal activities
  •     Prevention of a single appraisal from controlling an employee’s career
  •     Counseling to help poor performers improve

Though most employees do not seek out employers who use stack ranking, organizations may realize a short-term benefit in streamlining the workforce.
However, the practice may have unintended “soft” consequences, legal challenges, and time-limited value.

-*What positive and negative impacts have you observed related to forced-ranking appraisal systems?

Twitter:  @kathrynwelds


©Kathryn Welds


Reinventing Performance Management to Reduce Bias: Strengths, Future Focus, Frequent Feedback

Most performance management systems set goals at the beginning of the year and determine variable compensation by rating accomplishment of those objectives.

These evaluations typically are considered in lengthy “consensus meetings” in which managers discuss the performance of hundreds of people in relation to their peers – sometimes called “stack ranking,” or more cynically “rank-and-yank.”

These year-end ratings don’t provide “in-the-moment,” “real-time” feedback about performance as it happens, so they may be less useful in improving performance.

Assessing skills also produces inconsistent data: ratings depend on raters’ own proficiency in the competency being assessed and on the value they attach to each performance objective, introducing unconscious bias.

This risk to performance-rating validity was demonstrated by Drake University’s Steven Scullen, Michael Mount of the University of Iowa, and Korn Ferry’s Maynard Goff, who analyzed 360-degree performance evaluations by two bosses, two peers, and two subordinates for nearly 4,500 managers.

They found that three times as much rating variance was explained by individual raters’ idiosyncratic rating tendencies as by ratees’ actual performance.
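This kind of finding can be illustrated with a toy variance-components simulation. The variance values below are assumptions chosen to reproduce the reported 3:1 ratio by construction; they are not the study’s data:

```python
import random
import statistics

random.seed(0)

N_RATEES, RATERS_EACH = 500, 6   # e.g., two bosses, two peers, two subordinates

# True performance ~ N(0, 1); each rating adds an idiosyncratic rater
# effect with variance 3, so rater effects carry 3x the performance signal.
perf = [random.gauss(0, 1) for _ in range(N_RATEES)]
ratings = [[p + random.gauss(0, 3 ** 0.5) for _ in range(RATERS_EACH)]
           for p in perf]

# One-way ANOVA variance components:
within = statistics.mean(statistics.variance(r) for r in ratings)  # rater effects
between = statistics.variance([statistics.mean(r) for r in ratings])
perf_var = between - within / RATERS_EACH                          # performance signal
print(f"rater-effect variance: {within:.2f}, performance variance: {perf_var:.2f}")
print(f"ratio: {within / perf_var:.1f}")   # roughly 3, by construction
```

The point of the decomposition is that averaging each ratee’s six ratings does not remove rater effects; it only shrinks them, which is why the study could still attribute most rating variance to the raters themselves.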

Sources of bias include halo error, leniency error, and organizational perspective based on the rater’s current role, as suggested by SUNY’s Manuel London and LaSalle University’s James Smither and validated by Scullen’s team.

These findings led the researchers to conclude “Most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus ratings reveal more about the rater than they do about the ratee,” replicating similar findings by University of Georgia’s Charles Lance, Julie LaPointe and Amy Stewart.

To mitigate these biases in Deloitte’s performance management system, Ashley Goodall of Deloitte Services LP engaged Marcus Buckingham, formerly of The Gallup Organization, to analyze existing practices and develop an empirically-validated approach.

Goodall and Buckingham calculated the total annual hours required to conduct performance ratings using the existing process and found that managers invested 2 million hours a year.
This finding confirmed that one goal in revising the process was to increase speed and efficiency.

In addition, Goodall and Buckingham sought to increase the meaningfulness of performance management by focusing on discussions about future performance and careers rather than on the appraisal process.

They concluded a performance management system should be characterized by:

  • Reliable performance data, controlling for idiosyncratic rater effects,
  • Speed to administer,
  • Ability to recognize performance,
  • Personalization: “One-size-fits-one”,
  • Considering actions to take in response to data,
  • Continuous learning and improvement.

Deloitte conducted a separate controlled study of 60 high-performing teams (almost 1,300 employees representing all parts of the organization) compared with an equal number of employees from an equivalent sample, to determine which questionnaire items differentiate high-performing teams from lower-performing ones.

They found that performance ratings and related compensation allocations could be based more accurately on managers’ statements about their intended future actions toward each employee than on questions about team members’ skills.

Several items accounted for the vast majority of response variation between top-performing groups and others, particularly “At work, I have the opportunity to do what I do best every day.”

Business units whose employees said they “strongly agree” with this item were substantially more likely to be more productive, earn high customer satisfaction scores, and experience low employee turnover.

Other powerful predictors of performance were:

  • I have the chance to use my strengths every day,
  • My coworkers are committed to doing quality work,
  • The mission of our company inspires me.

Deloitte’s revised performance management system asks team leaders to rate four items on a 5-point scale from “strongly agree” to “strongly disagree” or yes-no at the end of every project or once a quarter:

  • Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus [measures overall performance and unique value],
  • Given what I know of this person’s performance, I would always want him or her on my team [measures ability to work well with others],
  • This person is at risk for low performance [identifies problems that might harm the customer or the team],
  • This person is ready for promotion today [measures potential].

These responses provide a performance snapshot that informs but doesn’t completely determine compensation.
Other factors include project assignment difficulty and contributions other than formal projects, evaluated by a leader who knows each individual personally or by a group considering data across several groups.
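As a data record, each snapshot is deliberately small: two agreement items and two yes/no flags per employee per quarter or project. A minimal sketch of such a record (the field names are my own illustration; Deloitte’s internal schema is not described in the source):

```python
from dataclasses import dataclass

@dataclass
class PerformanceSnapshot:
    """One team leader's project-end or quarterly snapshot of one employee."""
    would_award_top_compensation: int  # 1-5 agreement: overall performance, unique value
    always_want_on_my_team: int        # 1-5 agreement: ability to work well with others
    at_risk_of_low_performance: bool   # yes/no: problems harming customer or team
    ready_for_promotion_today: bool    # yes/no: potential

snapshot = PerformanceSnapshot(5, 4, False, True)
print(snapshot)
```

Because snapshots accumulate at every project end or quarter, a year of them yields many data points per person for the annual compensation decision, rather than a single retrospective rating.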

In addition, every team leader prioritizes once-weekly “check-ins” with each employee to ensure that priorities are clear and progress toward them is consistent.

Goodall and Buckingham opined that “radically frequent check-ins are a team leader’s killer app to recognize, see, and fuel performance,” in addition to using a strengths self-assessment tool (such as StrengthsFinder 2.0) that identifies each team member’s strengths and enables sharing them with teammates, the team leader, and the organization.

These three “interlocking rituals” of the weekly check-in, quarterly or project-end performance snapshot, and annual compensation decision enable a shift from retrospective view of performance to more “real-time” coaching to support performance planning and enhancement.

Deloitte’s approach seeks a “big data” view of each person’s organizational performance and contribution rather than the “simplicity” of a small-data view summarized in a single stack-rank number.

-*How do you develop a “Big Data” view of people’s performance?

-*How do you enable continuous, “in-the-moment” performance feedback instead of once-a-year retrospective view?

Follow-share-like http://www.kathrynwelds.com and @kathrynwelds

LinkedIn Groups Psychology in Human Resources (Organisational Psychology)
Blog: Kathryn Welds | Curated Research and Commentary

©Kathryn Welds