Factors that Impact Effective 360 Feedback: Part I

February 16, 2015 by Ken Nowack

“The real secret of magic lies in the performance.”

David Copperfield


Multi-rater, or 360-degree, feedback was used by approximately 90% of Fortune 500 companies last year. Under ideal circumstances, 360-degree feedback should be used as an assessment for professional development rather than for evaluation1.

Unfortunately, not all circumstances are ideal.

Although popular, there are at least five important factors that impact the effectiveness of 360 feedback that most publishers and vendors won’t tell you about, and they can make these assessments either genuinely useful or potentially dangerous. OK, in full disclosure, our company also develops and distributes a wide variety of validated off-the-shelf 360 assessments.

So, why would I share these with you? Simply because I care about having 360 feedback, and the “outing” process behind it, done correctly rather than incorrectly2.

1. Ratings Between Rater Groups are Only Modestly Correlated with Each Other.

Research consistently shows that ratings between direct reports, peers, supervisors, self and others overlap only modestly. Self-ratings are typically weakly correlated with other rater perspectives with greater convergence between peer and supervisor ratings.

In general, direct reports tend to emphasize and filter interpersonal and relationship behaviors into their subjective ratings whereas superiors tend to focus more on “bottom line” results and task-oriented behaviors (Nowack, 2002). At a practical level, it means that coachees might be challenged to understand how to interpret observed differences by rater groups and whether to decide to focus their developmental “energy” on managing upward, downward and/or laterally in light of these potentially discrepant results.

2. Ratings Within Rater Groups are Only Modestly Correlated with Each Other.

In one meta-analytic study by Conway & Huffcutt (1997), the average correlation between two supervisors was only .50; between two peers, .37; and between two subordinates, only .30. Current research suggests that if a 360-degree feedback assessment has an average of 5 questions to measure each competency (not uncommon in practice), it would require at least 4 supervisors, 8 peers and 9 direct reports to achieve acceptable levels of reliability (.70 or higher).
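You can sanity-check these rater counts with the Spearman–Brown prophecy formula, which estimates the reliability of an average of k raters from the typical correlation r between any two raters of the same type. The sketch below is a simplified illustration of that formula only (it ignores item-level reliability within each competency), using the meta-analytic correlations cited above:

```python
def spearman_brown(r: float, k: int) -> float:
    """Estimated reliability of the mean of k raters, given the average
    correlation r between any two raters of the same type."""
    return (k * r) / (1 + (k - 1) * r)

# Inter-rater correlations from Conway & Huffcutt (1997),
# paired with the rater counts cited above.
rater_groups = {
    "supervisors":  (0.50, 4),
    "peers":        (0.37, 8),
    "subordinates": (0.30, 9),
}

for group, (r, k) in rater_groups.items():
    rel = spearman_brown(r, k)
    print(f"{group}: {k} raters -> estimated reliability {rel:.2f}")
```

With these numbers, all three groups clear the .70 threshold (roughly .80 for 4 supervisors, .82 for 8 peers, and .79 for 9 subordinates), consistent with the rater counts cited above; fewer raters per group would drop below it.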

Since our coachees can rarely find that one “all-knowing and candid” rater to provide them with specific and useful feedback, an adequate representation and larger number of feedback sources is critical to ensure that the data used for behavioral change efforts are accurate and reliable.

Given these findings, vendors who do not provide a way for participants to evaluate within-rater agreement increase the probability that the average scores in feedback reports will be misinterpreted, particularly when coaches use them to help coachees select specific competencies and behaviors for developmental planning.
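One common way to quantify within-group agreement is the rwg index (James, Demaree & Wolf, 1984), which compares the observed variance of raters’ scores on an item against the variance expected if raters responded at random across the scale. A minimal sketch for a single item on a 5-point scale (the example ratings and variable names are my own, for illustration only):

```python
from statistics import variance

def rwg(ratings: list, scale_points: int = 5) -> float:
    """Within-group agreement for one item (James, Demaree & Wolf, 1984).

    Compares observed rating variance to the variance of a uniform
    (random-response) distribution over the scale; 1.0 = perfect agreement.
    """
    expected_var = (scale_points ** 2 - 1) / 12   # uniform null: 2.0 on a 5-point scale
    observed_var = variance(ratings)              # sample variance across raters
    return max(0.0, 1 - observed_var / expected_var)  # truncate negatives to 0

print(rwg([4, 4, 5, 4]))  # raters largely agree  -> 0.875
print(rwg([1, 5, 2, 5]))  # raters split sharply  -> 0.0 after truncation
```

Two rater groups can produce the same mean score with very different agreement values like these, which is exactly why an average alone can mislead a coachee during developmental planning.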

3. Perceptual Distortions by Participants and Raters Make Interpretation of 360-Feedback Results Challenging.

A triad of “positive illusions” has been posited by Taylor & Brown (1988) that would appear to be important moderators of multi-rater feedback interventions: 1) people tend to inflate perceptions of their skills and abilities; 2) people typically exaggerate their perceived control over work and life events; and 3) people generally express unrealistic optimism about their future.

Coaches should also keep in mind that people generally tend to forget negative feedback about themselves, especially in areas that matter most to them, and typically remember performing more desirable behaviors than other raters can later identify (Gosling, John, Craik & Robins, 1998).

It is also important to point out that people usually define their strengths based on traits they already possess and define their developmental opportunities more in terms of traits they lack at the moment (Dunning, Heath, & Suls, 2004). Research suggests that people not only compare themselves to others but to how they used to be in the past. In general, individuals evaluate their current and future selves as better than their past selves (Wilson & Ross, 2001).

These findings suggest that perceptions of feedback can be highly influenced by these personality and individual-difference variables, making it imperative that an internal or external facilitator spend time with each participant to help clarify and interpret the findings.

In my next blog post, I will summarize the last two important factors for making 360 feedback effective. Be well….

 

  1. Nowack, K. (2009). Leveraging multirater feedback to facilitate successful behavioral change. Consulting Psychology Journal: Practice and Research, 61, 280–297.
  2. Nowack, K. (1999). 360 degree feedback. In D. G. Langdon, K. S. Whiteside, & M. M. McKenna (Eds.), Intervention: 50 performance technology tools (pp. 34–46). San Francisco: Jossey-Bass.

Kenneth Nowack, Ph.D. is a licensed psychologist (PSY13758), President & Chief Research Officer/Co-Founder of Envisia Learning, and a member of the Consortium for Research on Emotional Intelligence in Organizations. Ken also serves as Associate Editor of Consulting Psychology Journal: Practice and Research. His recent book, Clueless: Coaching People Who Just Don’t Get It, is available free for a limited time by signing up for blog updates (learn more at our website).

Posted in Engagement, Leadership Development

  1. David Witt says:

    Hi Ken,
    Thanks for pointing out some of the challenges with 360 feedback and the importance of getting a large enough sample to generate meaningful results. I’m looking forward to your next post on the two important factors for making 360 feedback more effective.

