“Honest criticism is hard to take, particularly from a relative, a friend, an acquaintance, or a stranger.” (Franklin P. Jones)

Cigarettes in the United States all come with health warning labels on their boxes—perhaps vendors should do the same when marketing and selling the multi-rater assessments so commonly used by coaches, consultants, and organizational practitioners.  The same cautions apply to multi-rater assessments developed “in-house” by organizations using their own competency models.  At least five important myths should be considered when using and interpreting multi-rater feedback interventions (Nowack, K. (2009). Leveraging multirater feedback to facilitate successful behavioral change. Consulting Psychology Journal: Practice and Research, 61, 280-297).

Five Myths About 360 Feedback

1. Ratings Between Rater Groups are Highly Correlated with Each Other. Research consistently shows that ratings from direct reports, peers, supervisors, self, and others overlap only modestly.  Self-ratings are typically weakly correlated with other rater perspectives, with greater convergence between peer and supervisor ratings (Nowack, 1992).  Some differences in perspective across rater groups are to be expected.  In general, direct reports tend to emphasize and filter interpersonal and relationship behaviors into their ratings, whereas superiors focus more on “bottom line” results and task-oriented behaviors. Interestingly, peers seem uniquely able to predict future leadership behavior. These rater-group differences can also confuse coachees trying to use the results to determine which specific behaviors to modify and which stakeholders to target.  At a practical level, coachees may struggle to interpret the observed differences between rater groups and to decide whether to focus their developmental “energy” on managing upward, downward, and/or laterally in light of these potentially discrepant results.

2. Ratings Within Rater Groups are Highly Correlated With Each Other. In a meta-analytic study by Conway & Huffcutt (1997), the average correlation between two supervisors was only .50, between two peers .37, and between two subordinates only .30. From a practical perspective, since reliability sets an upper limit on validity, having too few raters providing input to the 360-feedback process can limit the usefulness of the feedback given back to participants. Given these findings, vendors who do not provide a way to evaluate within-rater agreement increase the probability that the average scores used in reports will be misinterpreted—particularly when coaches use them to help coachees select specific competencies and behaviors for developmental planning.  Average scores are how most 360-feedback reports summarize data, yet if raters are polarized in their perceptions of a specific behavior, the mean will suggest the participant is simply average (e.g., on a 1-to-7 frequency scale, if one rater experiences the participant as a “2” and another as a “6,” the mean looks fine and conceals the rater disagreement).
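Both problems above, low inter-rater reliability and polarized ratings hidden behind a mean, can be illustrated numerically. The sketch below applies the standard Spearman-Brown formula to the subordinate inter-rater correlation of .30 reported by Conway & Huffcutt, and then uses a small set of hypothetical ratings (invented for illustration, not real assessment data) to show two rater groups with identical means but very different agreement:

```python
from statistics import mean, stdev

def spearman_brown(r_single: float, k: int) -> float:
    """Estimated reliability of the average of k raters, given single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

# With r = .30 between two subordinates, averaging over more raters raises reliability.
for k in (2, 4, 8):
    print(f"{k} subordinates -> reliability of the mean = {spearman_brown(0.30, k):.2f}")

# Hypothetical 1-to-7 frequency ratings: same mean, very different within-group agreement.
agreeing = [4, 4, 4, 4]
polarized = [2, 6, 2, 6]
for label, ratings in (("agreeing", agreeing), ("polarized", polarized)):
    print(f"{label}: mean = {mean(ratings):.1f}, spread (SD) = {stdev(ratings):.2f}")
```

A report showing only the two identical means of 4.0 would hide the fact that the polarized group disagrees sharply (SD of roughly 2.3 versus 0.0), which is exactly why a spread or agreement statistic belongs alongside every average score.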

3. Interpretation of 360-Feedback Results is Relatively Easy and Straightforward. Taylor & Brown (1988) hypothesized a triad of “positive illusions”: 1) people tend to inflate perceptions of their skills and abilities; 2) people typically exaggerate their perceived control over work and life events; and 3) people generally express unrealistic optimism about their future. Of practical significance is the meaningfulness of self-other rating differences and their relationship to the receipt of feedback and actual performance on the job.  Research consistently finds that when self and other ratings agree and are high, effectiveness is generally also high; effectiveness on the job tends to decrease as self and other ratings disagree and become lower. Finally, in our own coaching practice, using diverse multi-rater assessments measuring different competency models, we have repeatedly observed that under-estimators (those whose self-ratings are meaningfully lower than others’) tend to be highly perfectionistic, self-critical, and overly achievement-striving, and likely to focus on their perceived weaknesses rather than leveraging their “signature” strengths in developmental planning discussions.

Despite efforts to help participants interpret their feedback findings in a “balanced” manner, these under-estimators appear hyper-vigilant to the perceived “negative” information in their report and often “fixate” on the lowest average scores and on the open-ended comments that seem “neutral or critical” in tone relative to the more positive comments collected within rater groups. To add to the confusion about over-estimators and under-estimators, personality is known to play a big role in the acceptance and use of 360 feedback for actual behavior change.

For example, leaders with high core self-evaluations (a constellation of high self-esteem, high self-efficacy, internal locus of control, and high emotional stability) seem most motivated to change behavior when there is a large gap between self and other ratings, whereas those with low core self-evaluations tend to be most motivated when self and other ratings are in maximum agreement (Bono, J. & Colbert, A. (2005). Understanding responses to multi-rater feedback: The role of core self-evaluations. Personnel Psychology, 58, 171-203).

4. Most Leaders Improve Following 360 Feedback. Although research supports that feedback does result in significant performance improvement, meta-analytic effect sizes (a statistical measure of actual change) are relatively small, suggesting that “zebras don’t easily lose their stripes.”  For example, numerous studies show that genetic effects account for approximately 50% of the variance in personality, and in at least one third of all feedback interventions performance actually decreases. It would appear that all of us have some skill and ability “set points” that may place an upward ceiling on the growth and development of many coachees.

5. Self-Directed Learning Following 360 Feedback Results in the Best Results. All too often, vendors and some practitioners espouse a “diagnose and adios” approach to multi-rater feedback, hoping that self-directed insight alone will motivate behavioral change efforts.  As previous research suggests, this approach can actually contribute to more negative affect and behavioral disengagement. Some limited support for structured follow-up comes from a recent doctoral dissertation evaluating the effectiveness of 360-feedback interventions with 257 leaders in diverse organizations (Rehbine, N. (2007). The impact of 360-degree feedback on leadership development. Unpublished doctoral dissertation, Capella University).  In her study, over 65% of those surveyed expressed strong interest in using some type of online follow-up tool to measure progress and facilitate their own behavioral change efforts. It would appear that follow-up coaching, manager involvement in the developmental planning process, and online systems for tracking and monitoring action plans would all help leverage the 360-feedback effort.

I will be presenting next weekend at the International Coaching Congress and the Midwinter Conference of the American Psychological Association (Consulting Psychology) on leveraging 360-degree feedback, so I hope you can join me!  Be well…

Kenneth Nowack, Ph.D. is a licensed psychologist (PSY13758), President & Chief Research Officer/Co-Founder of Envisia Learning, and a member of the Consortium for Research on Emotional Intelligence in Organizations. Ken also serves as Associate Editor of Consulting Psychology Journal: Practice and Research. His recent book, Clueless: Coaching People Who Just Don’t Get It, is available for free for a limited time by signing up for free blog updates (learn more at our website).



  1. David Bracken says:

    Nice! As to point 2, I have never understood the hesitation to show the full frequency distribution to help inform the participant. The benefits far outweigh perceived risks.

