Where to Manage Your Developmental Energy in 360-Degree Feedback: Upward? Downward? or Lateral?

September 28, 2015 by Sandra Mashihi

“Be careful whose advice you buy, but be patient with those who supply it.”

-Mary Schmich

Differences between raters on 360-degree feedback assessments are common, and research suggests that ratings from the different rater groups are often only moderately associated with one another (Nowack, 1992). However, these meaningful rater group differences can also confuse participants who are trying to use the results to determine which specific behaviors to modify and which stakeholders to target.

At a practical level, this means participants may struggle to interpret observed differences between rater groups and to decide whether to focus their developmental "energy" on managing upward, downward, or laterally in light of these potentially discrepant results. Research suggests that raters tend to apply specific filters when completing 360-degree feedback assessments (Nowack, 2009). For example, superiors tend to focus more on performance, output, and task-oriented behaviors (Nowack, 2002).

In our research, we find that MANAGERS tend to reflect on three things when they complete 360-degree feedback assessments:

1) Bottom-line performance: Does the employee meet or exceed his or her performance objectives?

2) Technical competence: Does the employee technically know what he or she is doing?

3) “Burr in the saddle effect”: Has the employee created situations or problems that require the manager to investigate further or spend time resolving internal or external customer complaints?

Employees who are viewed by managers as getting work done with quality, as possessing strong technical competence, and as minimizing extra work on the part of the manager to resolve internal or external political issues are generally rated higher on 360-degree feedback assessments.

In general, DIRECT REPORTS tend to emphasize and filter interpersonal and relationship behaviors into their subjective ratings, whereas PEERS tend to be fairly accurate at predicting future leadership potential (although it is unclear exactly which qualities, competencies, personality attributes, or other behaviors they weigh when completing 360-degree assessments). Given these rating filters, observational opportunities, and job-role relationships with participants, peer ratings might be interpreted as an important message about moving ahead, whereas direct report ratings might be interpreted as an important message about getting along. Participants should consider both the source of the feedback and the congruence between rater groups when determining which group to target as part of their developmental planning.

Coach’s Critique: 

“Whose ratings should I consider for my development plan?!” This question often comes up, from a place of confusion, for many participants working through their 360-degree feedback results. They receive ratings from different groups with different perspectives and wonder how much weight to give each group’s ratings in their development plan.

Well, it is no mystery why each group provides different feedback. According to research, each rater group — managers, peers, and direct reports — emphasizes different areas of development. Managers tend to emphasize bottom-line results and demonstrations of technical competence. Direct reports are well suited to assess interpersonal skills, because they tend to feel their impact most directly. Peers can see the participant as a whole and are therefore more likely to provide feedback about a participant’s potential strengths for future performance.

So, the answer to this question is not that simple: every group has something valuable to offer. Participants should consider which areas they want to include in their development plan, and where the rater groups agree, before determining target areas for development.

For those of you who have taken a 360, or use them with your clients, what has been your experience in deciding which rater group’s feedback to include as part of development planning?


Dr. Sandra Mashihi is a senior consultant with Envisia Learning, Inc. She has extensive experience in sales training, behavioral assessments, and executive coaching. Prior to working at Envisia Learning, Inc., she was an internal Organizational Development Consultant at Marcus & Millichap, where she was responsible for initiatives within training & development and recruiting. Sandra received her Bachelor of Science in Psychology from the University of California, Los Angeles, and her Master of Science and Doctorate in Organizational Psychology from the California School of Professional Psychology.
