What Vendors Don’t Want You to Know About Multi-Rater Feedback

January 24, 2009 by Ken Nowack

“The real secret of magic lies in the performance.”

David Copperfield


Multi-rater, or 360-degree, feedback was used by approximately 90% of Fortune 500 companies last year. Under ideal circumstances, 360-degree feedback should be used as an assessment for professional development rather than evaluation ((Nowack, K. (2010). Leveraging Multirater Feedback to Facilitate Successful Behavioral Change. Consulting Psychology Journal: Practice and Research, in press.)).

Unfortunately, not all circumstances are ideal.

Although popular, there are a number of “dirty little secrets” that most publishers and vendors won’t tell you about these potentially useful but also potentially dangerous assessments. OK, in full disclosure, our company also develops and distributes a wide variety of validated off-the-shelf 360 assessments. So, why would I share these with you? Simply because I care more about having 360 feedback and the “outing” process behind it done correctly than incorrectly ((Nowack, K. (1999). 360 Degree feedback. In DG Langdon, KS Whiteside, & MM McKenna (Eds.), Intervention: 50 Performance Technology Tools, San Francisco, Jossey-Bass, Inc., pp. 34-46.)).

Secret #1 Lack of Theoretical Grounding

Most vendors are reluctant to tell you too much about the theoretical models behind their 360 tools because in many cases there aren’t any. For every vendor who does have a 360 tool on the market with some competency model that is grounded in theory and research, another offers one that lacks any grounding.

Secret #2 Lack of Published Psychometric Properties

Our company has an automated online system to accommodate customized 360 feedback questionnaires that includes email administration, scoring, and reporting. I can’t tell you how many times I put on my “vendor cap” and look the other way when these come into our office. Many have questions that don’t match the response scales, are written so vaguely that it’s difficult to discern what is being measured, or cram several questions into one. That’s the good part.

I don’t think we have had a single customized 360 project from a company or consultant where any real analysis has been done on the questionnaire to ensure that it has even adequate psychometric properties (e.g., internal consistency reliability, test-retest reliability, factor analysis). Kinda scary, but I guess Home Depot doesn’t really care how buyers use the tools that wander out the door once the receipt confirms they were indeed purchased.
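For readers curious what even a basic check looks like, here is a minimal sketch of one such analysis, Cronbach’s alpha for internal consistency. The scale, the ratings, and the function are all invented for illustration, not taken from any vendor’s system:

```python
# Illustrative only: a quick internal-consistency check (Cronbach's alpha)
# for a hypothetical 4-item 360 scale. All ratings are made up.
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(ratings):
    """ratings: list of per-rater score lists (raters x items)."""
    k = len(ratings[0])
    item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
    total_var = variance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

With these toy numbers alpha comes out high, but the point is that the check takes minutes to run, and a scale that can’t clear a conventional threshold (often .70) probably shouldn’t be driving anyone’s development plan.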

Secret #3 Average Scores Can Be Easy to Misinterpret

Most vendors use average rater scores in their summary reports. For example, it’s not uncommon to report a table summarizing the “most frequent” and “least frequent” behaviors perceived by the different rater groups. These top/bottom “Letterman lists” are derived by simple average-score calculations. If all raters are essentially in agreement with each other, the average score is a pretty good metric.

However, quite a bit of research on 360-degree feedback suggests that we should expect diversity in ratings both within and between rater groups. The more dispersion, the more confusing average scores are in feedback reports. As my friend and CEO of Personal Strengths Publishing Tim Scudder says, “If my head is in a hot oven and my feet are in cold snow, on average, I am feeling pretty comfortable.” Average scores can be potentially misleading, particularly when behavior changes are being attempted based on the results of 360 feedback reports. We offer at least three different ways within our reports to determine rater agreement and to offer some insight about how to interpret and use average-score summaries. I wish more vendors would do the same.
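To make the oven-and-snow point concrete, here is a toy illustration (invented scores, not from any actual report) of two rater groups with identical averages but very different agreement:

```python
# Illustrative only: identical averages, very different rater agreement,
# for a single 360 item rated on a 1-5 scale.
from statistics import mean, stdev

agreeing = [3, 3, 3, 3, 3]    # raters see the same behavior
polarized = [1, 5, 1, 5, 3]   # raters strongly disagree

for name, scores in (("agreeing", agreeing), ("polarized", polarized)):
    print(f"{name}: mean={mean(scores):.1f}, sd={stdev(scores):.2f}")
```

Both groups average 3.0, but the standard deviations tell opposite stories. Reporting even a simple agreement index alongside the mean flags the cases where the average is masking real disagreement.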

Secret #4 Most Competencies Within 360 Assessments are Highly Intercorrelated

Most vendors offer multi-rater tools that posit measuring specific competencies in different domains (e.g., communication, interpersonal, leadership). What most vendors will never tell you is that most competencies are very highly correlated with each other (assuming they have done the research to discover this!). What this means is that greater attention should probably be given to the “big picture” of feedback reports — what rater differences exist and what are the themes that come out of the feedback.
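A quick sketch of what “highly intercorrelated” looks like in practice, using invented average scores for three nominally distinct competencies across five leaders:

```python
# Illustrative only: invented competency averages for five leaders.
# Note how tightly the three "distinct" competencies track one another.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

communication = [4.2, 3.1, 4.8, 2.5, 3.9]
interpersonal = [4.0, 3.3, 4.6, 2.8, 3.7]
leadership    = [4.1, 3.0, 4.7, 2.6, 4.0]

print(f"comm vs interp: r = {pearson(communication, interpersonal):.2f}")
print(f"comm vs lead:   r = {pearson(communication, leadership):.2f}")
```

When correlations run this high, the separate competency bars in a report are mostly restating one underlying impression, which is why the themes across the whole report deserve more attention than any single scale score.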

Secret #5 Normative Scoring Can Be Confusing to Interpret

The use of norms can indeed be confusing to respondents trying to interpret their 360 results. A lot of vendors are pretty impressed with their norms and offer them as real selling points for their 360 tools. How was the normative group defined, and how many are in it? How truly representative is it (even within a company) of the respondents? At the end of the day, relative scores comparing self-views to the views of others invited to provide feedback are really most useful and important for behavior change.
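As a toy illustration of why the norm group matters (all numbers invented), the same average score can land at very different percentile ranks depending on who it is compared against:

```python
# Illustrative only: one 360 score, two hypothetical norm groups,
# two very different percentile ranks.
from bisect import bisect_left

def percentile_rank(score, norm_group):
    """Percent of the norm group scoring strictly below `score`."""
    norm = sorted(norm_group)
    return 100 * bisect_left(norm, score) / len(norm)

score = 3.8
broad_norms = [2.9, 3.1, 3.4, 3.5, 3.6, 3.7, 3.9, 4.0, 4.2, 4.5]
senior_exec_norms = [3.6, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6]

print(percentile_rank(score, broad_norms))        # 60.0
print(percentile_rank(score, senior_exec_norms))  # 10.0
```

Same behavior, same raters, same 3.8, yet the report reads “above average” against one group and “near the bottom” against the other, which is exactly why an undefined or unrepresentative norm group can do more harm than good.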

Secret #6 Vendor Reports are Typically Static but Learning Styles Vary

Most vendors have put a lot of money into the programming to create nifty-looking feedback reports, but most reports don’t offer much flexibility. For example, how many vendors do you know that offer a choice between line and bar graphs, or between average-score and normative-score interpretations? Well, not many. Unfortunately, respondent learning styles and preferences for how to read, interpret, and understand reports are more diverse than most vendors will acknowledge.

Secret #7 It’s Hard to Sell Time-Series Reports

We all know that measuring behavior change is really the most important metric in 360 feedback interventions. Vendors should spend more time trying to sell outcomes and less time just pushing products. The cost of 360 feedback varies, but getting a client to purchase two assessments for each employee (one to be administered 12-18 months downstream) can often be a deal breaker. Of course, vendors would like nothing more than to sell a company on another administration of 360 feedback, but most are pretty careful about “pushing” this, more from a cost concern than a “best practices” perspective. Yes, doing it once is cheaper than doing it twice (or more). Multi-rater feedback was always intended to be a process and not an event.

Well, there you have it. Secrets from a vendor about other vendors and what none of us will tell you about 360 feedback tools! However, now that you know these seven, I’d like to share one last secret.

What matters most about multi-rater feedback is the process and not the tool.

Do it right or don’t do it at all… Be well…


Kenneth Nowack, Ph.D. is a licensed psychologist (PSY13758), President & Chief Research Officer/Co-Founder of Envisia Learning, and a member of the Consortium for Research on Emotional Intelligence in Organizations. Ken also serves as the Associate Editor of Consulting Psychology Journal: Practice and Research. His recent book Clueless: Coaching People Who Just Don’t Get It is available for free for a limited time by signing up for free blog updates (learn more at our website).
