“The real secret of magic lies in the performance”
David Copperfield
Multi-rater, or 360-degree, feedback was used by approximately 90% of Fortune 500 companies last year. Under ideal circumstances, 360-degree feedback should be used as an assessment for professional development rather than evaluation ((Tornow, W., & London, M. (1998). Maximizing the value of 360-degree feedback. San Francisco: Jossey-Bass.)).
Unfortunately, not all circumstances are ideal.
Popular as they are, there are a number of “inside secrets” that most publishers and vendors won’t tell you about these potentially useful but possibly dangerous assessments.
OK, in full disclosure: our company also develops and distributes a wide variety of validated off-the-shelf 360 assessments and has a proprietary engine for creating customized 360 solutions. So, why would I share these secrets with you? Simply because I care about having 360 feedback done following “Best Practices” and about trying not to do harm ((Nowack, K. (1999). 360 degree feedback. In D. G. Langdon, K. S. Whiteside, & M. M. McKenna (Eds.), Intervention: 50 Performance Technology Tools (pp. 34-46). San Francisco: Jossey-Bass.)).
Secret #1 Lack of Theoretical Grounding
Most vendors are reluctant to tell you too much about the theoretical models behind their 360 tools because in many cases there aren’t any! For every vendor who does have a 360 assessment on the market with some competency model that is grounded in theory and research, another offers one that lacks any grounding at all. Ever wonder why all the competency models look the same for a particular job family?
Secret #2 Lack of Published Psychometric Properties
Our company has an automated online system to accommodate customized 360 feedback questionnaires that includes email administration, scoring, and reporting. I can’t tell you how many times I have put on my “vendor cap” and looked the other way when poorly designed questionnaires come into our hands from a company or consultant asking to use our “engine” to create an online assessment. We have seen questions that don’t match the response scale, questions written so that it is difficult to discern what is being measured, and items that bundle several questions into one. And we wonder why there is limited evidence about the impact of 360 feedback on actual behavior change!
I don’t think we have had a single customized 360 project from a company, coach, or consultant where any real statistical analysis was done on the questionnaire to ensure it has even adequate psychometric properties (e.g., internal consistency reliability, test-retest reliability, factor analysis). Honestly, we look the other way unless we have been contracted to provide consultation on the actual design of the 360 assessment.
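For what it’s worth, a basic internal consistency check doesn’t require much. Here is a minimal sketch in Python of Cronbach’s alpha for a single scale; the ratings matrix and the scale name are hypothetical, and a real analysis would of course use your own data (and would also look at test-retest reliability and factor structure).

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = item_scores.shape[1]                          # number of items in the scale
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 raters answering a 4-item "Communication" scale (1-5).
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")  # ~.70+ is a common rule of thumb
```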
Secret #3 Average Scores Can Be Easy to Misinterpret
Most vendors use average rater scores in their summary reports. For example, it’s not uncommon to see a table summarizing the “most frequent” and “least frequent” behaviors perceived by the different rater groups. These top/bottom “Letterman lists” are derived by simple average score calculations. If all raters are essentially in agreement with each other, the average score is a pretty good metric.
However, quite a bit of research on 360 degree feedback suggests that we should expect diversity in ratings both within and between rater groups. The more dispersion, the more confusing average scores become in feedback reports. As my friend and CEO of Personal Strengths Publishing Tim Scudder says, “If my head is in a hot oven and my feet are in cold snow, on average, I am feeling pretty comfortable.”
Average scores can be misleading, particularly when behavior changes are being attempted based on the results of 360 feedback reports. As a vendor, we offer at least three different ways within each of our feedback reports to determine “rater agreement” and to offer some insight about how to interpret and use average score summaries. I wish more vendors would do the same.
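One simple safeguard is to never report a mean without a measure of dispersion next to it. Here is a minimal sketch, with hypothetical ratings and an illustrative (not authoritative) standard deviation cutoff, of how a “rater agreement” flag might work.

```python
import statistics

# Hypothetical ratings of one behavior ("Listens to others", 1-5) by rater group.
ratings_by_group = {
    "Direct Reports": [5, 5, 4, 5],
    "Peers": [1, 2, 5, 4],  # mean of 3.0, but raters span almost the whole scale
}

for group, scores in ratings_by_group.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)  # sample standard deviation
    note = ("low agreement; interpret the average with caution"
            if spread > 1.0 else "raters largely agree")
    print(f"{group}: mean={mean:.2f}, sd={spread:.2f} ({note})")
```

Run on these made-up numbers, the “Peers” mean of 3.00 comes with a standard deviation of 1.83: exactly the head-in-the-oven, feet-in-the-snow situation.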
Secret #4 Most Competencies Within 360 Assessments are Highly Intercorrelated
Most vendors offer multi-rater tools that purport to measure specific competencies in different domains (e.g., communication, interpersonal, leadership). What most vendors will never tell you is that most competencies are very highly correlated with each other (assuming they have done the research to discover this!). This means that greater attention should probably be given to the “big picture” of feedback reports: what rater differences exist and what themes come out of the feedback. Sorry to have to share this with all of you doing coaching with 360 feedback assessments!
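If you want to see this for yourself, the check takes only a few lines. The sketch below computes the inter-competency correlation matrix; the leader scores and competency labels are made up for illustration.

```python
import numpy as np

# Hypothetical average competency scores for five leaders.
# Columns: Communication, Interpersonal, Leadership (1-5 scale).
scores = np.array([
    [4.2, 4.0, 4.1],
    [2.8, 3.0, 2.9],
    [3.5, 3.6, 3.4],
    [4.8, 4.5, 4.7],
    [3.0, 3.2, 3.1],
])

corr = np.corrcoef(scores, rowvar=False)  # competency-by-competency correlation matrix
print(np.round(corr, 2))  # off-diagonal values near 1.0 signal heavily overlapping scales
```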
Secret #5 Normative Scoring Can Be Confusing to Interpret
The use of norms can indeed be confusing to respondents trying to interpret their 360 results. A lot of vendors are pretty impressed with their industry, job level, and international norms and offer them as real selling points in the sale of their 360 tools. But how was the normative group defined, and how many people are in it? How truly representative is it (even within a company) of the respondents?
At the end of the day, relative scores comparing self-views to the views of others providing feedback are most useful and important for behavior change.
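To make that concrete, here is a minimal sketch of the kind of self-versus-others comparison I mean. The competencies, scores, and 0.5-point gap threshold are all hypothetical; the point is that the relative pattern, not a norm table, starts the development conversation.

```python
# Hypothetical self vs. rater-average scores for a few competencies (1-5 scale).
feedback = {
    "Communication": {"self": 4.5, "others": 3.2},
    "Delegation":    {"self": 3.0, "others": 3.1},
    "Coaching":      {"self": 2.5, "others": 3.8},
}

GAP = 0.5  # illustrative threshold, not an established standard

for competency, s in feedback.items():
    diff = s["self"] - s["others"]
    if diff > GAP:
        view = "possible blind spot (self rates higher than others)"
    elif diff < -GAP:
        view = "possible hidden strength (others rate higher than self)"
    else:
        view = "self and others largely agree"
    print(f"{competency}: self={s['self']}, others={s['others']} -> {view}")
```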
Secret #6 Vendor Reports are Typically Static but Learning Styles are Diverse
Most vendors have put a lot of money into programming nifty-looking feedback reports, but most offer little flexibility in the report itself. For example, how many vendors do you know that offer a choice between line and bar graphs, or between average score and normative score interpretations? Not many. Unfortunately, respondent learning styles and preferences for how to read, interpret, and understand reports are more diverse than most vendors will acknowledge.
Well, there you have it. Secrets from a vendor about other vendors and what none will dare to tell you about 360 feedback assessment!
However, now that you know these, I’d like to share one last secret.
What matters most about multi-rater feedback is the process, not the tool.
Do it right or don’t do it at all. Be well.