“One size does not fit all.”
– Paula Dobriansky
Little research exists to provide definitive answers about the best way to present 360-degree feedback results. However, it is intuitive that participants have different learning styles, and some may favor qualitative presentations of results while others prefer quantitative ones.
One study that does give some insight into which presentation style might maximize the acceptability, understanding, and interpretation of 360-degree feedback results comes from Atwater and Brett (2003). These researchers compared several different report presentations with participants and concluded that:
1. Individuals appear to be significantly less positive and less motivated after receiving text feedback than after receiving numeric feedback.
2. Individuals appear to prefer numeric scores and normative feedback in multi-rater interventions. (Note: In our experience with Envisia’s international partners, it is unusual for them to prefer normative scores; they tend to prefer average scores.)
Some but not all vendors offer choices in report presentation options. Providing multiple options in feedback reports appears to help participants understand and accept feedback and, ultimately, decide which competency areas to focus on in their development efforts. For example, Envisia Learning, Inc., provides the following free options for each feedback report, based on coach, consultant, and organizational preferences and the purpose of the 360-degree feedback presentation (www.360online.net/reportOption):
- Graphical presentation of results (color-coded graphs) comparing self-ratings to those of others
- Summary tables showing all 360-degree feedback assessment questions by rater categories
- Comparison results using vendor norms, organizational norms, or average scores
- Standardized norms presented in graphs, using t-scores or z-scores
- Most/least frequent behaviors with rating distributions by each rater category
- Johari Window printing preference, showing a four-box grid comparing self-assessment ratings to those of others for each rater category, providing an index of self-insight or self-awareness
- Multiple methods to assess rater agreement, such as a range of scores and a statistical measure of rater agreement based on standard deviation, to help participants understand and interpret outliers and polarized perceptions (see the brief arithmetic sketch after this list)
- Open-ended questions categorized by rater group or randomly presented
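For readers who want to see the arithmetic behind the standardized-score and rater-agreement options above, here is a minimal sketch in Python. It is illustrative only, not Envisia’s implementation; the item ratings, norm values, and rater category are all hypothetical.

```python
from statistics import mean, stdev

# Hypothetical ratings for a single item on a 1-5 scale (illustrative data only).
self_rating = 4.0
other_ratings = [2.0, 3.0, 3.0, 4.0, 5.0]   # e.g., ratings from direct reports
norm_mean, norm_sd = 3.2, 0.6               # assumed vendor or organizational norms

# Standardized scores: a z-score locates the observed average relative to the norm;
# a t-score rescales it to a distribution with mean 50 and standard deviation 10.
observed = mean(other_ratings)
z_score = (observed - norm_mean) / norm_sd
t_score = 50 + 10 * z_score

# Rater agreement: the standard deviation (and range) of others' ratings flags
# outliers and polarized perceptions that deserve a closer look.
agreement_sd = stdev(other_ratings)
score_range = max(other_ratings) - min(other_ratings)

# Self-insight gap (the idea behind the Johari Window grid): self vs. others.
self_other_gap = self_rating - observed

print(f"Average rating from others: {observed:.2f} "
      f"(self: {self_rating:.1f}, gap: {self_other_gap:+.2f})")
print(f"z-score vs. norm: {z_score:.2f}   t-score: {t_score:.1f}")
print(f"Rater agreement: SD = {agreement_sd:.2f}, range = {score_range:.1f}")
```

In an actual report these figures would be computed per item and per rater category and then rendered graphically; the sketch simply shows the numbers that such graphs summarize.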
Coach’s Critique:
Throughout my experience in coaching, I have learned this…ONE SIZE DOES NOT FIT ALL…everyone has a different way of handling 360-degree feedback data…different ways of learning the data, different ways of processing the data, different ways of perceiving the data, and different ways of reacting to the data. With so many types of people with different personalities, learning styles, and perceptions, it seems obvious that effectively supporting participants’ processing of the feedback requires presenting it in a variety of ways.
For some people, graphical presentations allow them to put their feedback in perspective because they illustrate behavioral patterns across the different categories. For others, graphical data can feel complicated, and they prefer to view lists of items ranked by frequency. I have seen still other clients who focus on the open-ended feedback rather than the quantitative data. For this reason, when choosing a 360-degree feedback report, it is best to choose one that is either customizable or presents the feedback in multiple ways, so that it facilitates learning for different types of people.
What do you find to be the most effective way of presenting 360-degree feedback results?
Great post. Presenting the feedback in a way that suits each individual’s learning style will help it to be absorbed.