“For many people, one of the most frustrating aspects of life is not being able to understand other people’s behavior.” -Goethe
Writing good behavioral statements for a customized 360-degree feedback assessment is critical to ensuring that what is being measured is accurate and useful for developmental purposes. Here are some recommended tips to ensure your items are specific, behavioral, and useful:
Ask only one thing in each question. A common mistake is writing items that attempt to assess several things at once; the result is that none of them is measured accurately. Ask only ONE specific thing per item!
Ask about something that can be observed by others. If a behavior is NOT observable, it should not be assessed; raters will be forced to make assumptions, which leads to inaccurate results.
Write in clear language and avoid terms that may not be obvious. Jargon and technical terms, for example, are not understood by everyone. Items should be stated clearly enough that ALL raters can understand them.
Make sure the item is relevant to its competency area. If an item does not reflect the competency it is grouped under, it will likely be misinterpreted. The competency area often provides context and definition for each item, so an item is not useful unless it is placed in the right category.
Verify that the wording of the item matches the rating scale. If you are measuring the frequency of a behavior, make sure the item is written in terms of frequency; if the scale measures effectiveness, ensure the item reflects that.
Coach’s Critique:
Some researchers and practitioners suggest that 360-degree feedback items should be presented in a transparent manner (i.e., not as a test where questions are randomized or not easily grouped with the competency they are intended to measure). For instance, suppose you are trying to measure the item, “Is willing to take the time to understand and listen to employees.” This could reflect the competency “Listening,” but it could also reflect the competency “Employee Involvement.” How the item is presented to raters may change how they think about the participant’s past behaviors and can actually change how they appraise the person.
If the item were to fall under “Employee Involvement,” the rater might key on the phrase “willing to take time to understand…” and evaluate the participant based on their impression of how motivated the individual is to take a direct interest in others, rather than on the mechanics of listening skills. People are likely to create a context of meaning simply from whether the question is grouped under a competency category or appears in random order in the 360-degree assessment.
Rater bias can be minimized or exacerbated both by how a question is worded and by whether it is shown in the context of the competency it is intended to measure. It all starts with creating items/questions that are not double-barreled and that measure specific, observable behaviors, which increases rater consistency and minimizes potential rater error. Whether the item is identified and shown under its respective competency may also affect the consistency and accuracy of rater responses, both of which are critical for useful 360-degree feedback interventions. Finally, the actual rating scale used (effectiveness, potential, frequency) may also strongly affect the usefulness and accuracy of rater feedback.
What do you suggest as best practices for creating 360-degree feedback behavioral items, and how should they be organized in an actual assessment?
Hi Sandra, this is another very good blog. My additional suggestion for best practice would be to ensure that all of the behavioral statements are clearly linked to the organization’s needs. To make a real difference, 360-degree feedback questionnaires need to be ‘fit for purpose’. In real terms, this means they need to be directly linked to the organization’s expectations of its managers and leaders, both now and in the future.