“Some authors should be paid by the quantity that is not written.” – Anonymous
In general, 360-degree feedback questionnaires should be targeted and contain relevant questions. They should be long enough to accurately measure the competencies they are intended to assess (reliability), but not so long that they decrease motivation to complete them.
Because respondent fatigue is a common challenge, it’s important to consider the factors that motivate or de-motivate participants to complete a survey or 360-degree feedback tool. For example, it takes, on average, between 30 and 60 seconds to answer each question, and a respondent is often asked to complete questionnaires on a number of people, so it’s important that “respondent fatigue” doesn’t set in. Typically, 360-degree feedback questionnaires contain between 40 and 70 items along with 1-2 open-ended questions and can be completed in 10-15 minutes.
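To put that arithmetic in perspective, here is a minimal sketch. The 30-60 seconds-per-item figure comes from the paragraph above; the item counts are illustrative examples, not recommendations:

```python
# Estimate 360-degree questionnaire completion time per ratee.
# Assumes 30-60 seconds per item, as cited above; the item
# counts below are illustrative examples only.

def completion_minutes(num_items: int, sec_per_item: float) -> float:
    """Estimated minutes to complete num_items questions."""
    return num_items * sec_per_item / 60

for items in (40, 60, 70):
    fast = completion_minutes(items, 30)   # quick rater: 30 sec/item
    slow = completion_minutes(items, 60)   # careful rater: 60 sec/item
    print(f"{items} items: {fast:.0f}-{slow:.0f} minutes per ratee")
```

Multiplied across the several colleagues a single rater is often asked to assess, even a mid-length questionnaire adds up quickly, which is exactly where fatigue enters.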
Dale Rose’s 2009 benchmark study found that 33% of companies used 360-degree feedback assessments containing 11-40 questions and 42% used assessments containing 41-69 questions, so the 41-69 range appears to be the “sweet spot” for minimizing rater fatigue. ((Rose, D. (2009). Current Practices in 360-Degree Feedback: A Benchmark Study of North American Companies.))
Clarkberg & Einarson (2008) suggest that participants’ willingness to take a survey can be influenced by the visual presentation of questions (e.g., borders and the number of questions on a page). ((Clarkberg, M., & Einarson, M. (2008). Improving Response Rates through Better Design: Rethinking a Web-Based Survey Instrument.))
Brown (2003) further suggests that it might not be the exact number of questions that impacts completion, but rather the time and effort required to complete an online assessment. ((Brown, J. (2003). Survey Metrics Ward Off Problems. Marketing News, 17, 17-20.))
With that said, multiple elements contribute to respondent fatigue. Here are some of the main points to consider when balancing quality and quantity in 360-degree feedback tools (a short sketch for sanity-checking the quantitative guidelines follows the list):
- Questions per competency should be behaviorally based, observable, and specific
- Questions should not contain reverse-scored or negatively worded items (360s are not a test)
- Questions should be free from jargon and euphemisms (e.g., “thinks outside the box”)
- Questions should be actionable, addressing behaviors that can be developed through coaching, training, etc.
- Open-ended questions should either target specific areas or ask what the ratee should start, stop, and continue doing
- Questions do not need to be randomized (make 360s as transparent as possible by organizing them by competency)
- Each competency should be measured by an equivalent number of questions
- Minimum of 3 and maximum of 6 questions per competency
- To minimize rater fatigue, the total questionnaire should contain no more than approximately 50-70 items
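To make these guidelines concrete, here is a rough sketch that checks a draft questionnaire against the quantitative rules above (3-6 items per competency, balanced counts, and a total under roughly 70 items). The competency names and item counts are hypothetical examples:

```python
# Sanity-check a draft 360 questionnaire against the guidelines above.
# The competency names and item counts are hypothetical examples.

draft = {
    "Communication": 5,
    "Decision Making": 4,
    "Coaching Others": 6,
}

def check_design(items_per_competency: dict[str, int],
                 min_items: int = 3, max_items: int = 6,
                 max_total: int = 70) -> list[str]:
    """Return a list of warnings for guideline violations."""
    warnings = []
    counts = items_per_competency.values()
    for name, n in items_per_competency.items():
        if not min_items <= n <= max_items:
            warnings.append(f"{name}: {n} items (want {min_items}-{max_items})")
    if max(counts) - min(counts) > 1:
        warnings.append("Item counts per competency are not balanced")
    if sum(counts) > max_total:
        warnings.append(f"Total of {sum(counts)} items exceeds {max_total}")
    return warnings

print(check_design(draft) or ["Design looks within guidelines"])
```

A check like this is most useful early, when competencies are still being traded off against total length, rather than after the instrument has been fielded.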
Coach’s Critique:
In order for a 360-degree feedback tool to be reliable, it needs to measure what it says it measures by including enough items across all relevant competencies. At the same time, “respondent fatigue” may set in for many respondents after just 30 questions. They may grow tired of thinking through every question reflectively and carefully, and may begin to suffer from inattention and pattern responding. Here is the challenge: does the respondent’s fatigue affect results? If a respondent is likely to take about half an hour to an hour to complete a 60-item multi-rater assessment, is there a difference in quality between the results of the first 30 items and the second 30? And do we have any control over how respondents take the assessment (i.e., through the instructions given to the raters, training on using the rating scale, or clear communication about the actual length of the assessment)?
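These questions are hard to settle in the abstract, but one empirical check is possible with the response data itself. The sketch below (using fabricated 1-5 ratings, purely for illustration) compares rating variability in the first and second halves of a 60-item form; a sharp flattening of responses late in the form is one common signature of pattern responding:

```python
# Crude fatigue check: compare rating variability in the first vs.
# second half of a rater's responses. Very low second-half variance
# (straight-lining) can indicate pattern responding. The ratings
# below are fabricated 1-5 scale responses to a 60-item form.
from statistics import stdev

ratings = [4, 3, 5, 2, 4, 3] * 5 + [4] * 30  # varied first half, flat second

half = len(ratings) // 2
first, second = ratings[:half], ratings[half:]
print(f"first-half stdev:  {stdev(first):.2f}")
print(f"second-half stdev: {stdev(second):.2f}")
if stdev(second) < 0.5 * stdev(first):
    print("Possible rater fatigue: responses flatten in the second half")
```

A check like this cannot prove fatigue, but flagged raters could be followed up with, or their late-form responses weighted with more caution.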
With that said, I believe that choosing a tool that is not overly exhausting is important. In fact, research suggests that shorter online surveys see higher rates of compliance and completion. I would rather have a survey that is taken properly than a longer one that yields inaccurate results. At the same time, I believe it’s up to coaches, consultants, and implementers of 360-degree tools to consider the factors mentioned above and provide some form of cautionary guidance/best practices for using the tool. Perhaps it’s a good idea to ensure that respondents understand the importance of thinking through each question thoroughly and completing the full assessment.
What are your thoughts about “rater fatigue” and how to best handle it?