360-Degree Feedback - do it right, or don't do it at all

Written on 3 September 2016 by Sharon Hudson, Director, Talent Tools

Multi-rater, or 360-degree, feedback was used by approximately 90% of Fortune 500 companies last year. There is a good reason for that: as an assessment for professional development, there is nothing better. Or worse. As a provider of 360-degree feedback tools, services, and accreditation training, the best advice I can give you is "do it right, or don't do it at all."

Our experience over the last 10 years, combined with extensive meta-analyses and research reviews, has identified many risk factors - many that other providers, even the very experienced, may have encountered but not identified as risks, or correlated with less-than-ideal 360-degree project outcomes. Each risk factor has the potential to make a constructive or destructive impact on 360-degree feedback project outcomes. So, it's worth becoming risk aware.

Why am I telling you about this? Because I am a stakeholder in the 360-degree feedback market. At Talent Tools we want everyone to come away from 360-degree feedback with positive learning outcomes, greater empowerment and improved performance. Sadly, that is not always the case.

So, what are the pitfalls? Unfortunately, there are many. Too many for any article - maybe a book. But here are two examples of often overlooked risks.

Example One:  A risk within rater groups.

Using too few raters in each rater group to achieve acceptable levels of reliability.

360-degree feedback is the collection of data from a population, or a sample of a population, across an array of relevant stakeholder groups. Adequate representation of each group is critical to ensure accurate and reliable data for analysis.

Based on recent research, we know that the correlation between the ratings of two supervisors sits around .50, where 1 is a perfect correlation and .70 or higher is considered an acceptable level of correlation. This means that to allow for differences within the supervisor rater group, you need at least four supervisor raters to get reliable scoring. There are magic numbers for peers and direct reports as well.
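A quick way to sanity-check those numbers (this is our illustration using the standard Spearman-Brown prophecy formula; the research behind the "magic numbers" may use more sophisticated methods) is to estimate the reliability of an average of k raters from the inter-rater correlation r:

    r_k = \frac{k \, r}{1 + (k - 1) \, r}

With r = .50, averaging two supervisors gives r_2 = 1.0/1.5, or roughly .67, still short of the .70 benchmark, while averaging four gives r_4 = 2.0/2.5 = .80, comfortably above it.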

If the total population of the rater group is below these 'magic' numbers, there is no issue, as you have complete data to analyse, and any "averages" are averages of the whole rather than of a sample. However, if you are using too small a sample population, you increase the probability that the averages used in reports will be misleading. Fortunately, the standard deviation can shed some extra light on the results.
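To make that concrete, here is a minimal sketch in Python (the ratings are invented for illustration, not drawn from any real report) showing how two rater groups can share the same average while disagreeing wildly, and how the standard deviation exposes the difference:

    import statistics

    # Hypothetical peer ratings on a 1-to-5 scale. Both groups
    # average 4.0, but only one of them actually agrees.
    consistent_peers = [4, 4, 4, 4]
    divided_peers = [2, 5, 4, 5]

    for label, ratings in [("consistent", consistent_peers),
                           ("divided", divided_peers)]:
        mean = statistics.mean(ratings)
        sd = statistics.stdev(ratings)  # sample standard deviation
        print(f"{label}: mean = {mean:.2f}, sd = {sd:.2f}")

    # Output:
    # consistent: mean = 4.00, sd = 0.00  -> raters agree
    # divided:    mean = 4.00, sd = 1.41  -> the mean alone would mislead

A near-zero standard deviation says the average speaks for the whole group; a large one says the "average" papers over real disagreement worth exploring in the feedback session.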

We know that the value your 360-degree feedback project yields for the organisation is directly related to the data you gather. This is regardless of the capacity of your back-end analysis and your investment in best-practice feedback. It is imperative that you identify an adequate number of the right raters, across the right rater groups, to ensure you have reliable data to generate valid results.

Example Two:  A risk in the interpretation of 360-degree feedback results

Perceptual distortions and bias by participants and raters

In our 360-degree feedback practice - encompassing a diverse range of questionnaires and instruments, measuring different competency models, with various rating scales, in a range of industries and over a lengthy period of time - we have repeatedly observed patterns of perceptual distortion and bias, which may be conscious, unconscious or a combination of both. All have implications for interpreting the results in the report and providing feedback to the candidate.

For example, let's look at those self-raters who tend to under-rate themselves, which is common, particularly in the first round of 360s. Under-raters are those whose self-ratings are significantly or uniformly lower than the ratings provided by the other rater groups.

There are myriad reasons for consciously or unconsciously under-rating oneself: modesty or humility (not wanting to be seen to score ourselves higher than others rate us); having high personal standards and expectations; or simply having a bad day or feeling depressed.

There are also several positive interpretations available when someone under-rates, such as uncovering a hidden strength or two, being a humble high performer, or continually striving to be better. On the downside, self under-rating may be interpreted as a lack of self-confidence or self-esteem, or a lack of self-knowledge.

What difference does self under-rating make to the feedback session?

Under-raters often appear to be hypervigilant to the perceived "negative" information contained in their report, and often "fixate" on the lowest scores on the rating scales and on the open-ended comments that appear "critical" or even "neutral" in tone or content. Research shows that people with negative self-views (e.g., those who are depressed) tend to tune into feedback that portrays them critically rather than positively.

We know that perceptions about feedback can be highly influenced by these personality and individual differences, making it imperative that a competent facilitator spends time with each participant to help them clarify and interpret their results.

360-degree feedback is the premier tool for leaders - everyone needs feedback and focus, and leaders need it more than most! Let's do 360s well and benefit from positive experiences, meaningful personal and professional results for candidates, and improved performance organisationally. Done properly - when people want this information and use it - 360-degree feedback becomes one of your organisation's critical developmental tools. Please, "do it right, or don't do it at all."

Need help?  Contact us now - team@talenttools.com.au  |  1800 768 569

Let us help you turn talent into performance at your workplace.

Author: Sharon Hudson, Director, Talent Tools