360 feedback sounds simple.
You ask your colleagues, manager, and/or team for feedback on your performance. Done.
You fill out a self-assessment, because your insights into your performance matter too. Done.
The results are compiled. Your self-ratings are compared to those of your peers. Done.
You get a comprehensive view of your performance, learn how people experience you, and take steps to improve.
Simple, right?
Except… that’s not what usually happens.
Instead, 360 feedback creates resentment, defensiveness, or worse—a beautifully formatted report that sits on your desktop collecting digital dust.
If you’re thinking about implementing 360 feedback reviews—which is a smart move, especially as one-on-one reviews lose relevance—here are six common 360 feedback mistakes to avoid.
Mistake #1: Using Poorly Designed Questions
Generic surveys lead to generic feedback. Unclear questions lead to unclear feedback—or no feedback at all. For example:
“Is this person effective?”
Effective at what, precisely? And how is “effective” defined? Competent? Socially skilled? Strategic? Reliable? Is someone still “effective” if they produce excellent work but chooses not to interact with others during lunch?
“Does this person communicate clearly and inspire others?”
That’s a double-barreled question. What if it’s yes to one and no to the other?
“Is this person passionate about their work?”
That’s a mind-reading question. Passion can look like enthusiasm, or like steady, disciplined dedication. Both are valuable. But raters can’t see how someone feels internally; they can only observe behavior.
Bottom line: Low-quality questions produce low-quality feedback.
A strong 360 measures specific, observable competencies tied directly to the role and the organization’s goals. So if you’re running a 360 for a leader, that might include:
- Decision-making quality
- Delegation
- Managerial courage
- Integrity
- Motivating others
- Emotional intelligence
Vague personality impressions don’t drive development. Behavior-based competencies do.
Mistake #2: Choosing the Wrong Raters
Sometimes raters are selected by the participant. The upside? They’ll choose people who know them well. The downside? They may choose only “safe” raters.
In my experience with 360 programs, I’ve seen managers intentionally exclude their direct reports because they anticipated poor ratings. Unsurprisingly, that distorted the results and rendered the 360 nearly useless.
A credible 360 feedback process requires balanced representation:
- Manager(s)
- Peers
- Direct reports (when applicable)
- Self-rating
If key perspectives are missing, the feedback becomes incomplete—and misleading.
Mistake #3: Ignoring Anonymity Concerns
If raters aren’t confident their responses are anonymous, honesty becomes a minefield. And fear of repercussions is very real.
Research by Visier found that 47% of employees felt at least some pressure to withhold the truth when asked about their job engagement. A survey by the Institute of Business Ethics revealed that 43% of workers feared losing their jobs if they spoke up about misconduct—and among those who did, 67% reported experiencing retaliation.
Even if retaliation is rare in your organization, the fear of risk is enough to impact honesty.
If your 360 feedback system does not guarantee anonymity through aggregation and minimum rater thresholds, your data will be inflated or sanitized. And sanitized feedback doesn’t lead to growth.
Mistake #4: Failing to Prepare Participants
Here are the two most common self-rating patterns I see:
- The “I’ll rate myself low to appear humble” strategy. Unfortunately, this often gets interpreted as low self-awareness or low confidence.
- The “go big or go home” strategy. Participants rate themselves extremely high to project confidence, only to have the contrast with rater feedback look even more dramatic.
Participants need to understand that perfect 360 scores are rare. In over 500 reports I’ve reviewed, even top performers had development areas.
Mistake #5: Not Having a Development Plan
Dead-end feedback is demoralizing. Imagine finally voicing your concerns about a manager’s behavior. The data is collected. The report is delivered.
And then… nothing happens.
No coaching. No training. No follow-up. No behavior change. It’s like watching someone take your hard-earned money and feed it into a shredder. Without a development plan, 360 feedback becomes wasted data.
In organizations that integrate coaching alongside 360 assessments, change is measurable. Managers improve, teams become more engaged, and morale increases.
Mistake #6: Doing It Only Once
A one-time 360 is a snapshot—a point in time. But if you want real insights—and real proof that a person took the feedback seriously—you need to see patterns.
Organizations that treat 360 feedback as part of an ongoing development effort (every 12–24 months) build stronger leadership pipelines and more self-aware managers.
360s are especially valuable for:
- Supporting someone stepping into a new role
- Identifying leadership potential
- Evaluating new leaders
- Assessing burnout risk
Personally, my favorite benefit of 360s is that they bring unsung heroes into the spotlight—and ring the warning bell when high performers are burning out.
A 360 is like going to the dentist. You can skip it for years, but eventually, something starts to hurt. And by the time it hurts, the problem is bigger—and more expensive—than it needed to be.
Preventive feedback is far less painful than a managerial root canal.
When implemented properly, 360 feedback is one of the most powerful development tools available. I’ve seen toxic managers finally confront their blind spots. I’ve seen overlooked contributors receive long-overdue recognition. I’ve seen teams relax after years of tension.
In a world full of assessments—from IQ tests to “Which season are you?” quizzes—360 feedback sits at the top tier, because it is honest and makes real change happen. And when that honesty is protected and followed by action, it becomes truly transformational.
