6-Step Checklist to Launch Your 360 Assessment

360 feedback systems help participants understand how their leadership behaviors are perceived by different groups within the organization, as well as by external stakeholders.

A well-developed 360 lets people see their areas of strength as well as areas ripe for improvement. When starting to build a 360 initiative, here are some critical issues to keep in mind:

  1. Start with your objective or goal

Start by being very clear about the purpose of the 360 program for your organization. Will it be strictly for the development of individuals? Or will the results be used in decision-making, such as identifying high-potential employees, succession planning, promotions, or talent reviews? If you plan to go beyond leader development and use the data for personnel decisions, you will need to assess the validity of the tool for that purpose. It is also important to communicate to participants exactly how the 360 results will or will not be used by the organization.

  2. Create a leadership competency model

Competency models are essential for developing items that capture the behaviors relevant to the success of your leaders and your organization. The model spells out how you expect leaders to operate across a number of domains, or competencies. The specific competencies should be based on your organizational culture, values, and business needs. If you don’t have an existing model, an IO consultant can help you identify the behaviors that distinguish effective leaders in your particular environment. The competency model is something the C-suite should sign off on; getting that approval will also create buy-in for the 360 program.

  3. Keep the end in mind

Make sure you have a plan for helping participants turn the insights from their 360 reports into development plans and behavioral change. Programs focused on executives often include personal coaching. In broader programs covering hundreds or even thousands of managers, individual coaching may not be practical; there, group debrief sessions or workshops can be an effective way to help participants understand their feedback and create realistic, effective development plans. OrgVitality offers a 360-centric Manager Academy that supports participants throughout the entire process with micro-training focused on the specific tasks at each stage. If you don’t have access to something like this, communicating expectations to participants will help ensure a more successful project. At a minimum, incorporate some guidance into the reports, and make resources based on your competencies available through your learning management system.

  4. Specify who can give feedback

Almost as important as deciding who can be a participant is deciding who can be a rater. Broadly speaking, anybody who has ample opportunity to observe the participant’s leadership behaviors will make a good rater. Traditionally, this includes the participant’s manager, direct reports, and peers. But it is worth broadening the scope: additional raters could include dotted-line or matrix managers, indirect reports, others who are familiar with the participant, and even external raters from customer or supplier organizations. You may also want to incorporate a review of participant rater lists, either by HR or by the participant’s manager, before the survey launches.

  5. Opt for a flexible system

There’s a lot of work that goes into running a 360 process, so the more automated and flexible the system, the better. For example, you will want a system that lets you enroll participants either in cohorts or as individual registrations. Cohorts are important if there is a set milestone for a learning and development program, or if you want a group to go through the process simultaneously. Other times, only one or two people may need to go through the process. Having the ability to do both makes the system far more useful.

Ideally, the system should automatically populate rater lists with each participant’s manager and direct reports, and possibly some peers (others reporting to the same manager). This saves time, since only minor adjustments need to be made to each enrollment. The system should accommodate your preferences for the rater selection process, optional manager approvals, survey administration duration, and report delivery. It should also have the ability to extend administration if needed, and to notify participants if their report is at risk due to an insufficient number of raters.

  6. Consider what will go into reports and who will have access

Well-designed reports show the data from several perspectives. They usually start with competency scores, which average the ratings across the behaviors defining each competency. Item-level breakdowns let participants drill down into the details. Since different types of raters may have different perspectives, showing scores split out by rater type can be informative. Blind spot and hidden strength analysis highlights meaningful differences between self ratings and those of other raters. Many participants find raters’ written comments the most valuable part of the report; we recommend including them as written, without screening or editing. Comments are sometimes broken out by rater type, but often, while manager comments are kept separate, comments from other rater types are grouped to preserve confidentiality.
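As a rough illustration of the scoring and gap analysis described above, the sketch below computes a competency score as the mean of item ratings per rater type and flags a blind spot (self rates higher than others) or hidden strength (self rates lower). The rating data and the gap threshold are hypothetical, chosen for illustration; they are not OrgVitality's actual methodology.

```python
from statistics import mean

# Hypothetical 1-5 ratings on the items under one competency,
# keyed by rater type (illustrative data only).
ratings = {
    "self":           [5, 4, 5],
    "manager":        [3, 4, 3],
    "direct_reports": [3, 3, 4, 3, 2],
    "peers":          [4, 3, 3],
}

# Competency score per rater type: average across the items.
scores = {group: mean(vals) for group, vals in ratings.items()}

# Average of all non-self ratings, weighting every rating equally.
other_vals = [v for g, vals in ratings.items() if g != "self" for v in vals]
others_avg = mean(other_vals)

GAP_THRESHOLD = 0.75  # assumed cutoff for a "meaningful" difference
gap = scores["self"] - others_avg
if gap >= GAP_THRESHOLD:
    label = "blind spot"        # self sees a strength others don't
elif gap <= -GAP_THRESHOLD:
    label = "hidden strength"   # others see a strength self doesn't
else:
    label = "aligned"

print(f"self={scores['self']:.2f} others={others_avg:.2f} -> {label}")
```

A real system would run this per competency and may weight rater groups rather than individual raters; the threshold for a "meaningful" gap is a design decision for your program.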

Finally, you must decide who gets access to each report. If your system is only meant to support the development of each participant, then only they need access. They should be strongly encouraged to share the results with their manager, but not required. On the other hand, if your 360 program will support talent management decisions then both HR and the manager will need access to the report. If coaches will be supporting the participants, they will also need access.

Working through these considerations will greatly increase the effectiveness and acceptance of your 360 program.

Being able to give clear explanations of the purpose and methodology will build trust in the process when you launch it. As a last suggestion, consider launching with a small pilot of ten or twenty participants, and then debriefing with them (and some raters) about the process itself.


