Four Tips for Getting the Most Out of Your Assessments
We wrote this article for executives and other leaders who are trying to make heads or tails of their assessment results. They are smart folks, but they are not necessarily experts in interpreting assessment data and drawing the conclusions needed to make important decisions.
There are thousands of assessments in the marketplace, and we cannot address the unique challenges of each one. However, in the process of creating, identifying, selecting, and analyzing assessments, some challenges come up again and again and make it difficult to get the most out of your assessment investment. The purpose of this article is to give you four tips for making the most of your assessments.
For this article, we define an assessment as a survey or test designed to provide information about an employee or a workgroup. Typical examples include employee surveys, multi-rater feedback, and tests used in hiring and selection.
Tip 1: Look at the Individual Item Data, Because Rollups Mask Important Differences
Summary results, or rollups, of individual items should be used only to obtain a preliminary snapshot of the data. They can, and often do, hide important differences among individual items and lead to unwarranted conclusions. If a vendor provides an executive summary of your assessment results, such differences might not be included. That's fine, as long as you're willing to look at the assessment data in detail.
Most organizational surveys are organized into logical item groupings. Within each grouping, there is often an item or two that is conceptually and/or statistically "far off" from the others. For instance, in the Developmental Opportunity section of an employee engagement survey, four items might score well while a fifth lags far behind; the section average would hide that lagging item entirely.
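The masking effect can be sketched in a few lines. The item wordings and scores below are invented for illustration, not taken from any real survey:

```python
# Hypothetical item scores for one survey section, on a 1-5 agreement scale.
developmental_opportunity = {
    "I have opportunities to learn new skills": 4.2,
    "My manager supports my development": 4.1,
    "I receive useful feedback on my work": 4.0,
    "I can see a career path here": 4.3,
    "I have had a development discussion this year": 2.1,
}

# The rollup (section average) looks reasonably healthy...
rollup = sum(developmental_opportunity.values()) / len(developmental_opportunity)
print(f"Section rollup: {rollup:.2f}")

# ...but it masks the one item that is far off from the rest.
for item, score in developmental_opportunity.items():
    if rollup - score > 1.0:
        print(f"Masked by the rollup: {item!r} scored {score}")
```

A rollup of about 3.7 would raise no alarms in an executive summary, yet one item sits more than a point and a half below it.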
Tip 2: Ask Questions about the Validation Sample and Norms
Don't blindly assume that an assessment was validated appropriately or that the norms accurately represent where you should be. The more the norming population differs from your organization, the less meaningful the comparisons become. Often the differences are minor, but sometimes they are major. Ask questions and use judgment, both before the purchase and when interpreting results.
First of all, what is a "validation sample"? In this context, it is the group (or groups) of people on whom the test or survey was first researched. In all likelihood, it successfully predicted or measured what it was intended to, or else the test publisher wouldn't have sold it to you. But the group of people on which it was first researched might have been very different from your group.
Let's take the following hypothetical situation—suppose there is an assessment, called "Sales Infinity Plus." It is designed to assess junior sales staff and predict those who are likely to emerge into effective mid-level salespeople. It measures: (a) hard work, (b) a high-energy delivery, (c) product knowledge, and (d) ability to overcome objections.
The researchers validated it on a sample of call center employees selling a television service. These call center employees speak to hundreds of homeowners on the phone every day. The service sells for about $100 per month. Let's assume that it significantly predicts high sales performance in that environment.
Now suppose your company sells commercial security systems. The salespeople must visit the customers' sites. The purchases cost thousands of dollars. The buyers are upper-level executives. There are many ways to customize the installation. While the same factors that the other test measured apply here, they apply in a different context. For the commercial security salesperson, job components include an in-person presentation, learning the customer's business in order to recommend effective solutions, and the ability to present clear and persuasive proposals.
From one environment to the next, the same assessment will not predict sales success equally. It has no measures to address the additional aspects of the job. Does that mean it will not work? No. It just means that it won't predict with the certainty that it would have in its original setting.
Let's now look at "norms." The concept of norming is similar to validation. Some organizations prefer surveys or tests that are normed. That means that the actual average representing your participants on each item is accompanied on the reports by a "norm": what others scored on that item. The key question is who those "others" are.
Ask about the people in the norming group. Are they executives? Are they in your industry? Are they in small organizations or large ones? Do they have similar constraints to yours? Are they leaders or individual contributors? Are they overwhelmed by having too much to do? If you don't know these answers, determining relevancy becomes much more difficult.
Once you know about the organizations that the norms represent, you should then decide whether you want them included in the reports. By leaving them there for readers to see, you provide valuable information about the group against which your assessment results are being compared.
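The judgment call above can be made explicit. The sketch below is hypothetical: the item names, scores, norms, and the crude relevance gauge are all invented for illustration, not part of any vendor's reporting:

```python
# Hypothetical item averages for your organization vs. vendor-supplied norms.
your_scores = {"workload": 3.1, "recognition": 3.8, "tools": 4.0}
vendor_norms = {"workload": 3.6, "recognition": 3.7, "tools": 3.9}

# Facts worth requesting from the vendor about the norming sample.
norm_group = {
    "industry_match": False,
    "similar_org_size": True,
    "same_job_levels": True,
}
# Crude 0-1 gauge of how comparable the norm group is to your organization.
norm_relevance = sum(norm_group.values()) / len(norm_group)

for item in your_scores:
    gap = your_scores[item] - vendor_norms[item]
    caveat = "" if norm_relevance > 0.8 else " (norm group differs; interpret cautiously)"
    print(f"{item}: {gap:+.1f} vs. norm{caveat}")
```

The point is not the arithmetic but the habit: attach what you know about the norming sample to every norm comparison you report.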
Tip 3: Learn the General Characteristics of the Respondents before Trusting the Data
Understand the instructions that the assessment respondents were given, so that you can determine how those instructions may have influenced the ratings. When viewing assessment data, learn the characteristics of the group that provided it. Doing so may give you reason to take the results with a grain of salt.
Several years ago, one of the authors surveyed hundreds of managers about whether a class they had taken affected their delegation and empowerment skills. Overall, they said that there was great improvement. Then, we surveyed the direct reports of those managers about how their managers' delegation and empowerment behavior had changed. The direct reports said there was improvement, but not quite to the degree the managers had said. Perhaps the managers themselves were not the best judges.
The big picture here is to be wary of anyone's determination, including your own, of whom to hire based on an interview. Make sure that the interviewers were well trained before you place too much stock in their opinions. And be sure to understand the instructions that were given to raters or interviewers about how their input would be used.
Tip 4: Act on the Results
The assessment should never be the last step in any process. You should take action to improve your organization. Even if the results show that your organization is in satisfactory shape, the assessment participants who provide the data want to know that you care about what they said. Thus, some follow-up action should always be taken.
It is important that you take some kind of improvement action. Your busy employees took time out of their day to complete the assessments, and they want to know that it was useful. Therefore, you should act on the data. Many would say you are obligated to; it is simply good assessment etiquette.
Take action how? Identify the most important learning points and act on them. Fix weaknesses. Leverage strengths. You probably had a good reason for conducting the assessment. Whatever that reason was, go forward with what you learned. But, don't get carried away. Don't work on every need. Pick the most important few and work on those. Organizations that set too many goals for too few people often don't reach them.
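"Pick the most important few" can be sketched as a simple prioritization step. The item names, scores, target, and goal limit below are all hypothetical, invented for illustration:

```python
# Hypothetical assessment results: average score per topic, on a 1-5 scale.
results = {
    "communication": 3.9,
    "delegation": 2.4,
    "recognition": 2.9,
    "career development": 3.1,
    "tools and resources": 4.2,
    "workload balance": 2.7,
}
TARGET = 3.5    # assumed target score for a "healthy" topic
MAX_GOALS = 3   # don't work on every need; keep the goal list short

# Gap below target for each topic that misses it.
gaps = {topic: TARGET - score for topic, score in results.items() if score < TARGET}

# Act only on the few topics with the largest gaps.
priorities = sorted(gaps, key=gaps.get, reverse=True)[:MAX_GOALS]
print("Focus areas:", priorities)
```

Everything below target is a legitimate need; the discipline is in cutting the list to a size your organization can actually execute.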
Make sure to communicate your plan. Tell your employees that you appreciate their feedback and describe the actions you are taking (but don't share sensitive performance details inappropriately). Communicate in a credible way by referring to the specific information they provided.
Is there ever a situation where no action is required? Suppose the survey was just to take the organization's pulse, and everything came up fine. In that situation, one might reasonably conclude that the company should focus its energies elsewhere. We believe that the company should take some action anyway. Even if there are no fires to put out, there are improvement opportunities somewhere. Much emerging leadership theory now says that it makes more sense to leverage your strengths than to fix your weaknesses anyway.
What if you don't understand what the data tells you? Should you still take action? It is best to try to learn more before you invest resources into subsequent improvement plans. You can do a more focused survey if necessary, or you can do focus groups with people likely to shed meaningful light. When you feel that the results are clear enough, then move forward.
Assessments clearly come with many challenges. But don't be overwhelmed! Imperfect is still good. No assessment is ever perfect: the norming group is not identical to your organization's employees, you can't get quite enough respondents, or customizing the survey costs too much, so you use the less appropriate generic version.
That doesn't make all the data bad. It just makes it less than ideal. If we accept that perfection is not practical, then the best approach is to learn the shortcomings of what you are doing, make the assessment as good as you can, and temper your confidence in your conclusions accordingly.
Planning, as always, is ideal. We have been astounded at seemingly smart people who plan surveys without thinking about what decisions they want to make with the results. People often assume they will figure that part out once the assessment is completed, or perhaps months or years later. The best procedure is to identify what you are trying to learn, and then design all aspects of the assessment from that perspective. Or, identify criteria to use when searching for and purchasing an assessment.
Many more articles in The HR Refresher in The CEO Refresher Archives