Four Tips for Getting the Most Out
of Your Assessment Investment

by Dr. Shelley A. Kirkpatrick and Dr. Perry Alter

Introduction

We wrote this article for executives and other leaders who are trying to make heads or tails of their assessment results. They are smart people, but they are not necessarily experts in interpreting assessment data and drawing the conclusions needed to make important decisions.

Thousands of assessments exist in the marketplace, and we cannot address the unique challenges each one presents. However, the process of creating, identifying, selecting, and analyzing assessments raises common challenges that often make it difficult to get the most out of your assessment investment. The purpose of this article is to offer four tips to help you do just that:

  • Look at the Individual Item Data, Because Roll-Ups Mask Important Differences
  • Ask Questions about the Validation Sample and Norms
  • Learn the General Characteristics of the Respondents before Trusting the Data
  • Act on the Results

For this article, we define an assessment as a survey or test designed to provide information about an employee or a workgroup. Typical assessments include the following:

  • Employment or pre-hire tests
  • Employment or pre-hire interviews
  • Organizational surveys on engagement, retention, morale, or diversity
  • Skill or competency surveys
  • 360-degree leader and manager development assessments

Tip 1: Look at the Individual Item Data, Because Roll-Ups Mask Important Differences

Summary results, or roll-ups, of individual items should be used only to obtain a preliminary snapshot of the data. They can, and often do, hide important differences among individual items and lead to unwarranted conclusions. If a vendor provides an executive summary of your assessment results, such differences might not be included. That's fine, as long as you're willing to look at the assessment data in detail.

Most organizational surveys are organized into logical item groupings. Within each grouping, there is often an item or two that is conceptually and/or statistically "far off" from the others. For instance, here are five items from the Developmental Opportunity section of an employee engagement survey.


Developmental Opportunity | Average (1=low, 5=high) | Underlying Explanation (Not in Survey Data)
I receive adequate training. | 4.0 | The jobs are mostly repetitive; little training is needed.
My performance appraisals are timely. | 4.2 | The organization requires this.
I am aware of potential opportunities for promotion. | 4.2 | The organization continually circulates these in the newsletter.
My manager is committed to my continuous development. | 2.3 | Managers here just don't pay attention to that.
My manager distributes developmental assignments equally. | 4.3 | Equally and fairly, but rarely.
Overall Average for Developmental Opportunity | 3.8 |


At first glance, it seems that this company does a good job of providing developmental opportunities. But if you looked only at the overall average of 3.8, you would miss the fourth item's average of 2.3 and the important conclusion behind it: managers do not prioritize their employees' development. The consequence of failing to dig deeper is a missed opportunity for improvement. Getting managers to pay more attention to employee development would very likely enhance performance or reduce turnover.
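
To make the arithmetic concrete, here is a minimal sketch in Python that computes the section roll-up from the item averages above and flags what the roll-up masks; the 1.0-point flagging threshold is our own illustrative choice, not part of any survey product:

    # A minimal sketch: compute a section roll-up and flag the items it masks.
    # The 1.0-point threshold is an illustrative assumption, not a standard.

    item_averages = {
        "I receive adequate training.": 4.0,
        "My performance appraisals are timely.": 4.2,
        "I am aware of potential opportunities for promotion.": 4.2,
        "My manager is committed to my continuous development.": 2.3,
        "My manager distributes developmental assignments equally.": 4.3,
    }

    rollup = sum(item_averages.values()) / len(item_averages)
    print(f"Section roll-up: {rollup:.1f}")  # prints 3.8

    THRESHOLD = 1.0  # flag items more than one point away from the roll-up
    for item, average in item_averages.items():
        if abs(average - rollup) >= THRESHOLD:
            print(f"Look closer ({average:.1f}): {item}")

Run against the table above, the roll-up of 3.8 looks healthy, but the manager-commitment item (2.3) is immediately flagged.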

Tip 2: Ask Questions about the Validation Sample and Norms

Don't blindly assume that an assessment was validated on a group like yours or that its norms accurately represent where you should be. The more the norms differ from your organization, the less meaningful the conclusions. Often the differences are minor, but sometimes they are major. Ask questions and use judgment, both before the purchase and when interpreting results.

First of all, what is a "validation sample"? In this context, it is the group (or groups) of people on whom the test or survey was first researched. In all likelihood, it successfully predicted or measured what it was intended to, or else the test publisher wouldn't have sold it to you. But the group of people on whom it was first researched might have been very different from your group.

Consider a hypothetical assessment called "Sales Infinity Plus." It is designed to assess junior sales staff and predict who is likely to develop into an effective mid-level salesperson. It measures: (a) hard work, (b) a high-energy delivery, (c) product knowledge, and (d) the ability to overcome objections.

The researchers validated it on a sample of call center employees selling a television service. These call center employees speak to hundreds of homeowners on the phone every day. The service sells for about $100 per month. Let's assume that it significantly predicts high sales performance in that environment.

Now suppose your company sells commercial security systems. The salespeople must visit customers' sites. The purchases cost thousands of dollars. The buyers are upper-level executives. There are many ways to customize the installation. While the same factors that the other test measured apply here, they apply in a different context. For the commercial security salesperson, the job also includes delivering in-person presentations, learning the customer's business in order to recommend effective solutions, and presenting clear and persuasive proposals.

The same assessment will not predict sales success equally well in both environments; it has no measures to address the additional aspects of the job. Does that mean it will not work? No. It just means that it won't predict with the certainty it would have had in its original setting.

Let's now look at "norms." The concept of norming is similar to validation. Some organizations prefer surveys or tests that are normed, meaning that the actual average for your participants on each item is accompanied on the reports by a "norm," or what others scored on that item. Let's look at an example.

Competency: Communication | Your Organization's Ratings | National Norm
Provides regular and timely updates on our mutual projects. | 4.5 | 3.9
Provides helpful opinions. | 4.3 | 4.1
Uses the most appropriate medium to communicate (email, phone, meetings). | 4.7 | 4.2
Communicates specific actions and plans clearly. | 4.5 | 3.8
Explains the big picture and context. | 2.5 | 3.5
Overall Averages | 4.1 | 3.9


This seems to tell us that your organization is a little better than the norm on some things, much better on others, but much worse on at least one. However, that is not necessarily true. Who exactly are these "national norm" people? Suppose the vendor's salespeople told you that the norming was done in organizations like yours.

Are they executives? Are they in your industry? Are they in small organizations or large ones? Do they face constraints similar to yours? Are they leaders or individual contributors? Are they overwhelmed by having too much to do? If you don't know these answers, determining relevance becomes much more difficult.

Once you know about the organizations that the norms represent, you should then decide whether you want them included in the reports. By leaving them there for readers to see, you provide valuable information about the group against which the assessment results are being compared.
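
If you do keep norms in the reports, a quick screen can help readers see where your organization genuinely diverges. Here is a minimal sketch in Python using the illustrative scores from the table above; the 0.5-point gap threshold is our own assumption, not any vendor's standard:

    # A minimal sketch: flag items where the organization's rating differs
    # notably from the norm. The 0.5-point threshold is an assumption.

    ratings = [
        # (item, your organization, national norm)
        ("Provides regular and timely updates on our mutual projects.", 4.5, 3.9),
        ("Provides helpful opinions.", 4.3, 4.1),
        ("Uses the most appropriate medium to communicate.", 4.7, 4.2),
        ("Communicates specific actions and plans clearly.", 4.5, 3.8),
        ("Explains the big picture and context.", 2.5, 3.5),
    ]

    GAP = 0.5  # treat smaller differences as noise until the norm group is known
    for item, yours, norm in ratings:
        diff = yours - norm
        if abs(diff) >= GAP:
            direction = "above" if diff > 0 else "below"
            print(f"{diff:+.1f} ({direction} norm): {item}")

Remember that every flagged gap is only as meaningful as the norm group behind it.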

Tip 3: Learn the General Characteristics of the Respondents before Trusting the Data

Understand the instructions that the assessment respondents were given, so that you can determine how those instructions may have influenced the ratings. When viewing assessment data, learn the characteristics of the group that provided it. Doing so may give you reason to take the data with a grain of salt.

Several years ago, one of the authors surveyed hundreds of managers about whether a class they had taken had improved their delegation and empowerment skills. Overall, the managers said that there was great improvement. We then surveyed those managers' direct reports about how their managers' delegation and empowerment behavior had changed. The direct reports said there was improvement, but not to the degree the managers had claimed. Perhaps the managers themselves were not the best judges.
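
One simple way to quantify that kind of gap is to compare average self-ratings with average direct-report ratings on the same items. The sketch below is in Python, and all of the scores are invented for illustration; they are not the actual study data:

    # A minimal sketch: compare self-ratings with direct-report ratings.
    # All scores are invented for illustration, not actual study data.

    self_ratings = [4.6, 4.4, 4.5, 4.7]    # managers rating their own improvement
    report_ratings = [3.8, 3.9, 4.0, 3.7]  # direct reports rating the same managers

    self_avg = sum(self_ratings) / len(self_ratings)
    report_avg = sum(report_ratings) / len(report_ratings)

    print(f"Self-rated improvement:          {self_avg:.2f}")    # 4.55
    print(f"Direct-report-rated improvement: {report_avg:.2f}")  # 3.85
    print(f"Inflation gap:                   {self_avg - report_avg:+.2f}")

A consistent positive gap suggests the self-ratings should be discounted accordingly.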

Who Will See My Ratings?

In many organizational surveys, if raters are not told who will see the data and how it will be used, they may inflate their ratings. Let's take the case of a 360-degree assessment. Suppose Aaron (the rater) knows that the data will be seen only by Andrea (the ratee) and that Andrea will appropriately interpret it as helpful. Usually, Aaron will be honest in his ratings, even to the extent of rating Andrea low.

But if Aaron doesn't know whether Andrea's boss will see the ratings or how he will use them, Aaron may rate her higher. Similarly, if Aaron doesn't know whether he will be identified as the rater, he may rate her higher. Andrea's growth becomes secondary to Aaron's concern that his honesty might shoot her (or himself) in the foot.

He probably doesn't mind contributing to Andrea's development, but he does not want to contribute to her professional demise. And what if Andrea learns that Aaron provided a bad rating? His time will come soon enough.

Rating errors can also occur with organizational surveys, where political considerations may affect ratings. For example, Brad may inflate his ratings if he knows that his division manager's annual bonus depends on the results of an engagement survey, assuming that Brad likes his division manager. On the other hand, if his ratings are anonymous, Brad may "slam" a supervisor whom he dislikes. Or Brad may inflate his ratings if he knows that the supervisor may be able to deduce who said what.
 
Thus, if you are looking at roll-up assessment data, make sure you understand what guidance the raters were given, so you can judge the extent to which it may have biased their ratings.


Now let's look at an interview situation. Traditional, unstructured interviews are statistically terrible predictors of success. The key thing to be aware of is that if interviewers are not trained in structured interviewing techniques, their assessments of candidates' potential are spotty at best. For example, here are some assessment errors interviewers often make:

  • Similar-to-me error: Interviewers favor someone who is like themselves, regardless of whether the job requires that trait. For example, a financially oriented interviewer might favor someone who has great mathematical ability, even if the job in question does not involve math or numbers.

  • Overuse of first impressions: Many interviewers make decisions, at least to some extent, based on the candidate's handshake, choice of interview clothing, and/or eye contact. Yet these usually tell you little about how the candidate will perform on the job.

  • Halo error: The interviewer makes unwarranted assumptions about the candidate's skills based on answers to unrelated questions. For example, an interviewer might conclude that a candidate facilitates groups well because she seems to be an insightful thinker. In fact, these are very different skill sets.

The big picture here is to be wary of anyone's determination, including your own, of whom to hire based on an interview. Make sure that the interviewers were well trained before you place too much stock in their opinions. And be sure to understand the instructions that were given to raters or interviewers about how their input would be used.

Tip 4: Act on the Results

The assessment should never be the last step in any process. You should take action to improve your organization. Even if the results show that your organization is in satisfactory shape, the participants who provided the data want to know that you care about what they said. Thus, some follow-up action should always be taken.

It is important that you take some kind of improvement action. Your busy employees took time out of their day to complete the assessments, and they want to know that doing so was useful. Many would say that conventional assessment etiquette morally obligates you to act on the data.

Take action how? Identify the most important learning points and act on them. Fix weaknesses. Leverage strengths. You probably had a good reason for conducting the assessment; whatever it was, go forward with what you learned. But don't get carried away. Don't work on every need; pick the most important few and work on those. Organizations that set too many goals for too few people often don't reach them.

Make sure to communicate your plan. Tell your employees that you appreciate their feedback and describe the actions you are taking (but don't be inappropriately candid in sensitive performance situations). Communicate in a credible way by using the information they provided. There are two reasons for this:

  • Engagement surveys often show that people like to feel valued and like to contribute. If you don't take some kind of action and let them know that you did, you are working against their engagement needs, and that will eventually show up in your bottom line.

  • If you don't act on the data, employees will stop completing assessments for you. Your response rates will go down, which makes the validity of your future data more questionable.

Is there ever a situation where no action is required? Suppose the survey was just to take the organization's pulse, and everything came up fine. One might reasonably conclude that the company should focus its energies elsewhere. We believe the company should take some action anyway. Even if there are no screaming fires to put out, there are improvement opportunities somewhere. Much emerging leadership theory says that it makes more sense to leverage your strengths than to fix your weaknesses anyway.

What if you don't understand what the data is telling you? Should you still take action? It is best to learn more before you invest resources in subsequent improvement plans. You can field a more focused survey, or you can run focus groups with people likely to shed meaningful light. When you feel that the results are clear enough, move forward.

Closing Thoughts

Assessments clearly come with many challenges. But don't be overwhelmed! Imperfect is still good. No assessment is ever perfect: the norming group is not identical to your organization's employees, you can't quite get enough respondents, or customizing the survey costs too much, so you settle for a less appropriate generic version.

That doesn't make all the data bad; it just makes it less than ideal. If we accept that perfection is not practical, then the best solution is to learn the shortcomings of what you are doing, make the assessment as good as you can, and temper your confidence in your conclusions accordingly.

Planning, as always, is essential. We have been astounded by seemingly smart people who plan surveys without thinking about what decisions they want to make with the results. People often assume they will figure that part out once the assessment is completed, or perhaps months or years later. The best procedure is to identify what you are trying to learn and then design all aspects of the assessment from that perspective. The same goes for purchasing: identify those criteria first and use them when searching for and selecting an assessment.

The Authors

Shelley A. Kirkpatrick, Ph.D., is the Director of Assessment Services for Management Concepts (www.managementconcepts.com), a professional services company located in Vienna, Virginia, that specializes in training, publishing, and consulting. She has over 20 years of experience developing individual and organizational assessments for the private sector as well as for national security and defense organizations. A former professor at Carnegie Mellon University and The American University, Dr. Kirkpatrick has authored numerous articles on leadership, motivation, and corporate espionage in academic journals as well as practitioner-based publications.

Perry Alter, Ph.D., is Principal of Lighthouse Consultants of the Southeast, based in Florida. He has provided talent management consulting for many years to companies such as Amgen, Royal Bank, FedEx, Motorola, Chevron, and many others. His consulting practice focuses on employee selection, job analysis, and competency modeling. He is passionate about assessment for the sake of continuous development: it is good for the individual, good for the organization, and it makes work more engaging.

Copyright 2010 by Dr. Shelley A. Kirkpatrick and Dr. Perry Alter. All rights reserved.
