A Practical Approach for Measuring the ROI of Training
Companies invest in training and development in order to improve their business results. Kirkpatrick's four-level training evaluation approach is often used to discern whether the training achieved its goals. Evaluation Levels 1 through 3, which measure student reaction, learning, and behavioral application, respectively, are relatively straightforward to carry out. However, many companies stop there. They fail to measure business impact and/or the return on investment (ROI) of that training. And when they do attempt to measure it, they often have difficulty calculating and interpreting it.
The result is that the true business impact of the training is never known. As long as students enjoy the training, demonstrate an acceptable end-of-course test score, and display some new on-the-job behaviors, the training is considered to have achieved its goal. Without measuring business results and/or ROI, companies don’t know whether the business needs that precipitated the training were met, and they don’t know whether the improvements justified the cost of the training.
How can a positive evaluation occur at Levels 1 through 3 yet fail to impact productivity or other business results? Consider a company that, like many others, relies on IT projects to make its workforce more productive. In short, the company wants to get the same amount of work done with fewer staff by using technology to improve efficiency. As with many companies, its IT projects are often behind schedule and over budget.
An initial group of 20 project managers are sent to a weeklong introductory project management class, with the goal of improving their project management competencies, in order to improve project outcomes and impact the company’s productivity. The project managers, although extremely busy, attend the training. They give the course high marks on the end-of-class evaluation form. They score highly on the test given at the end of the course. A follow-up evaluation three months later reveals that they are applying what they learned; this is confirmed by their managers.
But, the evaluation stops there. Project outcomes are not re-examined to see if fewer projects are behind schedule or over-budget. The Chief Learning Officer (CLO) spends some time trying to compute ROI, but isn’t sure how to calculate the costs and benefits. And even if projects are getting done earlier, are the project managers investing their newly freed up time in profitable ways?
In this example, the company invested in a training course. But, it doesn’t know if it recouped the cost of the training, including project managers’ time and the cost of the training class. The CLO doesn’t know if she should send additional project managers to the training or purchase more advanced courses for her project managers.
The Future-Focused Approach
Perhaps the most common approach to measuring the value of training is to survey the training participants immediately following a class and ask them hypothetical, "Level 4-like" questions about how they expect to use the training once they get back to the job. These often include questions such as, "Will you be able to apply what you learned? Will your job performance be impacted as a result of this class? What do you expect the extent of the training impact to be on your productivity?"
In fact, we often include questions like this in our own course evaluations. These questions provide a quick indication of the potential impact of the training, and they are meant to identify any immediate issues with the course. However, they are not meant to serve as a final measure of training’s ultimate impact. To properly evaluate actual training impact, sufficient time must be allowed for employees to apply what they learned and for their new behaviors to impact the outcomes of interest. This calls for a more rigorous approach to measuring impact.
The Quantitative Approach
The quantitative approach to computing ROI begins with a straightforward formula:

ROI (%) = (Total Benefits - Total Costs) / Total Costs x 100

ROI seems easy to compute at first: add up the costs, add up the benefits, and then do the math. But in actuality, it is difficult to accurately identify and measure costs and benefits. Costs can include direct expenses, such as course fees, materials, and travel, as well as indirect costs, such as participants’ time away from the job.
Similarly, there is more to the identification of benefits than meets the eye. Benefits can include productivity gains, projects delivered on time and on budget, and the value of time freed up for more profitable work.
Our point is not to list all possible training costs and benefits, but rather to illustrate that it is easy to get bogged down in the details of the measures. For some jobs, productivity is easy to quantify, such as production jobs (number of widgets per hour) or sales representatives (dollar amount of sales per week). For jobs like these, the quantitative approach may be a good choice.
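For readily measurable jobs, the arithmetic behind the formula above is simple. The sketch below shows the computation with entirely hypothetical cost and benefit figures for the 20-person project management class from the earlier example; the line items and dollar amounts are illustrative assumptions, not data from the article.

```python
def roi_percent(benefits, costs):
    """Standard ROI formula: net benefits over total costs, as a percentage."""
    total_benefits = sum(benefits.values())
    total_costs = sum(costs.values())
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical figures for a weeklong class for 20 project managers
costs = {
    "course_fees": 40_000,         # tuition for 20 participants (assumed)
    "participant_time": 48_000,    # one week away from the job, per person (assumed)
    "travel_and_materials": 12_000,
}
benefits = {
    "schedule_savings": 90_000,    # estimated value of fewer late projects (assumed)
    "budget_savings": 60_000,      # estimated reduction in cost overruns (assumed)
}

print(f"ROI: {roi_percent(benefits, costs):.0f}%")  # ROI: 50%
```

The hard part, as the discussion above suggests, is not this division but deciding which line items belong in each dictionary and how to put a defensible dollar value on them.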
However, performance is not easily quantified for many jobs, such as management positions and non-production jobs. In our experience, most training initiatives apply to several jobs in a company or across companies. If we train 30 people in a class, they often represent as many as 10 or 20 different jobs. Except in the rare (for us) instances where most training participants are from the same job, AND the job is readily measurable, it is very difficult to measure impact using simple and standardized measurements.
A Balanced Alternative
To quantify ROI in a way that is easy to understand and inexpensive to compute, we have developed a balanced alternative. This approach, like other approaches to ROI, is best carried out after employees have had a chance to apply their new knowledge and skills on the job, which is typically six to nine months following training.
Our Balanced ROI approach collects data from those closest to the productivity improvements, namely the employees who took the training. We rely on employees to tell us how the training has directly impacted them. As a reminder, this is most appropriate in environments where the training participants have different jobs, or when they have difficult-to-measure jobs (which, in our experience, almost always is the case). Our approach asks employees to identify specific examples, so that their estimates are based on real data, not hypotheticals. Although it doesn’t sum up ROI in a single number, it does provide enough evidence of training impact to make a reasonable determination of whether the training is returning more benefit than it cost.
We follow up with a short survey six to nine months after the training and ask participants these questions:
The balanced approach provides concrete examples that can be communicated to and understood by company leaders. Rather than presenting impact as a single percentage, it uses real-life examples of success that executives can relate to, use to reinforce change and innovation, and share as lessons learned and best practices across the company. Most importantly, the evidence for success is fairly concrete. Granted, not every example is fully substantiated; leaders can use their own judgment to determine which examples to accept as adequately justified.
Below is a hypothetical illustration of what such an example might look like. In our experience, about 10 to 20% of the examples are as articulate as the one below. This percentage is likely a function of many factors, including how well employees understand the impact of their work on the bottom line and the nature of the skills taught.
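The arithmetic behind such an example can be sketched as follows, using entirely hypothetical figures for a single project manager; the hours, weeks, and hourly rate are illustrative assumptions.

```python
# Hypothetical example: a project manager reports that a scheduling
# technique from the class saves about 3 hours per week.
hours_saved_per_week = 3
working_weeks_per_year = 46   # assumed, after vacation and holidays
loaded_hourly_rate = 75       # assumed fully loaded cost per labor hour

# Annualized dollar value of the reported time savings
annual_benefit = hours_saved_per_week * working_weeks_per_year * loaded_hourly_rate
print(f"Estimated annual benefit: ${annual_benefit:,}")  # Estimated annual benefit: $10,350
```

A well-articulated survey response supplies the inputs to this kind of calculation (what changed, how often, and roughly what it is worth), which is what makes the example verifiable and credible to leaders.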
These examples, and the calculations they contain, are easy to understand and make a compelling case about the impact of the training. The answers to the Balanced ROI questions also provide unique feedback and insights to trainers and course designers about how and why the course material impacts performance at the company at hand. Each company is different, so how employees apply what they learned varies from company to company, and even within the same company. Trainers who have taught the same class a dozen times can still learn about training impact despite never having seen these employees in action.
Some additional tips for using the Balanced ROI approach are to provide a structured format for respondents or a sample response that displays the desired level of detail. This may reduce the number of unusable responses. Another tip is to ask employees to provide, at their own option, their names and contact information so that an experienced training professional can follow up to verify the accuracy of the example.
Finally, asking employees to provide multiple examples could yield additional usable responses. However, keep in mind that the goal is not necessarily to generate a large number of examples; the goal is to provide enough examples to paint a compelling picture of the impact the training had.
First, we have applied this method with selection, development, inclusion, negotiation, and communication training offerings. It is adaptable to any training offering and arguably to any intervention. Its flexibility is perhaps its greatest strength.
Second, a critical benefit of this approach is that it shows what training impact actually "looks like." The examples are often easy to understand and relate to, and as stated earlier, can be used to illustrate best practices of what the training is intended to do.
Third, Balanced ROI questions can be easily included in a broader survey that asks standard training evaluation questions about obstacles, improvement needs, relevance of exercises, and other staples of training evaluation surveys.
Fourth, we should add that our discussion above only addresses the "benefits" side of the ROI equation. The company should still measure costs as a comparison point, using whatever approach it deems appropriate.
Fifth, by asking participants for one example, are we placing an arbitrary upper cap on the measurable benefits? What if someone really had two examples? Or five? And what about all the people who don't respond to the evaluation survey? Do their benefits go unmeasured? These are legitimate points, leading to the conclusion that the total of the resulting estimates is conservative. We have found that our clients resist asking for extra examples, in the interest of user-friendliness. At the same time, the information that we do obtain is very concrete and telling.
Sixth, there will be cases where the dollar benefits indicated by the credible examples obtained may be less than the identified costs. In such a case, you may ask yourself whether you have all the information, and whether the comparison is fully informed.
Despite this last point, the approach provides a great deal of information about how effective your training is and lets you increase the number of situations where you can collect business-impact data.
Many more articles in Training & Development in The CEO Refresher Archives