Is your training working? That is the question. We can talk about MOOCs and gamification and whether self-paced courses are better than scheduled courses until we’re blue in the face, but the reality is that only one question really matters — what works?
Much has been written on the subject, and many experts have weighed in on what they consider the most crucial training metrics (here are my top 10). However, the fact remains that for individual courses and at individual companies, the effectiveness of training is ridiculously hard to measure.
This difficulty has even been studied. In 2008, Zane L. Berge, a professor at the University of Maryland, Baltimore County, published a study on exactly this topic. He found a host of reasons why training is hard to evaluate:
- Training lacks planning, sponsorship, or budget.
- Training is done for the wrong reasons.
- The training goals of various stakeholders are different: managers are interested in performance, while trainers are interested in results that can be measured with a test.
- The skills and knowledge learned during the training “are not applied on the job and thus have no impact.”
- And finally, the methods generally used to measure and evaluate training are “antiquated.”
So, assuming that you are tracking some metrics for your training programs, what are they actually measuring and how can you gain more insight into what’s working and what’s not?
As Berge found, some of the main problems with training evaluation are linked to a lack of clear goals, which happens both when training is not adequately planned and when various stakeholders have different objectives. When there is no pre-planning for evaluation and no communication about the specific objectives, it’s easy to both start and stop the evaluation process at the most basic level of employee engagement and reaction:
- How engaged were employees in the training (determined by how many videos they watched, how many resources they interacted with, how many discussions they participated in, etc.)?
- How did they feel about the training (did they find it interesting, valuable, a good use of their time)?
These are the easiest things to measure. But do they really tell you anything of substance? Not really. So, how can we do it better?
There are many models of training evaluation, including the popular Kirkpatrick Model, which breaks evaluation into four levels: reaction, learning, behavior, and results. Others have proposed implementation, application, business impact, and ROI as the levels, or have argued for separating behavior metrics from performance metrics, among other modifications.
Cornerstone OnDemand, which makes a learning management system and other software for training and recruitment, has developed a model that is both simple (only four steps) and includes higher level metrics (i.e., business impact). It also provides some guidance for how to objectively account for each aspect. Their model, shown in this slideshare, has four categories:
Activity — What are we doing?
This is the bookkeeping category. It includes things like the number of courses, the number of learners, and the total cost. Though they don’t specifically say this, it could also include much of what currently counts as training evaluation in many companies, that is, how many videos were watched, and so on.
Efficiency — How well are we using our resources?
Efficiency is still fairly bookkeeping-focused. For example: What is the cost per learning hour? How many hours are spent per activity? What is the cost per activity? (Here is an efficiency calculation for MOOCs compared to traditional instructor-led training.)
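These efficiency metrics are all simple ratios. As a rough sketch, with entirely hypothetical figures standing in for your own program's numbers:

```python
# Hypothetical program figures, for illustration only.
total_cost = 50_000       # total training spend, in dollars
learning_hours = 2_000    # total learner hours delivered
activities = 40           # number of distinct training activities

cost_per_learning_hour = total_cost / learning_hours  # dollars per hour of learning
hours_per_activity = learning_hours / activities      # average hours per activity
cost_per_activity = total_cost / activities           # average dollars per activity

print(cost_per_learning_hour, hours_per_activity, cost_per_activity)
```

Tracking these ratios over time, rather than as one-off numbers, is what makes them useful for spotting whether resources are being used better or worse.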
Effectiveness — Is it doing what we intended it to do? What are the results?
Now we’re starting to get into the stuff that really matters. Note that to accurately measure this, however, you need to start with a list of specific training objectives you want to meet and (here’s the kicker), the dollar estimate of the impact of those objectives. For example, if your goal is to increase sales by 10% over a six-month period, or to reduce errors by 10% over the same period, how much would meeting each of those objectives be worth?
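To make that dollar estimate concrete, here is a minimal sketch using made-up numbers: if the objective is a 10% sales lift over six months, its value is simply the baseline sales for that period times the target lift.

```python
# Hypothetical baseline and objective, for illustration only.
baseline_sales = 1_000_000   # dollars of sales over the six-month period
target_lift = 0.10           # objective: a 10% increase

# Dollar value of fully meeting the objective
objective_value = baseline_sales * target_lift

print(objective_value)
```

The same pattern works for the error-reduction objective: estimate what each error costs, multiply by the number of errors you expect to eliminate, and that figure becomes the benchmark your training results are measured against.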
This stage also includes a usability rating, a Net Promoter Score, and a manager rating. Finally, employee attitudes toward the training are measured not only immediately after the training but also a few months later, when employees are better able to judge whether what they learned is valuable for their job performance.
Impact — What benefit are we getting from those results?
Finally, impact looks at the benefits that the effectiveness results provide, using Robert Brinkerhoff’s Success Case Evaluation method. This method looks specifically at success stories and the value of those successes. For example, if a MOOC saves 10 hours of training time for each of 100 people, then the business impact is equivalent to the value of 1,000 additional hours of work productivity. You can also use this method to calculate ROI by comparing the value of the business impact with the cost of implementing the training program for those 100 people. In addition, this model can help you improve your training program by analyzing the successes (and the failures) and identifying what contributed to them.
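The arithmetic in the MOOC example above can be written out directly. The hours saved and headcount come from the example; the dollar value of a productivity hour and the program cost are assumptions added here for illustration:

```python
# From the example above
hours_saved_per_person = 10
people = 100

# Assumed figures, for illustration only
value_per_hour = 40      # estimated dollar value of one hour of productivity
program_cost = 15_000    # cost of running the MOOC for these 100 people

# Business impact: 1,000 hours of recovered productivity, in dollars
business_impact = hours_saved_per_person * people * value_per_hour

# ROI as a ratio: net benefit divided by cost
roi = (business_impact - program_cost) / program_cost

print(business_impact, roi)
```

With these assumptions, the program returns well more than it costs, but the point is the structure of the calculation: once you have a defensible dollar figure for the benefit, ROI falls out of a single division.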
There are many ways to evaluate training. Whatever method you choose, the key is to decide in advance what metrics are important, make a plan for how you will measure them, and then use the data to increase the effectiveness and the business impact of your training in the future.
Copyright 2015 Bryant Nielson. All Rights Reserved.
Bryant Nielson – Managing Director of CapitalWave Inc. – Being a big believer in Technology Enabled Learning, Bryant seeks to create awareness, motivate adoption, and engage organizations and people in the changing business of education. Bryant is an entrepreneur, trainer, and strategic training adviser for many organizations. Bryant’s business career has been based on his results-oriented style of empowering the individual. Learn more about Bryant at LinkedIn: www.linkedin.com/in/bryantnielson