WHAT ARE WE TRAINING FOR?
“Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” – Peter Drucker
I took part in a training session last week on how to effectively evaluate the impact of learning. Throughout the session, the recently released report from the Association for Talent Development (ATD), Evaluating Learning: Getting to Measurements That Matter, was referenced repeatedly, and many of those mentions were fairly shocking.
According to this white paper, organizations invest approximately 8% of annual profits in training initiatives, yet a mere 35% of those surveyed “reported that their organizations evaluated the business results of learning programs to any extent.”
Additionally, the 2016 report noted that the majority of funding available for training impact evaluation is earmarked for assessments targeting only Levels 1 and 2 of Kirkpatrick’s four levels of effectiveness. But before I go off about why this is so puzzling, perhaps it’s worth a quick refresher on what the Kirkpatrick Model actually is.
Developed by Dr. Donald Kirkpatrick, the model that bears his name sets the standard for accurately gauging the impact of training. His framework organizes learning impact into four buckets, which represent the essential ripple effect that well-designed training should initiate.
1. Reaction – This level takes the pulse of the individual learner. Did they enjoy the training? Did they find it engaging? Did they believe its content was relevant and useful to them individually?
2. Learning – This level goes beyond the individual learner’s “feelings” about the course or training and evaluates how much of the intended new knowledge (theories, skills, competencies, etc.) imparted within the training was actually absorbed by its participants.
3. Behavior – This third level looks at the correlation between what participants learn in a training and their subsequent work performance or behavior. Level 3 evaluations provide the first glimpses into how effective training efforts are at achieving the intended collective end-goals (i.e., effective learning transfer).
4. Results – At this point, evaluation goes beyond individual behavior and skill set to consider the degree to which the training’s effect on individual participants impacts the targeted results at an organizational level.
In many places, you will now also see a Level 5 added to Kirkpatrick’s model, which takes the evaluation one step beyond organizational goals and looks at return on investment (ROI). The ROI of a training program is calculated as:
ROI (%) = net benefits (total benefits minus program costs) / program costs × 100 (Phillips, Phillips, and Ray 2015)
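To make the formula concrete, here is a minimal sketch in Python; the dollar figures and the calculate_roi helper are hypothetical, purely for illustration:

    def calculate_roi(total_benefits, program_costs):
        # Net benefits are total measured benefits minus what the program cost.
        net_benefits = total_benefits - program_costs
        # ROI is net benefits over costs, expressed as a percentage.
        return net_benefits / program_costs * 100

    # Hypothetical program: $50,000 to design and deliver, $80,000 in measured benefits.
    print(calculate_roi(80_000, 50_000))  # prints 60.0, i.e., a 60% return

In that hypothetical case, net benefits come to $30,000, so every dollar invested returns 60 cents over and above its cost.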
CHALLENGES OF EFFECTIVE TRAINING EVALUATION
So just how do organizations justify continually increasing training expenditures without the ability to prove the actual effectiveness of those investments? In some cases, it can justifiably be blamed on an institutional “checklist mentality,” but it is much more often related to the particular challenges of developing a truly effective training evaluation strategy.
While roughly 85% of organizations report regular evaluation efforts at Levels 1 (Reaction) and 2 (Learning), the percentages drop dramatically from there. Some potential drivers behind this sharp dip in evaluation efforts include:
- Organizational structure silos that prevent access to key management or supervisory players critical to evaluating Level 3 effectiveness and higher.
- Limited funding to develop rigorous evaluation methods beyond already-available assessment tools.
- The nature of the methods best suited for evaluating higher-level learning and business effectiveness.
- Many strategies well suited for gauging Level 3 effectiveness and above require one-on-one interviews, focus groups, and performance observation, all of which demand time, personnel, budget, and, most importantly, effective planning and coordination.
Yet without a methodology for understanding training’s ripple effect throughout an organization, it is impossible to validate its effectiveness (or lack thereof) and use that data to optimize training content for improved results. And who wants to keep running (or participating in) an ineffective training program? Remind anyone of the old adage that defines insanity as doing the same thing over and over again while expecting a different outcome?
WHAT’S AT STAKE
Humor aside, I in no way underestimate the difficulty of structuring solid training evaluation strategies that span all levels of effectiveness. First, you must have a clear understanding of (and agreement on) the real end-goal(s) of your learning expenditure. From there, it takes a great deal of deliberate, thoughtful planning up front; cooperation and cross-departmental coordination throughout; and dedication to creating a reliable feedback loop that supports continual content and process improvement.
But the effort pays off. The payoff shows up in customer and employee satisfaction rates, in performance improvement and increased efficiencies, and in the impact on a company’s bottom line. It also shows up in the fact that an entire organization no longer feels as if it is treading water in the training and development department. No goodwill ever comes from wasting people’s time.
References:
Association for Talent Development (ATD). 2016. Evaluating Learning: Getting to Measurements That Matter. Alexandria, VA: ATD Press.
Phillips, P.P., J.J. Phillips, and R. Ray. 2015. Measuring the Success of Leadership Development. Alexandria, VA: ATD Press.