Now, more than ever, we need to evaluate our training programs to ensure their effectiveness. With the growing need to convert traditional instructor-led courses into synchronous live online courses or asynchronous e-learning courses, it's important to focus on courses that provide the expected results. The conversion process can be time-consuming and costly, so make sure you are investing your resources in the training that will provide a return on investment (ROI) or a return on expectations (ROE).
When evaluating courses, we have two goals:
1. Determine the relationship between Actual Results and Expected Results.
2. Determine why that relationship exists based on the outcomes.
For this article, I will focus on the first goal: Determine the relationship between actual and expected results to identify the outcome.
There are five potential outcomes (two positive and three negative). Let’s see what those potential outcomes are, using the following scenario: We designed a training program to reduce call wait times for customers. The current call wait time is four minutes (actual results) and the goal post-training is to reduce the wait time to two minutes (expected results).
To identify the results and determine whether the performance gap (expected performance minus actual performance) was closed, we would gather data through Level 3 (performance) and Level 4 (results) evaluations.
Level 3 allows us to identify whether the learners applied the new skills and knowledge on the job. Ideally, this is evaluated anywhere from immediately after the course up to six months later.
Level 4 allows us to identify whether the learners' performance impacted the organization. This is evaluated between three and twelve months after the course.
Let’s get back to our scenario. Once we gather data at Levels 3 and 4, we can begin to analyze that data. The analysis will produce five potential outcomes.
1. Exceeded Expectations: Our training yields the results we wanted and more! Wait times are reduced from four minutes to one minute.
2. Met Expectations: Our training yields the results we wanted. Wait times are reduced from four minutes to two minutes.
3. Below Expectations: Our training didn't quite hit the mark. Wait times are reduced only from four minutes to three minutes.
4. Stayed the Same: Our training did nothing to close the gap. Wait times are unchanged; customers are still on hold for four minutes.
5. Below Baseline: Our training did not meet its goal. In fact, wait times increased to five minutes after training.
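The five outcomes above boil down to a simple comparison of the post-training result against the target and the baseline. Here is a minimal sketch of that classification rule in Python, assuming a metric where lower is better (like call wait times); the function name and signature are illustrative, not part of any standard evaluation toolkit.

```python
def classify_outcome(baseline: float, target: float, result: float) -> str:
    """Classify a training outcome for a lower-is-better metric,
    e.g., call wait times in minutes.

    baseline: actual performance before training (4 minutes in our scenario)
    target:   expected performance after training (2 minutes)
    result:   measured performance after training
    """
    if result < target:
        return "Exceeded Expectations"
    if result == target:
        return "Met Expectations"
    if result < baseline:
        return "Below Expectations"
    if result == baseline:
        return "Stayed the Same"
    return "Below Baseline"

# Scenario: baseline of 4 minutes, target of 2 minutes
print(classify_outcome(4, 2, 1))  # Exceeded Expectations
print(classify_outcome(4, 2, 5))  # Below Baseline
```

For a higher-is-better metric (such as sales per rep), the comparisons would simply flip direction.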
Based on our scenario, if we were asked to convert this course to a virtual classroom or e-learning course, we would only want to convert it as-is, without any changes or updates, if it met or exceeded expectations.
When expectations are not met, converting the course would waste time and resources, because it would still not yield the results we want. Before converting, we would dig a little deeper and analyze why the course fell short of its goal. Was it due to variables outside our control after training? Was it because the course design was not performance-based? Or perhaps the performance gap was not related to a lack of skills and knowledge at all; maybe the gap is caused by something training cannot solve.
For more on the evaluation process and how to build and communicate a compelling case for the effectiveness of your training programs, check out the Evaluation of Training workshop.