This is part four of our Kirkpatrick Model series, by Hannah Brenner, in which we explore how to measure the results of training evaluation and determine desired behaviors. If you haven’t yet, check out part one here, part two here, and part three here.
If you’ve been sticking it out with me so far, the hard part is over. You’ve already tied your training to strategic initiatives, identified the milestones (leading indicators) to reaching your goal, and know the critical behaviors that are needed to get there.
You’ve probably devoted more time to preparation than ever before, and you’ve yet to develop any training. Well, don’t worry – we’re finally back in familiar territory: evaluating the learning.
Level 2: Learning is defined as “the degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the training.”
Knowledge and Skills
The first part of this evaluation is simple – did participants learn what they were supposed to learn?
As a trainer, you want to feel confident that participants leave the session with knowledge and skills they can put into action. There are various ways to do this, but the most common is with some form of testing.
Think back to our road trip: to even get in the car and begin your journey, you need a driver’s license. The only way to get this license is to pass both the written and driving skills tests. Granted, for some of us this was more recent than for others, but we can all think back to getting that license for the first time.
First, you went through classroom instruction and learned the rules of the road, such as who has the right of way (in case you’ve forgotten – it’s still pedestrians), what the different street signs mean, and what rain does to the road. Then, you spent time behind the wheel practicing. Once your training period concluded, you went to the DMV and took both the knowledge (written) and skill (driving) tests.
If your driver’s education teacher was anything like mine, he knew exactly what was on those tests and reinforced those points day after day. That is, he knew what was most important and focused his efforts there. Yes, we learned items that were not tested, but most of our time was spent on what the state determined was essential to know.
The same should be true for your employee training program. Think about what is most important to learn – this should be what is covered in testing and where you spend considerable time training.
Confidence
One area I always forgot to include in my evaluations was the participants’ confidence. I will admit, I always assumed that if someone had the knowledge, they would be confident applying it – but that is not the case. Confidence is an area that needs to be measured. But, like most of what we’ve covered, it doesn’t need to be complicated.
In fact, there are a couple of simple ways to measure confidence:
1. Poll the participants – It can’t be that easy, can it? Yes, it can. At the beginning of a session, simply take a poll on how confident people are with the skill or information. At the end of the session, poll again. You can do this as a show of hands, a rating scale via an app, or a written response handed in. In less than 5 minutes, you can assess how confident the room is upon leaving.
2. Have a discussion – Open the floor and talk about applying the knowledge and skills learned. Ask what they are most confident about, what worries them, what challenges they may face and if they know how to combat them. Having an open discussion can clue you in to the participants’ confidence and commitment to the training.
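If you tally the poll responses on a simple rating scale, the before-and-after comparison takes only a few lines. Here is a minimal sketch in Python; the 1–5 scale and the sample ratings are hypothetical, not data from any real session:

```python
# Minimal sketch: comparing pre- and post-session confidence polls.
# The 1-5 scale and the sample responses below are illustrative assumptions.

def average(ratings):
    """Mean of a list of 1-5 confidence ratings."""
    return sum(ratings) / len(ratings)

# Hypothetical poll results, one rating per participant
pre_session = [2, 3, 2, 1, 3, 2]
post_session = [4, 4, 3, 4, 5, 3]

shift = average(post_session) - average(pre_session)
print(f"Average confidence rose by {shift:.1f} points on a 5-point scale")
```

Even a rough comparison like this gives you a number you can track from session to session, rather than a gut feeling about how confident the room seemed.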
Commitment
Speaking of commitment, this is another area we often forget to evaluate in Level 2.
Understanding participants’ commitment gives valuable insight into what’s to come in your Level 3 and Level 4 evaluations, and it can also help you identify their attitude toward the training. We know that if participants are not committed and have a negative attitude, they are not likely to apply the training on the job.
Your training plan should include explaining what’s in it for the participants and how the information benefits them, as this is likely to increase commitment levels. However, there could be external factors impacting their commitment as well.
One of the most common objections we hear is that the information learned isn’t realistic back on the job. That is, there are external forces preventing participants from being committed to implementing what they have learned. This could be due to competing priorities, lack of accountability, or lack of management reinforcement/support, just to name a few.
Your job as the trainer is to determine why the lack of commitment exists, for “until the on-the-job barriers and challenges are cleared, even the most successful training program will yield little or no organization impact, and therefore will not be successful in the eyes of the stakeholders.”
Remember what I said about management buy-in in the last post? Having those managers on board from the start can minimize these roadblocks back on the job.
Because it is so familiar, Level 2 is probably the easiest to evaluate and where many trainers spend much of their time. Yet, it is important to remember that “contrary to popular belief, most stakeholders are really not interested in Level 2 data. They simply expect that by the time participants leave training, they know what to do.”
Knowing this, trainers should keep in mind that Level 2 data is more for them, as the instructor, and thus allocate time and resources accordingly. Then, you can begin to plan the training session(s) and how to incorporate Level 1: Reaction.
See how microlearning and post-training reinforcement facilitate knowledge transfer from training, while increasing employees’ confidence and commitment to what they’ve learned!
Hannah Brenner is a Client Success Consultant with BizLibrary. She discusses training strategies and works with her clients to constantly improve their training program and see a positive return on investment.