The last dimension Kevin covered in his presentation was motivation. You can have the cleanest, most straightforward training in the world—but your learner won’t get much out of that training if it doesn’t keep them motivated. Chief among motivational considerations are feedback and the expertise reversal effect.
Feedback
Learners don’t get anything out of feedback like “That’s incorrect; try again,” because it tells them nothing useful. If you want the learner to benefit from assessments, you need to explain why the wrong answer they chose is incorrect. At the same time, you don’t want to give them the correct answer right away. If they select a wrong answer, a helpful explanation will prompt them to reflect on why their choice was wrong and nudge them toward the correct one. If they keep submitting incorrect answers, keep helping them understand why those choices are wrong, and when they finally pick the right one, explain why it’s correct. The learner will eventually reach the right conclusion through careful consideration, and that reward for their effort will motivate them to continue with the training.
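To make the idea concrete, here is a minimal TypeScript sketch of answer-specific feedback. The interfaces, option data, and wording are invented for illustration and are not from Kevin’s presentation; a real course would plug its own questions and explanations into something like this.

```typescript
// Hypothetical sketch: feedback tailored to the specific answer chosen.
interface AnswerOption {
  id: string;
  text: string;
  isCorrect: boolean;
  explanation: string; // why this particular choice is right or wrong
}

interface Question {
  prompt: string;
  options: AnswerOption[];
}

// Returns feedback for the learner's choice without revealing the correct
// answer outright; the learner keeps trying until they find it themselves.
function giveFeedback(question: Question, chosenId: string): string {
  const choice = question.options.find((o) => o.id === chosenId);
  if (!choice) {
    return "Please select one of the listed options.";
  }
  if (choice.isCorrect) {
    return `Correct! ${choice.explanation}`;
  }
  // Explain why this specific wrong answer misses the mark,
  // rather than a generic "That's incorrect; try again."
  return `Not quite. ${choice.explanation} Have another look and try again.`;
}
```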
You could also make the feedback more interesting by presenting it in a less traditional way. Instead of pop-up boxes, how about a sliding scale that shows whether the learner is getting warmer or colder? Another effective strategy, one that draws on reinforcement, is to compare the learner’s answers to those of industry experts. A learner who is told they “agreed with our experts six times” will be motivated to keep going and see how many of their other answers square with expert opinion.
Expertise Reversal Effect
The simple explanation behind the expertise reversal effect is that a learner’s existing knowledge affects how well they comprehend instruction. Novices, for instance, will understand content designed for beginners more easily than experts will. That’s because experts have already built complex mental schemas for that content. Highly detailed explanations can work against those schemas: experts spend time reconciling the explanations with what they already know, and that added step puts a strain on their learning. By the same token, instruction geared toward experts naturally assumes a certain level of knowledge. Novices who lack that knowledge, and therefore haven’t yet built the necessary schemas, won’t be able to follow the instruction as well as experts can.
Oftentimes, your instruction will be given to audiences with varying levels of prior knowledge. Bearing in mind the expertise reversal effect, how can you make your training flexible enough to accommodate learners with different backgrounds? For starters, you can add a pretest to the beginning of your training. If learners have the opportunity to demonstrate, at the outset, which areas they are already familiar with, you can route them down a path in the course that is tailored to teach what they don’t know. This will spare them a lot of boredom and pointless page-turning.
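As a rough illustration of pretest-driven branching, the sketch below builds a learner’s path from per-topic pretest scores. The topic names, score scale, and mastery threshold are all assumptions made for the example, not details from the presentation.

```typescript
// Hypothetical sketch: skip topics the learner has already mastered.
type Topic = "fundamentals" | "intermediate" | "advanced";

interface PretestResult {
  topic: Topic;
  score: number; // fraction of pretest items answered correctly, 0 to 1
}

const MASTERY_THRESHOLD = 0.8; // assumed cutoff for "already knows this"

// Build a personalized path containing only the topics the learner
// has not yet demonstrated mastery of.
function buildLearningPath(results: PretestResult[]): Topic[] {
  return results
    .filter((r) => r.score < MASTERY_THRESHOLD)
    .map((r) => r.topic);
}

// Example: a learner who aced the fundamentals skips straight to the rest.
const path = buildLearningPath([
  { topic: "fundamentals", score: 0.9 },
  { topic: "intermediate", score: 0.5 },
  { topic: "advanced", score: 0.2 },
]);
// path === ["intermediate", "advanced"]
```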
Another strategy for dealing with the expertise reversal effect is to use layered hints in your assessments. An expert may be able to figure out the answer to a question with ease, but what about a novice? Include a “Hint” button on the interface that gradually brings the learner closer to the answer. If it’s a math problem, remind them of a theorem discussed in the lesson. After the first click, change the button’s text to read “Need another?” and have it reveal another small hint when clicked. Layered this way, the hints let the learner use as many as they need before arriving at the correct answer. Better still, if you can configure the course to track the number of hints used and then adjust the instructional path accordingly (i.e., the instruction becomes easier or more difficult), that’s a huge bonus. Adaptive training works with the learner: it listens to their input and reacts accordingly. For that reason, the more adaptive your training is, the more pleased the learner is likely to be with it.
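Here is one way the layered-hint idea might look in code. The class, button labels, and hint text are hypothetical; a real course would wire something like this into its own interface and adaptivity rules.

```typescript
// Hypothetical sketch of layered hints: each click reveals the next hint,
// and the total count can feed back into how the course adapts.
class HintDispenser {
  private index = 0;

  constructor(private hints: string[]) {}

  // Label for the hint button: "Hint" at first, "Need another?" afterwards.
  buttonLabel(): string {
    return this.index === 0 ? "Hint" : "Need another?";
  }

  // Reveal the next hint, or null once they have all been shown.
  nextHint(): string | null {
    if (this.index >= this.hints.length) return null;
    return this.hints[this.index++];
  }

  // How many hints the learner needed; the course could use this
  // to make the next activity easier or harder.
  hintsUsed(): number {
    return this.index;
  }
}

// Example for a math question (hint text invented for illustration):
const hints = new HintDispenser([
  "Remember the theorem covered earlier in this lesson.",
  "Try applying that theorem to the left-hand side of the equation.",
]);
hints.nextHint(); // a small nudge
hints.nextHint(); // a bigger one
const difficultyAdjustment = hints.hintsUsed() > 1 ? "easier" : "same";
```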
In his presentation, Kevin demonstrated that there is a legitimate science behind improving training. You can optimize the structure of your tests to promote better retention of information, and employ contextual and metacognitive strategies to carry that knowledge from theory into practice. Keeping the learner motivated throughout this process is the thread that holds it all together: high motivation translates into better learning outcomes. Together, these strategies amount to a research-backed method for making your training better.
“And that,” Kevin concluded, “is why science is cool.”
Adib Masumian is an elearning designer in MicroAssist’s Curriculum Development Group.