If you have a spare slot to fill at a training conference, schedule a session examining how the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) instructional design model is now considered irrelevant. Oh, and the topic works well for books, magazine articles, and yes, even blogs.
Michael W. Allen and Richard Sites have identified the following seven problems with the ADDIE model:
1. Comprehensive analysis up front is unrealistic. We need to conduct a quicker analysis and continue to analyze throughout the process.
2. Analysis often overlooks essential success factors such as hidden expectations and who is really in charge.
3. Specs and even storyboards can miscommunicate.
4. Creativity becomes a nuisance to the schedule.
5. Insights gained downstream, late in the process, are treated as faults and become sources of trouble.
6. Performance outcomes are rarely measured, so success is based on schedule and cost minimization.
7. Post-tests provide little useful information.
I’d like to address each of these suggested weaknesses and offer an argument in support of ADDIE:
1. Analysis really does need to be done up front to determine:
- Where a performance gap exists, what has been identified as improved performance? You can’t hit a target that hasn’t been identified. If the subject matter is new, what is the required performance in the workplace?
- What characteristics of the target audience will affect the design of training (e.g. reading level, motivation, existing job knowledge)?
- What does accurate performance of the task(s) look like? In other words, how do you perform the task(s) identified in the analysis phase to standard on the job?
2. Identification of stakeholders and a sponsor and their expectations should be part of the analysis process.
3. A series of checkpoints should be built into the ADDIE process. In fact, the acronym itself almost suggests where the milestones should be. One common technique used in many organizations is sponsor approval of some high-level design plan before the detailed work is done. Such a plan minimizes not only ineffective communication but also rework.
4. Creativity is part of the job both instructional designers and developers are paid to do. However, that creativity must remain consistent with program constraints such as the deadline and budget.
5. In most organizations, the ADDIE process is an iterative cycle. A training product is generally not static. It will be tweaked and modified, not only at the validation and evaluation steps, but throughout the entire process.
6. If the value of training is to be clearly defined, performance outcomes must be measured. Intuitively, people see the value of training, but you can’t rely on intuitive evaluations. Unless you can document performance improvement and organizational results, you are asking people simply to trust that your training is beneficial. That is not something you want to put off until people are trying to justify every organizational dollar spent.
7. If the post-test relates to the performance objective and mirrors the organizational standard of performance in the workplace, it becomes a very sensitive and accurate measure of the outcome—and value—of training.
In my opinion, as long as you keep in mind that the ADDIE instructional design process isn’t set in stone (e.g. the tasks are not necessarily sequential, nor must every one be accomplished on every project) and apply the process while gaining buy-in from stakeholders at key points, you have an effective roadmap that works efficiently with the creativity and effectiveness of your training department.
Bottom line, like me, the ADDIE process was born after World War II and is still alive today.