The Five Pillars of Team Training
Most instructional designers know ADDIE (Analyze, Design, Develop, Implement, Evaluate). It is the workhorse model of the training profession, and it works well when the unit of learning is an individual. The trouble is that ADDIE was not built for teams. When the learner is a team rather than a person, several things change at once: the needs assessment is different, the practice has to happen in a particular way, and the evaluation has to be conducted at a level of analysis that ADDIE never explicitly addresses.
To bridge that gap, Salas et al. (2015) proposed the Five Pillars of Team Training: Need, Climate, Design, Evaluate, and Sustain. The pillars do not replace ADDIE. They extend it, naming five things a practitioner must get right when the learner is a team. A team training program that addresses all five pillars is far more likely to produce durable behavioral change than one that addresses only the design step.
This post lays out each pillar, what it asks of the practitioner, and where it most often goes wrong.
Pillar 1: Need
The first pillar asks a deceptively simple question: What teamwork does this team actually need to learn? Note the specificity. Team training is not generic team-building. It targets concrete teamwork competencies (communication, coordination, mutual performance monitoring, shared mental models, backup behavior), and the only way to know which competencies a team needs is to assess them at the team level (Salas et al., 2012).
A team-level needs assessment looks different from an individual one. Rather than testing what each member knows in isolation, the practitioner observes the team in real or simulated work, interviews members about coordination breakdowns, examines incident or near-miss data, and triangulates with leadership about strategic priorities. The output is not a list of knowledge gaps. It is a profile of teamwork competency gaps, typically expressed in the language of attitudes, behaviors, and cognitions (the ABCs of teamwork) (Cannon-Bowers et al., 1995).
Where it goes wrong: practitioners often skip this pillar because the sponsor has already decided what training to buy. The needs assessment then becomes window-dressing for a predetermined intervention.
Pillar 2: Climate
The second pillar is the one most often skipped, and the one that most often kills the training. Climate refers to the surrounding organizational conditions that determine whether team training can land: visible leadership sponsorship, psychological safety, and members’ motivation to learn (Edmondson, 1999; Salas et al., 2015).
If the sponsor does not open the session and stay through the debrief, the team learns this is not real. If members do not feel safe enough to make mistakes in front of each other, they will not engage with the practice activities that make team training effective. If members cannot articulate why this training matters to their daily work, the content will slide off.
A trained team-training practitioner therefore diagnoses climate before designing content and is willing to delay or refuse delivery until the conditions are right. The conversation that produces that delay is often the highest-leverage conversation the practitioner has all year. It typically takes place with the sponsor, not the team.
Pillar 3: Design
The third pillar covers what most people picture when they hear “training”: the curriculum itself. The team training literature provides a four-step minimum for any team training intervention: Information, Demonstration, Practice, Feedback (Baker et al., 2003).
Information tells the team what good teamwork looks like in concrete behavioral terms. Demonstration shows the behavior in action, often via video or expert modeling. Practice requires the team to enact the behavior in a setting that approximates the real working context. Feedback names what was observed in the language of behavioral markers: not vague encouragement, but specific behaviors tied to validated criteria.
The non-negotiable element is that practice must happen in a team. Solo exercises in a team-training context are a contradiction. They are also one of the most common signs that an organization is delivering individual training under a “team” label. If a curriculum lacks an in-team practice step, it is not team training, regardless of how it is marketed.
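The "is this actually team training?" test above is mechanical enough to express as a checklist. The sketch below is purely illustrative (the step names and the dict shape are my own, not from the literature): it flags any curriculum that lacks one of the four steps or whose practice step is not conducted in a team.

```python
# Illustrative sketch (names and data shape are hypothetical): check a
# curriculum against the four-step minimum, including in-team practice.
REQUIRED_STEPS = ["information", "demonstration", "practice", "feedback"]

def validate_curriculum(steps):
    """Return a list of problems; an empty list means the curriculum passes.

    `steps` is a list of dicts like {"kind": "practice", "in_team": True}.
    """
    problems = []
    kinds = [s["kind"] for s in steps]
    for required in REQUIRED_STEPS:
        if required not in kinds:
            problems.append(f"missing step: {required}")
    # The non-negotiable element: practice must happen in a team.
    if not any(s["kind"] == "practice" and s.get("in_team") for s in steps):
        problems.append("practice step is not conducted in a team")
    return problems

curriculum = [
    {"kind": "information"},
    {"kind": "demonstration"},
    {"kind": "practice", "in_team": False},  # solo exercise: gets flagged
    {"kind": "feedback"},
]
print(validate_curriculum(curriculum))
```

A curriculum that fails this check is individual training under a "team" label, whatever the marketing says.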
Pillar 4: Evaluate
The fourth pillar asks: How will we know whether this worked? For team training, evaluation has to happen at the team level, not the individual level. Kirkpatrick’s classic four-level hierarchy (Reaction, Learning, Behavior, Results) is still useful, but Levels 3 and 4 must be measured on the team as a unit (Kirkpatrick & Kirkpatrick, 2016; Salas et al., 2012).
In practice, this means three commitments before training begins: (1) a behavioral marker tool selected and matched to the team’s context, (2) trained observers with adequate inter-rater reliability, and (3) a measurement cadence that includes at minimum a baseline observation, an immediate post-training observation, and a 90-day follow-up. Without these three commitments in place beforehand, evaluation almost always collapses back to a Level 1 reaction survey, which tells the sponsor whether the audience enjoyed the session and almost nothing about whether the team’s behavior changed.
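The second commitment, adequate inter-rater reliability, is the one practitioners most often hand-wave. One standard way to quantify it for two observers rating the same episodes is Cohen's kappa, which corrects raw agreement for chance. The sketch below is a minimal from-scratch version; the ratings are invented for illustration.

```python
# Illustrative sketch: Cohen's kappa for two observers scoring the same team
# episodes with a behavioral marker tool (the ratings below are hypothetical).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: fraction of episodes where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated at random with their own marginal frequencies.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# 1 = marker behavior observed, 0 = not observed, across ten episodes
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
kappa = cohens_kappa(a, b)
print(round(kappa, 2))
```

If observers cannot reach an acceptable kappa during calibration, the behavioral marker tool needs refinement or the observers need more training before baseline observation begins.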
The most common failure here is measuring only individual learning gains and reporting them as evidence of team-level change. Cognitive, affective, and behavioral outcomes all have an individual and a team level of analysis. Practitioners must measure both, or at minimum the team level, before they can credibly claim the training worked.
Pillar 5: Sustain
The fifth pillar is the newest and the one most often missing entirely. Team-level behaviors decay faster than individual technical skills: without deliberate reinforcement, roughly 70% of team-level training gains fade within six months, and scheduled booster events are what make them persist (Salas et al., 2008).
Sustain requires three things on the calendar before training begins: a Day 30 team debrief (a 30-minute revisit, not a re-run), a Day 90 behavior re-observation using the baseline rubric, and a Day 180 climate audit to confirm the conditions that supported training are still in place. These dates are not aspirational. They are calendared in advance, owned by named people, and treated with the same gravity as the training event itself.
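Because the sustain dates are fixed offsets from the training date, "calendared in advance, owned by named people" can be generated mechanically. The sketch below is illustrative (the owner roles are hypothetical placeholders); the three checkpoints and their offsets come straight from the plan above.

```python
# Illustrative sketch: calendar the three sustain checkpoints from the
# training date. Owner roles below are hypothetical placeholders.
from datetime import date, timedelta

SUSTAIN_PLAN = [
    (30, "team debrief (30-minute revisit, not a re-run)"),
    (90, "behavior re-observation using the baseline rubric"),
    (180, "climate audit"),
]

def sustain_schedule(training_date, owners):
    """Return (due_date, event, owner) triples for the three checkpoints."""
    return [
        (training_date + timedelta(days=offset), event, owners[offset])
        for offset, event in SUSTAIN_PLAN
    ]

owners = {30: "team lead", 90: "trained observer", 180: "sponsor"}
for due, event, owner in sustain_schedule(date(2024, 3, 1), owners):
    print(f"{due}: {event} -- owned by {owner}")
```

The point of generating the dates up front is that each checkpoint has a named owner before the training event happens, not after.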
Most organizations treat team training as a one-time event (a workshop, a retreat, a two-day off-site) and then wonder three months later why nothing changed. The fix is not longer training. It is shorter, repeated, and supported.
Putting the Pillars Together
The Five Pillars work as a sequence and as a checklist. As a sequence, they map a project: assess need, secure climate, design the intervention, plan evaluation, calendar sustain. As a checklist, they expose where a program is most likely to fail. Practitioners can, and should, ask of any team training initiative: Have we addressed all five? If the answer is no on two or more pillars, the program is unlikely to produce durable change, and the practitioner’s most useful contribution is to flag the gap before delivery rather than after.
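The checklist use of the pillars, including the "no on two or more" flag, reduces to a few lines. This sketch is illustrative only; the status values are invented.

```python
# Illustrative sketch: the pillars as a go/no-go checklist before delivery
# (the status values below are hypothetical).
PILLARS = ["Need", "Climate", "Design", "Evaluate", "Sustain"]

def readiness_gaps(status):
    """Return the unaddressed pillars; two or more means flag before delivery."""
    return [p for p in PILLARS if not status.get(p, False)]

status = {"Need": True, "Climate": False, "Design": True,
          "Evaluate": True, "Sustain": False}
gaps = readiness_gaps(status)
if len(gaps) >= 2:
    print(f"Flag before delivery: unaddressed pillars -> {gaps}")
```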
The pillars are not theoretical. They are a synthesis of what the team training literature has learned, often the hard way, about why well-designed curricula fail to land. A practitioner who internalizes the five pillars treats team training as a system of interlocking commitments, not a content-delivery event, and that shift in stance is where the durable results come from.
References
Baker, D. P., Salas, E., & Cannon-Bowers, J. A. (2003). Team task analysis: Lost but hopefully not forgotten. The Industrial-Organizational Psychologist, 35(3), 79–83.
Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining team competencies and establishing team training requirements. In R. A. Guzzo & E. Salas (Eds.), Team effectiveness and decision making in organizations (pp. 333–380). Jossey-Bass.
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation. ATD Press.
Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50(3), 540–547.
Salas, E., Shuffler, M. L., Thayer, A. L., Bedwell, W. L., & Lazzara, E. H. (2015). Understanding and improving teamwork in organizations: A scientifically based practical guide. Human Resource Management, 54(4), 599–622.
Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.
