What the learning science actually says about short-form training, and how to design units that people remember a week later.
What microlearning actually is
Short, focused, single-objective learning units. Usually 2 to 10 minutes.
Microlearning is training built as short, focused units, each designed around a single learning objective. Most units run between two and ten minutes and are designed to be consumed in one sitting. Good microlearning is tightly scoped. Every element in a unit earns its place against one clear goal.
The most common failure mode is taking a long course and chopping it into arbitrary chunks. That removes context without adding focus, and the result is a pile of disconnected fragments that are harder to learn from than the original. The key word in microlearning is not micro. It is focused.
Microlearning is usually delivered in a series. A single three-minute unit on its own is a video. A series of three-minute units, spaced across a week or a month, coordinated around a larger skill or subject, is microlearning. The difference matters because most of the evidence that microlearning works points back to properties of the series: spacing, repeated retrieval, and manageable cognitive load.
Six findings from memory and cognition research that explain why well-designed microlearning works.
[Chart: illustrative forgetting curves comparing retention without review against retention with reviews at days 1, 3, and 6.]
Illustrative only. Values are based on the shape of Ebbinghaus's 1885 forgetting curve and the pattern reported in modern replications; the real decay rate depends on material difficulty, prior knowledge, and context. See Murre and Dros, 2015 (PLOS ONE).
Forgetting happens fast
Ebbinghaus, 1885
Hermann Ebbinghaus ran memory experiments on himself using nonsense syllables and showed that, without reinforcement, we forget rapidly at first and more slowly after that. The exact numbers depend on the material, but the shape of the curve is robust and has been replicated in modern studies. Microlearning earns its keep by scheduling brief re-exposures before the decay gets steep.
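The shape of the curve, and the way brief reviews flatten it, can be sketched with a toy exponential decay model. The `stability` and `boost` parameters here are illustrative assumptions, not values from Ebbinghaus or the replications; the point is the qualitative pattern, not the numbers.

```python
import math

def retention(days_elapsed, stability=2.0):
    """Illustrative forgetting curve: fraction recalled after
    `days_elapsed` days. `stability` (in days) is an assumed
    parameter controlling how fast memory decays."""
    return math.exp(-days_elapsed / stability)

def retention_with_reviews(day, reviews=(1, 3, 6), stability=2.0, boost=1.6):
    """Toy model: each review before `day` resets the decay clock and
    multiplies stability by an assumed `boost`, flattening the curve."""
    last_review, s = 0, stability
    for r in reviews:
        if r <= day:
            last_review, s = r, s * boost
    return math.exp(-(day - last_review) / s)

for d in range(8):
    print(d, round(retention(d), 2), round(retention_with_reviews(d), 2))
```

Running this shows the gap widening day by day: a week out, the reviewed curve sits far above the unreviewed one, which is the whole argument for scheduling re-exposures early.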
Working memory is small
Miller, 1956
George Miller's famous paper introduced the "magical number seven, plus or minus two" as a recurring limit that shows up across perception, judgment, and immediate memory tasks. Later working-memory research by Nelson Cowan (2001) points to a tighter practical limit of about four chunks. Either way, the takeaway for training is the same: a single unit that tries to introduce ten new ideas at once overruns capacity. Microlearning aligns the size of the learning unit to the size of the bucket.
Cognitive load is a budget
Sweller, 1988; expanded by Sweller, van Merrienboer, and Paas, 1998
John Sweller's Cognitive Load Theory distinguishes intrinsic load (the inherent difficulty of the material) from extraneous load (effort added by poor design). A third category, germane load (effort that builds understanding), was added in later work. Short, focused units cut extraneous load by removing unrelated material. The best microlearning keeps intrinsic load productive and drives extraneous load toward zero.
Spacing beats massing
Cepeda, Pashler, Vul, Wixted, and Rohrer, 2006
A meta-analysis of hundreds of experiments on distributed practice found that spacing out study sessions produces substantially better long-term retention than packing the same time into one session. For microlearning this is the single most important finding. A series of five-minute units spread across a week beats a single thirty-minute session on the same content.
Testing is learning
Roediger and Karpicke, 2006
In a series of now-classic experiments, practicing retrieval produced better long-term retention than re-reading the same material for the same amount of time. This is the testing effect. For microlearning, it means every unit should end with a retrieval moment: a question the learner has to answer from memory, not a summary they passively read.
Desirable difficulties
Bjork, 1994
The concept of desirable difficulties captures one of the most counter-intuitive findings in learning research: conditions that make learning feel harder in the moment often produce better long-term retention. Re-reading notes feels productive but does not stick. Quizzing yourself, spacing your study, and interleaving topics feel less fluent in the moment but produce better retention. Microlearning that feels slightly effortful is doing its job.
Match the method to the content. Microlearning is not a universal format.
Good fits
Procedural refreshers, like how to run a report or reset a password
Compliance top-ups on specific policies that change year to year
Product or policy updates aimed at people already familiar with the topic
Onboarding delivered as a drip across the first days and weeks
Just-in-time performance support at the moment of need
Reinforcement of material already taught in a longer experience
Poor fits
Complex conceptual frameworks that depend on integrated understanding
Skills that require long stretches of deliberate practice
Deep reflective work or collaborative problem solving
First exposure to safety-critical procedures
Certifications that require comprehensive coverage and assessment
Anything whose main value depends on long, uninterrupted immersion
Is microlearning right for your training?
A quick self-check. Answer each question honestly about your specific context.
1. Can the learning objective be stated in one sentence?
2. Is this a refresher, update, or reinforcement of something learners have seen before?
3. Will learners often access the material on mobile or between other tasks?
4. Is the topic something a learner might need to look up just in time?
5. Will learners come back to the material multiple times over days or weeks?
6. Does the material require long, uninterrupted practice to master?
7. Does success depend on integrating many new concepts at once?
Design principles for effective microlearning
Six rules derived directly from the research above.
One objective per unit
Every unit should resolve one specific learning objective. If you need the word "and" to describe what a unit covers, split it. This single discipline does more for learning outcomes than any authoring feature.
Aim for the three to seven minute sweet spot
Short enough to sustain attention, long enough to include a hook, a concept, and a retrieval moment. Ultra-short sixty-second units rarely leave room for retrieval, which is where most of the learning happens. Longer than ten minutes and you are building a lesson, not a micro-unit.
Active retrieval, not passive review
Every unit ends with the learner having to retrieve something, not recognize it. A summary slide is not retrieval. A question that requires recall is. This is the single highest-leverage design change you can make.
Space the units over time
Microlearning is a series, not a single unit. Distribute the series across days or weeks. Cepeda and colleagues' follow-up study (2008) found that the optimal spacing gap scales with the target retention interval: roughly ten to twenty percent of the delay works well when learners need to remember material for weeks at a time. For a skill you want people to retain for a month, space units across about a week.
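That ten-to-twenty-percent rule of thumb is simple enough to compute directly. A minimal sketch, where the 10 and 20 percent bounds come from the Cepeda et al. finding cited above and the function name is my own:

```python
def suggested_gap_days(retention_interval_days, fraction):
    """Spacing rule of thumb: gap between study sessions of roughly
    10-20% of the target retention interval. `fraction` should be
    between 0.10 and 0.20."""
    return retention_interval_days * fraction

# Want learners to remember for 30 days -> space units ~3 to 6 days apart.
print(suggested_gap_days(30, 0.10), suggested_gap_days(30, 0.20))
```

For the month-long retention target in the text, this lands on gaps of roughly three to six days, i.e. a series spread across about a week.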
Treat mobile as a design discipline
Most just-in-time microlearning is consumed on a phone between tasks. That is not a responsive design problem. It is a content design problem: short paragraphs, heavy use of visuals, clear tap targets, no dense tables or multi-column layouts.
Accessible by default
Microlearning is often used for compliance and onboarding, which means every learner in the organization encounters it. Caption your video, check your colour contrast, and support keyboard navigation. For a full checklist, see our accessibility checklist for course authors.
What a good five-minute unit looks like
A working template. The exact timings flex with the content, but the five-phase shape holds up well across most topics.
Phase 1. Hook (starts 0:00, about 20s). Why this matters now. Prime attention and intrinsic motivation before the concept lands.
Phase 2. Core concept (0:20, about 90s). One idea, clearly stated. A single objective matches working-memory limits and reduces extraneous load.
Phase 3. Worked example (1:50, about 90s). Apply the concept to a realistic case. Worked examples reduce load during initial learning more than practice problems alone.
Phase 4. Retrieval check (3:20, about 60s). The learner retrieves, not recognizes. Retrieval practice is itself a learning event, not just an assessment.
Phase 5. Spaced cue (4:20, about 40s). Pointer to a follow-up at day 3. Spacing beats cramming for long-term retention.
Common microlearning formats
Each format has different strengths. Match the format to the objective.
Short video. 2 to 3 min. Medium cognitive load.
Best for: demonstrating a task, humanizing a policy, quick concept overview.
Failure mode: passive watching. Learners press play and drift. Needs a retrieval moment after.

Scenario or decision. 3 to 5 min. High cognitive load.
Best for: judgment calls, edge cases, applying policy to real situations.
Failure mode: contrived options where the "right" answer is obvious, which kills engagement. Expensive to build well; shallow branches feel scripted.

Micro-simulation. 3 to 5 min. Variable cognitive load.
Best for: procedural skills, software tasks, safe-failure practice.
Failure mode: build complexity balloons. Scope creep turns it into a mini-course.

Job aid or infographic. Under 1 min, on demand. Low cognitive load.
Best for: reference at the moment of need. Not a learning event on its own.
Failure mode: treated as training. A job aid replaces memory; it does not build it.
Measuring what matters
Completion rate is a compliance metric. Learning needs different signals.
Most microlearning programs report on completion rate because it is the number the LMS gives you by default. Completion tells you the learner clicked through. It does not tell you whether they learned anything.
Four signals that move closer to actual learning:
Delayed retention checks. Quiz learners a week after the unit, not immediately after. Immediate scores mostly measure short-term memory and interface familiarity. Delayed scores measure learning.
On-the-job indicators. Error rates, ticket reopens, time to resolve, policy adherence. Pick one metric that would move if the training worked, and track it.
Granular interaction data. Standards like xAPI and cmi5 let you capture every retrieval moment, not just course completion. Question-level data tells you which concepts are not sticking.
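To make the xAPI point concrete, here is a minimal sketch of the statement one retrieval moment might generate. The actor, activity, and question identifiers are hypothetical; the verb IRI comes from the standard ADL verb vocabulary.

```python
import json

# Hypothetical xAPI statement for a single retrieval check.
# Actor, activity, and question IDs are made up for illustration.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "https://example.com/units/password-reset/question-2",
        "definition": {"name": {"en-US": "Retrieval check: reset flow"}},
    },
    "result": {"success": True, "response": "option-b"},
}
print(json.dumps(statement, indent=2))
```

Because each statement carries the question identifier and the result, aggregating them answers the question completion data cannot: which specific concepts are failing the retrieval check, and for whom.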
Kirkpatrick levels. The Kirkpatrick model distinguishes four levels of training outcome: reaction, learning, behaviour, and results. Most programs stop at level one. Getting to level three is where real value shows up. See the Kirkpatrick Partners overview.
Six common pitfalls
Most microlearning that fails, fails in one of these ways.
1. Length without focus
A three-minute unit that covers four ideas is not microlearning. It is a short unit. Focus is the point.
2. No spacing
Ten units pushed out on the same day is not microlearning. It is a course in small pieces. Spacing is where the long-term retention comes from.
3. Passive consumption
No retrieval at the end means the unit tested nothing. Retrieval practice is not a nice-to-have. It is the single highest-leverage part of the unit.
4. Compliance dumps in disguise
Chopping a thirty-minute compliance module into ten three-minute modules does not make it microlearning. It makes it more clicks for the same content.
5. No measurement beyond completion
If the only reported metric is completion rate, you do not know whether the program works. Pick at least one delayed or on-the-job measure up front.
6. Fragmented experience
Ten isolated units with no connective tissue do not add up to a program. Learners need a clear map of where a unit fits in and what comes next.
Putting it into practice
Building microlearning in Slate
Slate is an AI-powered eLearning authoring tool. A few Slate features map directly to the principles above. Use what is useful, ignore the rest.
Short lessons in minutes
AI lesson generation at the Overview depth setting produces four to six blocks of four hundred to six hundred words, suitable for a three to five minute unit. Use it to draft a unit in one pass, then refine.
Retrieval built in
Every lesson can include inline knowledge checks with AI-generated questions. Place one at the end of each unit and you have the testing effect baked in by default.
Mobile first, without extra work
Slate courses are responsive by default, and device visibility controls let you show or hide blocks per device so a unit can be shorter on mobile than on desktop when that helps.
Narration in 13 languages
Per-block narration with AI voices supports audio-first microlearning for learners who listen between tasks. Narration exports alongside the course so it works offline.
Works with any LMS
Export as SCORM 1.2, SCORM 2004, xAPI, or cmi5. Modern standards give you the question-level data you need to get past completion-rate reporting.
Estimate seat time
Our free seat time calculator uses research on reading rates to estimate unit duration. Useful when you are deciding whether a draft belongs in a three, five, or ten minute slot.
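The core of any such estimate is simple arithmetic over word count and reading rate. A minimal sketch, not the calculator's actual model: the 225 words-per-minute figure is an assumed average silent-reading rate for instructional text.

```python
def seat_time_minutes(word_count, words_per_minute=225):
    """Rough seat-time estimate from word count. 225 wpm is an
    assumed average silent-reading rate, not Slate's actual model."""
    return word_count / words_per_minute

# A five-block draft of ~500 words each -> roughly an 11-minute read,
# so it belongs in a ten-minute slot at most, probably split in two.
print(round(seat_time_minutes(5 * 500), 1))
```

Interactions, video, and retrieval questions add time on top of reading, so a word-count estimate is a floor, not a budget.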
Bjork Learning and Forgetting Lab, UCLA. Robert and Elizabeth Bjork's lab, the canonical home for desirable difficulties and much of the modern work on spacing and retrieval.