SuperMemo Extended

Extend SuperMemo to cover Short Term Memory and Long Term Retention

SME-1: Extending SuperMemo's SM-8 Algorithm for Exam Preparation and Long-Term Retention

I've been working with SuperMemo since the early 1990s, and I really admire what Wozniak has built. The big factor that made me step away was that I couldn't use it in "Exam" mode - there was no way to tell the algorithm "I have a test in 3 weeks, compress everything." Since most of my studying was exam-driven, that was a dealbreaker.

But meanwhile, the SM-8 algorithm has been extensively documented by Wozniak himself, and the spaced repetition community has discussed and analyzed its internals widely - from the O-Factor and R-Factor matrices to the U-Factor mechanism and the Forgetting Index parameter. Based on that public knowledge, I've been working on an extension that allows both long-term spacing AND exam-mode compression. I call it SME-1 (SuperMemo Extended, version 1).

Why SM-8 Over SM-2 or FSRS

When I look at SM-8, the fundamental difference from Anki's scheduler (SM-2 based) or even FSRS is the U-Factor (Usage Factor).

Consider this scenario: a card is scheduled for review on day 10, but life happens and you don't review it until day 70. You still remember it. What should the algorithm do?

  • Anki/SM-2: Uses the scheduled interval (10 days) as the base. Next interval = 10 * easiness_factor. You might get 15-25 days. That's absurd - you just proved you can retain it for 70 days.
  • FSRS: Better - it has a stability model and is fully open-source with excellent community support. But its 19 parameters are static between optimizer runs. It doesn't adapt in real-time to your actual review patterns.
  • SM-8: Uses the actual interval (70 days) as the base:

usedInterval = 70 (actual gap)
scheduledInterval = 10 (what was planned)
uFactor = usedInterval / scheduledInterval = 7.0

SM-8 then looks up the O-Factor matrix using GetRepetitionsCategory() which computes WHERE in the matrix this card sits based on the actual interval used (70 days), not the scheduled one (10 days). If you got it right, the next interval is computed from the O-Factor at that higher repetition category - giving a much longer interval.

The key relationship in the algorithm:

uFactor = (uFactor * usedInterval) / scheduledInterval

This scales the U-Factor proportionally. Then GetNewInterval() uses usedInterval * EstimatedOF(...) - so if usedInterval is 70, the next interval is 70 * OF, not 10 * OF.
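The logic described above can be sketched in a few lines. This is an illustration of the mechanism, not the real SM-8 source; next_interval and estimated_of are my own stand-ins for GetNewInterval() and the EstimatedOF(...) matrix lookup.

```python
def next_interval(used_interval: float, scheduled_interval: float,
                  u_factor: float, estimated_of: float) -> tuple[float, float]:
    """Return (new_interval, new_u_factor) after a successful recall.

    used_interval      -- actual days between reviews (e.g. 70)
    scheduled_interval -- days the algorithm had planned (e.g. 10)
    estimated_of       -- O-Factor looked up at the repetition category
                          derived from the ACTUAL interval used
    """
    # Scale the U-Factor by how late (or early) the review actually was
    new_u_factor = (u_factor * used_interval) / scheduled_interval
    # The next interval grows from the actual interval, not the planned one
    new_interval = used_interval * estimated_of
    return new_interval, new_u_factor

# The scenario from the text: scheduled at 10 days, reviewed at 70.
# With an illustrative O-Factor of 1.3, the next interval is roughly
# 91 days - not the ~13 days an SM-2 scheduler would produce.
interval, uf = next_interval(70, 10, u_factor=1.0, estimated_of=1.3)
```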

The 3D Retention Matrix

The U-Factor is not just a scalar - it's the third dimension of SM-8's empirical retention matrix (ret[20][20][20]):

  • Dimension 1 (20 categories): A-Factor (item difficulty)
  • Dimension 2 (20 categories): Repetition category (how many successful recalls)
  • Dimension 3 (20 categories): U-Factor category (actual vs scheduled timing)

That's 8,000 cells of empirical data that SM-8 builds up over time. Every review updates the matrix. Different timing ratios produce different empirical retention rates, and SM-8 learns from all of them. This is what makes it self-optimizing - the algorithm literally rewrites its own scheduling tables based on your performance.

After 200-300 reviews, SM-8 knows your memory characteristics better than any fixed-parameter algorithm can.

The Problem: No Exam Mode

The SM-8 community - and the broader spaced repetition community - mostly discusses long-term retention and managing daily review load. People with 100,000+ cards in Anki or SuperMemo care about minimizing the number of reviews per day while maintaining retention. Longer intervals = fewer daily reviews = sustainable.

But there's a completely different use case: exam preparation. Here you have a deadline. You need to see every card before the exam. You want shorter intervals. The daily load is high but temporary.

These are fundamentally different optimization targets:

                    Exam Preparation            Long-Term Retention
Optimize for        Coverage before deadline    Minimum daily load
Interval strategy   Compress, cap at exam       Grow as long as possible
Overdue review      Keep it short anyway        Reward it, extend interval
Daily load          High, temporary             Low, sustainable
Forgetting Index    3-5 (aggressive)            10-12 (relaxed)

SM-8 has no built-in concept of "I have an exam in 3 weeks." Neither does FSRS, Anki, or any other scheduler I know of. You can hack it with filtered decks or manual interval caps, but there's no principled integration.

SME-1: The Extension

SME-1 adds a Study Phase concept on top of SM-8:

Short-Term Phase (Active Learning / Exam Prep)

For initial acquisition and exam preparation. Intervals are tight, daily load is high, the goal is mastery before a deadline.

Focus Level   Forgetting Index   Max Interval   New Cards/Day   Review Cap
Relaxed       8                  14 days        5               50
Moderate      5                  7 days         10              100
Intense       3                  1 day          20              200

When an exam date is set, FI compresses dynamically as the exam approaches:

  • More than 60 days out: use focus-derived FI
  • 30-60 days: FI = min(focus FI, 5)
  • 14-30 days: FI = min(focus FI, 3)
  • 7-14 days: FI = 2
  • Less than 7 days: FI = 1
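The compression schedule above can be written as a small function. This is a sketch of the tiers as listed; the function name and the handling of the exact tier boundaries (which the list above leaves ambiguous) are my own choices.

```python
def exam_fi(focus_fi: int, days_to_exam: int) -> int:
    """Compress the requested Forgetting Index as the exam approaches.

    focus_fi is the FI from the current focus level (e.g. 5 for Moderate).
    Boundary days are assigned to the nearer-exam tier here by assumption.
    """
    if days_to_exam > 60:
        return focus_fi          # far out: use focus-derived FI
    if days_to_exam > 30:
        return min(focus_fi, 5)  # 30-60 days
    if days_to_exam > 14:
        return min(focus_fi, 3)  # 14-30 days
    if days_to_exam >= 7:
        return 2                 # 7-14 days
    return 1                     # final week: near-daily reviews
```

With Moderate focus (FI=5), for example, the FI stays at 5 until 30 days out, drops to 3, then to 2, then to 1 in the final week.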

Long-Term Phase (Retention / Maintenance)

For sustainable long-term memory with minimum daily effort. Intervals grow freely, the optimization record's learned matrices drive everything.

Focus Level   Forgetting Index   Max Interval   New Cards/Day   Review Cap
Relaxed       12                 180 days       3               30
Moderate      10                 90 days        5               50
Intense       8                  30 days        10              100
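The two phase tables are pure configuration, so one plausible representation is a lookup keyed by (phase, focus level). The names and structure here are my own illustration, not from any released SME-1 code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FocusConfig:
    forgetting_index: int   # requested FI, percent
    max_interval: int       # days
    new_cards_per_day: int
    review_cap: int

# Values copied from the Short-Term and Long-Term tables above
PHASES = {
    ("short_term", "relaxed"):  FocusConfig(8, 14, 5, 50),
    ("short_term", "moderate"): FocusConfig(5, 7, 10, 100),
    ("short_term", "intense"):  FocusConfig(3, 1, 20, 200),
    ("long_term", "relaxed"):   FocusConfig(12, 180, 3, 30),
    ("long_term", "moderate"):  FocusConfig(10, 90, 5, 50),
    ("long_term", "intense"):   FocusConfig(8, 30, 10, 100),
}
```

Switching phases then means swapping the active FocusConfig while leaving the optimization record untouched.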

HIIT Mode: Interval Training for Memory

Borrowing from physical fitness, SME-1 also supports a HIIT (High Intensity Interval Training) approach to reviews. Rather than maintaining a constant intensity level, HIIT alternates between intense and relaxed scheduling on a per-card basis:

  • Even-numbered repetitions: Intense FI (tight interval, reinforcing aggressively)
  • Odd-numbered repetitions: Relaxed FI (longer interval, testing true retention)

The theory is the same as physical HIIT - alternating between high effort and recovery produces better adaptation than either constant high effort (burnout, diminishing returns) or constant low effort (insufficient stimulus). For memory, alternating between "show it again soon" and "wait and see if you really know it" may produce stronger encoding than uniform spacing.

This is experimental - we don't yet have data on whether it outperforms consistent moderate scheduling. But the SM-8 framework makes it trivial to implement since the requestedFI can vary per review without disrupting the optimization matrices.
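Since HIIT mode is just the requestedFI varying per review, the whole mechanism reduces to a parity check. A sketch, using the Intense and Relaxed FI values from the Short-Term table:

```python
INTENSE_FI = 3   # tight interval, aggressive reinforcement
RELAXED_FI = 8   # longer interval, tests true retention

def hiit_fi(repetition_number: int) -> int:
    """Alternate the requested FI by repetition parity, as described above."""
    return INTENSE_FI if repetition_number % 2 == 0 else RELAXED_FI
```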

The Key Insight: Requested FI

SM-8's requestedFI parameter (default 10, meaning 10% forgetting probability = 90% retention) is the clean, principled lever for all of this. The formula:

intervalMultiplier = ln(1 - requestedFI/100) / ln(0.9)

  • At FI=10 (default): multiplier = 1.0 (standard SM-8 behavior)
  • At FI=5: multiplier ≈ 0.49 (intervals roughly halved)
  • At FI=3: multiplier ≈ 0.29 (intervals cut by ~70%)
  • At FI=1: multiplier ≈ 0.10 (near-daily reviews)

This is not a hack - it's built into SM-8's core interval formula. We're just using it at different operating points depending on the study phase.
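The formula is easy to verify numerically. A minimal check of the multiplier at the operating points discussed:

```python
import math

def interval_multiplier(requested_fi: float) -> float:
    """Scale factor applied to SM-8 intervals for a requested FI (percent).

    ln(1 - FI/100) / ln(0.9): equals 1.0 at the default FI of 10,
    and shrinks toward 0 as the requested FI drops.
    """
    return math.log(1 - requested_fi / 100) / math.log(0.9)

for fi in (10, 5, 3, 1):
    print(f"FI={fi}: multiplier = {interval_multiplier(fi):.2f}")
```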

Typical User Journey

  1. Start learning - Short-Term mode, Moderate focus. Tight intervals, 10 new cards/day.
  2. Build mastery - Adjust focus per subject. Intense for weak areas, Relaxed for strong ones.
  3. Set exam date - FI compresses automatically as the exam approaches.
  4. Take exam - Switch to Long-Term mode. Intervals expand, daily load drops dramatically.
  5. Maintain - Long-Term Relaxed. See cards every few months.
  6. New exam comes up - Switch back to Short-Term. Intervals compress again.

The optimization record built during short-term study carries over to long-term mode. All that empirical data about your memory characteristics - the O-Factor matrix, the R-Factor matrix, the retention matrix - it's all preserved. Long-term mode doesn't start from scratch; it starts from where short-term left off, with a proven model of your memory. And when you switch back to short-term for a new exam, that model is still there, making the algorithm even more accurate the second time around.


SME-1 is open to discussion. The core insight - using SM-8's requestedFI as a principled lever for exam-mode compression while preserving the self-optimizing matrix system for long-term retention - seems too obvious to not have been tried before. If anyone has done similar work, I'd love to hear about it.