Product

Adaptive learning isn't a setting — it's the whole product

Most apps adjust a difficulty slider. A real tutor rewrites the next moment based on what your child just said.

Tim de Vallée · 7 min read · TBD

Last Tuesday my son Remi tried to read the word "ocean." He said "oh-cee-an," paused for four seconds, then whispered "no, that's wrong" before I could open my mouth. The tutor he was using — ours — didn't move on, didn't praise him, didn't drop the difficulty. It asked him what part of the word he wasn't sure about. He pointed at the "c." They worked on the soft-c rule for ninety seconds and went back to the sentence. He read it correctly and grinned at the screen like he'd just won something.

That ninety seconds is the entire product.

What "adaptive" usually means

Open any of the big early-learning apps and look for the word "adaptive." You'll find it on the pricing page, in the App Store description, sometimes in the company's name. Then look at what's actually happening under the hood. In most cases, "adaptive" means one of three things:

  • Difficulty laddering. The child gets a question right, the next one is harder. They get it wrong, the next one is easier. The ladder is fixed in advance — every kid climbs the same rungs.
  • Branching trees. Content authors pre-write a decision tree: if the child misses a long-vowel question, route them to long-vowel practice node 14. There may be hundreds of nodes, but every path was hand-built by a curriculum designer six months before your child sat down.
  • Spaced repetition timing. The app remembers what your child got wrong and surfaces it again on a schedule. This is genuinely useful — it just isn't the same thing as a tutor responding to what your child is doing right now.
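The laddering in the first bullet fits in a few lines. This is an illustrative sketch with made-up level bounds, not code from any shipping app:

```python
# Hypothetical fixed difficulty ladder: every child climbs the same rungs.
MIN_LEVEL, MAX_LEVEL = 1, 10

def next_level(current: int, answered_correctly: bool) -> int:
    """Move one rung up on a correct answer, one rung down on a miss."""
    step = 1 if answered_correctly else -1
    return max(MIN_LEVEL, min(MAX_LEVEL, current + step))
```

The point of the sketch is what's missing: the function sees a single boolean, so a long pause, a whispered "no, that's wrong," and a lucky guess all collapse into the same input.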

None of this is bad. Difficulty laddering and spaced repetition are well-supported by decades of cognitive science. The problem is the word "adaptive" doing work it can't do. A fixed tree, no matter how big, can only respond to the inputs the authors anticipated. It can't react to the fact that your child just said "I'm too tired for the hard ones" in a small voice. It can't notice that they paused for eleven seconds before guessing. It can't tell that they're guessing at all.

What a tutor actually does

Think about a human tutor working with a six-year-old on reading. The good ones don't follow a script. They listen to the exact sounds the child makes, watch their face, time the pauses, and decide what to say next based on a hundred small signals. Sometimes the right move is harder material. Sometimes it's easier. Sometimes it's a joke, a story, a question about the picture, or a complete left turn into the alphabet because the kid keeps confusing "b" and "d" and nothing else matters until that's resolved.

That kind of moment-to-moment responsiveness is what the research community calls contingent scaffolding — adjusting support based on the learner's current performance, not their average performance over the last ten questions. Lev Vygotsky called the space where this happens the zone of proximal development, and a practitioner overview from the Reading Rockets project describes how skilled teachers move in and out of that zone in real time.

A decision tree can't do this. Not because the authors are lazy — because the combinatorial space is too large. A child's reading session has thousands of possible micro-states, and pre-writing a useful response to each one is a problem no curriculum team can finish.

How Lumikids does it differently

Lumikids is built on Anthropic's Claude. When your child speaks, four things happen before the tutor responds:

  1. Wispr Flow transcribes what they actually said, including the false starts, the half-words, and the "ums" — not a cleaned-up version.
  2. Claude reads that transcription alongside what was on screen, what the child has done in the last few minutes, the length of the pause before they spoke, and where they are in the broader skill map.
  3. Claude generates the next thing the tutor should say, plus the next thing the screen should show.
  4. ElevenLabs speaks the response in under a second, in the same voice every time.
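The inputs in step 2 can be pictured as one structured payload handed to the model each turn. The `TurnContext` type and field names below are illustrative assumptions, not Lumikids' actual schema, and the transcription and speech steps are left out:

```python
from dataclasses import dataclass

@dataclass
class TurnContext:
    """Everything the model sees before generating the tutor's next line."""
    raw_transcript: str           # verbatim, including false starts and "ums"
    screen_state: str             # what was on screen when the child spoke
    recent_events: list[str]      # the last few minutes of session history
    pause_before_speaking_s: float
    skill_map_position: str       # where the child sits in the broader sequence

def to_prompt(ctx: TurnContext) -> str:
    """Flatten the context into the text block handed to the model."""
    history = "\n".join(f"- {event}" for event in ctx.recent_events)
    return (
        f"Child said (verbatim): {ctx.raw_transcript!r}\n"
        f"Pause before speaking: {ctx.pause_before_speaking_s:.1f}s\n"
        f"On screen: {ctx.screen_state}\n"
        f"Skill map position: {ctx.skill_map_position}\n"
        f"Recent session events:\n{history}"
    )
```

Notice that the raw transcript and the pause length travel together: the model is reasoning about how the child answered, not just whether the answer matched.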

The decision isn't "this child got the answer wrong, drop the difficulty by one." The decision is closer to: "This child knows the short-a sound but stumbled on it twice in the last minute, paused for six seconds before guessing, and just said 'this is hard.' I'm going to acknowledge that it's hard, drop back to a word they already know to rebuild momentum, then come back to short-a from a different angle." That's a sentence we never wrote in advance. Claude wrote it in the moment, for that child, based on what just happened.
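The signals in that decision can be read off the session state before the model ever sees them. A hypothetical sketch of flagging that exact pattern, with invented thresholds and phrases, not the production heuristic:

```python
# Hypothetical frustration markers; a real list would be far longer.
FRUSTRATION_PHRASES = ("this is hard", "i can't", "too hard")

def should_rebuild_momentum(recent_misses_on_skill: int,
                            pause_s: float,
                            last_utterance: str) -> bool:
    """Flag the pattern from the example: repeated stumbles on one skill,
    a long pre-answer pause, and explicit frustration in the child's words."""
    frustrated = any(p in last_utterance.lower() for p in FRUSTRATION_PHRASES)
    return recent_misses_on_skill >= 2 and pause_s >= 5.0 and frustrated
```

In practice the flag is one more input to the model rather than a hard rule, which is what lets the response stay a sentence nobody wrote in advance.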

We call this conversational adaptation, to distinguish it from curriculum-based adaptation. Both have a place. Curriculum-based adaptation is great for scope and sequence — making sure your child encounters all the phonics rules in a sensible order. Conversational adaptation is what handles the actual moment of struggle.

Curriculum-based vs. conversational, in one table

            Curriculum-based adaptation          Conversational adaptation
Authored    In advance by curriculum designers   In the moment by the model
Reacts to   Right/wrong answers                  Words, pauses, tone, history
Coverage    Wide skill map, predictable          The exact micro-moment in front of the child
Best at     Scope, sequence, completeness        Stuck moments, surprises, emotion
Worst at    Anything the authors didn't predict  Guaranteeing every skill gets equal time

We use both. The curriculum map is real — your child is moving through a sequence of reading skills with a defensible order. The conversational layer is what makes the trip feel like working with a tutor instead of taking a test.

The honest tradeoffs

Conversational adaptation is not free. There are three real costs, and any parent evaluating us deserves to hear them.

Consistency. A fixed tree gives the same response to the same input, every time. Claude doesn't. Two children making the same mistake on the same word may get slightly different responses — different examples, different metaphors, different jokes. The pedagogical intent is the same; the surface text varies. For most kids this is a feature (it feels more like a person). For some, especially kids who thrive on routine, it can be disorienting. We're working on a "steady mode" for families who want less variation.

Predictability for parents. When the tutor's next move is decided in the moment, we can't show you a flowchart of what your child will encounter next week. We can show you what they encountered today, in detail — that's what the parent dashboard is for. But "here's the exact next ten lessons" isn't something we can promise.

Quality assurance is harder. Testing a fixed tree is bounded: you can walk every path. Testing a model that generates new responses is open-ended. We have automated evaluations running constantly against thousands of synthetic kid interactions, plus human review of real sessions (with parental consent), plus Sentry and PostHog monitoring for anomalies. It's more work than QA-ing a static app, and we won't pretend otherwise.
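One automated check in an eval suite like that might look like the following. The rubric rules and response shape are invented for illustration, not Lumikids' actual evaluation code:

```python
def check_response(tutor_reply: str,
                   difficulty_before: int,
                   difficulty_after: int) -> list[str]:
    """Return rubric violations for one synthetic interaction (empty = pass)."""
    problems = []
    if abs(difficulty_after - difficulty_before) > 1:
        problems.append("difficulty jumped more than one level in a single turn")
    if len(tutor_reply.split()) > 40:
        problems.append("reply too long for a six-year-old's attention span")
    if "wrong" in tutor_reply.lower():
        problems.append("reply names the mistake instead of guiding toward it")
    return problems
```

Running thousands of synthetic interactions through checks like these bounds the open-ended testing problem without pretending every path has been walked.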

Why this matters for kids who learn differently

The kids who benefit most from conversational adaptation are the ones who get the least out of fixed-tree apps.

Children with dyslexia often have specific, individual error patterns — one child confuses "b" and "d," another flips short vowels, a third reads accurately but slowly. The International Dyslexia Association's parent fact sheets describe how varied the profiles are. A decision tree built for the average struggling reader will miss most of these patterns. A model reading the actual transcript can spot them in a single session.

English Language Learners (ELLs) bring vocabulary and phonology from another language into English reading. A tree author can't anticipate the specific transfer errors a child whose first language is Tagalog or Portuguese or Mandarin will make. Claude can read the error, recognize the pattern, and adjust without needing the curriculum team to ship an update.

Kids with attention differences need pacing that responds to them, not to a schedule. Sometimes that's shorter chunks. Sometimes it's a sudden detour into something fun before coming back. Sometimes it's ending the session early because today isn't the day. A fixed tree can't make those calls. A tutor — human or otherwise — can.

None of this means Lumikids is a replacement for a reading specialist, a speech-language pathologist, or a teacher who knows your child. It means the daily practice between those professional sessions can actually be responsive instead of robotic.

What to look for in any "adaptive" app

If you're evaluating any AI tutor — ours or anyone else's — ask the company one question: "When my child does something the curriculum team didn't predict, what happens?" If the answer is "we route them to the closest matching node," it's a tree. If the answer involves a language model reasoning about the specific moment, it's conversational. Both can be good products. They are not the same product, and "adaptive" shouldn't be allowed to mean both.

Remi is six now. He's read more this year than I expected, and I've watched the tutor make calls I didn't write and wouldn't have thought of. That's the bar. Anything less is a slider with a marketing budget.

Try the beta at lumikids.dev — Remi's class is full, but we're adding families weekly.

Image brief

  • Hero image: A six-year-old at a small table speaking to a soft-edged tablet, with branching paths of light unfolding outward in real time — picture-book lighting, no logos.
  • Inline image 1: Side-by-side diagram of a static decision tree (rigid, pre-drawn branches) next to a "live" branching pattern that grows as the child speaks. Placement: after the "What 'adaptive' usually means" section.
  • Inline image 2: Annotated screenshot of a Lumikids session showing the child's transcript, the pause timing, and the tutor's response choice, with arrows explaining each input Claude considered. Placement: after the "How Lumikids does it differently" section.

Internal link suggestions

  • "How my four-year-old taught me to build an AI tutor" — anchor: the founding story behind Lumikids
  • "Voice-first learning: why we built around speech, not taps" — anchor: why we built around the child's voice
  • "A parent's framework for evaluating any AI tutor" — anchor: a parent's framework for evaluating any AI tutor

Editor's note

The "ocean" anecdote is composite — please confirm the specific word and pause length with Tim before publishing, or swap in a Remi moment Tim wants on record. The Reading Rockets and International Dyslexia Association links are to stable parent-facing pages but should be re-checked at publish time. The "steady mode" feature is on the roadmap but not shipped; if it's been deprioritized, soften that paragraph to "we're considering" rather than "we're working on."

One more thing —

Lumikids is in open beta and free for the first 100 families. If reading time at your house ever feels harder than it should, we built this for you.