The first time my wife Kate asked me how Remi's session went, I realized I had built a tutor I could not actually inspect. The voice loop worked. Remi was reading. But when Kate said, "Show me where he got stuck," I had logs in three different tools and no way to play back the moment. That was the day I started building the parent dashboard.
Most learning apps give parents a green checkmark and a weekly email. "Remi completed 4 lessons. Great job!" That is not observability. That is a participation trophy with a progress bar. If an Artificial Intelligence (AI) is talking to your child for twenty minutes a day, you should be able to see exactly what it said, what your child said back, and where the conversation went off the rails. Anything less is asking you to trust a black box with the most important conversation in your house.
What the dashboard actually shows
The Lumikids parent view is built around one idea: every session is a transcript, not a score. When you open the dashboard, you see your child's last session at the top, with a timeline you can scrub. Each moment on the timeline is a turn — Lumi says something, the child answers, Lumi responds. You can click any turn and read the exact words on both sides. You can play the audio. You can see how long your child paused before answering, because pause length is one of the most honest signals in early reading.
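For the technically curious, here is a toy sketch of what "pause length" means in practice. This is illustrative only, not our actual schema; the `Turn` record and field names are made up for this post. The idea is simply that every turn carries timestamps, so the gap between Lumi finishing and the child starting falls out of the data:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str        # "lumi" or "child"
    text: str
    started_at: float   # seconds from session start
    ended_at: float

def child_pauses(turns):
    """For each child turn that follows a Lumi turn, return
    (when the child started, how long they paused first)."""
    pauses = []
    for prev, cur in zip(turns, turns[1:]):
        if prev.speaker == "lumi" and cur.speaker == "child":
            pauses.append((cur.started_at, cur.started_at - prev.ended_at))
    return pauses
```

A long pause before a hard word is not a failure; it is the child working. That is why the timeline marks it instead of hiding it.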
Below the session view, there is a weekly skill snapshot — not a letter grade, but a small set of plain-English statements like "Remi is consistent on short-a words and still working on consonant blends." Those statements are generated from the actual turns, not from a hidden rubric.
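To make "generated from the actual turns" concrete, here is a minimal rule-based sketch of how statements like that could be produced. The threshold, the labels, and the input shape are all assumptions for illustration, not our production logic:

```python
from collections import defaultdict

def skill_snapshot(attempts):
    """attempts: list of (skill, succeeded_on_first_try) pairs from the
    week's turns. A skill reads as consistent when the child gets it on
    the first try at least 80% of the time (an illustrative cutoff)."""
    tally = defaultdict(lambda: [0, 0])  # skill -> [first-try successes, total]
    for skill, ok in attempts:
        tally[skill][1] += 1
        if ok:
            tally[skill][0] += 1
    lines = []
    for skill, (ok, total) in sorted(tally.items()):
        label = "consistent on" if ok / total >= 0.8 else "still working on"
        lines.append(f"{label} {skill}")
    return lines
```

The point is not the exact rule; it is that every statement traces back to countable moments in the transcript, so a parent can always click through and check.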
Here is the rough layout, rendered as text so you can picture it before we ship the screenshot:
```
+--------------------------------------------------------------------+
| Lumikids — Parent View          Remi (age 4)          This week ▾  |
+--------------------------------------------------------------------+
| TODAY · Tuesday session · 14 min · ended by Remi                   |
|--------------------------------------------------------------------|
| [▶ play full session audio]   [download transcript]                |
|                                                                    |
| Timeline                                                           |
| 00:00 ──●──────●──────●──────●──[stuck 22s]──●──────●── 14:02      |
|       intro  word1  word2  word3   pause   recover  goodbye        |
| Click any dot to read the turn + play that 10-second clip.         |
+--------------------------------------------------------------------+
| WHERE REMI GOT STUCK                                               |
| • 06:41 — "br" blend in "bring." Tried 3 times. Lumi slowed        |
|   pace, broke the word into onset + rime. Remi got it on try 4.    |
| • 10:12 — Lost focus. Asked Lumi about dinosaurs. Lumi answered    |
|   in one sentence and returned to the story.                       |
+--------------------------------------------------------------------+
| THIS WEEK · skill snapshot                                         |
| Strong:   short-a, short-i, sight words from list 1                |
| Emerging: consonant blends (br, cl, st)                            |
| Watch:    attention drops after minute 12                          |
+--------------------------------------------------------------------+
| SETTINGS                                                           |
| Session length cap · Voice profile · Pause sensitivity · Export    |
+--------------------------------------------------------------------+
```
That is the whole screen. No streak counter. No badges. No leaderboard. No notification asking you to come back tomorrow.
Why we lead with the transcript
Parents who have used Lexia, IXL, or Khan Academy Kids will notice what is missing: the gamified summary view. We removed it on purpose. A summary tells you the app's story about your child. A transcript lets you build your own. If Lumi misreads a moment — and it will, sometimes — the transcript is how you catch it.
The audio playback matters more than I expected. The first time Kate listened to a clip of Remi reasoning out loud about why a duck was sad in the story, she heard him doing something she had never seen on paper. That clip is now on our phones. No worksheet can do that.
Why we built it this way
Three reasons, in order of importance.
First, parents are the safety layer. I do not believe any company that says their AI is fully safe for children. Claude is trained carefully, our system prompt is tight, we filter content on the way in and on the way out — and a four-year-old will still occasionally pull the conversation somewhere unexpected. The dashboard means you, the parent, are not relying on our word. You can read what happened. If something is off, you see it the same day, not after a quarterly report.
Second, learning is not legible from outside. When my son struggles with the "br" blend, that is a fact about phonological processing — not a fact about Remi being "behind." A traditional report card flattens that into a number. The dashboard preserves the moment so you can see the strategy Lumi used and decide if it matched your child. Sometimes you will disagree with how Lumi handled it. That feedback loop is part of the product, not a bug in it.
Third, you are paying for this. When you pay for an AI tutor, you should get the same level of visibility a one-on-one human tutor would give you over coffee. "Here's what we worked on. Here's where she got stuck. Here's what I would try next." Anything less is selling you a service you cannot evaluate.
For more on the safety side of this — what the guardrails actually do and where they fall short — see our piece on what safe AI for kids actually means.
How Sentry and PostHog feed the dashboard
Two tools sit underneath the parent view, doing very different jobs.
Sentry catches technical issues — a voice synthesis call that timed out, a model response that came back empty, a recording that failed to upload. Parents never see Sentry directly. What they see is the consequence: if a session had a technical hiccup, the timeline marks it honestly. "Audio for this turn did not record" beats pretending it did.
PostHog tracks learning patterns at the session level — how long the child engaged, where pauses clustered, which prompts produced longer responses. We use it the way a teacher uses a clipboard. The dashboard surfaces the parts of those patterns that are useful to a parent (attention drop-offs, recurring stuck words) and hides the parts that would just be noise (per-turn token counts, model latency in milliseconds).
Both tools are configured to never link product analytics to a child's identity. PostHog sees a session shape; it does not see "Remi." Sentry sees an error stack; it does not see a transcript. The transcript itself lives in our own database, encrypted, scoped to your family account, and exportable at any time.
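Here is a sketch of the shape of that separation. The field names, the key list, and the exact wiring are illustrative, not our real configuration; the pattern is the real point. The scrubber would sit in a hook like Sentry's `before_send` (which lets you edit or drop an event before it leaves the process), and the session key is what an analytics tool would see instead of a family identifier:

```python
import hashlib

# Illustrative list of fields that must never leave the process.
PII_KEYS = {"transcript", "child_name", "audio_url"}

def scrub(event: dict) -> dict:
    """Replace identity-bearing fields before an error report or analytics
    event is sent. Intended for a hook like sentry_sdk's before_send."""
    return {k: ("[scrubbed]" if k in PII_KEYS else v) for k, v in event.items()}

def session_key(family_id: str, session_start: str, salt: str) -> str:
    """A stable but pseudonymous session id: the same session always hashes
    to the same key, but the analytics tool never sees the raw family id."""
    raw = f"{salt}:{family_id}:{session_start}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]
```

So a PostHog event might carry `session_key(...)` and a handful of scrubbed properties, and nothing else. The design choice is boring on purpose: the safest field is the one that never gets sent.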
What we deliberately do not track
This is the section most learning-app companies skip, which is exactly why it matters most.
- No microphone listening outside of a session. The mic is only hot when Lumi is in a turn. No ambient recording. No "wake word" sweep.
- No advertising identifiers. Lumikids does not ship a Software Development Kit (SDK) that fingerprints your child's device for ad networks.
- No third-party data sharing for marketing. Transcripts are not sold, licensed, or used to train external models.
- No engagement scoring tied to notifications. We do not measure how often we can get your child to come back. We measure whether the sessions they do have actually help.
- No "emotion detection" from voice. A handful of vendors in this space claim to read a child's emotional state from audio. The science is thin, the privacy implications are heavy, and we decided early on it was not a road we wanted to walk down.
The legal floor here is the Children's Online Privacy Protection Act (COPPA) Rule, which sets baseline requirements for any service collecting data on kids under 13 in the United States. Meeting COPPA is the start, not the finish. Most of the items above are stricter than the rule requires.
What is still missing
I will be honest: the dashboard does not yet do everything I want it to. We are still building the comparison view that shows progress across two- and four-week windows. The skill snapshot today is generated weekly, not on-demand. Audio search ("find the moments Remi laughed") is on the roadmap but not shipped. And we have not yet built a co-parent view that lets two adults see the same data without sharing a login.
If you try the beta and the dashboard does not answer the question you actually have, tell us. That is how the next version gets built.
For a broader checklist on what to demand from any AI tutor — not just ours — read a parent's framework for evaluating any AI tutor.
The bottom line
A dashboard is not a marketing surface. It is the place where trust is either earned or quietly lost. We built ours to show you the conversation, not a score; to log the technical truth, not a sanitized version; and to leave out the data we do not need. If you want to see what your child actually did on Lumikids today, you can. That is the whole point.
Join the beta and we will set up your parent view in the first session.
Image brief
- Hero image: A laptop on a kitchen counter showing a clean dashboard with a child's reading session timeline, soft morning light, a juice cup beside it.
- Inline image 1: A close-up annotated wireframe of the session timeline with a "stuck" marker highlighted — place it directly after the ASCII layout block.
- Inline image 2: A simple two-column diagram labeled "What we track / What we don't" with icons for mic, ad identifier, transcript, and emotion detection — place it inside the "What we deliberately do not track" section.
Internal link suggestions
- "What 'safe AI for kids' actually means (and what it doesn't)" — anchor: what safe AI for kids actually means
- "A parent's framework for evaluating any AI tutor" — anchor: a parent's framework for evaluating any AI tutor
- "How my four-year-old taught me to build an AI tutor" — anchor: the founding story behind Lumikids (optional, for the opening paragraph if Tim wants to backlink)
Editor's note
Tim — three flags for your review. (1) Confirm the Remi anecdote about the "duck was sad" audio clip; I wrote it as a representative moment, swap in a real one if you have it. (2) Confirm the "session ended by Remi" label is wording you want on the dashboard, vs. "session ended by child." (3) The five "what we don't track" bullets are written as commitments — please verify each one matches current implementation, especially the no-third-party-SDK and no-emotion-detection lines, before we publish.
Lumikids is in open beta and free for the first 100 families. If reading time at your house ever feels harder than it should, we built this for you.