Reading Lab
AI-powered phonics practice for early readers
Project info
Company: eSpark Learning
Timeline: 5 months from inception to beta delivery
Product team: PM, product designer, 2 learning designers, 4 engineers
My role as lead product designer
Lead discovery efforts with stakeholders, students and teachers.
Plan, execute, test and deliver all designs.
Work with learning designers and devs to ensure quality.
Art direction for the character.
Impact
eSpark Reading Lab was released publicly on August 1, 2024. Read the public announcement
Problem
Beginning readers need to learn letters, their sounds, and how letters and sounds combine into words. Supporting each individual student at their own level and tracking their progress is a manual, time-consuming task for teachers.
Many US states are adopting instructional approaches aligned to the Science of Reading, a rigorous, research-based method for teaching the building blocks of words.
With emerging speech technologies, it has become much easier to analyze speech with academic rigor. How can we use this technology to let beginning readers have fun practicing letters, sounds, and words independently, at their own level and pace, while giving teachers insight into their students' progress?
Keeping track of student progress in foundational reading skills is mandatory, yet it remains an incredibly manual and time-consuming process.
What did we build?
Reading Lab consists of short, sequential lessons where students practice independently reading words with certain sound characteristics.
Fennec, a cute, big-eared fox, guides students as they practice reading words at their level.
Student speech is analyzed using Microsoft’s Azure Speech API.
If the student struggles reading the word, a targeted micro-intervention helps the student address the specific mistake.
When a student makes enough progress, they get to choose their two favorite words, and eSpark generates a custom decodable story for the student to read on their own.
Teachers can see their students' progress and listen to recordings of their kids sounding out words and reading their decodable stories.
Read more about Reading Lab in the official product announcement.
Our approach
Micro interventions
At the core of Reading Lab are "micro-interventions" that launch after a student struggles with a word or sound. Micro-interventions are multimodal learning moments where students learn and practice foundational reading skills.
In the classroom, teachers often use physical objects like chips or Play-Doh to help kids sound out words. We created fun, animated dots and sliders kids can interact with while sounding out words.
In a "sound-out grapheme with dots" micro-intervention, Fennec models sounding out a word by tapping colored dots and then blending the sounds together into the word. Then it's the student's turn to try it themselves.
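As a rough illustration of how struggle detection could drive a micro-intervention, the sketch below routes a student to a "sound out with dots" moment based on per-grapheme accuracy scores. The score format loosely mirrors the per-phoneme scores returned by pronunciation-assessment APIs such as Azure Speech, but every name, threshold, and data shape here is a hypothetical simplification, not Reading Lab's actual implementation.

```python
ACCURACY_THRESHOLD = 60  # below this, a grapheme counts as a struggle (illustrative value)

def pick_micro_intervention(word, grapheme_scores):
    """Return (grapheme, intervention label) for the weakest grapheme, or None.

    grapheme_scores: list of (grapheme, accuracy 0-100) in reading order,
    e.g. as derived from a speech pronunciation-assessment result.
    """
    struggles = [(g, s) for g, s in grapheme_scores if s < ACCURACY_THRESHOLD]
    if not struggles:
        return None  # the word was read cleanly; no intervention needed
    # Target the single weakest grapheme with a targeted learning moment.
    grapheme, _score = min(struggles, key=lambda gs: gs[1])
    return grapheme, f"sound-out '{grapheme}' in '{word}' with dots"

# Example: the student read "ship" but fumbled the "sh" digraph.
print(pick_micro_intervention("ship", [("sh", 42), ("i", 95), ("p", 88)]))
# → ('sh', "sound-out 'sh' in 'ship' with dots")
```

Keying the intervention to a single grapheme is what makes the moment "micro": the student practices only the specific sound they missed rather than repeating the whole word drill.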
Fennec
Since the main audience for this activity is children who can't yet read, we designed a new character specifically to help kids interact with the application while learning reading fundamentals.
“Thinking” While the student's audio is being analyzed, Fennec waits patiently.
“Lip-flap” A simple lip-flap animation plays when Fennec gives instructions.
A next version of Reading Lab will include lip-synced Lottie animations to help kids with pronunciation.
“Listening” When a child is supposed to sound out a word, Fennec stretches out their ears to listen closely.