Eedi Shows How AI Tutoring Can Deliver Personalized Learning Safely and Effectively
By Learning Engineering Virtual Institute
High-quality one-to-one tutoring is one of the most effective ways to boost student learning, but it’s also one of the hardest to scale. Personalized support is expensive, requires a large pool of dedicated educators, and remains out of reach for many students and classrooms.
LEVI Math team Eedi may have a solution. Their new research offers an exciting look at how AI technology can make effective tutoring available at scale: students who received AI-assisted tutoring support were more successful at solving novel problems than peers who received human-only tutoring or none at all.
Eedi’s work continues to demonstrate LEVI Math’s commitment to using technology to double the rate of math learning in middle school, with a focus on affordability.
Combining Eedi & LearnLM to Accelerate Math Learning
Eedi, a UK-based mathematics learning platform that diagnoses student misconceptions and delivers targeted support, teamed up with LearnLM, Google DeepMind’s family of AI models fine-tuned specifically for teaching and learning. By integrating LearnLM directly into Eedi’s platform, researchers created a blended tutoring system in which the AI drafted personalized guidance and explanations while Eedi provided the structured curriculum and diagnostics. This combination embeds an education-focused AI model within an established learning platform, with human oversight built in.
Researchers conducted a randomized controlled trial with 165 students across five UK secondary schools. Students working on the Eedi mathematics learning platform were randomly assigned to receive either traditional static support, like pre-written hints, or real-time tutoring via online chat. Within the tutoring group, interactions were further randomized so students would work either with a human tutor or with LearnLM supervised by an expert tutor.
To ensure safety and pedagogical quality, every response that LearnLM generated was reviewed by an expert tutor before it reached the student. Tutors could approve, edit, or rewrite the AI’s messages, providing an important human-in-the-loop safeguard.
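To make that workflow concrete, here is a minimal sketch of a human-in-the-loop review gate of the kind described above. The study does not publish implementation details, so everything here is illustrative: the function names (`generate_draft`, `tutor_reviews`, `respond_to_student`) and data types are assumptions, not Eedi's or LearnLM's actual APIs. The key point the sketch captures is that only tutor-approved text ever reaches the student.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewDecision(Enum):
    APPROVE = "approve"   # send the AI draft as-is
    EDIT = "edit"         # send a lightly edited version of the draft
    REWRITE = "rewrite"   # replace the draft with the tutor's own message


@dataclass
class TutorReview:
    decision: ReviewDecision
    final_text: str  # the text the tutor actually wants the student to see


def generate_draft(student_message: str, context: dict) -> str:
    """Placeholder for a call to an education-tuned model (e.g. LearnLM).

    In a real system the context would include the lesson, the diagnosed
    misconception, and the chat history. This is a stand-in, not a real API.
    """
    raise NotImplementedError


def tutor_reviews(draft: str) -> TutorReview:
    """Placeholder for the expert tutor's review step.

    The tutor sees the draft and chooses to approve, edit, or rewrite it.
    """
    raise NotImplementedError


def respond_to_student(student_message: str, context: dict) -> str:
    """Human-in-the-loop gate: no AI draft reaches the student unreviewed."""
    draft = generate_draft(student_message, context)
    review = tutor_reviews(draft)
    # Regardless of the decision, only the tutor-approved text is returned.
    return review.final_text
```

One design consequence of gating every message this way is that the platform can also log how often drafts are approved unchanged, which is exactly the kind of statistic reported below (over 76 percent approved with no or minimal edits).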
The Results
What they found is exciting:
- LearnLM performed at least as well as human tutors on every measured learning outcome, and, in some cases, it even performed better.
- Students receiving tutoring with LearnLM were 5.5 percentage points more likely to solve novel problems than peers tutored only by humans, successfully answering newly posed problems at a rate of 66.2% compared to 60.5%.
- LearnLM proved to be a trustworthy source of pedagogical instruction, with the supervising tutors approving over 76 percent of its messages without changes or with only minimal edits (changing one or two characters; e.g., deleting an emoji).
Takeaways
- Real-world classroom evidence: Unlike many AI studies conducted in labs or simulations, this rigorous trial took place in real-world classroom settings with students and teachers.
- Human-AI collaboration: LearnLM’s success came not from replacing educators but from supporting them. Tutors reported that the AI was especially good at generating Socratic questions and deeper reflection prompts, and some tutors even said they learned new instructional strategies from the model.
- Potential for scaling tutoring: If generative AI can reliably augment or extend high-quality tutoring under proper supervision, it could address long-standing equity gaps in education by making personalized support more accessible.
Continuing the Work
This research offers promising evidence that AI, when designed with pedagogical principles and human-in-the-loop safeguards, can effectively support learning. At LEVI, we’re excited to keep supporting Eedi and all of our LEVI Math teams as they work to democratize and accelerate math learning.