AI as a Personal Tutor? Only If It Makes Students Think

There has been a growing conversation recently around the role of AI in education, particularly its potential to act as a personal tutor for every child. A recent announcement from the Department for Education suggested that up to 450,000 disadvantaged pupils could benefit from AI tutoring tools. These tools are set to be trialled in secondary schools later this year, with the aim of providing personalised academic support at scale and, importantly, at no cost to those who need it most.

On the face of it, this is incredibly exciting. Access to high-quality tutoring has always been one of the most effective ways to improve outcomes, yet it has also traditionally been one of the least equitable forms of support. For many families, it simply is not an option. So the idea that we could offer something similar, with consistency and at national scale, begins to feel like a genuine shift rather than just another initiative. But as with most things in education, the detail matters. And this is where we need to tread carefully.

The Risk: AI as a Glorified Search Engine

A lot of the scepticism around AI tutoring is not only understandable, it is necessary. Strip it back, and many people see these tools as little more than an upgraded search engine. A student asks a question, gets an answer, and carries on. That might feel helpful in the moment. It might even feel efficient. But it is not learning.

The issue is not access to answers. Students have never had more access to information than they do now. The issue is how they arrive at the answer. If a student is simply receiving answers without having to think, grapple, or make sense of the material, then the learning process has effectively been bypassed. Over time, that creates dependency. Students become used to the idea that when they get stuck, the solution is to ask something else to do the thinking for them. That is not what we would accept in a classroom, and it is not what we would expect from effective tutoring either.

“Memory is the Residue of Thought”

If AI tutoring is going to work, it has to get one thing right above everything else. It has to make students think. As Daniel Willingham puts it, “Memory is the residue of thought.” It is a simple line, but it carries so much weight. If we want students to remember something (and to be able to use it when it matters), they need to think about it deeply and effortfully in the first place. Not just see it or hear it, but actually think about it. Simply being exposed to an answer, or even understanding it in the moment, is not enough to secure it in our long-term memory.

This is where the risk sits with AI. If the default setting is to provide quick answers, then it might help in the moment, but it will not enable the learning to stick. We have all seen that in classrooms. A student can follow along, nod in the right places, even say they understand. But if the thinking has not happened, the learning has not either. However, if these new AI tutoring platforms can be designed in a way that actively promotes thinking, then they have the potential to become a powerful addition to the learning process rather than a shortcut around it.

What Would Effective AI Tutoring Actually Look Like?

If AI is to replicate the impact of a human tutor, then it needs to mirror the process of effective teaching, not just the outcomes. High-quality tutoring is not defined by the answers given, but by the questions asked, the misconceptions uncovered, and the thinking that is generated throughout the interaction. That is the bar AI needs to reach.

For example, an interaction with an AI tutor might start with something as simple as the AI asking, “What do you already know about this topic?” or “Talk me through your thinking so far.” Those kinds of questions immediately shift the dynamic. The student is no longer a passive recipient. They are involved from the outset, and that matters when it comes to learning. From there, it becomes a conversation. Small pieces of input, followed by prompts to explain, clarify, or apply. Not long blocks of information, but a steady back-and-forth that keeps the student cognitively engaged. The kind of interaction where you cannot just sit back and let it wash over you.

Crucially, it would also need to build in those moments we rely on in the classroom. Checking for understanding. Asking why something works, not just what the answer is. Pushing students to think a little harder, to consider a different example, to apply their knowledge in a new context. Scenario-based questions, in which students must apply their learning to reach a solution, can be particularly powerful: they force students to do something with what they have learned, ensuring they are not just receiving information but actively working with it.

The Opportunity: Levelling the Playing Field

If we can get this right, the implications are hard to ignore. For a long time, there has been a clear gap when it comes to support outside of school. Some students benefit from private tuition, extra resources, and tailored preparation in the run-up to exams. In contrast, many disadvantaged students may rely almost entirely on what happens within the classroom. That is why this proposal from the Department for Education matters.

If these tools are genuinely effective and accessible, then we are looking at the possibility of high-quality personalised support being available at scale to students who have never had that before. An on-demand tutor, available anytime, anywhere, could make a real difference. Not as a replacement for teaching, but as an extension of it. A way of reinforcing, revisiting, and building confidence outside the classroom whenever it is needed. If that becomes a reality, then we are not just talking about innovation. We are talking about equity.

This Isn’t Theoretical—It’s Already Starting

What is interesting is that this is not some distant idea. We are already starting to see elements of it emerge in existing technologies. Tools such as Google Gemini are starting to incorporate ‘guided learning’ features that move beyond simply providing answers. Instead, they use structured prompts, questioning, and staged explanations to keep the learner actively engaged in the process - closer to what we would recognise as effective teaching. It is early, and there is still a long way to go, but the direction is encouraging. The shift away from passive use of AI tools towards something more interactive is already happening.

The Bottom Line

In the end, the success of AI tutoring will not be determined by how advanced the technology becomes, but by how effectively it engages students in thinking. If it makes things easier by removing the need to think, then it will weaken learning. If AI becomes a shortcut to answers, it will reinforce passive habits and limit long-term retention. However, if it is built to scaffold thinking - to question, to probe, and to challenge - then it has the potential to transform how students learn beyond the classroom. That is the challenge now. Not just building tools that work, but building tools that are designed around how learning actually happens. The goal should not be to replace thinking, but to demand it.

Get this right, and it could be something really special.
