
In the race to implement AI solutions, many organizations are discovering a frustrating paradox: AI is brilliant at efficiency but often terrible at intimacy. It can scale communication, but it struggles to understand why a specific challenge matters to a specific learner.
True innovation in education happens at the intersection of academic tradition and market responsiveness. To get there, we must move beyond the idea of AI as a simple chatbot and toward AI as a Study Partner that builds a genuine learning relationship.
The most successful learning environments depend on providing support exactly when it is needed, not just during scheduled hours. In today’s digital landscape, many students, especially non-traditional learners, aren’t doing their work on a 9-to-5 schedule. A student might well encounter a concept that doesn't "click" late on a Sunday night, when help has traditionally been out of reach.
With this in mind, and with the power of generative AI now available, we set out to help those learners. At Kaplan Professional UK, we launched our TutorBot, which cut emails to Academic Support by 73% within just three weeks. The insight here isn't just about reducing volume: it’s about providing data-informed context that makes support proactive rather than reactive. When learners get their questions answered immediately, temporary confusion doesn’t harden into long-term disengagement.
Perhaps the greatest hurdle in digital learning is the “Open Question”: a complex task requiring critical analysis, where there is no simple right-or-wrong answer or multiple-choice response. For years, the industry assumption was that AI feedback could only handle multiple-choice metrics.
But through research and testing, we’ve found that AI can do far more than answer closed questions: it can thrive on open-question responses. Our Open Question Feedback at Kaplan Professional UK proved that AI can provide tailored feedback on sophisticated written responses that matches the quality of expert human review. In fact, 78% of our learners now rate this AI feedback as "at least as good as" that of human instructors. The lesson for leaders is clear: use AI for what it does best, tracking details and maintaining consistency. That’s not to say humans no longer matter. On the contrary, AI takes on the tedious parts of the job so that human energy can be reserved for what we do best: intuition, empathy, and judgement.
One of the less obvious but most powerful capabilities of TutorBot is its ability to surface the cognitive layer of learning, not just the final response.
Every interaction contains signals about how a learner is thinking: what they retrieve easily, where they hesitate, which misconceptions recur, how they respond to feedback, and whether they can transfer ideas into new contexts. Unlike traditional assessment, which captures performance at a single point in time, conversational AI captures learning in motion.
This matters because learning rarely fails due to a lack of effort. It fails when learners are overloaded, misdirected, overconfident, or stuck on faulty assumptions. By analysing patterns across TutorBot conversations, we can begin to see these breakdowns as they happen, not weeks later in an exam result.
This creates a new kind of learning data. Not just right or wrong, but indicators of confidence, persistence, strategy use, and conceptual stability. In effect, TutorBot allows us to observe whether learners are operating within an appropriate level of challenge and whether support is helping them progress or inadvertently masking gaps.
Because TutorBot can pose questions, probe understanding, and adapt its responses, it functions as a continuous formative assessment layer rather than a static support tool. Over time, this generates structured evidence about what learners can do independently, what they can do with guidance, and where progress stalls.
That distinction is critical. It allows us to move beyond crude metrics such as content completion or time-on-task and toward more meaningful indicators of readiness and risk. Patterns in question attempts, revision of answers, and response to feedback can help signal when a learner is consolidating understanding or when confidence is outpacing competence.
This opens the door to genuinely predictive learning analytics. Not prediction based on demographics or historical averages, but on observable learning behaviour. The same system that helps a learner think through a problem can also help educators identify who may need timely intervention and why.
Taken together, these capabilities point to a shift in how assessment itself is conceived. TutorBot is not designed to replace exams or human judgement. Instead, it turns assessment into a low-stakes, continuous process that actively drives learning rather than merely recording outcomes.
By combining immediate support with delayed insight, we can align technology with what decades of learning research tells us: that durable learning comes from challenge, feedback, and reflection, not from being told the answer.
This is where AI moves beyond efficiency and into intimacy. Not by mimicking empathy, but by paying close attention to how learners think, struggle, and grow.
Ultimately, the most compelling business case for this tech-human partnership is the impact on the learner's mindset. By providing 24/7 personalized guidance, we saw exam confidence jump from 85% to 93%.
When technology is used to empower student thinking rather than replace it, we build the one thing an algorithm cannot replicate: trust.