Tag Archives: AI in Education

Founder of Sapio Ltd and a leading voice in AI and digital strategy, Laura Knight empowers education leaders worldwide to harness technology responsibly, build inclusive cultures, and turn innovation into real impact.
February 4th

AI Is Already in the Classroom – The Question Now Is How Schools Lead

The debate about artificial intelligence in education has moved on. At the AI in Education Institute event hosted at York St John University, there was little sense that schools were still deciding whether AI belonged in the system. That question, most delegates agreed, has already been answered in practice. The sharper, more pressing issue now is leadership: how schools make sense of AI’s growing presence, how they set boundaries, and how they retain professional judgement in a period of rapid change.

From the outset, the tone was measured rather than breathless. AI was discussed not as a future disruption, but as a present reality. More than half of teachers have used an AI tool for school work in the past week, according to figures referenced during the event. Only a small minority say they have never used one at all. AI, in other words, is no longer an experiment at the edges of education. It is already embedded in planning, feedback, administration and decision-making – often quietly, and not always consistently.

The keynote address from Laura Knight framed the challenge with clarity. The problem, she argued, is not access to technology, but confidence in its use. Teachers are experimenting, sometimes highly effectively, but not always openly. While many feel comfortable using AI tools, fewer feel fully at ease discussing how they use them with colleagues. That gap – between practice and shared understanding – is where risk, inconsistency and anxiety can take hold.

Rather than advocating sweeping policies or rapid roll-outs, the emphasis throughout the morning was on deliberate leadership. Schools were encouraged to be clear about purpose before they worry about platforms. Why is AI being used? Which problems is it intended to solve? And which decisions must remain firmly human? Without that clarity, there is a danger that schools chase the “next big thing”, adopt tools at speed, and lose sight of what teaching and learning are actually for.

Several speakers returned to the same underlying concern: velocity. AI systems evolve far faster than traditional school improvement cycles. Leaders are being asked to make decisions in an environment where accountability measures are still rooted in older models, and where professional development has not always kept pace. The result can be fragmented adoption – pockets of innovation alongside uncertainty, and sometimes silence.

From here, the conversation became more searching. One slide posed a deceptively simple question: are schools “winning or losing” when it comes to AI? Measuring success purely in terms of efficiency or productivity, delegates were warned, risks missing the point. Education is not an optimisation problem to be solved.

Humans, as one slide put it, are social, creative and brilliant – but also inconsistent, messy and flawed. That reality, speakers argued, is not something technology should attempt to erase. AI can support human decision-making, but it cannot replace the values that underpin it. Leadership, therefore, is not about removing uncertainty, but about holding it responsibly.

Much of this discussion centred on what was described as “the line”. Above it sit practices that are inclusive, transparent, equitable and values-aligned. Below it lie approaches that are opaque, exclusionary or misaligned with a school’s purpose. The difficulty is not that the line exists, but that it shifts as tools evolve. Drawing it – and redrawing it – is an ongoing act of judgement, not something that can be outsourced to software or policy templates.

This raises uncomfortable questions. Who decides what counts as appropriate use? Who is accountable when AI use drifts into compliance theatre, where policies exist on paper but not in practice? And who ultimately bears the consequences when technology reshapes learning in ways that were not fully intended?

Concerns about “intellectual offloading” also surfaced repeatedly. AI can reduce workload, but there is a difference between support and substitution. When systems begin to shape thinking rather than assist it, schools risk losing professional agency. Risks such as function creep, surveillance capitalism and long-term dependency were not presented as inevitabilities, but as outcomes that require active leadership to avoid.

From here, the focus turned inward – towards performance, feedback and growth. Used carefully, AI can act as a thinking partner rather than a shortcut. Leaders were shown examples of how it can create safe spaces for rehearsal and reflection: role-playing difficult conversations, stress-testing decisions, or drafting responses to challenging scenarios. The value lies not in producing perfect answers, but in sharpening judgement.

This reframes feedback. Instead of being occasional and high-stakes, feedback becomes ongoing, low-risk and developmental. Leaders can draft, critique, iterate and reflect – building confidence through repetition. Mistakes happen privately; learning happens continuously. But this only works, speakers cautioned, if leaders remain firmly in control of the process. AI can surface perspectives, but it cannot define priorities or values. That responsibility remains human.

The final section of the event widened the lens further still, returning to a question that had underpinned every discussion: what does it really mean to do the work? Not to adopt tools, write policies or meet compliance thresholds – but to take responsibility.

The closing focus on data stewardship and digital sovereignty made clear that this is where leadership becomes most visible. Schools were encouraged to move beyond passive acceptance of technology and towards active stewardship: mapping where data flows, assessing who has access, clarifying purposes, stress-testing assumptions and setting boundaries. Safeguarding, in this context, extends beyond physical and online safety to include pupils’ digital identities over time.

Delegates were urged to treat data as something held in trust, not something exchanged for convenience. That means asking difficult questions of suppliers, understanding contractual language, and resisting systems built on opacity or behavioural surplus. Vendor lock-in, it was argued, is not just a technical risk, but an ethical one – limiting future choice and narrowing professional autonomy.

Trust, speakers concluded, is built through visibility. Clear, accessible policies for staff, pupils and families are not bureaucratic add-ons, but signals of intent. Stewardship, when done well, becomes a public act of leadership – one that reassures communities that innovation is being handled with care rather than haste.

As the AI in Education Institute event at York St John University drew to a close, the mood was neither alarmist nor celebratory. It was pragmatic. AI is already reshaping education. The real question is whether schools lead that change with clarity and purpose, or allow momentum to make decisions for them. Doing the work, in this moment, means choosing values over novelty, judgement over speed, and responsibility over delegation.

Dr Beth Lane – York St John University Launches National Institute of AI Education
February 4th

York St John University Launches National Institute of AI Education

York St John University has hosted the launch of the new Institute of AI Education, bringing together teachers, researchers, school leaders and policymakers to consider how artificial intelligence should be understood and used across England’s education system.

The event, held at the university’s Creative Centre in York, marked the formal introduction of a research-led initiative focused on embedding AI literacy, critical thinking and learner agency across classrooms, teacher training and leadership practice. Organisers described the institute as a “work in progress by design”, inviting schools and educators to help shape its direction from the outset.

Based in York, the institute will operate on a national “hub and spoke” model, supporting regional networks while working closely with schools, universities and researchers. Rather than positioning AI as a standalone subject, the approach aims to weave AI literacy through existing curricula.

Opening the event, speakers said the central question for education was no longer whether AI would have an impact, but how schools and systems could respond responsibly, equitably and in the long-term interests of children and young people.

That focus on people rather than technology set the tone for the day. In their founders’ address, co-founders Beth Lane and Narinder Gill shared personal journeys that reflected the institute’s wider values.

Dr Beth Lane’s Journey: From Leaving School at 16 to a PhD in Computer Science

Dr Lane spoke about leaving school at 16 and starting work on a supermarket checkout before moving into industry, completing an apprenticeship and later returning to education to study computer science and complete a doctorate. Her story, she said, was a reminder that education systems must recognise potential at every stage of life, not only through traditional academic routes.

Narinder Gill is a people-first education leader and executive coach, focused on building resilient communities and helping schools and systems drive meaningful, lasting change.

Narinder Gill described a career shaped by public service and education policy, including work with the Department for Education, the Association of Education Advisers and regional leadership across Yorkshire and the Humber. Her experience highlighted the importance of strong systems, trusted professionals and local communities in turning innovation into meaningful change in classrooms.

Together, the founders argued that AI cannot be embedded effectively in education without being grounded in lived experience – of teachers under pressure, of learners who do not fit neat categories, and of communities navigating rapid technological change.

The institute’s ambition, they said, is not to accelerate adoption for its own sake, but to support thoughtful, ethical progress. Its work will focus on helping children learn not just how to use AI tools, but how to question them, understand their limitations and reflect on their own thinking and emotional responses.

As the launch drew to a close, the message was clear: the future of AI in education will not be shaped by technology alone, but by people, diverse pathways and a shared commitment to evidence, trust and collaboration.