The debate about artificial intelligence in education has moved on. At the AI in Education Institute event hosted at York St John University, there was little sense that schools were still deciding whether AI belonged in the system. That question, most delegates agreed, has already been answered in practice. The sharper, more pressing issue now is leadership: how schools make sense of AI’s growing presence, how they set boundaries, and how they retain professional judgement in a period of rapid change.
From the outset, the tone was measured rather than breathless. AI was discussed not as a future disruption, but as a present reality. More than half of teachers have used an AI tool for schoolwork in the past week, according to figures cited during the event. Only a small minority say they have never used one at all. AI, in other words, is no longer an experiment at the edges of education. It is already embedded in planning, feedback, administration and decision-making – often quietly, and not always consistently.
Laura Knight’s keynote address framed the challenge with clarity. The problem, she argued, is not access to technology, but confidence in its use. Teachers are experimenting, sometimes highly effectively, but not always openly. While many feel comfortable using AI tools, fewer feel fully at ease discussing how they use them with colleagues. That gap – between practice and shared understanding – is where risk, inconsistency and anxiety can take hold.
Rather than advocating sweeping policies or rapid roll-outs, the emphasis throughout the morning was on deliberate leadership. Schools were encouraged to be clear about purpose before worrying about platforms. Why is AI being used? Which problems is it intended to solve? And which decisions must remain firmly human? Without that clarity, there is a danger that schools chase the “next big thing”, adopt tools at speed, and lose sight of what teaching and learning are actually for.
Several speakers returned to the same underlying concern: velocity. AI systems evolve far faster than traditional school improvement cycles. Leaders are being asked to make decisions in an environment where accountability measures are still rooted in older models, and where professional development has not always kept pace. The result can be fragmented adoption – pockets of innovation alongside uncertainty, and sometimes silence.
From here, the conversation became more searching. One slide posed a deceptively simple question: are schools “winning or losing” when it comes to AI? Measuring success purely in terms of efficiency or productivity, delegates were warned, risks missing the point. Education is not an optimisation problem to be solved.
Humans, as one slide put it, are social, creative and brilliant – but also inconsistent, messy and flawed. That reality, speakers argued, is not something technology should attempt to erase. AI can support human decision-making, but it cannot replace the values that underpin it. Leadership, therefore, is not about removing uncertainty, but about holding it responsibly.
Much of this discussion centred on what was described as “the line”. Above it sit practices that are inclusive, transparent, equitable and values-aligned. Below it lie approaches that are opaque, exclusionary or misaligned with a school’s purpose. The difficulty is not that the line exists, but that it shifts as tools evolve. Drawing it – and redrawing it – is an ongoing act of judgement, not something that can be outsourced to software or policy templates.
This raises uncomfortable questions. Who decides what counts as appropriate use? Who is accountable when AI use drifts into compliance theatre, where policies exist on paper but not in practice? And who ultimately bears the consequences when technology reshapes learning in ways that were not fully intended?
Concerns about “intellectual offloading” also surfaced repeatedly. AI can reduce workload, but there is a difference between support and substitution. When systems begin to shape thinking rather than assist it, schools risk losing professional agency. Risks such as function creep, surveillance capitalism and long-term dependency were not presented as inevitabilities, but as outcomes that require active leadership to avoid.
The focus then turned inward – towards performance, feedback and growth. Used carefully, AI can act as a thinking partner rather than a shortcut. Leaders were shown examples of how it can create safe spaces for rehearsal and reflection: role-playing difficult conversations, stress-testing decisions, or drafting responses to challenging scenarios. The value lies not in producing perfect answers, but in sharpening judgement.
This reframes feedback. Instead of being occasional and high-stakes, feedback becomes ongoing, low-risk and developmental. Leaders can draft, critique, iterate and reflect – building confidence through repetition. Mistakes happen privately; learning happens continuously. But this only works, speakers cautioned, if leaders remain firmly in control of the process. AI can surface perspectives, but it cannot define priorities or values. That responsibility remains human.
The final section of the event widened the lens further still, returning to a question that had underpinned every discussion: what does it really mean to do the work? Not to adopt tools, write policies or meet compliance thresholds – but to take responsibility.
The closing focus on data stewardship and digital sovereignty made clear that this is where leadership becomes most visible. Schools were encouraged to move beyond passive acceptance of technology and towards active stewardship: mapping where data flows, assessing who has access, clarifying purposes, stress-testing assumptions and setting boundaries. Safeguarding, in this context, extends beyond physical and online safety to include pupils’ digital identities over time.
Delegates were urged to treat data as something held in trust, not something exchanged for convenience. That means asking difficult questions of suppliers, understanding contractual language, and resisting systems built on opacity or behavioural surplus. Vendor lock-in, it was argued, is not just a technical risk, but an ethical one – limiting future choice and narrowing professional autonomy.
Trust, speakers concluded, is built through visibility. Clear, accessible policies for staff, pupils and families are not bureaucratic add-ons, but signals of intent. Stewardship, when done well, becomes a public act of leadership – one that reassures communities that innovation is being handled with care rather than haste.
As the AI in Education Institute event at York St John University drew to a close, the mood was neither alarmist nor celebratory. It was pragmatic. AI is already reshaping education. The real question is whether schools lead that change with clarity and purpose, or allow momentum to make decisions for them. Doing the work, in this moment, means choosing values over novelty, judgement over speed, and responsibility over delegation.