Report
A pathway towards responsible and ethical AI in education
Introduction
When this report was first published in 2021, AI was an emerging technology. Institutions could still choose whether or not to adopt it, and the decision often came down to a choice between experimenting in the vanguard and exercising caution. Since then, the landscape has transformed.
Generative AI tools such as ChatGPT, Microsoft Copilot, and education-specific platforms are now integrated into virtual learning environments, productivity suites, and everyday workflows. For many staff and students, AI is not a novelty but a routine part of learning, teaching, and administration.
At the same time, regulatory frameworks have matured. The EU has agreed a comprehensive Artificial Intelligence Act, UNESCO has published its global recommendations, and the UK government has set out its sector-led approach through the AI white paper.
The principles that guided the earlier report remain just as valid. Fairness, transparency, accountability, and privacy are still central. What has changed is the context in which they must be applied. As in the original report, we take the view that abstract frameworks, while valuable, can be too detached from practice. To move from philosophy to pragmatism, this report once again takes a strategic, culture-focused approach. By asking a small set of practical questions, institutions can cut through the fog and focus on what matters most.
The four questions we propose are as follows:
- Does this way of using AI fit our institution’s objectives?
- How can it be implemented in line with our culture and processes?
- Are we ready for it?
- Does it raise particular issues we must address?
These questions will help institutions decide how to engage with AI responsibly, ethically, and in line with their values.
Step one: does this way of using AI fit our institution’s objectives?
The first test of any proposed use of AI is whether it advances the objectives of the institution. But objectives are rarely simple. Most institutions pursue a blend of goals: to widen participation, to ensure fairness, to foster student independence, to strengthen reputation, to improve efficiency, to sustain financial resilience, and increasingly, to reduce environmental impact. Rarely do these goals all point in the same direction. Choosing how to adopt AI often means balancing one against another.
This is where institutional clarity of purpose matters most. If workload reduction is a priority, then generative AI may be valuable in streamlining administrative tasks. Yet that same use might be judged differently if the institution’s overriding aim is to strengthen the role of human relationships in teaching. If inclusivity is central, then translation tools or accessibility aids powered by AI may be a good fit — but only if they are implemented in ways that respect privacy and student trust. Aligning AI’s use with an institution’s values requires careful thought: leaders need to weigh how a particular application advances some goals without undermining others.
Evidence should underpin these choices. Institutions should ask not only what a tool claims to achieve, but what proof exists that it delivers. Small-scale pilots, rigorous evaluation, and feedback from staff and students can help ensure decisions are not based on hype or vendor promises. Defining success in advance — whether improved retention, reduced marking time, or enhanced student satisfaction — makes it possible to judge whether the tool is genuinely serving institutional aims.
The perspectives of students themselves are also crucial. They are, after all, the ultimate measure of whether an institution’s objectives are being realised. Our student perceptions of generative AI report found that while most students value AI as a way to study more efficiently and accessibly, many remain concerned about fairness and academic integrity. These views provide an important lens: if students feel a use of AI compromises the values of equity and trust, it is unlikely to be judged as aligning with institutional objectives, however efficient it may appear.
In short, the question is not whether AI can be made to fit our objectives in the abstract, but whether — when tested against competing priorities, supported by evidence, and seen through the eyes of students — it helps the institution to live up to its purpose.
Step two: how can it be implemented in line with our culture and processes?
Even when AI proposals seem to support institutional objectives, their success will depend on the way they are introduced. Different institutions have different cultures, and with them, different processes. Some thrive on piloting new tools quickly, learning from experience, and refining their approach. Others value predictability, preferring slower, staged implementation with thorough consultation. Neither approach is superior: what matters is that the chosen process fits the institution’s character.
The key is to ensure that stakeholders feel involved and respected. For some institutions this may mean co-design workshops or rapid pilots; for others, structured committees and formal consultation. Transparency is critical: staff and students should understand not only what is being introduced, but why, and how it reflects the ethos of their institution. In all cases, adoption should be seen as iterative, with opportunities to gather feedback and adjust course. By aligning implementation processes with cultural temperament, institutions give AI initiatives the best chance of being accepted, sustained, and trusted.
Step three: are we ready for it?
Readiness used to mean technical capacity: servers, systems, and infrastructure. Today, AI services are often cloud-based and widely available, so readiness is less about hardware and more about people, governance, and culture.
Institutions must ask whether staff and students are prepared to use AI critically and responsibly. AI literacy is now essential. Students need to understand not just how to use AI tools, but how to question their outputs, spot bias, and avoid overreliance. Staff need support to integrate AI into their tasks in ways that enhance rather than replace their professional expertise.
Readiness also means governance. Clear policies on acceptable and unacceptable uses are crucial. Ideally, these should be embedded within existing policies rather than dealt with separately. Students and staff should be involved in shaping them, so that the policies stay up to date and continue to reflect their concerns.
Institutions that invest in skills, governance, and dialogue will be better placed to use AI responsibly. Without such preparation, adoption risks being uneven, inconsistent, and potentially harmful.
Step four: does it raise particular issues we must address?
Finally, institutions must examine the risks and challenges that accompany AI adoption. These have expanded since the first edition of this report.
One is overreliance and deskilling. If students lean too heavily on generative AI to write essays or solve problems, they may lose opportunities to develop critical skills. Teachers, too, may risk professional deskilling if they accept AI-generated materials uncritically.
Another is data and privacy. Students may input sensitive personal data into external systems without understanding where it goes or how it is stored. Institutions must provide clear guidance, favour trusted providers, and set boundaries for safe use.
Bias and fairness remain enduring issues. AI systems trained on biased data may replicate existing inequalities, disadvantaging particular groups of students. Continuous monitoring, auditing, and human oversight are essential safeguards.
Furthermore, there are environmental concerns. Large AI models consume significant energy and water. Our recent analysis highlights the scale of these impacts, reminding us that sustainability must form part of the ethical debate. Institutions should weigh environmental costs alongside educational benefits, and press vendors to be transparent about their sustainability practices.
Finally: the details
Responsible adoption of AI is not only about strategy, culture, and readiness; it must also operate within a rapidly evolving legal and regulatory landscape. Even so, the familiar principles of fairness, privacy, and respect for individuals and society underpin the legal frameworks for AI and data.
Guidance from Ofqual, JCQ, Qualifications Wales, and Ofsted points to a consistent message: AI in education must never supplant human judgment in areas that directly affect students’ futures.
Ofqual says AI can help process or analyse information, but a human must make the final call on grades or assessments. This protects fairness and accountability — no student should be told 'the computer decided.'
JCQ warns against overreliance. AI should be a tool within the assessment process, not the process itself: teachers and exam boards must still check, verify, and interpret results.
Qualifications Wales rules out AI replacing invigilators or generating exams unsupervised. This protects the integrity of assessments and ensures that the process is transparent and credible.
Ofsted stresses that AI should support professional judgment, not override it. This reinforces that teachers and leaders remain central, and AI should enhance, not erode, their role.
Taken together, this guidance shows a clear pattern:
- Human oversight is non-negotiable. AI may assist but cannot replace human decision-making in high-stakes areas.
- AI is a support, not a substitute. Its role is to make processes more efficient, not to remove professional responsibility.
- Trust and integrity drive the limits. By keeping humans in the loop, regulators aim to protect fairness, transparency, and accountability.
The underlying message is that law and regulation set guardrails, but the foundation is ethical responsibility. These bodies are saying: use AI, but never forget that human judgment, professional responsibility, and fairness must remain at the centre.
Internationally, the EU’s Artificial Intelligence Act (2024) is now in force. It classifies AI systems into four categories of risk: unacceptable, high, limited, and minimal. In education, the high-risk areas include admissions, grading, and access to qualifications: the decisions that can shape a person’s life. The UK is not bound by this law, but it is still important to understand, because it sets a global benchmark and shows where AI needs the greatest care.
In summary, whether in UK guidance or European law, the foundation is ethics. The rules exist to protect fairness, privacy, and respect for individuals and society. Laws can enforce this, but ethics is what guides and informs good practice, day to day.
Conclusion
The four guiding questions outlined in the original report remain a powerful tool, provided they are reinterpreted for this new context.
- Does this way of using AI advance our objectives?
- How can it be implemented in line with our culture and processes?
- Are we ready in terms of skills, governance, and capacity?
- What particular issues and risks must we address?
By asking these questions, institutions can cut through the abstraction of ethical debate and focus on the practical choices that matter. The principles of fairness, transparency, accountability, and privacy remain central. Alongside them, concerns such as environmental sustainability and overreliance have taken on heightened importance.
If guided by strategy, culture, and values, AI can be embedded in ways that truly advance the objectives of institutions and the fundamental promises of education.
Further information
The original version of this report, A pathway towards responsible, ethical AI (pdf), was published in October 2021.
About the authors

