How Schools Can Use AI Safely: Policies, Risk Assessments and Safeguarding Measures
Last Updated: 5th December 2025
Learn how schools can use AI safely with clear policies, risk assessments and safeguarding measures. Practical steps to protect students and support responsible AI use.
The introduction of the Online Safety Act’s child protection measures in July 2025 has raised expectations around safeguarding in schools. Yet despite this shift, the Department for Education’s (DfE) guidance on artificial intelligence in education remains advisory rather than mandatory, leaving schools without clear, enforceable rules.
This lack of clarity has made it difficult for education settings to understand how AI should sit within existing safeguarding, online safety and data protection practices.
Mary-Ann, a safeguarding expert at Virtual College, is urging schools to take proactive steps now. She advises leaders to update policies, conduct AI-specific risk assessments, and closely monitor how staff and pupils are accessing generative AI technologies.
“We can’t ignore the potential benefits of AI for teaching and learning, as outlined by the DfE,” says Mary-Ann. “But schools must be clear about data protection, copyright and safeguarding risks to ensure students are not exposed to harm.”
Many schools are still relying on outdated digital policies created before the rapid rise of generative AI. These often fail to address risks such as deepfake content, misinformation, algorithmic bias or inappropriate prompts. Below, Mary-Ann outlines practical, actionable steps schools can take to ensure AI is used safely, responsibly and effectively.
1. Conduct AI-Specific Risk Assessments
“Many AI tools aren’t suitable for students because they lack proper safeguards,” Mary-Ann explains. “Schools must enforce minimum age requirements and take a whole-school approach to reviewing risks. Policies need regular reviewing as new evidence about AI’s impact on young people emerges.”
Schools should refer to the DfE’s AI policy and product safety requirements, alongside the statutory Keeping Children Safe in Education (KCSIE) 2025 guidance.
Safeguarding Training for Schools should cover how to integrate risk assessments and review hazards effectively.
2. Review and Update Web-Filtering and Monitoring Systems
Schools and colleges are already required to have strong web-filtering systems under the DfE’s standards and statutory safeguarding obligations. But AI introduces new challenges.
“The instant nature of generative AI means filters must work quickly and accurately to identify risks,” says Mary-Ann. “Staff also need clear guidance on how to respond when harmful or inappropriate content slips through.”
Schools should review the DfE’s filtering and monitoring standards and undertake Online Safety training.

3. Ensure a Lawful Basis for Data Collection – Including Sandboxed Environments
AI tools often process user input on external servers, meaning personal data can be exposed if precautions aren’t taken. Sandboxed environments, in which inputs stay within a closed system and are not used to train the underlying model, reduce this exposure. Schools must ensure there is a lawful basis for processing under UK GDPR and communicate privacy information clearly, in age-appropriate language.
“The DfE expects schools to minimise the data stored by AI tools and to be transparent about how and where that data is processed,” says Mary-Ann.
“If AI tools are used to process personal data, they must comply with UK GDPR.”
4. Teach Students to Use AI Ethically and Responsibly
A clear AI policy is essential for both staff and pupils. This includes:
Which generative AI tools are allowed
How they should be used
What constitutes inappropriate use
Consequences of misuse
“Teaching students online safety and the ethical, responsible use of AI is vital,” says Mary-Ann. “Staff should also receive regular training so they can confidently integrate AI into learning while supporting students to use these tools safely.”
5. Engage and Educate Parents
Recent research from Internet Matters highlights a worrying gap in parental understanding. While 78% of children say their parents have talked to them about AI, only 34% of parents have discussed how to judge whether AI-generated content is truthful.
“Schools can help bridge this gap by clearly communicating their AI policies and offering practical guidance,” says Mary-Ann. Newsletters, online safety drop-ins and parent resources all help families support children’s safe use of AI at home.
A Proactive Approach Is Essential
Even without mandatory AI regulations in place, schools can’t afford to wait. By conducting risk assessments, updating filtering systems, ensuring GDPR-compliant data processing, educating students, and engaging parents, education settings can create an environment where AI enhances learning without compromising safety.
Virtual College continues to support schools with training and resources to help them embed safe, responsible and effective AI use across their whole community.
Take proactive steps today: explore our courses and equip your staff with the skills they need.