Guidance on the use of Generative AI in MATs and Schools

With great power comes great responsibility!

The use of Artificial Intelligence (AI) in schools is rapidly growing, offering numerous benefits such as enhanced efficiency, personalised learning, and improved decision-making. However, AI also presents challenges, including data protection risks, ethical considerations, the risk of bias, and concerns over transparency.

Given the rapid advancements in AI and the growing reliance on these technologies in education, it is crucial for multi-academy trusts (MATs) and schools to establish clear policies that balance innovation with safeguarding concerns.

This guidance provides a framework for the responsible use of AI in schools, ensuring compliance with UK GDPR and reflecting recommendations from the Information Commissioner’s Office (ICO), the Department for Education (DfE), and Ofsted.

What is Generative AI?

Generative AI refers to AI systems that can create new content, such as text, images, video, or audio. Unlike traditional AI, which follows explicit programming to complete specific tasks, generative AI uses machine learning to produce original outputs from input data.

The UK Government and the ICO define AI as technology that mimics cognitive functions associated with human intelligence, such as learning and problem-solving. AI is increasingly used in MATs and schools for both educational and administrative purposes, raising questions about responsible implementation, data security and the ethical implications of its use.

Open vs Closed AI Systems

Understanding the distinction between open and closed AI systems is essential when assessing risk and implementing AI within educational settings:

  • Open AI Systems: These include publicly available AI models (e.g., ChatGPT, Google Gemini) that continuously learn from user inputs. They may store, share, or learn from the information entered, including personal or sensitive data. Schools should avoid entering identifiable information into these tools to protect personal and special category data.

  • Closed AI Systems: These are proprietary AI solutions controlled by an organisation (e.g., school-specific AI tools integrated into a school’s Learning Management System). Closed systems offer greater security and compliance because external parties cannot access the data entered into them. If a school uses a closed AI tool to process personal data, this must be included in the school’s privacy notice.

Can Open AI Systems be Configured as Closed?

Some AI tools, such as Google Gemini, Microsoft Copilot, and other cloud-based AI models, are generally considered open AI systems by default. However, depending on their settings and the environment in which they are deployed, they can be configured to function as closed systems.

For example, within a Google Workspace for Education environment, Google Gemini can be configured to:

  • Operate within a restricted school domain, preventing data from being shared externally.
  • Be managed through Google Admin Console, where IT teams can disable data collection and adjust privacy settings.
  • Restrict AI usage to pre-approved applications, ensuring compliance with school policies.

In such cases, an AI tool that is generally open in a public setting may be functionally closed within a well-managed, restricted environment. Schools should consult their IT lead or Data Protection Officer (DPO) to determine whether an AI tool is configured to meet data protection requirements before use.

MATs and schools should assess AI applications before use to determine their suitability based on these classifications and apply appropriate safeguards, such as data minimisation and access controls.

Scope of AI in MATs and Schools

Pupil Usage:

AI has the potential to enhance learning through activities such as:

  • Personalised tutoring
  • Research support
  • Critical thinking development
  • Adaptive learning platforms.

However, students must be educated on the ethical use of AI, particularly in avoiding over-reliance and plagiarism. Acceptable Use Agreements should explicitly outline permissible and prohibited AI use.

Staff Usage:

Teachers and administrators can potentially use AI for activities such as:

  • Lesson planning
  • Curriculum development
  • Report writing (without identifiable student data)
  • Student performance analysis
  • Administrative tasks such as scheduling and resource management.

Staff must verify AI-generated content for accuracy and must not input personal or sensitive data into generative AI tools without prior assessment.

Governors and Leadership:

Governors and senior leadership teams play a crucial role in overseeing AI implementation, ensuring compliance with data protection laws, and updating policies as AI capabilities evolve.

Core Principles for AI Use

Transparency

MATs and schools must conduct Data Protection Impact Assessments (DPIAs) when AI tools process personal data. DPIAs help identify risks and establish mitigating strategies to protect sensitive student and staff information.

Schools should also be transparent about how they use generative AI tools, ensuring that staff, students, governors, parents, and carers understand how their personal data is processed.

Accountability

Roles and responsibilities for AI use must be clearly defined. Schools should:

  • Assign AI oversight responsibilities to senior leaders
  • Implement AI governance committees where appropriate
  • Ensure staff are trained in AI risk management and data protection.

Compliance with Data Protection Legislation

Schools must ensure that AI tools comply with UK GDPR and their Data Protection Policies. To protect data when using generative AI tools, schools should:

  • Seek advice from their DPO and IT lead before using AI tools
  • Verify whether an AI tool is open or closed before use
  • Ensure no identifiable information is entered into open AI tools
  • Acknowledge or reference AI use in academic work
  • Fact-check AI-generated results for accuracy before use.

AI and Data Protection in Schools

AI use must comply with UK GDPR and the Data Protection Act 2018 to safeguard personal data. Schools should also reserve the right to monitor AI usage to prevent misuse and to ensure compliance with academic integrity policies.

Data Privacy and Protection

The use of personal data in AI tools must be handled with extreme caution. Schools and MATs should adopt the following principles:

  • Avoid Using Personal Data in AI Tools: Personal data should not be entered into AI applications unless absolutely necessary.

  • Strictly Necessary Use: If personal data must be used within an AI system, the school or MAT must ensure:
    • Full compliance with UK GDPR and school data privacy policies
    • Appropriate safeguards, such as anonymisation or pseudonymisation, are in place (a simple pseudonymisation sketch follows this list)
    • Clear documentation of the processing, including a completed DPIA.

  • Transparency in Automated Decision-Making: Schools must be open about any use of AI in decision-making or profiling, ensuring pupils, parents, and staff understand how their data is processed.

  • Legal Basis for AI Data Processing: If AI tools process personal data, an appropriate lawful basis should be identified, and any resulting actions implemented, before use.

  • Security Measures: AI-generated data should be protected using encryption, access controls, and secure storage (an encryption sketch appears at the end of this section).
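
To make the pseudonymisation safeguard above concrete, here is a minimal sketch (Python 3.9+) of one way a school’s IT team might replace known pupil names with stable tokens before any text is sent to an AI tool, keeping the re-identification mapping on site. The names, salt value, and helper functions are illustrative assumptions, not part of any particular product, and pseudonymised data still remains personal data under UK GDPR.

    import hashlib
    import re

    # Illustrative sketch only: salt, names, and token format are example values.
    SALT = "change-me-per-school"  # keep secret; store separately from the data

    def pseudonym(name: str) -> str:
        """Derive a stable, non-identifying token for a pupil name."""
        digest = hashlib.sha256((SALT + name.lower()).encode("utf-8")).hexdigest()
        return f"PUPIL_{digest[:8]}"

    def pseudonymise(text: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
        """Replace known pupil names with tokens before text leaves the school."""
        mapping: dict[str, str] = {}
        for name in known_names:
            token = pseudonym(name)
            mapping[token] = name  # kept on site for local re-identification
            text = re.sub(re.escape(name), token, text, flags=re.IGNORECASE)
        return text, mapping

    if __name__ == "__main__":
        draft = "Aisha Khan has made strong progress in maths this term."
        redacted, mapping = pseudonymise(draft, ["Aisha Khan"])
        print(redacted)  # only the redacted text should ever reach the AI tool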

In addition, some generative AI tools collect and store further data, such as:

  • Location
  • IP address
  • System and browser information.

Schools must review how any data collected by generative AI tools is processed and stored, and must disclose this in their privacy notice.
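
To illustrate the “Security Measures” principle above, here is a minimal sketch of encrypting AI-related records at rest using the third-party Python cryptography package (its Fernet recipe). The file name, report text, and key handling shown are illustrative assumptions; in practice the key would live in a managed secrets store with appropriate access controls.

    from cryptography.fernet import Fernet

    # Generate once and store the key in a secure secrets store,
    # never alongside the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    report = b"AI-assisted draft report text (no identifiable pupil data)."

    # Encrypt before writing to shared storage.
    with open("report.enc", "wb") as f:
        f.write(fernet.encrypt(report))

    # Decrypt only when an authorised member of staff needs the content.
    with open("report.enc", "rb") as f:
        restored = fernet.decrypt(f.read())

    assert restored == report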

Ofsted Expectations for AI Use in Education

Ofsted does not directly inspect the quality of AI tools but considers their impact on safeguarding, educational quality, and decision-making within schools. Schools must ensure:

  • Safety, Security, and Robustness: AI solutions used in schools must be secure and protect user data, with mechanisms to identify and rectify bias or errors.
  • Transparency: Schools must be clear about how AI is used and ensure that AI-generated suggestions are understood.
  • Fairness: AI tools should be ethically appropriate, addressing bias related to small groups and protected characteristics.
  • Accountability: Schools must ensure clear roles and responsibilities for monitoring and evaluating AI.
  • Contestability and Redress: Staff must be empowered to override AI suggestions, ensuring human decision-making remains central. Complaints regarding AI errors must be appropriately addressed.

Leaders are responsible for ensuring that AI enhances education and care without negatively affecting outcomes.

Integration of Policies and Agreements

To ensure compliance, transparency, and ethical AI use, schools and MATs should update their existing policies to include provisions for AI.

SchoolPro TLC has provided a document to our schools with recommended text to add to key policies and privacy notices to support this process. This information forms part of our “Generative AI Guidance Pack” for schools and is included in the following document:

2 – Generative AI in MATs and Schools – Policy Updates.

If you have any questions or need assistance, please do not hesitate to contact your DPO team by email at dpo@schoolpro.uk or by phone on 01452 947633.

The SchoolPro TLC Team is here to help!

SchoolPro TLC Ltd (2025)

SchoolPro TLC guidance does not constitute legal advice.

SchoolPro TLC is not responsible for the content of external websites.