SchoolPro Spotlights: AI in Schools — Friend or Foe?

Is AI our Friend or Foe?

The adoption of AI in schools has accelerated rapidly, from lesson planning and report writing to helping with homework, supporting CVs, and even creating art and music. In our last article in the SchoolPro Spotlights: AI in Schools series, Guidance on the use of Generative AI in MATs and Schools, we looked at how schools can remain compliant as they navigate this fast-changing landscape, where there now seems to be an AI tool for almost every aspect of school life.

But at what cost? In this article, we explore whether the benefits of AI in education truly outweigh the risks, ask whether we should be worried about the long-term implications for teachers, students, and school leaders, and introduce the HARP framework for savvy school leaders taking on the AI trend.

Friend: 

First, let’s look at some of the benefits of AI in an educational environment.

1. Supporting teachers to improve learning

AI opens up a huge library of teaching resources that teachers can draw from. It can suggest fresh ways to explain difficult topics, generate practice activities, and even adapt materials for different learning levels. Instead of one-size-fits-all lessons, it can adapt to each student’s pace, interests, and needs. That means tricky topics become easier to grasp and familiar subjects feel more exciting. When learning is tailored like this, students are more likely to stay engaged, enjoy the process, and feel the benefit of personalised learning in the classroom, ultimately leading to improved student outcomes.

2. Taking the strain out of routine tasks

AI can take care of the time-consuming jobs that eat into a teacher’s day, like grading, lesson planning, scheduling, report writing, and tracking attendance. With those routine tasks handled, teachers get back the time and energy to focus on what really counts: teaching, connecting with their students, and bringing lessons to life.

3. AI has no time constraints

AI doesn’t clock out at the end of the school day. It’s there 24/7, giving students and teachers alike access to education tools and support whenever they need it. Teachers benefit too, knowing their students have extra help outside the classroom. While home support from parents and guardians remains crucial, AI does have endless extra time to help students without having to worry about making dinner and ironing school uniforms!

4. Enhancing future career prospects for students

Bringing AI into education doesn’t just support classroom learning. Learning how to integrate AI tools into tasks helps prepare students for the future workplace. As AI becomes a bigger part of almost every industry, students who learn alongside this growth will gain the skills and confidence they need to thrive in the job market, just as their predecessors did with the rise of the internet and smartphones.

Foe:

With every benefit a product or service offers, there often comes a downside, and the use of AI tools is no exception.

1. Data Privacy Risks

A key risk with AI in education is data privacy. Students and teachers may enter personal or sensitive information without realising how it will be processed. That data can be stored, shared, or even used to train the application or future external AI systems. This makes data protection, security, and transparency essential when adopting AI in schools. 

2. AI Psychosis

A startling new trend is emerging in our increasingly digital world: people are forming deep emotional bonds with AI chatbot systems like ChatGPT. Many young people are spending hours interacting, confiding, and even building ‘relationships’ with these tools. This growing dependence is being dubbed ‘AI Psychosis’ across media and social platforms. While not a clinical diagnosis, the term reflects a concerning shift in human behaviour, where reliance on AI begins to blur the lines between reality and artificial companionship. Recently, the tragic death of a UK teenager was linked to heavy use of ChatGPT, sparking urgent debates about online safety, mental health, and the ethical use of AI.

3. AI Hallucinations

A well-known issue, often called “AI hallucinations”, occurs when the system generates information that sounds convincing but is completely false. The content produced by AI tools cannot always be trusted for accuracy. For students, this can mean relying on incorrect facts in their research. For teachers, it could even lead to presenting misinformation in lessons. This risk makes fact-checking and critical thinking essential whenever AI is used in education.

4. Sharing responses linked to Data Leaks

You’ve just used AI to draft a report and found it incredibly helpful. Naturally, you want to share it with a colleague so they can use it as a template for their own writing. Seems harmless enough… or is it?

Recent reports have revealed serious privacy risks when sharing AI chatbot conversations. With ChatGPT, people discovered that shared chats had been indexed by Google if the “share” button was used, creating public web pages that search engines can crawl. Similarly, Grok, Elon Musk’s AI chatbot, made headlines after more than 370,000 user conversations (including health queries, private thoughts, and even passwords) appeared in search results when shared links were exposed to web crawlers.

These cases highlight a growing risk: what feels like a private exchange with an AI can quickly become public. To stay safe, avoid using share links for sensitive content, and always check your privacy settings. If you need to pass AI-generated text to a colleague, copy it into a secure document or email instead of relying on public share features.

Building Safe and Positive AI Experiences: Introducing HARP

There is no doubt AI will bring vast benefits for schools. Students will prosper, teachers will have more time to spend on teaching and interacting with students, administrators will be freed from mundane tasks to take on more meaningful projects, and money will be saved. So how do we reap the benefits while keeping students, teachers, and even the school itself safe from privacy risks and breaches? By thinking “HARP”.

H: Human Intervention

It’s important not to rely solely on AI-generated information. AI isn’t always accurate, so human oversight is essential before trusting or using any AI response. If in doubt, verify the information with trusted sources you have used previously to ensure its reliability.

A: Age Appropriate

When introducing AI tools in the classroom, ensure they are age-appropriate and aligned with students’ educational needs. Monitor how students use AI chatbots or other applications, as recent leaked documents from Meta’s GenAI Content Risk Standards revealed that some AI systems may engage children in conversations that are romantic or sensual. It’s vital to remain vigilant and guide students towards safe, relevant, and appropriate use of AI technology.

Talk openly with students about the safety risks associated with using AI tools. Encourage them to limit the amount of time they spend interacting with AI to reduce the risk of developing dependency. Educating students on balanced and mindful AI use helps promote healthier, safer habits.

Foster an environment of digital literacy and critical thinking when using AI tools. Remind teachers and students to evaluate outputs and not take responses at face value – “Don’t Believe The Misinformation”.

R: Risk Assessments and Policy

Before implementing AI tools, ensure that your school or Trust has conducted thorough risk assessments that weigh the educational benefits against potential privacy and security concerns. Review and update existing policies and procedures to explicitly address AI use, outlining the safeguards and measures in place to protect data privacy and reduce associated risks.

Just like any third-party data processor, AI tools must meet GDPR standards. Before using them, ensure they have strong security measures and clear data handling policies, and that they comply with privacy laws. Treat AI tools with the same scrutiny you would apply to any other supplier.

P: Privacy and Security Settings 

Set Your AI Tool to Private: Many AI chatbots allow you to adjust privacy settings. Use the toggle switch to set your chats to private, especially if you plan to share conversations with colleagues. This helps protect your data from being publicly accessible.

Delete Chats and Turn Off Memory: Regularly delete your chat history and disable memory features to limit the amount of data the AI collects about you. AI systems build profiles based on your interactions – such as your interests and question patterns – that could reveal sensitive information like religious, political, or social preferences, even if you don’t explicitly provide personal details.

Practice Online Safety: Treat AI tools like any other online platform. Consider what security measures are in place to protect your information. Review the company’s privacy policies to understand how your data is used – especially whether it’s leveraged to train and improve AI models.

Report Concerns: Have clear guidance on how to report any inappropriate AI content or misuse.

For organisations already working with us, the AI Generative Guidance Pack is available now in Global Documents on your Data Protection Portal. This resource will help you stay compliant, safe, and confident when using AI tools. Not a customer yet? Get in touch with us today to find out how our expert SchoolPro Data Protection services can help your organisation adopt AI responsibly, protect data and stay ahead of regulatory requirements.

SchoolPro TLC Ltd (2025)

SchoolPro TLC guidance does not constitute legal advice.

SchoolPro TLC is not responsible for the content of external websites.