Preventing and Responding to AI-Generated Image Exploitation

In recent years, artificial intelligence (AI) has advanced rapidly, transforming sectors including education, entertainment, and communication.

While these technologies have many benefits, they also carry serious risks. AI-generated image exploitation is the harmful and, in some cases, illegal use of AI image-generation technology for purposes such as financial extortion, abuse and cyberbullying, and disinformation.

Understanding AI-Generated Image Exploitation

AI-generated images are made using machine-learning models such as Generative Adversarial Networks (GANs). These can create highly realistic pictures, and whilst these tools can be useful for things like art or virtual simulations, they can also be misused to create exploitative content, such as child sexual abuse material (CSAM). Creating, possessing or distributing this material is illegal in the UK.

Rising Issue in Schools

A small number of schools are reporting incidents where photos, often of girls, are being copied from their websites and social media channels. According to a recent BBC article, fake images are being used to blackmail schools, and in some cases the real-life voices of children have been used. This type of illegal material is also being used to blackmail children and force victims into further abuse.

Given the risks, it's important for everyone, especially in schools, to know about AI-generated content and how to handle it properly.

Identifying AI-Generated Content

Trust your instincts as you look at an image. If something feels off, investigate further and look for evidence. However, if you think the image shows CSAM, do not investigate yourself: the police have the lawful authority and the skills to do so.

For other images, watch out for these details:

  • Physical features: Check whether subjects have the right number of toes and teeth. Does the smile look too stock-photo-like or forced? Is the skin overly smooth?
  • Clothes patterns: AI sometimes gets patterns wrong. Look for inconsistencies like missing seam lines, odd knit-patterns, or overly smooth fabric.
  • Text: If the text looks strange when you zoom in, like illegible scribbles shaped like letters, it might be AI-edited.
  • Background: The program might focus on the main part of the image but ignore the background. Look for misshapen faces, unnatural bending furniture, or a too-perfect scene behind the subject.

Prevention Strategies

    The first step in preventing AI image misuse is education and awareness. Our partners at INEQE Safeguarding Group, an independent safeguarding organisation, have created a guide on preventing and responding to AI-generated image exploitation, which focuses on:

    1. Creating awareness-raising initiatives for students, staff and parents.
    2. Providing staff training on identifying and responding to online risks.
    3. Updating policies on data protection, image consent and online safety.
    4. Reviewing the consent process to ensure it remains relevant and reflects changes in how images can be created and shared.
    5. Auditing online platforms where student photos are publicly available.

    This guide helps schools prepare for, prevent, respond to and report incidents of AI-generated image exploitation that target the school community.

    Registered school and Local Authority customers can access further resources to support schools through our Safer Schools partnership. This includes a staff briefing, classroom lessons, a critical response checklist, and online training, all at no additional cost.

    Safer Schools – A Digital Safeguarding Ecosystem

    Staying updated with the latest news, trends, and risks online can be hard. Finding credible and relevant resources can be even more challenging.

    Safer Schools, a digital safeguarding ecosystem designed to educate, empower and help protect your school from safeguarding risks, is available at no extra cost to school and local authority customers who hold their insurance programme with Zurich Municipal.

    It's a one-stop shop for essential safeguarding information, advice, and guidance including:

  • The Safer Schools App.
  • Education resources for staff and parents.
  • A wide range of CPD-certified online safeguarding training courses and webinars for staff.

    Register for Safer Schools today.

    What’s Next?

    The government has announced four new laws to tackle the threat of AI-generated child sexual abuse images, making the UK the first country in the world to make it illegal to possess, create or distribute AI tools designed to generate CSAM, with a punishment of up to five years in prison.

    Upcoming legislation, such as the Crime and Policing Bill 2025, will further criminalise the possession of AI tools used to generate CSAM. The UK government is also reviewing laws related to deepfakes and other AI-driven online harms, signalling a focus on addressing both harmful content and the tools that enable its creation.

    As technology advances, it is essential to remain vigilant and address the ethical concerns surrounding AI. This approach ensures a secure and respectful environment for everyone. By collaborating, we can leverage the advantages of AI while guarding against potential threats.

    If you would like more information about our products, visit the Zurich Municipal website.

    Contact Zurich Municipal

    0800 232 1901