Are young people in the UK rejecting AI? The truth behind “anti-AI” sentiment

Reality check: “rejecting AI” is not the overall trend

What the best UK evidence suggests

The strongest UK datasets point to a more complicated picture than outright rejection. Young people are widely exposed to, and often actively using, AI. At the same time, they are more vocal about the downsides and more likely to "selectively refuse" certain uses: AI teachers, AI art, AI "news", or AI that trains on creatives' work without consent.

Here are a few anchor data points:

  • Half of UK children aged 8–17 say they’ve used AI tools (up year-on-year in Ofcom’s tracking). 
  • Among 8–17s who use AI, use for learning and school has risen (and “for fun” remains common). 
  • UK university/college students are using AI for study, but simultaneously report anxieties about misinformation/deepfakes, privacy, and employability. 
  • The wider UK environment is one of high adoption + high concern (which shapes youth attitudes too). For example, a major GB pollster reported 69% of adults use AI for at least one purpose while large majorities worry about irresponsible use and misinformation. 

So if you're hearing "Gen Z are rejecting AI", the more accurate version is usually:

Many young people aren’t rejecting AI outright — they’re rejecting specific AI uses they see as unfair, unsafe, low-quality, or invasive.


What “anti-AI” looks like in real life

The most common “rejection behaviours” (not total avoidance)

In UK schools, colleges, creative communities, and online spaces, “anti-AI” sentiment often shows up as:

  • Opting out of AI content (muting “AI slop”, unfollowing accounts that flood feeds with AI images/videos).
  • Refusing AI for personal/identity reasons (e.g., “don’t clone my voice”, “don’t train on my work”, “don’t deepfake people at school”).
  • Pushing back when AI replaces humans (especially in education, customer service, or creative work).
  • Selective use: using AI to summarise notes, draft plans, or practise interview answers — but not trusting it for news, health, or sensitive decisions.

https://www.teachingenglish.org.uk/sites/teacheng/files/images/GettyImages-1585453140.jpg

Why some young people choose to reject (some) AI

1) Trust has been damaged by deepfakes, scams, and “what’s real?”

Students report worries about misinformation and deepfakes, including not knowing how to spot them reliably. 

Ofcom’s children’s research also flags risks around AI chatbots and the wider range of uses children experiment with. 

Real-world effect: some teens respond by avoiding AI-generated media, distrusting online “proof”, or limiting what they share publicly.

2) Privacy worries and feeling “watched”

UK public polling repeatedly lists privacy/data security and trust as major barriers to using generative AI. 
Students echo this, worrying about how data might be used to predict or influence behaviour. 

Real-world effect: rejecting AI tools that require sign-ins, scrape uploads, or feel opaque about where data goes.

https://eleven-public-cdn.elevenlabs.io/payloadcms/88on6lb9zb-DALL%C3%82%C2%B7E%202024-07-26%2021.47.18%20-%20A%2035mm%20film-style%20image%20depicting%20artificial%20intelligence%20being%20used%20in%20education.%20The%20scene%20shows%20a%20classroom%20with%20students%20interacting%20with%20a%20humano.webp

3) Education: fear of skill loss, unfairness, and “AI doing the thinking”

Young people can be heavy users of AI for schoolwork and still feel it harms their learning. UK reporting on pupil attitudes has highlighted concerns about AI eroding study skills even amid widespread use. 
Jisc similarly notes student anxiety about over-reliance and a perceived decline in the quality of their own work. 

Real-world effect: some students refuse AI for coursework on principle, or only use it for “explainers” rather than answers.

4) Creators’ rights: “don’t train on my work without consent”

In UK creative industries, resistance is often less about “technology bad” and more about consent, credit, labelling, and pay. For example, UK Music / Musicians’ Union reporting highlights very high support among music creators for consent and labelling around AI training and AI-generated music. 

Real-world effect: younger creatives may embrace AI for parts of their workflow, but still reject models they believe were trained unfairly.

5) Quality fatigue: AI content feels repetitive, soulless, or spammy

A growing complaint (especially among younger, very online users) is that feeds are filling with low-effort AI material. This drives a simple behaviour: mute/skip/block.

Real-world effect: “I’m not anti-tech — I’m anti-spam.”

6) Jobs and the future: anxiety about employability

Jisc reports that, for many students, the most significant concern is AI's impact on their future employability. 

Real-world effect: scepticism towards “AI will make everything better” messaging, and resistance to employers/educators using AI in ways that feel like cost-cutting.


Expert voices (short, verifiable quotes)

Regulation and trust

“Success requires public trust.” 

How young people actually experience AI

Ofcom’s qualitative work captures how normalised AI chatbots can feel to children:

“…you can just message someone who is like, gonna message you back because, like, it’s AI.” 

(Quotes are kept short for readability and source-compliance.)


So, is “young people are rejecting AI” true?

A careful conclusion
  • Not broadly true if it means “young people don’t use AI” — usage and exposure are clearly widespread. 
  • Often true if it means “young people are rejecting certain AI uses” — especially where trust, privacy, authenticity, fairness to creators, or educational integrity are at stake. 

The headline story in the UK is adoption with friction: young people learn fast, experiment fast, and also push back fast when something feels dodgy.


Pictures and source links (for your blog)

Image sources used above
  • British Council TeachingEnglish (AI in classroom image).
  • The Independent (student using ChatGPT on a laptop).
  • Rawpixel (deepfake poster-style illustration).
  • ElevenLabs blog (AI classroom/robot illustration).

Key UK references (for fact-checking and further reading)
  • Ofcom — Children and Parents: Media Use and Attitudes / Children’s Media Literacy Report 2025 (AI use, trust, and risks). 
  • Jisc — Student perceptions of AI 2025 (student concerns, employability, misinformation). 
  • Ipsos (with Tony Blair Institute partnership) — barriers to genAI use including trust and privacy. 
  • Ada Lovelace Institute — UK polling on AI regulation and trust. 