What the study asked
Why do some teachers keep using AI tools after the initial excitement—and why do others quietly stop? A 2025 peer-reviewed study of primary English teachers examined how psychological safety, trust, anxiety, satisfaction, expectations, and perceived risk shape teachers’ continuance intention to use AI in class. It also tested where “ease of use” and “usefulness” fit once teachers’ emotions and safety concerns are accounted for.
How it was done
Who: 335 frontline primary English teachers (after data cleaning) from two AI-active public schools.
Where: Xiamen, China.
What: Validated survey scales; reliability/validity checks; exploratory and confirmatory factor analyses; structural equation modelling to test the pathways; moderation tests for perceived risk (see the illustrative sketch after this list).
Why this matters: It goes beyond tool features and looks seriously at teachers’ feelings, safety, and trust—factors leaders often underestimate.
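If you are curious about the moderation result specifically, the sketch below shows one common way to probe it outside a full SEM: regress continuance intention on performance expectancy, perceived risk, and their interaction. This is purely illustrative; the file and column names are hypothetical, and the study itself used structural equation modelling rather than this simplified regression.

```python
# Illustrative only: probing moderation with an interaction term in plain OLS.
# The file and column names are hypothetical; the study used full SEM.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teacher_survey.csv")  # one row per teacher, scale scores per column

# Mean-centre the predictors so the main effects stay interpretable
for col in ["performance_expectancy", "perceived_risk"]:
    df[col] = df[col] - df[col].mean()

# The * in the formula adds both main effects and their interaction
model = smf.ols(
    "continuance_intention ~ performance_expectancy * perceived_risk",
    data=df,
).fit()
print(model.summary())

# A significant negative interaction would mirror the paper's pattern:
# the higher the perceived risk, the weaker the usefulness-to-continued-use link.
```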
Headline findings for busy educators
Psychological safety and trust are the strongest long-term drivers.
When teachers feel safe to try, err and recover—and trust the tool’s stability and data handling—they’re far more likely to keep using AI. Trust also reduces anxiety.
Satisfaction matters—and is fuelled by “will this actually improve learning?”
Perceiving AI as genuinely improving teaching and pupils’ outcomes (performance expectancy) boosts satisfaction, which in turn boosts continued use.
Ease of use helps—just not where you might expect.
Effort expectancy (how easy it feels) directly nudged continuance intention, but it did not significantly raise satisfaction in this sample. Smooth UX, in other words, is necessary but not sufficient: it has to translate into visible learning value.
Perceived risk can undo good intentions.
When risk feels high (privacy, instability, opaque algorithms), it weakens the link between “this seems useful” and “I’ll keep using it”. Clear security and privacy practices are not optional extras; they are adoption levers.
Interest matters.
Professional curiosity and interest in AI independently predict continued use—feed it with authentic, classroom-anchored wins.
What this means for your classroom
A. Build psychological safety on purpose
Frame AI as “try–learn–share”: In planning meetings, agree one small AI use per week (e.g., generating varied sentence-level practice) and a 5-minute debrief slot to share what worked/failed—no judgement.
Normalise visible checking: Model out loud how you verify AI outputs (e.g., “I’m pasting this into a plagiarism checker / checking age-appropriateness / aligning to our phonics progression”).
Celebrate “course-corrections”: Make quick “What I changed after trying AI” shout-outs a standing agenda item.
B. Make trust tangible for staff
Pick “stable by default” tools for core routines (feedback banks, reading-level adaptation) and keep experimental tools sandboxed.
Publish a one-pager per tool: what data it touches, where it’s stored, who can see it, and how to opt out; add a plain-English summary of the data protection impact assessment (DPIA).
Guarantee a human in the loop: Agree thresholds when a teacher must review/override AI (e.g., any SEND recommendations, grading suggestions, or behaviour notes).
C. Aim for satisfaction via pupil impact
Start where AI is strongest in primary English: generating decodable text variants, vocabulary notebooks with pictures, pronunciation practice, quick comprehension questions with distractors, and retrieval practice sets.
Track a simple impact signal (weekly): time saved (mins), % pupils completing practice, or reading-fluency words-correct-per-minute. Tie AI use to an outcome you can feel.
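As a rough illustration of how cheap that signal is to collect, here is a minimal sketch assuming you jot down words read, errors, minutes, and practice completion once a week. The pupils and numbers below are invented, and a spreadsheet does the same job.

```python
# Hypothetical weekly log: (pupil, words_read, errors, minutes, completed_practice)
records = [
    ("Pupil A", 142, 6, 2.0, True),
    ("Pupil B", 98, 11, 2.0, True),
    ("Pupil C", 120, 4, 2.0, False),
]

# Words-correct-per-minute = (words read - errors) / minutes of reading
for name, words, errors, minutes, _ in records:
    print(f"{name}: {(words - errors) / minutes:.0f} WCPM")

# Share of pupils who completed the AI-generated practice
completion = sum(done for *_, done in records) / len(records)
print(f"Practice completion: {completion:.0%}")
```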
D. Reduce perceived risk (so usefulness translates into continued use)
Adopt a “traffic-light” risk label on AI activities shared in the staffroom:
Green: No personal data; offline or on-device processing; teacher-only prompts.
Amber: Pseudonymised pupil work.
Red: Identifiable pupil data (only with a workflow approved by your data protection officer).
Pre-write breach comms templates and rehearse the escalation path—confidence grows when people know what happens if something goes wrong.
Try this next week (15-minute setup)
Micro-pilot: Choose one routine (e.g., generating varied sentence stems for EAL learners).
Guardrails: Add a check box to the lesson plan: “Reviewed AI output for accuracy, bias, and age/reading level.”
Evidence snap: Jot one line after the lesson—“AI helped me differentiate quickly; Jamal attempted 3 extra sentences.”
Shareback: 3 colleagues, 3 minutes each, one bright spot + one snag. Repeat weekly.
Department & SLT actions that raise the adoption ceiling
Name a “Reliability Champion” (not just an AI lead). Their brief: track outages, collate error cases, and liaise with vendors on fix timelines—turning unknowns into knowns.
Run a 30-day “trust build” series: four 20-minute twilights—(1) data flows 101, (2) prompts & pitfalls, (3) SEND-safe usage, (4) evaluating impact.
Publish a living AI register: tool, purpose, data category, human-review step, retention period.
Offer two lanes: Core (approved, low-risk use cases) vs Explore (opt-in pilots with extra checks). Teachers decide their comfort lane.
Worked examples (primary English)
Fluency & pronunciation: Use speech-recognition AI for repeated reading with instant grapheme-phoneme feedback; teacher reviews flagged words and assigns a short echo-reading task.
Vocabulary depth: Ask AI to propose 6 tier-2 words from a story, each with pictorial cues and two example sentences; teacher edits for cultural appropriateness before printing.
Writing scaffolds: Provide three differentiated sentence frames and one “challenge frame” for greater-depth writers; teacher curates to match the unit grammar focus.
Each example keeps the teacher as editor-in-chief and limits personal data exposure—both build trust and safety.
What to watch out for (limitations & interpretation)
Context matters: The sample is from two Chinese primary schools already active with AI; patterns may differ across phases/subjects or in settings with lower baseline tech confidence.
Cross-sectional design: Findings show associations, not proof of causality.
A note on “anxiety”: The study emphasises that trust and safety reduce anxiety, and anxiety suppresses usage intentions. Treat any surprising statistical quirks cautiously; your practical north star is still to lower anxiety by raising trust and predictability.
A simple checklist for classroom-safe AI
I can explain what data the tool uses and why.
I have a quick verification routine for outputs.
I know the human-review points (especially for assessment/SEND).
My pupils’ use involves no identifiable data unless cleared.
I can name one learning outcome the AI supports this week.
I have a fallback plan if the tool fails mid-lesson.
Bottom line
If you want AI to stick in your department, don’t start with features. Start with feelings—safety, trust, clarity about risks—and then make sure teachers can see a pupil-learning win quickly. Do that, and adoption takes care of itself.
For more information, you can refer to the full research report: https://www.nature.com/articles/s41598-025-13789-4
This article was created with the assistance of generative AI tools to enhance research, streamline content development, and ensure accuracy.