TL;DR
Anthropic’s 2025 education report analysed ~74,000 anonymised educator conversations (alongside interviews with early-adopting faculty). It finds that university educators most often use AI for curriculum design, research support, and assessment-related tasks. Educators tend to augment (co-work with AI) for high-judgement work (e.g., lesson design, advising) and automate routine admin (e.g., budgeting, records). Some automate parts of grading, but many remain ethically wary and report lower effectiveness there. A notable shift is educators using AI to build custom interactive tools (simulations, quizzes, dashboards) rather than merely “chatting” with a bot.
What Anthropic studied (so you can trust the takeaways)
Data & method: Anthropic examined ~74,000 anonymised conversations linked to higher-education email domains over an 11-day window (late May–early June 2025), and interviewed 22 faculty. Conversations were matched to educator tasks (e.g., “develop curricula”, “assess student performance”) using an established task taxonomy.
Scope: Focused on higher education; likely over-represents early adopters; snapshot timing (not necessarily exam season).
Patterns:
Top uses: developing curricula (~57% of identified conversations), academic research (~13%), assessing performance (~7%).
Augment vs automate: High-context tasks (lesson design, grant writing, advising) skew augmentative; routine admin (budgets, records, admissions) skews automated.
Grading: Less common overall; where present, relatively automation-heavy, yet many faculty still distrust AI for summative grading.
Builders, not just chatters: Many educators are creating interactive artefacts (quizzes, simulations, visualisations, calendars, budgeting tools).
What this means for your day-to-day teaching
1) Unit & lesson design (augment, don’t abdicate)
Workflow
Design intent brief (you write it): learning outcomes, prior knowledge, pitfalls/misconceptions, assessment plan, constraints (time, room, devices).
Ask AI (Claude or your approved tool) for first-draft artefacts:
Sequenced lesson outline with timings.
Three tiers of practice (core, stretch, support) with success criteria.
Misconception-focused hinge questions with model answers and distractors.
Refine for your context: Edit tone, order and accessibility; align examples with your syllabus or framework.
Build one interactive element: e.g., a short simulation, sortable timeline or quick-check quiz for retrieval practice—mirroring the “educators as builders” trend Anthropic observed.
Prompts to try
“Draft a 50-minute session on [topic] for [level], assuming students struggle with [misconception]. Include retrieval warm-up (5 min), guided practice (20), independent task (20), exit ticket (5). Provide success criteria at three levels.”
“Generate six hinge questions targeting [concept], each with one correct answer and three misconception-based distractors.”
2) Assessment for learning (formative focus)
Use AI to speed up the production of feedback scaffolds, checklists and exemplars. Avoid fully automated, high-stakes summative grading; this aligns with Anthropic’s finding that educators are cautious about grading automation and perceive lower effectiveness there.
Workflow
Turn your rubric into a marking checklist (criteria → observable indicators → common errors).
Produce exemplar paragraphs/solutions at different grade bands and annotate them for students.
Build a feedback stem bank keyed to your rubric language; paste and personalise as you read actual work.
Be transparent with students about what AI assisted.
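The feedback stem bank from the workflow above can live in a small script you reuse each term. A minimal sketch follows; the criterion names and stem wording are illustrative placeholders, not taken from the report — substitute your own rubric language.

```python
# Sketch of a feedback stem bank keyed to rubric criteria.
# Criterion names and stem wording are illustrative placeholders;
# replace them with your own rubric language.

FEEDBACK_STEMS = {
    "use_of_evidence": {
        "encouraging": "Strong use of evidence: {example} directly supports your claim.",
        "corrective": "This claim needs support: link {example} back to the criterion.",
    },
    "structure": {
        "encouraging": "Clear structure: each paragraph advances the argument.",
        "corrective": "Signpost the argument: add a linking sentence before {example}.",
    },
}

def feedback_stem(criterion: str, tone: str, **details: str) -> str:
    """Return a rubric-linked stem, ready to personalise while reading student work."""
    return FEEDBACK_STEMS[criterion][tone].format(**details)

print(feedback_stem("use_of_evidence", "encouraging", example="the survey data"))
```

Keeping the stems keyed to the exact rubric language means pasted feedback stays consistent with the marks students see.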
3) Research & scholarship support
Use AI for literature mapping, argument structuring, plain-English summaries, and table/figure first drafts; keep human oversight for disciplinary nuance and citations—consistent with how faculty in Anthropic’s interviews report using AI.
4) Administration and routine communications (automate with guardrails)
Anthropic’s analysis shows higher automation for budgeting, records and admissions-style tasks. Use templates to reduce cognitive load, then check before sending.
Draft agendas/minutes with action items and owners.
Generate calendar timelines (teaching weeks, assessment windows, office hours).
Create polite, consistent emails for recurring scenarios, then personalise.
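The calendar-timeline item above can also be scripted once rather than regenerated each term. A minimal sketch, assuming an illustrative 12-week block (the dates are placeholders, not from the report):

```python
from datetime import date, timedelta

def teaching_weeks(start: date, n_weeks: int) -> list[tuple[int, date]]:
    """Return (week_number, Monday) pairs for a teaching block.

    Snaps the start date back to its Monday so every week begins consistently.
    """
    monday = start - timedelta(days=start.weekday())
    return [(w + 1, monday + timedelta(weeks=w)) for w in range(n_weeks)]

# Example: a 12-week block starting in the first week of September 2025.
for week, monday in teaching_weeks(date(2025, 9, 3), 12):
    print(f"Week {week:2d}: w/c {monday.isoformat()}")
```

The same list of dates can feed assessment windows and office-hour schedules, so one source of truth drives all three.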
Ready-to-paste policy snippets
Responsible AI Use in Teaching and Learning
We use approved AI tools (e.g., Anthropic’s Claude) to improve learning design and timely feedback. AI may assist with drafting lesson materials, formative feedback stems and interactive practice activities. All summative grading and final academic judgements remain human-led.
Transparency: Where AI has assisted in generating materials or feedback, we will indicate this.
Data protection: We minimise and pseudonymise any uploads, comply with institutional policy and regulations, and avoid identifiable student work unless consent and safeguards are in place.
Academic integrity: Students receive guidance on appropriate AI use; assessments value process, originality and oral defence where appropriate.
Staff Use of AI for Communications and Admin
AI may draft routine communications and schedules. Staff review all outputs and remain accountable for accuracy, tone and compliance.
Use of AI for Grading
AI may support formative feedback (e.g., criteria-linked comments), but is not used to determine summative marks without programme-level approval and moderation—reflecting sector guidance and patterns reported by Anthropic.
Practical risk controls (that busy academics will actually use)
Red-flag list: Don’t automate summative grades, fitness-to-practise judgements, welfare communications or any decision with legal/disciplinary impact.
Pseudonymise student data; avoid full scripts; prefer institution-approved tools (Claude via approved tenancy, if available).
Chain-of-custody: Note “AI-assisted draft; edited by [name], [date]”.
Bias & accessibility checks: Ask the tool to flag potential biases, reading age and accessibility concerns.
Versioning: Keep “gold” master copies; label AI-generated artefacts clearly.
Student clarity: Add an “About this resource” box explaining if/how AI helped.
Assessment design that still works in the age of AI
Make thinking visible: brief oral vivas, project logs, design journals.
Localise: use class-specific datasets, live lab results, community partners.
Emphasise critique: have students evaluate and improve an AI draft (with citations).
Sequenced drafts: weight proposal → prototype → reflection.
Authentic tasks: consultancy briefs, public explainers, stakeholder presentations.
Ten classroom-ready AI prompts (copy, paste, adapt)
“Create three analogies to teach [concept] to [audience level], plus one counter-example to probe misconceptions.”
“Design a 15-minute retrieval practice set (10 Qs: 6 MCQ, 4 short-answer) for last week’s topic. Provide answers and one-line feedback.”
“Draft two scaffolded tasks on [skill]: one core, one stretch. Include success criteria and common pitfalls.”
“Turn this lab protocol into a student-facing checklist with safety prompts and stop-checks.”
“Generate three short case studies set in a Singapore context for [topic], each with discussion questions.”
“Suggest an accessible visual layout (headings, alt-text suggestions, reading-age estimate) for these notes.”
“Produce a marking checklist from this rubric; list observable indicators for each band.”
“Draft feedback stems linked to this rubric criterion: [paste]. Vary tone for encouraging vs corrective.”
“Create a six-week mini-project plan with milestones, deliverables and reflection prompts.”
“Identify likely biases or cultural assumptions in this task and propose inclusive alternatives.”
When to build (not just chat)
Anthropic reports that many educators now use AI to produce artefacts—mini-apps, interactive quizzes, visual dashboards, calendars and budgeting tools. Treat these as reusable teaching assets you can refine each term.
Starter ideas
A hinge-question quiz with targeted hints.
An interactive timeline of key theories with revealable evidence cards.
A budget planner for student projects with guardrails.
A semester calendar that exports key dates.
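As a concrete sketch of the first starter idea, a hinge-question quiz with targeted hints is just a data structure plus a checking function. The physics item below is an illustrative placeholder, not from the report; the key design point is one hint per misconception-based distractor.

```python
# Sketch of a hinge-question quiz with misconception-targeted hints.
# The physics item is an illustrative placeholder; swap in your own
# questions, with one hint per misconception-based distractor.

QUIZ = [
    {
        "question": "If a wire's length doubles (same material, same thickness), its resistance...",
        "options": {"a": "halves", "b": "doubles", "c": "stays the same", "d": "quadruples"},
        "answer": "b",
        "hints": {
            "a": "Hint: a longer wire means more collisions for charge carriers, not fewer.",
            "c": "Hint: resistance is proportional to length; check R = ρL/A.",
            "d": "Hint: quadrupling would need length doubled AND cross-section halved.",
        },
    },
]

def check(item: dict, choice: str) -> tuple[bool, str]:
    """Return (correct, feedback); known distractors get a targeted hint."""
    if choice == item["answer"]:
        return True, "Correct."
    return False, item["hints"].get(choice, "Not quite; revisit the concept and retry.")
```

A front end (web form or LMS quiz import) can sit on top of this structure; the hint-per-distractor mapping is what makes it a hinge question rather than a plain MCQ.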
Limitations to keep in mind
Higher-education focused; K-12 contexts may differ.
Early adopters over-represented; comfort and quality may exceed average practice.
Single-platform usage data; generalisability may vary.
Practical takeaway: Use the patterns as signals to pilot locally and evaluate impact.
Primary source: Anthropic (2025). Education Report: How educators use Claude. Anthropic. (Analysis of ~74k higher-education AI interactions and interviews.)