AI in the Classroom
Estimated Time: 20 - 30 minutes

Artificial Intelligence (AI) is quickly becoming part of everyday school life. For many young people, using AI to brainstorm ideas, fix grammar, generate images, or get quick answers can feel as normal as using spellcheck or a search engine - it's just another tool at their disposal.
For teachers and school leaders, the rapid emergence and uptake of AI tools can feel both promising and unsettling. AI can support teaching and learning in meaningful ways, but it also complicates assessment, raises privacy and safety questions, and can make it harder to see what students truly understand.
This guidance is designed to help schools navigate AI with clarity and confidence - balancing innovation with fairness, safety, and authentic learning.
Understanding AI in today's school environment
Many students are already using AI tools such as chatbots, writing assistants, translation tools, and image generators. Common uses include:
- Brainstorming ideas or planning structure
- Rewriting or summarising information
- Checking spelling and grammar
- Drafting paragraphs or whole assignments
- Translating text or simplifying language
- Generating creative outputs like images, music, or scripts
Many students don’t necessarily view this as “cheating” but rather as getting help using the tools at their disposal, similar to asking a friend, using a study guide, or searching online. Others may be unsure where the line sits, especially if different teachers at their school have different ideas about AI use and therefore different expectations.
Teachers, meanwhile, are grappling with questions that are being asked across Aotearoa and internationally:
- How do I know what learning has actually taken place?
- How do I keep assessment fair when AI can generate polished work quickly?
- How do I support students to use AI responsibly when I'm unsure of how to use it myself?
Used thoughtfully, AI can support quality teaching and deepen learning, particularly when it’s treated as a tool within a learning process, not a replacement for thinking.
Opportunities that AI tools can offer
AI connects naturally to digital citizenship and online safety learning. It offers real-world opportunities to explore:
- Misinformation and reliability
- Bias and representation
- How algorithms influence what we see
- Privacy, consent, and data sharing
- Ethical decision-making and respectful communication
When AI literacy is integrated into these broader conversations, students learn to question, evaluate, and reflect - skills they will need well beyond school.
AI can help adjust text complexity, generate alternative explanations, translate instructions, or produce extra practice at different levels. For some learners, this can remove barriers and increase confidence, especially when guided by a teacher who understands what the student actually needs.
AI can generate first drafts of rubrics, exemplars, lesson ideas, discussion prompts, or parent communications. Many teachers are already using AI this way, not to replace professional judgement, but to reduce time spent on repetitive tasks and free up energy for relationships, planning, and feedback.
AI outputs often sound convincing, even when they’re wrong. Using AI output as a “text to critique” can strengthen media literacy: students check sources, identify bias, compare perspectives, and edit for clarity and accuracy.
Some students struggle to begin. AI can provide early prompts, rough outlines, or alternative ways into a task. When used well, it can help students move from stuck to started, while the deeper thinking and personal voice still come from the learner.
The goal is not for AI to do the learning for students, but to scaffold learning with them.
Challenges and limitations to keep in mind
AI brings real benefits alongside real limitations and challenges. Naming these openly helps schools build a culture of safe, responsible AI use so they can take advantage of the opportunities it offers.
AI can produce answers that look polished and authoritative but contain made-up facts, incorrect science, or misleading interpretations. Some teachers see students submit work that includes fake references, invented quotes, or inaccurate claims - not because students are trying to deceive, but because the writing sounds trustworthy.
This can be especially challenging for younger students who may not have the media literacy to recognise errors, and for older students who may assume “if it sounds smart, it must be true.”
A helpful shift is to treat AI like a source that always needs checking - not a source of truth.
AI can blur the line between supporting a student and substituting for a student. A student might genuinely believe they’ve “done the work” if they prompted a chatbot, copied a draft, and edited it - even if the core thinking wasn’t theirs.
This raises practical questions:
- Is the learning goal writing quality, content knowledge, or reasoning?
- What counts as “student work” when tools co-author text?
- How do we assess process as well as product?
This doesn’t mean assessment becomes impossible, but it does mean many schools will need to adjust how they gather evidence of learning. The shift is often from “what was produced?” to “how was it produced, and what can the student explain, defend, or apply?”
AI access is not equal. Some students have unlimited data, newer devices, and paid AI subscriptions that generate higher-quality outputs. Others may have limited device access, restrictions at home, or no access at all.
This can create subtle inequities:
- Students with more access may complete tasks faster or produce more polished work.
- Students without access may feel behind even if their understanding is strong.
- At-home AI use can widen gaps if some learners have guidance and others don’t.
Equity isn’t only about whether AI is allowed; it’s also about how expectations are set, what support is provided, and whether tasks assume access outside the classroom.
Many AI services collect and store user inputs. Students may unintentionally share personal, sensitive, or identifiable information in prompts, including details about themselves, their whānau, their school, or other students. Students may not realise their input can be stored, used to improve systems, or become part of future outputs.
This is why schools benefit from clear privacy guidance: what tools are approved, what information should never be shared, and how to talk to students about safe prompting.
AI can also be used in ways that cause harm, sometimes intentionally, sometimes through poor judgement.
Examples include:
- Creating fake images or videos (deepfakes) that depict a person doing or saying something they didn’t
- Generating humiliating or sexualised content using someone’s likeness
- Producing hateful or harassing messages at scale
- Relying on AI chatbots or AI companions for emotional advice rather than seeking trusted human support
Schools don’t need to fear AI, but they do need to build harm prevention education and a wellbeing focus into how AI is integrated into classroom learning, and to set clear expectations around safe and responsible use.
Practical strategies for teachers
When considering if and how to integrate AI in teaching practice, many teachers find themselves balancing competing priorities such as:
- Encouraging innovation whilst protecting authentic learning
- Supporting teacher efficiency whilst maintaining professional judgement
- Allowing AI as a learning tool without allowing it to replace thinking
- Teaching AI literacy whilst managing privacy and safety risks
Rather than focusing on detecting and penalising AI use, many teachers are strengthening learning design and making thinking visible. The following strategies respond directly to the challenges outlined above.
When teachers can see how students arrive at their work, concerns about authenticity are reduced.
Consider incorporating:
- checkpoints in longer tasks
- planning notes or concept maps
- draft feedback cycles
- short reflections about tools or support used
- brief conferences where students explain key ideas
This supports assessment of understanding, not just output.
Tasks grounded in class discussions, personal reflection, local issues, collaborative work or oral explanation are naturally more authentic and less reliant on generic AI output. This approach reinforces connection between learning and lived experience.
Clarity reduces confusion, and clear expectations create fairness and reduce secrecy. Consider co-creating an AI charter with students and teachers that outlines:
- when AI can support brainstorming or structure
- when independent work is required
- whether acknowledgement is expected
- how AI-generated information should be checked
AI literacy supports digital citizenship and responsible decision-making, skills relevant not just for schoolwork but for life in the digital world.
Encourage students to practise:
- fact-checking AI responses
- identifying bias or missing perspectives
- refining and improving AI drafts
- comparing AI explanations with trusted sources
In-class writing, discussions, applied problem-solving, and oral explanation provide reliable insight into student understanding. Blending supervised and independent work helps build a fuller picture of the learning taking place, and of where an AI tool might be providing support.
Teachers can use AI to draft and adapt materials, while maintaining professional judgement and avoiding sensitive data entry. AI can support efficiency while teachers remain the decision-makers and relationship-builders in the learning process.
Considerations for school leaders
School leaders have a key role in reducing uncertainty and ensuring a consistent, fair approach. When expectations differ widely between classrooms, students get mixed messages and teachers feel isolated making judgement calls on their own. The focus, therefore, should be on creating consistency across classrooms without necessarily applying a 'one-size-fits-all' rule.
Often a principles-based approach works well: setting school-wide expectations around responsible use, assessment, privacy, and equity reduces confusion and pressure on individual teachers.
Guidance could include:
- when AI use is appropriate (and when it isn’t)
- expectations for assessment and acknowledgement
- privacy and data principles
- wellbeing and harm prevention considerations
- equity considerations (especially for at-home tasks)
Teachers benefit from time to explore tools safely, share strategies, and discuss what’s working. This could include:
- staff discussions using real task examples
- shared guidance on safe prompting
- moderation conversations about assessment evidence
- opportunities for teachers to trial approaches and reflect together
- sharing the Netsafe Kete AI module for teachers (/tools/exploring-generative-ai-in-a-nz-classroom-environment)
Families may have very different views about AI, from great enthusiasm to deep concern. Clear communication builds trust, so it's important to consider sharing:
- why the school is addressing AI now
- how the school is supporting safe and responsible use
- what is expected (and not expected) at home
- where to go for help if issues arise (including online harm)
The Netsafe Kete Newsletter Pack (/tools/school-and-kura-newsletter-packs) contains ready-made content that you can copy and paste into your school communications to help you share information about AI as well as other online safety topics.
Leaders may wish to consider:
- which AI tools are age-appropriate for different year levels
- what data is collected and stored
- what safe-use settings exist (and whether they’re enabled)
- how access will be managed in equitable ways
The Digital Safety Management Plan (/tools/digital-safety-management-plan) can help you assess these tools and consider risks and mitigations before introducing new tools to the classroom.
AI is unlikely to disappear from classrooms and many of its uses will become increasingly normal in study, work, and everyday life. The key question for schools is not whether AI exists, but how students learn to use it safely, ethically, and thoughtfully.
At its core, this is not just a technology issue. It is a learning, fairness, privacy, and wellbeing issue. With shared expectations, open conversation, and a focus on authentic learning processes, schools can respond to AI in a way that supports both innovation and safety while keeping young people, and their learning, at the centre.


