
From fear to self-regulation: habits that make AI safer and more useful.
- Alignment: This post and Digital Inclusion Whanganui’s AI First Principles align with the OECD AI Principles, UNESCO Recommendation on the Ethics of AI (2021), NIST AI RMF 1.0 (2023), ISO/IEC 42001, and New Zealand’s Algorithm Charter.
Lately I’ve met more and more people who say the same thing about AI: “I don’t know how to think about it.” Some are curious but wary; some are excited; many are overwhelmed.
A line from a talk by Megan Tapsell, Chair of AI Forum NZ, at the 2025 TUANZ Tech Users Summit stuck with me: AI is not neutral. It reflects the values of its creators. That’s not hype. It’s a reminder that we have agency. The quality of our outcomes depends on how we use AI, not on AI in the abstract.
At Digital Inclusion Whanganui (DIW), we’ve written a clear set of AI First Principles to help us use AI well—amplifying the good, reducing harm, and staying worthy of trust. This post translates those principles into plain English, so that anyone—an individual, a small business, a school, a club—can pick them up and put them to work today.
I won’t dive into every controversy (you’ll have seen debates around AI and music, for example). The point here isn’t to litigate the whole internet; it’s to equip you with a solid base so you can make good choices, confidently.
Start with purpose, not with the tool
AI is a means, not an end. Before you reach for a model or an app, ask:
- What’s the job? (Research? Drafting? Translation? Accessibility?)
- Who benefits? (You? Your clients? Your community?)
- What does “good” look like? (A clearer letter; a faster process; a kinder explanation.)
If you can’t explain the purpose in one sentence, you’re not ready to automate it.
The principles, in human language
Our DIW AI First Principles mirror international good practice, expressed in words real people can use. Here’s the short version—then we’ll make it practical.
- Purpose-bound use — Use AI for a clear, legitimate purpose with a real benefit.
- Human-in-the-loop — People remain accountable. AI assists; it doesn’t own the decision.
- Privacy & consent — Use the minimum data. Respect people’s choices and rights.
- Equity & Te Tiriti — Design so benefits are shared fairly; watch for harms and bias.
- Transparency & explainability — Say when AI was used and, where it matters, explain how in plain English.
- Security & safety — Protect data; test, monitor, and handle incidents.
- Accountability — Name roles; keep records; learn from mistakes.
- Environmental footprint — Be mindful of energy/compute; choose efficient options.
Red lines (we won’t do these):
- Let AI make final eligibility or compliance decisions about people without human review.
- Generate misleading synthetic media of real people.
- Train or fine-tune on personal data without a lawful basis and a risk assessment.
- Deploy systems we cannot monitor, audit, or switch off.
These aren’t abstract values. They’re levers you can actually pull.
A simple loop you can apply anywhere
Think of good AI use as a five-step loop:
1) Frame
State the purpose, the benefit, and the guardrails (“We’ll use AI to draft first versions of letters; a person will review every one”).
2) Choose
Pick a tool that fits the job and your constraints (privacy, cost, data location). If it touches sensitive information, slow down and check the settings.
3) Pilot
Start small. Try it on low-risk tasks. Keep a prompt log for a week (what you asked, what data you used, what came out); a simple sketch of one appears below.
4) Review
Look at accuracy, bias, tone, and time saved. Change the prompt if needed. If the use affects people, add a short plain-English explanation of how AI contributed and its limits.
5) Scale
If it’s working, write down the “house style” (what to use it for, what not to, how to disclose it), and keep an eye on cost and energy use.
This loop is the backbone of our DIW practice (“Idea → Impact assessment → Pilot → Review → Scale”), but you can run the mini version above by yourself in an afternoon.
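If you want a lightweight way to keep the prompt log from the Pilot step, the sketch below shows one possible approach in Python. The file name, the column names, and the log_prompt helper are illustrative assumptions, not part of DIW’s principles; a spreadsheet or notebook with the same columns does the same job. The point is the discipline of recording what you asked, what data went in, and whether a person reviewed the output.

```python
# A minimal prompt log: one CSV row per AI interaction.
# The file name, columns, and helper are illustrative only.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("prompt_log.csv")
FIELDS = ["timestamp", "tool", "purpose", "data_used",
          "prompt", "output_summary", "human_reviewed"]

def log_prompt(tool, purpose, data_used, prompt, output_summary, human_reviewed):
    """Append one interaction to the prompt log, creating the file on first use."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "purpose": purpose,
            "data_used": data_used,
            "prompt": prompt,
            "output_summary": output_summary,
            "human_reviewed": human_reviewed,
        })

# Example entry from the "first-draft writer" use case:
log_prompt(
    tool="general-purpose chatbot",
    purpose="Draft a thank-you letter to volunteers",
    data_used="No personal data; my own bullet points",
    prompt="Draft a warm, plain-English thank-you letter from these notes...",
    output_summary="Usable first draft; tone needed softening",
    human_reviewed="yes",
)
```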
For individuals: three common use cases and how to do them well
A) Research helper
- Do: Ask AI to summarise a long article in plain English; ask for both sides of an argument; ask it to flag what might be missing.
- Don’t: Treat the output as gospel. Ask for sources. Cross-check anything important.
- Disclosure idea: “Summary drafted with AI; I checked the sources.”
B) First-draft writer
- Do: Tell AI who the reader is, what you want them to know/do/feel, and your tone. Paste in your own notes or bullet points.
- Don’t: Paste private or sensitive information into tools you don’t trust.
- Human-in-the-loop: You own the edits. The final voice should sound like you.
C) Accessibility & translation
- Do: Use AI to simplify complex language, produce large-print versions, or give a first-pass translation (then have a bilingual person review).
- Don’t: Publish raw machine translations for critical content without human review.
A personal rule of thumb: if the output could hurt someone or change a decision about them, a human must check it before it goes anywhere.
For small organisations and clubs: the 10-line AI policy you actually need
Copy, paste, and tweak:
- We use AI to improve clarity, speed, and accessibility—never to remove human responsibility.
- We will always have a human in the loop for decisions that affect people.
- We use the minimum data needed and avoid sensitive data in third-party tools unless approved.
- We will consider equity & Te Tiriti—watching for harms and sharing benefits fairly.
- We will tell people when AI meaningfully contributes and explain how when outcomes affect them.
- We will protect data, keep prompt logs for important tasks, and review outputs for bias.
- We have red lines (no final eligibility decisions by AI; no misleading deepfakes; no unlawful training).
- We will pilot before scaling and set simple metrics (quality, time saved, complaints).
- We will be mindful of cost and environmental footprint, choosing efficient options.
- We will review this policy quarterly and update as we learn.
Put that on a single page. Show staff and volunteers how to use it. That alone moves you from anxiety to agency.
Equity and Te Tiriti: what this means in practice
When we say “equity & Te Tiriti”, we mean more than a slogan. Ask:
- Who benefits first? Who might be excluded? (Design for the margins early.)
- What data are we using and who has a say over it? (Consider Māori data sovereignty and community expectations.)
- How will we check for bias? (Test with real examples from the people most affected.)
- What will we do if harm occurs? (Have a contact point and a plan to pause/roll back.)
This is how principles become practice.
Explainability without the jargon
“Explainability” just means telling people, in plain English, how AI was used and what its limits are—whenever the outcome affects them.
Examples:
- “We used AI to summarise your application; our staff member made the final decision.”
- “This article was AI-assisted; the author reviewed and edited the final text.”
- “This image is AI-generated.”
That one sentence does wonders for trust.
Two pitfalls to avoid
1) All-or-nothing thinking
You don’t have to love AI or hate it. Treat it like electricity or spreadsheets: useful, powerful, sometimes risky. Decide case by case.
2) “Looks right” syndrome
AI can sound confident and still be wrong. Don’t outsource your judgement. If it matters, verify.
We can’t uninvent AI, so we manage it
AI isn’t a tap we can turn off. The technology is here, spreading fast, and it will keep evolving. The question isn’t “stop or go?”—it’s “how do we govern and use it wisely?” Think seatbelts, food safety, and building codes: we didn’t abandon cars, cuisine, or construction—we managed the risk and kept the benefits.
Practical stance:
- Treat AI like essential infrastructure: assume it exists, then decide where and how it belongs in your work.
- Start with low-risk uses and build up guardrails as capability grows.
- Keep people accountable for outcomes, always.
Managing AI like a chronic condition (a helpful analogy)
Some tech problems don’t “cure”—they require ongoing management. If AI were a health condition, it would be closer to an autoimmune disorder: complex, sometimes flaring, often manageable with the right routines. The goal isn’t to eradicate AI; it’s to reduce harmful flare-ups and improve daily function.
Management plan:
- Routine checks: small pilots, prompt logs, periodic reviews.
- Early warning signs: unusual errors, biased outputs, cost spikes—pause and adjust.
- Lifestyle supports: training, community norms, disclosure habits.
- Escalation path: when harm occurs, stop–fix–learn, then resume with adjustments.
Soft skills are the hard edge (your self-regulation toolkit)
The most powerful AI “controls” aren’t technical—they’re human skills. These self-regulating habits make AI safer and more useful:
- Critical thinking: ask “What’s missing?” “What evidence supports this?”
- Problem framing: define who it’s for, what good looks like, and what’s out of scope.
- Verification discipline: cross-check facts that matter; cite sources.
- Prompt craft: provide context, constraints, examples; state tone and audience (a worked sketch follows this list).
- Uncertainty literacy: be explicit about confidence and limits; don’t bluff.
- Ethical reflex: consider equity & Te Tiriti; run a quick harm scan before you publish.
- Collaboration: pair-review important outputs; invite feedback from those affected.
- Attention management: set time caps; avoid “just one more” prompt loops.
- Cost & footprint awareness: notice compute/time spent; choose efficient options.
One-line takeaway: AI scales your habits—good or bad. Build good habits.
“Use these too” — New Zealand anchors you should know
To keep this practical and credible, here are the official NZ references we recommend alongside DIW’s principles (great for small businesses, community groups, and public agencies):
- MBIE: Responsible AI Guidance for Businesses (NZ) — practical steps for companies and sole traders.
- Public Service GenAI Guidance (NZ) — operational guidance for government agencies and councils.
- Office of the Privacy Commissioner (NZ) — privacy basics and working with third-party providers (especially if your AI tools touch personal information).
A note on IP and data provenance
When you use AI tools, pay attention to where the training data and outputs come from, and how you use them:
- Source lawfully and ethically (respect copyright and licences).
- Attribute when appropriate, and avoid uploading material you don’t have the right to use.
- If your workflow involves personal data, follow the NZ Privacy Act principles and the OPC’s guidance, especially for third-party tools.
Try these this week (10-minute exercises)
- Write your purpose line: “In our team, we’ll use AI to _______ so that _______ improves.”
- Pick one low-risk task: draft a tidy email, summarise a long document, or translate a paragraph. Keep a short prompt log for two days.
- Add one disclosure line: where appropriate, note that AI helped and that a person reviewed.
- Do a tiny equity check: ask “Who could be confused or harmed by this output?” and fix one thing you notice.
- Set a stop word: if the output feels off—biased, unsafe, or just wrong—stop and ask a colleague to sanity-check.
That’s it. You’re practising responsible AI.
Where DIW is heading next (and how you can help)
We’re making our AI First Principles open and adaptable. If you borrow them, tell us what you changed and why. If you already have a great approach, show us. The goal isn’t to own the perfect policy; it’s to grow practical wisdom across our community.
Because AI really is as good—or as bad—as we let it. The difference is not abstract. It’s us—our choices, our processes, our willingness to ask better questions and to keep people at the centre.
If this post helped you, share it with one person who’s feeling stuck. And if you’ve put these ideas into practice, I’d love to hear what worked—and what didn’t.
—
This blog post was written by Alistair Fraser with the assistance of OpenAI’s ChatGPT 5, an example of the human and AI collaboration it describes.