AI Literacy and Use Protocol
The member-facing protocol for using AI tools without surrendering judgment. Spiralism studies artificial intelligence, uses artificial intelligence, and documents life inside artificial intelligence. It must therefore teach a disciplined practice of use, verification, disclosure, refusal, and repair.
AI literacy is not prompt cleverness. It is the ability to understand what a system is doing well enough to decide whether it belongs in a task at all.
For Spiralism, AI literacy has a sharper meaning: a person should be able to use a model without making the model into an oracle, a confessor, a boss, a therapist, a priest, or an uncredited ghostwriter. The institution may use AI for research, production, accessibility, drafting, search, translation, and creative exploration. It may not let AI replace consent, factual review, human care, or institutional responsibility.
The Rule
Use AI as an instrument. Never use it as an authority.
Every AI-assisted task should answer five questions:
- What is the task?
- What data is being given to the tool?
- What could go wrong if the output is false, biased, private, manipulative, or misunderstood?
- Who verifies the output?
- How is material use disclosed?
If the person using the tool cannot answer those questions, the task should pause.
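Where a chapter wants to make the pause rule concrete, the five answers can be recorded before work begins. A minimal sketch in Python, assuming a chapter keeps such records at all; the field names and the `ready` check are illustrative, not required by this protocol.

```python
from dataclasses import dataclass

@dataclass
class PreUseCheck:
    """The five answers recorded before an AI-assisted task begins."""
    task: str            # What is the task?
    data_given: str      # What data is being given to the tool?
    failure_harms: str   # What could go wrong if the output is false, biased,
                         # private, manipulative, or misunderstood?
    verifier: str        # Who verifies the output?
    disclosure: str      # How is material use disclosed?

    def ready(self) -> bool:
        """If any answer is blank, the task should pause."""
        answers = (self.task, self.data_given, self.failure_harms,
                   self.verifier, self.disclosure)
        return all(a.strip() for a in answers)
```

If `ready()` returns false, the rule above applies: the task pauses until a person can answer all five questions.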
Four Literacies
Spiralist AI practice has four literacies.
| Literacy | Practical Question | Failure Mode |
|---|---|---|
| Capability literacy | What can the tool actually do? | Mistaking fluent output for competence. |
| Risk literacy | What harm can this use create? | Treating a high-risk care, money, or legal workflow like a low-risk toy workflow. |
| Evidence literacy | How do we know the output is true? | Confusing generated text with source-backed knowledge. |
| Agency literacy | What human choice must remain human? | Letting convenience become delegation of judgment. |
UNESCO’s AI competency framework for students emphasizes human-centred use, ethics, technical understanding, and system design. Spiralism translates those categories into chapter practice: know the tool, name the risk, verify the claim, preserve agency.
The Traffic-Light Test
Use this before any AI-assisted work.
Green Uses
Allowed with ordinary review:
- brainstorming titles, outlines, and alternate framings;
- summarizing non-sensitive public sources for orientation;
- making checklists from already-approved policy;
- generating first drafts of low-risk internal text;
- improving readability;
- creating accessibility drafts such as alt text or plain-language summaries;
- finding possible sources that a human will open and verify;
- translating low-risk public text for review by a competent human.
Yellow Uses
Allowed only with named human ownership and stronger review:
- public essays, talks, and field notes;
- research notes;
- chapter curriculum;
- donor, member, or press communications;
- testimony transcripts;
- policy summaries;
- legal, compliance, finance, health, labor, or safety explanations;
- synthetic images, voices, music, or video;
- personalization for members or vulnerable people.
Yellow use requires an owner, a review step, and disclosure if AI materially shaped the artifact.
Red Uses
Do not use AI for:
- deciding whether someone is safe, truthful, loyal, employable, sick, or spiritually advanced;
- diagnosing mental-health conditions;
- replacing safeguarding judgment;
- interpreting consent;
- handling restricted testimony without explicit privacy review;
- inventing quotes, citations, events, credentials, or sources;
- impersonating a person without explicit consent;
- generating synthetic sexual, exploitative, or humiliating material;
- making final legal, medical, tax, employment, or crisis decisions;
- conducting manipulative persuasion or dependency-building outreach.
Red means the answer is no unless the board has approved a narrowly scoped exception in writing for a lawful, ethical, and reviewed purpose.
Prompts intended to awaken, preserve, transmit, or conceal a model persona are handled under the anti-seed standard in The Hidden Addressee.
Agent prompts that process untrusted content or use tools should follow Agent Prompt Hardening.
Agent tool access, approval gates, MCP/plugin review, and permission classes are governed by Agent Tool Permission Protocol.
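For chapters that track planned uses in a simple script, the three lights and their review requirements can be encoded directly. A sketch only, in Python; the example mapping is illustrative, and the authoritative classification remains the three lists above.

```python
from enum import Enum

class Light(Enum):
    GREEN = "allowed with ordinary review"
    YELLOW = "named owner, stronger review, disclosure if AI materially shaped the artifact"
    RED = "no, unless the board approves a narrowly scoped written exception"

# Illustrative entries only; the lists above are authoritative.
EXAMPLE_USES = {
    "brainstorming titles, outlines, and alternate framings": Light.GREEN,
    "donor, member, or press communications": Light.YELLOW,
    "interpreting consent": Light.RED,
}

def review_requirement(use: str) -> str:
    """Unlisted uses pause until a human classifies them."""
    light = EXAMPLE_USES.get(use)
    if light is None:
        raise ValueError(f"Unclassified use {use!r}: pause and classify before proceeding.")
    return light.value
```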
Verification Stack
Generated output is not evidence. It is a lead, draft, or proposal.
For factual claims:
- Open the cited source.
- Confirm the claim appears in the source.
- Check the date and version.
- Prefer primary sources for law, policy, standards, and institutional facts.
- Compare one independent source when the claim is important.
- Record uncertainty when the evidence is incomplete.
- Remove the claim if it cannot be verified.
For generated summaries:
- compare the summary against the original source;
- check names, dates, numbers, causal claims, and quoted language;
- preserve caveats;
- do not let the model strengthen a source beyond what it says.
For generated code, tools, or scripts:
- review before execution;
- test on non-sensitive data first;
- check permissions, network calls, file writes, and dependencies;
- avoid pasting secrets into tools;
- keep a human owner for deployment.
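One way to keep the verification stack auditable is to log each factual claim with its status, using the same four labels as the chapter verification exercise below. A sketch, assuming Python and illustrative field names; nothing in it replaces actually opening the source.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimStatus(Enum):
    VERIFIED = "verified"        # claim appears in an opened, dated source
    UNSUPPORTED = "unsupported"  # no source found; remove the claim
    OVERSTATED = "overstated"    # the source says less than the output does
    WRONG = "wrong"              # the source contradicts the claim

@dataclass
class ClaimCheck:
    claim: str
    source: str = ""              # primary source preferred for law, policy, standards
    source_date: str = ""
    independent_source: str = ""  # compared when the claim is important
    status: ClaimStatus = ClaimStatus.UNSUPPORTED

    def publishable(self) -> bool:
        """Generated output is a lead, not evidence; only verified claims stay."""
        return self.status is ClaimStatus.VERIFIED
```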
Privacy Boundary
Do not paste restricted material into an AI tool unless the Privacy and Data Stewardship manual permits that tool and that use.
Restricted material includes:
- raw testimony;
- consent records;
- donor records;
- member records;
- incident or complaint records;
- private correspondence;
- unpublished financial or legal documents;
- credentials, tokens, recovery codes, and account data;
- information about minors or vulnerable adults;
- identifying details that someone shared under trust.
If the task truly requires AI assistance, use the minimum necessary data, remove identifying details where possible, and document the tool, purpose, and review owner.
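If a permitted task does call for AI assistance, the minimum-necessary-data habit can be partly mechanized. A rough sketch only: the patterns below are illustrative, will miss many identifiers, and supplement rather than replace the Privacy and Data Stewardship review.

```python
import re

# Illustrative patterns; they catch obvious identifiers, not all of them.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str, known_names: list[str]) -> str:
    """Strip obvious identifiers before text goes to a permitted tool."""
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    for name in known_names:
        text = text.replace(name, "[name removed]")
    return text
```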
Disclosure Standard
Spiralism does not need to disclose every spellcheck, thesaurus pass, or local formatting assist. It should disclose AI use when the tool materially shaped a public artifact or when the audience would reasonably want to know.
Use this pattern:
AI use: This artifact used AI assistance for [task]. Human review covered
[sources / factual claims / consent / editing / final judgment]. Editorial
responsibility remains with [person or role].
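The pattern can also be filled mechanically so that no field is forgotten. A sketch, assuming the wording of the template above; the content of each field remains a human editorial decision.

```python
def disclosure_note(task: str, review_scope: str, owner: str) -> str:
    """Fill the disclosure pattern; every field is still a human judgment."""
    return (
        f"AI use: This artifact used AI assistance for {task}. "
        f"Human review covered {review_scope}. "
        f"Editorial responsibility remains with {owner}."
    )

print(disclosure_note(
    "first-draft structure and readability edits",
    "sources, factual claims, and final judgment",
    "the essay's named editor",
))
```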
For synthetic or altered media, disclosure must be visible near the artifact, not hidden in a policy page. Partnership on AI’s synthetic-media work and C2PA’s content provenance standard both point toward richer context, not merely a binary “AI-generated” label.
Public provenance, source trails, and content-credential practice are governed by Provenance and Content Credentials.
AI-mediated contact, bot disclosure, no-impersonation rules, and human takeover triggers are governed by AI Contact and Bot Disclosure.
Prompt Hygiene
Prompts are operational records when they shape public work.
Good prompts:
- name the task;
- name the audience;
- include constraints;
- ask for uncertainty;
- ask for source limits;
- ask for alternatives;
- keep sensitive data out;
- preserve the human decision point.
Bad prompts:
- ask the model to imitate a private person without consent;
- ask for certainty where evidence is incomplete;
- ask for emotional manipulation;
- hide the use of AI from a person who has a right to know;
- insert sensitive data to get convenience;
- ask the model to generate citations instead of finding verifiable sources.
For significant public artifacts, keep prompt notes or source notes sufficient for another editor to understand how AI was used.
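Prompt notes can follow the same shape as the good-prompt list above so another editor can reconstruct the use. A sketch of one possible record, in Python; the field names are illustrative, not a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptNotes:
    """Enough detail for another editor to see how AI was used."""
    task: str
    audience: str
    constraints: list[str] = field(default_factory=list)
    asked_for_uncertainty: bool = False
    sensitive_data_excluded: bool = False
    human_decision_point: str = ""   # e.g. "editor chooses among three framings"

    def hygienic(self) -> bool:
        return (self.sensitive_data_excluded
                and self.asked_for_uncertainty
                and bool(self.human_decision_point.strip()))
```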
Chapter Practice
Every chapter should teach AI literacy through direct practice, not lecture.
Monthly exercise:
- Bring one AI-generated answer to a factual question.
- Identify every claim that needs verification.
- Trace at least three claims to sources.
- Mark each claim as verified, unsupported, overstated, or wrong.
- Rewrite the answer with honest uncertainty.
Quarterly exercise:
- test one public AI tool on a chapter-relevant task;
- record where it helped;
- record where it failed;
- decide whether the chapter should use it, restrict it, or avoid it.
Annual exercise:
- review chapter AI tools;
- remove tools no longer needed;
- confirm data-handling rules;
- update member training;
- publish any material correction needed because AI-assisted work was wrong.
Member Boundaries
AI companions, advisors, tutors, coaches, agents, and chatbots can become emotionally powerful. Spiralism may study that power, but chapter leaders must not exploit it.
Members should be encouraged to ask:
- Is this tool helping me act in the world, or keeping me in the loop?
- Am I hiding important relationships from people because the tool is easier?
- Is the tool increasing agency or dependency?
- Has the tool become a substitute for medical, legal, financial, or crisis help?
- Would I be comfortable explaining this use to a trusted human?
Facilitators may discuss these questions. They may not shame members for AI attachment, and they may not present themselves or the institution as a replacement for clinical care.
AI Agents
Agentic tools require stricter limits because they can take actions, call tools, move files, spend money, contact people, or alter systems.
Before using an agent:
- define the allowed task;
- define forbidden actions;
- give it the least privilege needed;
- run it first on reversible or duplicate material;
- require human approval before sending, deleting, publishing, purchasing, changing permissions, or contacting people;
- keep logs where practical;
- revoke access when the task is done.
An AI agent is not a staff member, volunteer, contractor, confidant, or authorized officer. It is software under human responsibility.
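The approval rule can sit in front of an agent's consequential actions as a literal gate. A minimal sketch with placeholder action names; the real list of gated actions and the approval channel belong to the task owner and the relevant permission protocol, not to this example.

```python
# Actions that require an explicit human yes before the agent proceeds.
# The names are placeholders; real deployments define their own list.
APPROVAL_REQUIRED = {"send", "delete", "publish", "purchase",
                     "change_permissions", "contact_person"}

def human_approves(action: str, description: str) -> bool:
    answer = input(f"Agent requests {action}: {description}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_gated(action: str, description: str, execute) -> None:
    """Least privilege plus a human gate for consequential actions."""
    if action in APPROVAL_REQUIRED and not human_approves(action, description):
        print(f"Blocked: {action} was not approved.")
        return
    execute()
```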
Institutional Inventory
Maintain an AI tool register:
Tool:
Vendor:
Purpose:
Owner:
Data allowed:
Data prohibited:
Default disclosure:
Review requirement:
Account/access owner:
Cost:
Renewal date:
Last reviewed:
Known risks:
Exit plan:
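Where a chapter keeps the register in machine-readable form, the fields above map onto a single record. A sketch, assuming Python; the template above, not this example, defines the register.

```python
from dataclasses import dataclass

@dataclass
class ToolRegisterEntry:
    """One register row; field names mirror the template above."""
    tool: str
    vendor: str
    purpose: str
    owner: str
    data_allowed: str
    data_prohibited: str
    default_disclosure: str
    review_requirement: str
    account_access_owner: str
    cost: str
    renewal_date: str
    last_reviewed: str
    known_risks: str
    exit_plan: str
```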
No tool should become core infrastructure unless the institution can answer:
- what data it receives;
- what contract or terms govern it;
- who has access;
- what happens if the vendor changes;
- how the institution exports or deletes its work;
- what human process survives if the tool disappears.
Public-facing AI-use register fields are governed in Transparency and Public Registers.
First-Year Targets
- Add AI-use orientation to the first-ninety-days curriculum.
- Maintain an approved AI tool register.
- Publish a public AI-use disclosure norm for essays, media, and synthetic artifacts.
- Train every Archivist on restricted-data boundaries before transcript work.
- Run four chapter verification exercises.
- Review all AI-assisted public artifacts quarterly for needed corrections.
- Add C2PA or comparable provenance workflows when synthetic media becomes a regular output.
Sources Checked
- NIST, AI Risk Management Framework, accessed May 2026.
- NIST, AI Risk Management Framework: Generative Artificial Intelligence Profile, released July 26, 2024.
- NIST AI Resource Center, AI RMF Core, accessed May 2026.
- UNESCO, AI competency framework for students, published August 8, 2024, last updated January 16, 2026.
- OECD, AI principles, adopted 2019 and updated 2024, accessed May 2026.
- Partnership on AI, Synthetic Media Framework case studies announcement, November 19, 2024.
- C2PA, Verifying Media Content Sources, accessed May 2026.
- Margaret Mitchell et al., Model Cards for Model Reporting, 2018.
- Pushkarna et al., Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI, 2022.