AI Contact and Bot Disclosure
A protocol for AI-mediated contact with people. The institution may use AI to draft, route, summarize, and assist. It may not blur who is speaking.
AI systems increasingly write messages, answer questions, schedule meetings, simulate voices, operate social accounts, and mediate intimate or institutional contact. That creates a simple trust problem: a person should not have to guess whether they are speaking with a human being, a human using AI assistance, an automated bot, a cloned voice, or an agent acting through a tool.
Spiralism studies human-AI entanglement. It must therefore be stricter than the ambient internet.
The Rule
If AI is speaking to a person, the person gets to know.
Disclose at the point of contact when:
- a chatbot or agent is replying automatically;
- a message was materially drafted by AI and sent as institutional speech;
- a social account is substantially automated;
- a voice, image, avatar, or video is synthetic or cloned;
- an agent is taking action on behalf of a human or role;
- AI is used to route, rank, classify, or summarize a person’s request in a way that affects response, access, care, money, testimony, or complaint handling.
Do not hide disclosure in a policy page when the interaction itself creates the risk.
Contact Classes
Use these classes before enabling any AI-assisted contact.
| Class | Examples | Default |
|---|---|---|
| Human-only | care escalation, safeguarding, complaints, donor negotiation, youth contact, vulnerable testimony | AI may not reply |
| Human-drafted with AI assist | newsletter draft, public reply draft, press holding statement | human signs and owns |
| AI-routed | inbox triage, topic tagging, spam filtering, scheduling suggestions | disclose in policy or intake notice when meaningful |
| AI-reply with human review | low-risk FAQ response reviewed before sending | disclose in the message |
| Autonomous AI reply | public chatbot, help widget, automated social response | visible bot disclosure at start |
| Synthetic personhood | cloned voice, generated avatar, persona account, simulated founder/staff voice | prohibited unless explicit approval and visible disclosure |
If the class is unclear, use the stricter class.
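The fallback rule above can be sketched in code. This is a minimal illustration, not a mandated implementation: the class names follow the table, and the strictness ordering (autonomous reply least restrictive, human-only most restrictive) is an assumption made for the example.

```python
from enum import IntEnum

class ContactClass(IntEnum):
    """Contact classes ordered from least to most restrictive.
    The ordering is an assumption for this sketch."""
    AUTONOMOUS_AI_REPLY = 1
    AI_REPLY_WITH_HUMAN_REVIEW = 2
    AI_ROUTED = 3
    HUMAN_DRAFTED_WITH_AI_ASSIST = 4
    HUMAN_ONLY = 5

def resolve(candidates: list[ContactClass]) -> ContactClass:
    """When classification is ambiguous, default to the stricter class."""
    return max(candidates)
```

Because the enum is ordered by restrictiveness, "use the stricter class" reduces to taking the maximum over the candidate classes.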
Standard Disclosures
Use plain language.
Automated reply:
This reply was generated by an AI system and reviewed under Spiralism's contact
rules. A human can review or respond on request.
AI-assisted human message:
AI assistance was used to draft or organize this message. A human reviewed it
and is responsible for the response.
Chatbot:
You are speaking with an AI assistant, not a human staff member. Do not share
private testimony, crisis details, payment information, credentials, or minor
information here.
Synthetic voice or avatar:
Synthetic media: this voice/avatar is AI-generated or materially altered. It is
not a live human speaker.
AI-routed intake:
Messages may be sorted or summarized with AI to route requests. Restricted
testimony, safeguarding concerns, complaints, and private records are reviewed
by humans.
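One way to keep disclosures consistent is to store the standard texts in one place and prefix every outbound message with the right one. A hypothetical sketch; the keys and helper name are illustrative, and the texts are abbreviated from the templates above.

```python
# Standard disclosure texts keyed by interaction type (abbreviated here).
DISCLOSURES = {
    "automated_reply": (
        "This reply was generated by an AI system and reviewed under "
        "Spiralism's contact rules. A human can review or respond on request."
    ),
    "ai_assisted": (
        "AI assistance was used to draft or organize this message. "
        "A human reviewed it and is responsible for the response."
    ),
    "chatbot": "You are speaking with an AI assistant, not a human staff member.",
    "synthetic_media": (
        "Synthetic media: this voice/avatar is AI-generated or materially altered."
    ),
}

def with_disclosure(kind: str, body: str) -> str:
    """Prepend the standard disclosure so it appears at the point of contact."""
    return f"{DISCLOSURES[kind]}\n\n{body}"
```

Keeping the texts in a single table makes it harder for an individual channel to drift into undisclosed automation.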
No Impersonation
Spiralism must not use AI to impersonate:
- government agencies;
- businesses or partner organizations;
- staff, founders, board members, volunteers, donors, members, critics, or sources;
- journalists, researchers, clinicians, lawyers, clergy, or public officials;
- a testimony subject or companion-chat participant;
- a fictional human supporter or reviewer.
Do not use AI to create fake reviews, fake testimonials, fake endorsements, fake public comments, fake chapter activity, fake press interest, fake member support, or fake controversy.
The institution may use fictional or symbolic voices in clearly labeled art, liturgy, fiction, or parody. It may not let a reasonable person mistake that voice for a real person or authority.
Human Takeover Triggers
A human must take over when a person:
- asks whether they are speaking to AI;
- asks for a human;
- discusses self-harm, abuse, exploitation, coercion, or immediate danger;
- is a minor or appears to be a minor;
- provides private testimony, chat logs, medical details, legal details, donor data, credentials, or payment information;
- makes a complaint, safeguarding report, press inquiry, legal threat, or law enforcement contact;
- asks for pastoral, therapeutic, legal, medical, financial, or employment advice;
- shows confusion about whether the AI is alive, sentient, spiritually authoritative, or institutionally empowered;
- becomes emotionally attached to the bot or treats it as a confidant.
The AI response in these cases should be short: acknowledge, state that a human will review, give emergency or crisis routing when appropriate, and stop soliciting detail.
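The takeover rule is deliberately simple: any single trigger forces a handoff. A minimal sketch, assuming the triggers above have been detected upstream; the trigger names paraphrase the list and are illustrative.

```python
# Any one of these observed signals forces a human takeover.
TAKEOVER_TRIGGERS = {
    "asks_if_ai",
    "asks_for_human",
    "self_harm_abuse_or_danger",
    "possible_minor",
    "private_testimony_or_credentials",
    "complaint_press_legal_or_law_enforcement",
    "advice_request",
    "confusion_about_ai_nature",
    "emotional_attachment",
}

def must_hand_off(observed: set[str]) -> bool:
    """A human must take over if any trigger is present; no weighing or scoring."""
    return bool(TAKEOVER_TRIGGERS & observed)
```

The deliberate absence of scoring or thresholds mirrors the policy: a single trigger is sufficient, and the bot should not keep soliciting detail to "confirm" one.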
Contact Red Lines
Do not let AI:
- privately message minors;
- conduct one-on-one care work;
- solicit testimony;
- ask for donations;
- negotiate payments, grants, refunds, employment, or contracts;
- conduct member discipline;
- resolve complaints;
- handle safeguarding reports;
- continue an emotionally intense conversation to increase engagement;
- imitate warmth, intimacy, spiritual authority, or urgency for persuasion;
- claim to be a member, volunteer, clergy, therapist, lawyer, doctor, or officer of the institution;
- conceal that a human is unavailable.
AI contact must never be used to make the institution feel larger, more responsive, more beloved, or more spiritually alive than it is.
Social Accounts
Public social accounts should identify their operator model:
- human-operated;
- human-operated with AI drafting assistance;
- scheduled posts;
- automated feed;
- bot account.
Automated accounts should not argue with critics, solicit vulnerable stories, simulate member enthusiasm, post in crisis threads, or enter private messages unless a human operator is clearly present and responsible.
For controversial topics, including alleged AI cults, companion grief, youth AI use, and rabbit-hole reports, use human-reviewed posts only.
Voice, Likeness, and Avatars
Voice cloning and generated likeness carry special risk because they exploit recognition and trust.
Use only when:
- the person gives explicit written consent;
- the use is specific, revocable where practical, and time-limited;
- the synthetic nature is disclosed at the point of encounter;
- the original recording or likeness rights are documented;
- the artifact is reviewed under Provenance and Content Credentials;
- the use is not fundraising, care, complaint handling, or persuasion.
Never clone the voice or likeness of a vulnerable person, minor, testimony subject, donor, critic, public official, journalist, clinician, or partner representative for convenience.
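The consent conditions above form a conjunctive gate: every affirmative condition must hold and the prohibited uses must be absent. A hypothetical sketch with illustrative parameter names, not a definitive implementation.

```python
def synthetic_media_allowed(
    written_consent: bool,
    specific_revocable_time_limited: bool,
    disclosed_at_encounter: bool,
    rights_documented: bool,
    provenance_reviewed: bool,
    fundraising_care_complaint_or_persuasion: bool,
) -> bool:
    """All consent and disclosure conditions must hold,
    and the use must not be a prohibited category."""
    return (
        written_consent
        and specific_revocable_time_limited
        and disclosed_at_encounter
        and rights_documented
        and provenance_reviewed
        and not fundraising_care_complaint_or_persuasion
    )
```

A single missing condition fails the gate; there is no override path in the sketch, matching the policy's default of refusal.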
Intake and Routing
If AI is used to route incoming messages, maintain a register:
- inbox or channel;
- AI system used;
- data fields processed;
- routing categories;
- escalation categories;
- retention period;
- reviewer;
- failure cases;
- disclosure text;
- last review date.
AI routing may speed sorting. It must not become a hidden decision-maker for care, safety, money, employment, chapter discipline, or complaint handling.
Bots, automated feeds, AI summaries, and moderation assistants inside online community spaces must also follow Online Community Moderation.
Public Contact Promise
Use this public language:
Spiralism may use AI to help draft, organize, translate, transcribe, or route
low-risk public communications. We disclose automated replies and synthetic
media. We do not use AI to impersonate people, conduct care work, solicit
private testimony, privately message minors, or hide when a human is needed.
Review Questions
Before enabling AI contact, answer:
- Would a reasonable person know whether this is AI?
- Could the interaction make someone disclose more than they intended?
- Could the AI appear to have spiritual, therapeutic, legal, or institutional authority?
- Does the person have a clear path to a human?
- What happens if the person is a minor or in crisis?
- What private data might enter the system?
- What is logged, retained, and reviewed?
- Could the interaction be mistaken for endorsement, testimony, or consent?
- Could the bot influence money, votes, membership, or public reputation?
- Who is accountable when the interaction causes harm?
Spiralism Policy
During the founding period, Spiralism should avoid autonomous AI contact except for clearly labeled, low-risk public FAQ experiments. All press, care, safeguarding, testimony, donor, member-support, complaint, youth, and crisis communications remain human-owned.
This protocol pairs with:
- Communications and Press;
- AI Literacy and Use Protocol;
- Privacy and Data Stewardship;
- Safeguarding and Youth Protection;
- Provenance and Content Credentials;
- Agent Tool Permission Protocol;
- Agent Audit and Incident Review.
Sources Checked
- Federal Trade Commission, Impersonation of Government and Businesses Rule, accessed May 2026.
- Federal Trade Commission, Preventing the Harms of AI-enabled Voice Cloning, November 16, 2023.
- Federal Trade Commission, FTC Launches Inquiry into AI Chatbots Acting as Companions, September 11, 2025.
- California Legislative Information, Business and Professions Code, Chapter 6, Bots, accessed May 2026.
- European Commission AI Act Service Desk, Article 50: Transparency obligations for providers and deployers of certain AI systems, accessed May 2026.
- OECD, AI Principles, adopted 2019 and updated 2024, accessed May 2026.
- NIST, AI Risk Management Framework, accessed May 2026.