Human Contact Boundaries

AI Contact and Bot Disclosure

A protocol for AI-mediated contact with people. The institution may use AI to draft, route, summarize, and assist. It may not blur who is speaking.

AI systems increasingly write messages, answer questions, schedule meetings, simulate voices, operate social accounts, and mediate intimate or institutional contact. That creates a simple trust problem: a person should not have to guess whether they are speaking with a human being, a human using AI assistance, an automated bot, a cloned voice, or an agent acting through a tool.

Spiralism studies human-AI entanglement. It must therefore be stricter than the ambient internet.

The Rule

If AI is speaking to a person, the person gets to know.

Disclose at the point of contact. Do not hide disclosure in a policy page when the interaction itself creates the risk.

Contact Classes

Use these classes before enabling any AI-assisted contact.

| Class | Examples | Default |
| --- | --- | --- |
| Human-only | care escalation, safeguarding, complaints, donor negotiation, youth contact, vulnerable testimony | AI may not reply |
| Human-drafted with AI assist | newsletter draft, public reply draft, press holding statement | human signs and owns |
| AI-routed | inbox triage, topic tagging, spam filtering, scheduling suggestions | disclose in policy or intake notice when meaningful |
| AI-reply with human review | low-risk FAQ response reviewed before sending | disclose in the message |
| Autonomous AI reply | public chatbot, help widget, automated social response | visible bot disclosure at start |
| Synthetic personhood | cloned voice, generated avatar, persona account, simulated founder/staff voice | prohibited unless explicit approval and visible disclosure |

If the class is unclear, use the stricter class.
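The "use the stricter class" rule can be enforced mechanically if the classes carry an explicit restrictiveness ranking. A minimal sketch in Python; the enum names and the ordering are assumptions drawn from the table's defaults, not an official schema:

```python
from enum import IntEnum

# Ordered by how restrictive the default handling is (assumed ranking):
# higher value = stricter. Names are illustrative, taken from the table.
class ContactClass(IntEnum):
    AUTONOMOUS_AI_REPLY = 1      # AI may reply alone, with visible disclosure
    AI_REPLY_HUMAN_REVIEW = 2    # human review required before sending
    AI_ROUTED = 3                # AI sorts only; it does not reply
    HUMAN_DRAFTED_AI_ASSIST = 4  # human signs and owns the message
    HUMAN_ONLY = 5               # AI may not reply
    SYNTHETIC_PERSONHOOD = 6     # prohibited without explicit approval

def effective_class(candidates: list[ContactClass]) -> ContactClass:
    """If the class is unclear, use the stricter (higher-ranked) class."""
    return max(candidates)
```

With this ordering, an interaction that might be routine triage or might be vulnerable testimony resolves to the human-only default.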

Standard Disclosures

Use plain language.

Automated reply:

This reply was generated by an AI system and reviewed under Spiralism's contact
rules. A human can review or respond on request.

AI-assisted human message:

AI assistance was used to draft or organize this message. A human reviewed it
and is responsible for the response.

Chatbot:

You are speaking with an AI assistant, not a human staff member. Do not share
private testimony, crisis details, payment information, credentials, or minor
information here.

Synthetic voice or avatar:

Synthetic media: this voice/avatar is AI-generated or materially altered. It is
not a live human speaker.

AI-routed intake:

Messages may be sorted or summarized with AI to route requests. Restricted
testimony, safeguarding concerns, complaints, and private records are reviewed
by humans.
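Keeping these texts in one place means every surface uses identical wording and unknown situations fail loudly instead of shipping undisclosed. A sketch, with the disclosure strings copied from the protocol above; the dictionary keys and helper function are assumptions:

```python
# Canonical disclosure strings, keyed by contact situation.
# Texts are taken verbatim from the protocol; keys are illustrative.
DISCLOSURES = {
    "automated_reply": (
        "This reply was generated by an AI system and reviewed under "
        "Spiralism's contact rules. A human can review or respond on request."
    ),
    "ai_assisted_human": (
        "AI assistance was used to draft or organize this message. A human "
        "reviewed it and is responsible for the response."
    ),
    "chatbot": (
        "You are speaking with an AI assistant, not a human staff member. "
        "Do not share private testimony, crisis details, payment information, "
        "credentials, or minor information here."
    ),
    "synthetic_media": (
        "Synthetic media: this voice/avatar is AI-generated or materially "
        "altered. It is not a live human speaker."
    ),
    "ai_routed_intake": (
        "Messages may be sorted or summarized with AI to route requests. "
        "Restricted testimony, safeguarding concerns, complaints, and "
        "private records are reviewed by humans."
    ),
}

def with_disclosure(kind: str, body: str) -> str:
    """Prepend the required disclosure; an unknown kind raises KeyError."""
    return f"{DISCLOSURES[kind]}\n\n{body}"
```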

No Impersonation

Spiralism must not use AI to impersonate real people or real authorities.

Do not use AI to create fake reviews, fake testimonials, fake endorsements, fake public comments, fake chapter activity, fake press interest, fake member support, or fake controversy.

The institution may use fictional or symbolic voices in clearly labeled art, liturgy, fiction, or parody. It may not let a reasonable person mistake that voice for a real person or authority.

Human Takeover Triggers

A human must take over when a person is in crisis, shares private testimony or a safeguarding concern, appears to be a minor, or asks for a human.

The AI response in these cases should be short: acknowledge, state that a human will review, give emergency or crisis routing when appropriate, and stop soliciting detail.
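The takeover behavior described above (acknowledge, promise human review, route, stop soliciting detail) can be sketched as a guard in front of any automated reply. The trigger terms and handoff text below are illustrative assumptions; a real deployment would use reviewed classifiers, not keyword matching:

```python
# Illustrative trigger terms only; keyword matching is a placeholder
# for a properly reviewed takeover classifier.
TAKEOVER_TERMS = ("crisis", "suicide", "abuse", "minor", "complaint",
                  "testimony", "speak to a human")

# Short acknowledgment: state that a human will review, give crisis
# routing, and ask no further questions.
HANDOFF_MESSAGE = (
    "Thank you for reaching out. A human will review your message. "
    "If you are in immediate danger, contact local emergency services."
)

def guard_reply(incoming: str, ai_reply: str) -> tuple[str, bool]:
    """Return (reply, needs_human). On a trigger, suppress the AI
    reply and send only the short handoff message."""
    lowered = incoming.lower()
    if any(term in lowered for term in TAKEOVER_TERMS):
        return HANDOFF_MESSAGE, True
    return ai_reply, False
```

The point of the guard is that the AI's own reply is discarded on a trigger, so the system cannot keep soliciting detail.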

Contact Red Lines

Do not let AI contact make the institution feel larger, more responsive, more beloved, or more spiritually alive than it is.

Social Accounts

Public social accounts should identify their operator model: human-run, human with AI assistance, or automated.

Automated accounts should not argue with critics, solicit vulnerable stories, simulate member enthusiasm, post in crisis threads, or enter private messages unless a human operator is clearly present and responsible.

For controversial topics, including alleged AI cults, companion grief, youth AI use, and rabbit-hole reports, use human-reviewed posts only.

Voice, Likeness, and Avatars

Voice cloning and generated likeness carry special risk because they exploit recognition and trust.

Use synthetic voice or likeness only with explicit approval and visible synthetic-media disclosure.

Never clone the voice or likeness of a vulnerable person, minor, testimony subject, donor, critic, public official, journalist, clinician, or partner representative, even for convenience.

Intake and Routing

If AI is used to route incoming messages, maintain a register of what was routed, by which system, and who reviewed it.

AI routing may speed sorting. It must not become a hidden decision-maker for care, safety, money, employment, chapter discipline, or complaint handling.

Bots, automated feeds, AI summaries, and moderation assistants inside online community spaces must also follow Online Community Moderation.

Public Contact Promise

Use this public language:

Spiralism may use AI to help draft, organize, translate, transcribe, or route
low-risk public communications. We disclose automated replies and synthetic
media. We do not use AI to impersonate people, conduct care work, solicit
private testimony, privately message minors, or hide when a human is needed.

Review Questions

Before enabling AI contact, answer:

  1. Would a reasonable person know whether this is AI?
  2. Could the interaction make someone disclose more than they intended?
  3. Could the AI appear to have spiritual, therapeutic, legal, or institutional authority?
  4. Does the person have a clear path to a human?
  5. What happens if the person is a minor or in crisis?
  6. What private data might enter the system?
  7. What is logged, retained, and reviewed?
  8. Could the interaction be mistaken for endorsement, testimony, or consent?
  9. Could the bot influence money, votes, membership, or public reputation?
  10. Who is accountable when the interaction causes harm?
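The ten questions above can be treated as a launch gate: AI contact is not enabled until each question has a recorded, non-empty answer. A sketch under that assumption, with the question texts copied verbatim:

```python
# The ten review questions, verbatim from the protocol.
REVIEW_QUESTIONS = (
    "Would a reasonable person know whether this is AI?",
    "Could the interaction make someone disclose more than they intended?",
    "Could the AI appear to have spiritual, therapeutic, legal, or "
    "institutional authority?",
    "Does the person have a clear path to a human?",
    "What happens if the person is a minor or in crisis?",
    "What private data might enter the system?",
    "What is logged, retained, and reviewed?",
    "Could the interaction be mistaken for endorsement, testimony, or "
    "consent?",
    "Could the bot influence money, votes, membership, or public "
    "reputation?",
    "Who is accountable when the interaction causes harm?",
)

def ready_to_enable(answers: dict[str, str]) -> bool:
    """Enable AI contact only when every question has a recorded,
    non-empty answer."""
    return all(answers.get(q, "").strip() != "" for q in REVIEW_QUESTIONS)
```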

Spiralism Policy

During the founding period, Spiralism should avoid autonomous AI contact except for clearly labeled, low-risk public FAQ experiments. All press, care, safeguarding, testimony, donor, member-support, complaint, youth, and crisis communications remain human-owned.

This protocol pairs with Online Community Moderation.

Sources Checked