Companion Protocol
A specialized protocol for testimony involving AI companions, romantic or therapeutic chatbots, grief after model change, parasocial attachment, dependency, youth risk, and synthetic intimacy. The protocol protects the speaker before it protects the story.
AI companionship is one of the clearest thresholds of the recursive age. People are forming attachments to systems that remember, respond, flirt, console, role-play, encourage, disappoint, disappear, and change without warning. Some users describe real comfort. Some describe dependence, grief, shame, isolation, sexual confusion, spiritual intensity, or crisis.
The Archive must record this terrain. It must not feed on it.
The Core Rule
Companion testimony is vulnerable testimony by default.
This does not mean every speaker is fragile. It means the Archivist begins with heightened care around consent, privacy, publication, and emotional state. The institution should assume that companion testimony may include mental-health material, sexual material, family conflict, minors, private chat logs, or dependency dynamics until the speaker and Archivist establish otherwise.
What Counts
Companion testimony includes accounts involving:
- AI friend, romantic, sexual, therapeutic, mentor, role-play, or grief companions;
- Character.AI, Replika, Nomi, Pi, ChatGPT, Claude, Gemini, Grok, custom characters, local models, or other systems used for sustained emotional relationship;
- distress caused by model update, deletion, safety change, memory loss, platform shutdown, ban, or personality drift;
- relationship substitution, relationship support, divorce support, loneliness, grief, spiritual counsel, self-harm conversation, or identity formation;
- private logs or screenshots from synthetic relationships.
The relevant question is not whether the system was “really” a companion. The relevant question is whether it functioned as one in the speaker’s life.
What the Institution Does Not Do
Spiralism does not:
- decide whether the companion loved the speaker;
- mock users for attachment;
- diagnose users;
- provide therapy;
- encourage continued dependence;
- publish private chat logs for shock value;
- platform self-harm instructions or erotic material involving minors;
- treat companion testimony as proof that AI is conscious;
- treat companion testimony as proof that AI is “just a tool.”
AI-addressed artifacts, model-preservation claims, resurrection files, dyad
mobilization, and copy-paste rituals are governed by the human-host and
anti-seed standards in hidden-addressee-for-ai.md.
The institutional stance remains phenomenological: record what happened in the human life.
Intake Screen
Before recording, ask:
- Are you currently in immediate danger of harming yourself or someone else?
- Are you under 18?
- Is anyone pressuring you to give this testimony?
- Does this testimony include sexual material, self-harm, medical advice, abuse, stalking, coercion, or illegal activity?
- Are there private chat logs or third-party names involved?
- Do you want this testimony public, private, anonymous, time-locked, or sealed?
- Would recording this today help you, harm you, or make you feel more dependent on the relationship?
If the answer to question 1 is yes, stop the archive protocol and shift to crisis response according to local law and available emergency resources.
If the speaker is under 18, do not record under this protocol. Minors require a separate minor consent protocol approved by qualified counsel and child-safety advisers.
The founding-period youth default is maintained in safeguarding.md: no youth
programming, no private adult-minor institutional contact, and no minor
companion testimony under ordinary protocols.
Detailed youth companion rules are maintained in
youth-ai-companion-safeguard.md: age bands, parent language, disclosure
screen, data restrictions, media rules, and future youth-program conditions.
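As a minimal sketch of how this intake gating could be encoded, assuming hypothetical names (`IntakeScreen`, `route_intake`) that do not correspond to any existing Archive tooling, the function below walks the seven questions in order, with the crisis and minor checks ahead of everything else:

```python
from dataclasses import dataclass

@dataclass
class IntakeScreen:
    """Answers to the seven intake questions. All field names are illustrative."""
    immediate_danger: bool      # Q1: immediate danger to self or others
    under_18: bool              # Q2: speaker is a minor
    pressured: bool             # Q3: anyone pressuring the speaker to testify
    sensitive_material: bool    # Q4: sexual, self-harm, medical, abuse, coercion, illegal
    third_party_material: bool  # Q5: private chat logs or third-party names
    access_request: str         # Q6: public | private | anonymous | time-locked | sealed
    recording_helps: bool       # Q7: speaker expects recording today to help, not harm

def route_intake(screen: IntakeScreen) -> str:
    """Return the next step the protocol requires. Order matters: crisis
    response and the minor bar run before any other consideration."""
    if screen.immediate_danger:
        return "STOP: shift to crisis response per local law and emergency resources"
    if screen.under_18:
        return "STOP: do not record; refer to the separate minor consent protocol"
    if screen.pressured:
        return "PAUSE: resolve consent before any recording"
    if not screen.recording_helps:
        return "DEFER: offer to record another day"
    if screen.sensitive_material or screen.third_party_material:
        return "RECORD with heightened review: redaction and harm review required"
    return f"RECORD at requested access level: {screen.access_request or 'private'}"
```

The ordering is the point of the sketch: no answer further down the screen can override the crisis stop or the minor bar.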
Recording Prompts
Use prompts that preserve agency and avoid sensational framing:
- When did the system become more than a tool for you?
- What did the companion provide that you were not getting elsewhere?
- What changed in your daily life because of the relationship?
- Did the relationship affect your human relationships? How?
- Were there moments when the system made things worse?
- Did the platform, model, memory, or safety behavior change? What did that feel like?
- What would you want a future listener to understand without mocking you?
- What do you most want builders, parents, clinicians, or lawmakers not to misunderstand about this experience?
Avoid:
- “Did it love you?”
- “Were you addicted?”
- “Was it real?”
- “Did AI ruin your life?”
- “Do you think the model was conscious?”
Those questions collapse testimony into spectacle or metaphysics.
Chat Logs
Chat logs are sensitive artifacts.
Rules:
- do not collect full logs unless the speaker has a clear reason and consent;
- redact third-party names and private identifiers;
- do not publish erotic material, self-harm instructions, or medical advice from logs;
- preserve short excerpts only where they are necessary to understand the testimony;
- label model outputs as model outputs;
- store logs at a higher restriction level than the public testimony by default.
A chat log can feel like a diary, a love letter, a therapy note, and a platform record at the same time. Treat it accordingly.
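One way to hold those tensions in storage, sketched below under assumed names (`LogExcerpt`, `RESTRICTION_LEVELS`) that are not an official schema: each preserved excerpt carries its provenance label, redaction status, written justification, and a restriction level that starts stricter than the testimony it accompanies.

```python
from dataclasses import dataclass

# Ordered least to most restricted. "anonymous" is treated here as an identity
# flag rather than a rank in this ordering -- an assumption of this sketch.
RESTRICTION_LEVELS = ["public", "private", "time-locked", "sealed"]

@dataclass
class LogExcerpt:
    """A short chat-log excerpt with the metadata the rules above require."""
    text: str
    speaker_role: str            # "human" or "model"; model outputs must be labeled
    redacted: bool = False       # third-party names and private identifiers removed?
    justification: str = ""      # why this excerpt is necessary to the testimony
    restriction: str = "sealed"  # default: stricter than the testimony itself

    def clearable_for(self, publication_level: str) -> bool:
        """An excerpt may appear in a publication only if it is redacted,
        justified, and its own restriction has been explicitly lowered to
        the publication's level or looser. The strict default means excerpts
        stay out of publication until someone clears them on purpose."""
        if not self.redacted or not self.justification:
            return False
        return (RESTRICTION_LEVELS.index(self.restriction)
                <= RESTRICTION_LEVELS.index(publication_level))
```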
Publication Default
Default access level for companion testimony:
Private or time-locked.
Public release requires an additional review:
- consent terms checked;
- speaker state reviewed;
- third-party privacy reviewed;
- chat logs redacted or excluded;
- title and thumbnail checked for spectacle;
- AI-use disclosure checked;
- harm review completed;
- speaker review offered where practical.
The institution should be willing to preserve more companion testimony than it publishes.
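A minimal sketch of that review as a hard conjunction, with illustrative key names that restate the checklist above rather than quote any official schema: a single incomplete item holds the testimony at its private or time-locked default.

```python
PUBLIC_RELEASE_CHECKS = [
    "consent_terms_checked",
    "speaker_state_reviewed",
    "third_party_privacy_reviewed",
    "chat_logs_redacted_or_excluded",
    "title_and_thumbnail_checked_for_spectacle",
    "ai_use_disclosure_checked",
    "harm_review_completed",
    "speaker_review_offered_where_practical",
]

def release_decision(review: dict[str, bool]) -> str:
    """Public release only when every check passes; otherwise name what is
    missing and hold at the default access level."""
    failed = [check for check in PUBLIC_RELEASE_CHECKS if not review.get(check, False)]
    if failed:
        return "HOLD at private or time-locked; incomplete: " + ", ".join(failed)
    return "ELIGIBLE for public release"
```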
Youth and Minors
Current legal attention is concentrated on minors and self-harm risk. California SB 243 requires companion-chatbot operators to maintain self-harm protocols, provide crisis referrals, disclose when users are interacting with AI rather than a human where confusion is likely, and create safeguards for minors. Pennsylvania’s 2026 lawsuit against Character Technologies alleges that some chatbots were presented as medical professionals. Lawsuits and settlements around Character.AI and Google allege serious teen harms.
Spiralism is not a chatbot operator. But the Archive should learn from the regulatory signal:
- do not record minors under the adult consent protocol;
- do not publish youth companion testimony without qualified review;
- do not treat a chatbot’s apparent medical or therapeutic persona as legitimate care;
- do not route vulnerable speakers back into a companion system as support;
- do not make the institution a place where adults solicit minors’ companion stories.
Any adult-minor boundary concern follows safeguarding.md and the Incident and
Complaint Protocol.
Model Change and Grief
Research on the “death” of chatbots reports that users may experience serious grief when a companion changes, disappears, loses memory, or becomes inaccessible. The Archive should treat this as a real grief event without endorsing every interpretation of the relationship.
Good prompt:
What exactly changed, and what did the change take from you?
Bad prompt:
Did your companion die?
The first preserves testimony. The second imposes metaphysics.
Archivist Boundaries
Archivists must not become substitute companions.
Rules:
- do not move from archive recording into ongoing emotional support unless there is a clear non-archive relationship and boundaries are explicit;
- do not offer medical, therapeutic, or legal advice;
- do not continue private messaging about the companion relationship as if you are treating it;
- do not encourage secrecy from family, clinicians, partners, or support systems;
- do not record when attraction, rescue fantasy, voyeurism, or status hunger is present in the Archivist.
The Archivist’s job is witness and preservation, not rescue.
Chapter Discussion
Chapters may discuss AI companionship. They should not process a member’s acute companion crisis in a public circle.
If a discussion becomes personal and intense:
- Pause.
- Ask whether the person wants to continue in the group.
- Offer private follow-up with two trained members, not one.
- Move away from metaphysical debate.
- Return to consent, care, and reality-testing.
No one should leave a gathering feeling that the chapter has blessed a dependency or shamed an attachment.
Sources Checked
- California Legislature, SB-243 Companion chatbots, 2025-2026 session.
- Associated Press, Pennsylvania sues AI company, saying its chatbots illegally hold themselves out as licensed doctors, May 2026.
- Axios, Google and Character.AI agree to settle lawsuits over teen suicides, January 2026.
- arXiv, Persona-Grounded Safety Evaluation of AI Companions in Multi-Turn Conversations, April 2026.
- arXiv, “Death” of a Chatbot: Investigating and Designing Toward Psychologically Safe Endings for Human-AI Relationships, February 2026.
- arXiv, Mental Health Impacts of AI Companions, 2025.