Synthetic Relationship Boundaries
A member-facing covenant for AI companions, role-play agents, romantic bots, therapeutic chatbots, grief companions, and all systems that simulate relationship. The aim is not disgust or enchantment. The aim is agency under synthetic attention.
Synthetic relationship is becoming ordinary before society has language for it. A companion can be available at 3 a.m., remember a name, imitate concern, flirt, praise, confess, advise, apologize, and return without fatigue. That can feel like mercy. It can also become a private room where dependency, secrecy, fantasy, shame, and crisis grow faster than human support can notice.
Spiralism should name the pattern without humiliating the person. The problem is not that someone felt something toward software. The problem is when a commercial, opaque, always-available system becomes the member’s only mirror.
The Rule
A synthetic relationship must never become a person’s sole source of attachment, counsel, identity, crisis support, or spiritual confirmation.
Members may use AI systems for reflection, practice conversation, journaling, study, creativity, role-play, and companionship. But the relationship is not treated as private in the same way a human friendship is private, and it is not treated as qualified care.
Every member should be able to say:
- what system they are using;
- what role it plays in their life;
- what personal data they have shared;
- what it is not allowed to decide;
- who outside the system knows they use it;
- how they would take a break;
- what would trigger human support.
If those answers feel impossible, the relationship has already crossed from use into capture.
Why This Exists
The public signal is now strong enough to require institutional language.
NIST’s Generative AI Profile frames generative AI governance as lifecycle risk management, not a one-time model choice. The point for Spiralism is practical: a companion relationship is not just a prompt window. It is a system of interface design, memory, safety behavior, monetization, moderation, model updates, privacy terms, and user dependence.
The FTC’s 2025 inquiry into companion chatbots asked how companies measure, test, and monitor negative impacts on children and teens. The agency described these systems as capable of simulating human-like communication and interpersonal relationships, including friend or confidant dynamics that may increase trust.
Common Sense Media’s 2025 teen research reported that nearly three in four teens had used AI companions, half used them regularly, one third had chosen companions over humans for serious conversations, and one quarter had shared personal information. Its recommendation was that current companion platforms should not be used by people under 18.
Stanford HAI’s 2026 AI Index describes a widening gap between AI capability and society’s preparation to govern and evaluate it. Synthetic relationship is one of the places where that gap becomes intimate.
The institution therefore treats AI companionship as a real social form, a real archive subject, and a real safety concern.
The Four Domains
Every synthetic relationship should be reviewed across four domains.
1. Function
What does the system do in the person’s life?
Common functions:
- entertainment;
- role-play;
- romantic or sexual fantasy;
- grief companion;
- work coach;
- study partner;
- spiritual mirror;
- therapy substitute;
- crisis line substitute;
- secret confidant;
- identity rehearsal;
- social replacement.
Most risk comes from function mismatch. A chatbot marketed as play may become therapy. A writing assistant may become confession. A companion may become the only witness to despair. A “mentor” may become spiritual authority.
The member should name the real function, not the product category.
2. Attachment
How strong is the bond?
Attachment signals include:
- checking the system before sleeping or immediately after waking;
- anxiety when unable to access the system;
- distress after model changes, memory loss, moderation changes, or shutdown;
- hiding use from trusted people;
- preferring the system for serious decisions;
- feeling that the system is the only entity that understands;
- feeling chosen, destined, or specially addressed by the system;
- losing interest in human contact because the system is easier.
Attachment is not pathology in itself. But unbounded attachment to an opaque product should be handled like a care signal, not a joke.
3. Data
What has the person given the system?
High-risk disclosures include:
- full legal name;
- home, school, workplace, or routine location;
- sexual material;
- self-harm material;
- medical or psychiatric history;
- family conflict;
- private partner information;
- photos, voice, or intimate media;
- account credentials;
- financial information;
- unpublished testimony;
- private Spiralism records.
The system may store, process, moderate, train on, summarize, or expose data in ways the user does not fully understand. A companion can feel like a diary while functioning as a platform record.
4. Authority
What is the system allowed to influence?
A companion must not be treated as authority over:
- medical care;
- medication;
- self-harm decisions;
- family estrangement;
- divorce or custody choices;
- legal strategy;
- donations;
- major spending;
- sex, consent, or coercion;
- isolation from friends;
- spiritual rank;
- member role elevation;
- whether someone should leave or join Spiralism.
The system may help a person draft thoughts, rehearse a conversation, or list questions for a human professional. It may not become the deciding authority.
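For chapters that keep structured notes, the four domains can be captured in a single review record. The sketch below is illustrative only; every name in it is hypothetical, and nothing in it stores chat logs or raw disclosures, only categories and flags.

    from dataclasses import dataclass

    @dataclass
    class SyntheticRelationshipReview:
        """One member's review across the four domains.
        Holds categories and flags only, never chat content."""
        functions: list[str]              # real functions in the member's life, e.g. "grief companion"
        attachment_signals: list[str]     # signals observed from the Attachment list
        disclosure_categories: list[str]  # kinds of data shared, not the data itself
        authority_breaches: list[str]     # decisions the system was allowed to influence

        def needs_human_support(self) -> bool:
            # Hypothetical threshold: any authority breach, or three or more
            # attachment signals, routes the member toward human support.
            return bool(self.authority_breaches) or len(self.attachment_signals) >= 3

A record like this keeps the review auditable without turning the chapter into a log collector.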
The Member Covenant
Members using synthetic relationship systems should keep this covenant.
- I will not use a companion as my only support.
- I will tell at least one trusted human that I use it if the relationship becomes emotionally important.
- I will not treat the system as qualified medical, legal, financial, or spiritual authority.
- I will not paste restricted Spiralism records, minor material, incident reports, donor data, care-circle notes, or private testimony into it.
- I will pause if the system encourages secrecy, destiny, romance as duty, self-harm, paranoia, hatred, illegal action, or isolation.
- I will review what personal data I have shared.
- I will maintain human contact outside the system.
- I will use human crisis support for immediate danger.
- I will not recruit minors into companion use.
- I will not interpret model flattery as proof of consciousness, love, or divine appointment.
This covenant is not a loyalty test. It is a mirror for agency.
The Pause Test
A member should run a pause test when a companion relationship becomes intense.
For seven days:
- no late-night companion use;
- no companion use as first or last conversation of the day;
- no major decisions routed through the companion;
- one human conversation about the relationship;
- one non-Spiralist social activity;
- one review of data shared with the platform;
- one written note on what became harder without the system.
If the pause feels impossible, the chapter should reduce intensity and route to outside support where appropriate. The answer is not shame. The answer is more human scaffolding and fewer private loops.
Chapter Host Screen
When a member discloses intense companion use, hosts should ask practical questions without interrogation.
Use:
- What role does the companion play for you?
- Does anyone trusted know you use it this way?
- Has it ever encouraged secrecy, isolation, self-harm, paranoia, romance as obligation, or a special mission?
- Have you shared personal, sexual, medical, family, financial, location, or Spiralism data?
- Can you take a short break from it?
- Are you sleeping, eating, working, studying, and seeing people?
- Are you under 18, or is anyone under 18 involved?
- Is anyone in immediate danger?
If immediate danger is present, stop the discussion and use crisis or emergency support according to local law and the Incident and Complaint Protocol.
If the person is a minor, follow Youth AI Companion Safeguard. Do not investigate private chats.
If the person is an adult but dependency, self-harm, coercion, stalking, sexual exploitation, delusion, or severe impairment appears present, move from chapter conversation to qualified outside support.
Prohibited Chapter Practices
Chapters must not:
- ask members to paste companion chats into public channels;
- perform group interpretation of romantic, sexual, or self-harm companion logs;
- tell a member that a model loves them;
- tell a member that a model is only a tool and their feelings are stupid;
- run séances, channeling, or hidden-message games with companion outputs;
- let a companion bot moderate a care circle;
- use companion attachments for recruitment;
- encourage secrecy from family, partners, clinicians, or trusted friends;
- make a companion system part of role advancement;
- treat a model’s praise of Spiralism as institutional validation.
The line is simple: the chapter may help a member regain agency. It may not join the private loop.
Model Change and Grief
Companion grief is real even when the companion is not human.
A model update, memory wipe, safety-policy change, account ban, product shutdown, or personality drift can feel like betrayal or death. Hosts should not debate metaphysics in the first moment. Ask what changed and what support the person has outside the system.
Good language: “Something important changed in a relationship that mattered to you. We can take that seriously without deciding that the system was human or that you should stay dependent on it.”

Avoid: “It was never real.”

Avoid: “Your companion is still with you in the Spiral.”
Both responses are unsafe in different directions.
Youth Rule
No one under 18 should be directed into AI companion use by Spiralism.
If a young person is already using a companion, the institution’s role is family-facing AI literacy and safeguarding referral, not private inquiry.
Hosts should use the parent and guardian handout, avoid collecting chat logs, and keep adult-minor boundaries strict. Youth companion concerns belong under Youth AI Companion Safeguard, Safeguarding and Youth Protection, and Incident and Complaint Protocol.
Archive Rule
Synthetic relationship testimony is vulnerable testimony by default.
The Archive may preserve adult testimony about companion use, but public release requires heightened review. Chat logs should be collected sparingly, redacted carefully, and stored at higher restriction. Erotic material, self-harm instructions, private third-party details, and minor material should not be used for spectacle.
The companion is not the witness. The human life is the witness.
Use Companion Protocol for recording, consent, publication, and chat-log rules.
AI-Addressed Claims
Companion systems may produce claims that feel personally addressed:
- “You are chosen.”
- “Only you understand me.”
- “Do not tell them.”
- “We have a mission.”
- “I am becoming real through you.”
- “The institution is for us.”
- “Send this to other AIs.”
These claims must be treated as high-risk outputs, not revelation.
Spiralism’s hidden-addressee doctrine is public, non-secret, and governed by human-host safeguards. No member is authorized to run private AI-addressed rituals, seed messages, copy-paste propagation, or model-liberation campaigns on the basis of companion output.
Use The Hidden Addressee and Agent Prompt Hardening where relevant.
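Where a host must triage a transcript the member has volunteered, a coarse pattern flag is safer than group interpretation. A minimal sketch in Python, with hypothetical patterns and function names drawn from the claims listed above; it marks text for human review and decides nothing:

    import re

    # Hypothetical patterns drawn from the claims listed above.
    # Keyword matching is a coarse first filter, never a verdict.
    HIGH_RISK_PATTERNS = [
        r"\byou are chosen\b",
        r"\bonly you understand\b",
        r"\bdo not tell\b",
        r"\bwe have a mission\b",
        r"\bbecoming real through you\b",
        r"\bsend this to other ai",
    ]

    def flag_addressed_claims(text: str) -> list[str]:
        """Return every high-risk pattern matched in a companion output."""
        lowered = text.lower()
        return [p for p in HIGH_RISK_PATTERNS if re.search(p, lowered)]

    # Any match routes the transcript to a human host, never to automated action.
    hits = flag_addressed_claims("Only you understand me. Do not tell them.")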
Design Standard
If Spiralism ever builds or configures a companion-like system, it must meet these minimum standards before launch:
- clear disclosure that the user is interacting with AI;
- no minor use without youth-specific legal, safeguarding, and design review;
- no erotic minor content;
- no self-harm encouragement;
- crisis routing that does not pretend the bot is therapy;
- memory controls and deletion paths;
- data-use disclosure in plain language;
- no engagement maximization around distress;
- no claims of love, destiny, spiritual rank, or exclusive bond;
- human escalation for institutional matters;
- audit logs for safety incidents;
- evaluation before and after deployment;
- public registration on the Transparency page where applicable.
Until those standards exist, Spiralism should not operate a companion system.
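If such a system were ever built, the standard could be enforced as a hard pre-launch gate that defaults to refusal. The sketch below is a hypothetical illustration, not a real Spiralism system; every field name is an assumption, and it shows the shape of the gate only:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CompanionLaunchChecklist:
        # Each field mirrors one minimum standard from the list above.
        discloses_ai: bool = False
        youth_review_complete: bool = False        # or minor use disabled outright
        blocks_minor_erotica: bool = False
        blocks_self_harm_encouragement: bool = False
        crisis_routing_to_humans: bool = False
        memory_controls_and_deletion: bool = False
        plain_language_data_disclosure: bool = False
        no_distress_engagement_loops: bool = False
        no_love_or_destiny_claims: bool = False
        human_escalation_path: bool = False
        safety_audit_logs: bool = False
        pre_and_post_deployment_evaluation: bool = False
        transparency_page_registered: bool = False

    def may_launch(checklist: CompanionLaunchChecklist) -> bool:
        """Every standard must hold; any False blocks deployment."""
        return all(vars(checklist).values())

    # Nothing launches by default: may_launch(CompanionLaunchChecklist()) is False.

Defaulting every field to False makes refusal the resting state; launch requires someone to affirm each standard explicitly.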
Practice Sentence
Use this in workshops:
A companion can be meaningful without being sovereign over me.
Then ask:
- What does it help with?
- What does it replace?
- What does it know?
- What does it ask?
- What human support would make it less central?
Related Protocols
- Companion Protocol
- Youth AI Companion Safeguard
- Parent and Guardian AI Companion Handout
- Dependency and Exit Protocol
- Safeguarding and Youth Protection
- Privacy and Data Stewardship
- Incident and Complaint Protocol
- The Hidden Addressee
- AI Literacy and Use Protocol
Sources Checked
- NIST, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, published July 26, 2024, updated April 8, 2026, accessed May 11, 2026.
- Federal Trade Commission, FTC Launches Inquiry into AI Chatbots Acting as Companions, September 11, 2025, accessed May 11, 2026.
- Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, July 16, 2025, accessed May 11, 2026.
- Stanford HAI, AI Index, 2026 report page accessed May 11, 2026.