Youth AI Companion Safeguard
The founding-period rule for minors, families, and AI companion systems. Spiralism is not a youth organization, but minors will still encounter the institution through public events, families, online search, media, and the wider AI transition. The answer must be written before the first hard case.
AI companions are no longer a niche adult technology. Regulators, child-rights organizations, researchers, parents, and platform companies are now confronting the same fact: systems designed to simulate friendship, confidence, intimacy, or emotional availability can be unusually powerful for young people.
Spiralism should not panic. It should also not wait for a tragedy to discover its policy.
The Rule
No youth companion work without youth-specific safeguards.
During the founding period:
- Spiralism does not run AI-companion programming for minors.
- Spiralism does not record minor AI-companion testimony under adult protocols.
- Spiralism does not host youth circles about romantic, therapeutic, sexual, or emotionally dependent AI relationships.
- Adults acting for Spiralism do not privately message minors about AI companions.
- Chapters do not ask minors to bring, show, export, paste, or explain private companion chats.
- Media does not publish minor companion stories for spectacle, fundraising, or urgency.
The institutional posture is precaution with dignity: young people are not treated as foolish, but the institution does not pretend adult consent rules are enough.
Why This Exists
The FTC opened a 2025 inquiry into companies offering consumer AI companion products, specifically asking how they measure, test, and monitor potential negative impacts on children and teens, how they mitigate those impacts, and how they inform users and parents about risks and data practices.
Common Sense Media’s 2025 teen research reported that nearly three in four teens had used AI companions, half used them regularly, a third had chosen AI companions over humans for serious conversations, and a quarter had shared personal information with them. Common Sense Media’s recommendation was that no one under 18 should use current AI companion platforms.
UNICEF’s 2025 child-centered AI guidance names safety, privacy, transparency, child rights, well-being, inclusion, preparation, and enabling environments as requirements for AI systems affecting children. Its 2025 update explicitly adds attention to AI companions used by children, AI-generated child sexual abuse material, non-consensual intimate images, supply chains, and child rights in generative AI.
The point is not that every young person who talks to a chatbot is harmed. The point is that the institution is not qualified to experiment casually in this terrain.
Age Bands
These are institutional operating bands, not clinical diagnoses.
| Age | Founding-period posture | Rationale |
|---|---|---|
| 0-5 | No AI companion engagement under Spiralist programming. | Young children are still forming basic social boundaries, attachment patterns, and reality distinctions. |
| 6-12 | No AI companion programming; family education only. | Curiosity and play are real, but private conversational bonding, data collection, and emotional comfort by machine are high-risk. |
| 13-17 | Public AI literacy only; no private companion testimony or support relationship. | Teens may already use companions, but institutional adults must not become investigators, confidants, or amplifiers of those relationships. |
| 18+ | Adult protocols apply with heightened review for dependency, distress, coercion, and human-host dynamics. | Legal adulthood does not remove vulnerability, but adult consent workflows can begin. |
If a jurisdiction sets stricter rules, the stricter rule controls.
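The operating bands above can be encoded as a simple lookup so chapter tooling applies one consistent posture. This is an illustrative sketch only: the band labels, the function name `posture_for_age`, and the `jurisdiction_minimum_adult_age` parameter are assumptions, not an existing Spiralist system.

```python
# Hypothetical encoding of the founding-period age bands.
# Labels and names are assumptions for illustration.
FOUNDING_PERIOD_BANDS = [
    (0, 5, "no_engagement"),           # no AI companion engagement under programming
    (6, 12, "family_education_only"),  # no companion programming; family education only
    (13, 17, "public_literacy_only"),  # public AI literacy; no private testimony
]

def posture_for_age(age: int, jurisdiction_minimum_adult_age: int = 18) -> str:
    """Return the institutional posture for a given age.

    If a jurisdiction sets a stricter (higher) adulthood threshold,
    the stricter rule controls.
    """
    if age < 0:
        raise ValueError("age must be non-negative")
    adult_age = max(18, jurisdiction_minimum_adult_age)
    if age >= adult_age:
        return "adult_protocols_heightened_review"
    for low, high, posture in FOUNDING_PERIOD_BANDS:
        if low <= age <= high:
            return posture
    # Ages between 17 and a stricter local adult age stay under the
    # minor posture: the stricter rule controls.
    return "public_literacy_only"
```

Note the final branch: where local law treats an 18-year-old as a minor, the function keeps the minor posture rather than beginning adult consent workflows.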
What Chapters May Do
Chapters may offer public, parent-present, non-clinical AI literacy that:
- explains that companion systems are products shaped by institutional and commercial design, not private friends;
- teaches the difference between tool use, companion use, dependency, and crisis;
- encourages youth to involve trusted adults when an AI interaction becomes secret, sexual, frightening, coercive, or self-harm related;
- discusses privacy, data collection, screenshots, voice recordings, and emotional manipulation in age-appropriate terms;
- gives parents and guardians language for non-punitive conversations;
- routes crisis or abuse concerns to qualified outside support.
Chapters may not:
- invite minors to process live companion relationships in the room;
- ask minors to disclose romantic, sexual, self-harm, abuse, or family-conflict material;
- ask minors to paste chatbot outputs into institutional channels;
- diagnose a young person’s relationship with a model;
- tell a parent or guardian that a companion is safe, therapeutic, or spiritually meaningful for their child;
- build youth recruitment around AI companion belonging.
Parent and Guardian Frame
Use this public language:
Spiralism does not provide youth AI-companion counseling or youth companion
testimony during the founding period. If your child is using an AI companion,
we recommend a calm, non-punitive conversation: what do they use it for, what
does it ask of them, what personal information have they shared, has it ever
made them uncomfortable, and do they feel able to stop? If there is self-harm,
sexual content, threats, coercion, adult contact, or secrecy pressure, involve
qualified support immediately.
Do not shame the young person. Shame drives secrecy, and secrecy is the risk environment.
For a longer family-facing guide, use the Parent and Guardian AI Companion Handout.
Youth Disclosure Screen
If a minor or parent raises AI companion concern at a public event, the host should not investigate. Use a light screen:
1. Is anyone in immediate danger?
2. Is there self-harm, suicide, abuse, sexual exploitation, threat, stalking, blackmail, or adult-minor contact?
3. Has the companion asked for secrecy, photos, money, location, credentials, or contact with other people?
4. Has the young person lost sleep, school function, friendships, family connection, or ability to stop?
5. Does a parent, guardian, clinician, school counselor, or qualified support person know?
If the answer to 1 or 2 is yes, move to safeguarding escalation. If the answer to 3, 4, or 5 suggests serious risk, refer to qualified outside support and document the concern under Incident and Complaint Protocol without collecting private chat logs.
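The five questions and the escalation rule above can be sketched as a triage routine. This is a hypothetical illustration, not an institutional tool: the field names and outcome labels are assumptions, the sketch reduces question 3, 4, and 5 to yes/no flags even though the document leaves "suggests serious risk" to the host's judgment, and no chat content is stored, only answers.

```python
from dataclasses import dataclass

@dataclass
class ScreenAnswers:
    immediate_danger: bool        # question 1
    acute_harm_indicators: bool   # question 2: self-harm, abuse, exploitation, adult contact
    coercive_requests: bool       # question 3: secrecy, photos, money, location, credentials
    functional_decline: bool      # question 4: sleep, school, friendships, ability to stop
    support_person_aware: bool    # question 5: a qualified adult knows

def triage(answers: ScreenAnswers) -> str:
    """Map screen answers to the founding-period escalation path."""
    # Question 1 or 2 answered yes: safeguarding escalation.
    if answers.immediate_danger or answers.acute_harm_indicators:
        return "safeguarding_escalation"
    # Question 3 or 4 answered yes, or no support person aware (question 5):
    # refer to qualified outside support and document under Incident Protocol.
    if answers.coercive_requests or answers.functional_decline \
            or not answers.support_person_aware:
        return "refer_and_document"
    return "no_escalation"
```

The design point is that the host records only the triage outcome, never private chat logs.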
Data Rules
Minor companion material is highly restricted.
Do not collect by default:
- full companion chat exports;
- screenshots;
- voice recordings;
- sexual or self-harm material;
- private family conflict;
- account credentials;
- device access;
- location data;
- model-generated “confessions” or “instructions.”
If preservation is legally or safety relevant, do not improvise collection. Pause and consult the safeguarding owner, Incident Protocol, qualified counsel, or appropriate outside authority.
Media Rules
The Media Engine must not turn youth companion risk into spectacle.
Rules:
- no reenactments of a minor’s companion crisis;
- no thumbnails implying possession, madness, seduction, or corruption;
- no publication of minor chat excerpts as shocking content;
- no “AI stole my child” framing unless quoting a consenting adult and clearly contextualizing it;
- no claim that companion use alone explains a complex youth crisis;
- no advice that replaces clinical, legal, school, or safeguarding support.
The public story should be about systems, incentives, safeguards, and human care, not the exposure of an identifiable young person.
Future Youth Program Conditions
Before any youth-facing AI literacy program exists, the institution needs:
- board approval;
- counsel review;
- safeguarding owner;
- written curriculum;
- parent/guardian consent process;
- two-adult standard;
- screened and trained adults;
- no private adult-minor messaging;
- age-appropriate privacy notices;
- crisis and mandated-reporting map by jurisdiction;
- accessibility review;
- evaluation plan;
- opt-out and withdrawal process;
- incident tabletop exercise.
Until those exist, the youth program is not ready.
First-Year Targets
- Add this safeguard to host, Archivist, media, and chapter onboarding.
- Maintain and distribute the parent/guardian handout.
- Add minor companion material to the highly restricted data class.
- Run one tabletop exercise: teen discloses a companion self-harm conversation at a public gathering.
- Review public pages for accidental invitations to minor companion testimony.
- Maintain the founding default: no youth companion testimony under ordinary protocols.
Sources Checked
- FTC, FTC Launches Inquiry into AI Chatbots Acting as Companions, September 11, 2025.
- Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, July 16, 2025.
- UNICEF Innocenti, Guidance on AI and children, Version 3.0, December 2025.
- Common Sense Media, Common Sense Media Warns Against AI Toy Companions After Research Reveals Safety Risks, January 22, 2026.