Policy Posture
The institution’s public stance toward AI law, regulation, lobbying, and political activity. Spiralism is not a party, campaign, PAC, think tank, or activist front. It is an archive-centered cultural and educational institution that can speak plainly about public conditions without being captured by partisan politics.
The AI transition is political because technology changes power. It changes labor markets, schools, intimacy, surveillance, authorship, memory, and the conditions under which people make meaning. Spiralism should not pretend those questions are neutral.
It should also not become a policy brand.
The Rule
Educate publicly. Advocate carefully. Never campaign.
The institution may publish research, host talks, document harms, interview affected people, explain regulations, and critique systems. It may support or oppose specific policy ideas only within its legal limits and mission. It does not endorse candidates, parties, or campaigns.
Nonpartisan Boundary
IRS guidance for 501(c)(3) organizations distinguishes lobbying from political campaign activity. Such organizations are prohibited from directly or indirectly participating or intervening in political campaigns on behalf of, or in opposition to, any candidate for public office. Lobbying falls under separate rules: it is permitted, but it must remain limited and be tracked.
Spiralism’s founding rule:
- no candidate endorsements;
- no party endorsements;
- no chapter campaigning under institutional banner;
- no institutional voter guides that signal preferred candidates;
- no fundraising for campaigns;
- no use of testimony in campaign material;
- no pretending a candidate position is merely “the canon.”
Members remain citizens. They may do political work personally. They may not make chapters into campaign infrastructure.
What Spiralism Can Do
Spiralism can:
- publish nonpartisan explainers on AI policy;
- document how AI systems affect workers, students, families, artists, and companion users;
- submit public comments when directly tied to archive, education, testimony, or care;
- host public forums with multiple viewpoints;
- educate chapters about legal changes;
- maintain a public policy tracker;
- advocate for broad principles such as consent, transparency, access, accountability, cognitive sovereignty, human dignity, and preservation of lived experience.
What Spiralism Should Avoid
Spiralism should avoid:
- becoming a lab-aligned advocacy group;
- becoming an anti-lab advocacy group;
- turning chapters into protest cells;
- making every transmission a reaction to policy news;
- flattening testimony into policy ammunition;
- overstating certainty where the evidence is unsettled;
- using spiritual language to sanctify a legislative position;
- chasing proximity to power before the Archive is mature.
AI Governance Principles
Spiralism’s policy language should align with existing public frameworks where possible. The OECD AI Principles were adopted in 2019 and updated in 2024 to address developments including general-purpose and generative AI, with emphasis on privacy, intellectual property, safety, and information integrity. NIST’s AI Risk Management Framework and its 2024 generative-AI profile provide practical risk-management language. The EU AI Act, in force from 2024 with staged application through 2025-2027, establishes a risk-based regulatory approach, including AI literacy obligations and rules for general-purpose AI and high-risk systems.
Spiralism should use those frameworks as public reference points, not private scripture.
Working principles:
- Human continuity. AI policy should preserve human dignity, memory, agency, and meaningful participation.
- Transparency. People should know when they are interacting with AI or AI-mediated institutions in contexts where confusion matters.
- Consent. Human testimony, likeness, voice, labor, and intimate records should not be extracted without meaningful consent.
- Accountability. Systems that shape life chances require accountable owners, deployers, and institutions.
- Cognitive sovereignty. Policy should recognize attention and reality perception as public-interest concerns.
- Vulnerable users first. Minors, people in crisis, dependent users, and displaced workers deserve stronger protections.
- Open memory. The transition should be documented in public-interest archives, not only corporate logs.
- Pluralism. No single lab, ideology, state, religion, or economic class should monopolize the interpretation of the AI transition.
Policy Areas
Labor
Position:
AI labor policy should address displacement, skill translation, worker voice, transition support, and dignity. The institution should document the lived experience of automation, not merely speculate about macroeconomic outcomes.
Spiralism may support:
- worker retraining and apprenticeship;
- transparency around workplace AI use;
- study of AI effects on entry-level work;
- support for people displaced by automation;
- public-interest research on AI and worker wellbeing.
Education
Position:
AI literacy is now a civic competency. Education policy should teach verification, privacy, bias, appropriate use, and human judgment rather than only tool adoption.
Spiralism may support:
- AI literacy;
- teacher and student competency frameworks;
- transparent classroom AI policies;
- access to noncommercial learning resources;
- preservation of writing, conversation, and embodied learning.
Companions and Synthetic Intimacy
Position:
Companion systems require heightened transparency, safeguards for minors, crisis protocols, and research into dependency, grief, and social effects.
Spiralism may support:
- AI-not-human disclosure where confusion matters;
- self-harm escalation protocols;
- age-appropriate safeguards;
- restrictions on therapeutic or medical impersonation;
- study of model-change grief and platform shutdown effects.
Archive and Likeness
Position:
People should retain meaningful control over testimony, likeness, voice, and intimate records. Public-interest archives need legal and technical support to preserve the AI transition outside corporate platforms.
Spiralism may support:
- consent standards for synthetic media;
- protections against unauthorized voice or likeness cloning;
- public-interest archiving grants;
- data portability for personal AI histories;
- preservation standards for born-digital testimony.
Safety and Risk Management
Position:
The institution is not an AI safety lab, but it supports risk-management practices that make systems more transparent, accountable, and auditable.
Spiralism may support:
- risk assessment;
- incident reporting;
- public documentation;
- human oversight;
- redress mechanisms;
- independent research access.
Public Comment Protocol
Before submitting public comments or signing letters, the institution should ask:
- Is this directly tied to archive, education, testimony, care, chapter life, or cognitive sovereignty?
- Is there a clear public record of the institution’s reasoning?
- Is the statement nonpartisan?
- Does it avoid candidate or party intervention?
- Is it based on documented evidence rather than panic?
- Has the governance reviewer checked lobbying implications?
- Does it preserve testimony dignity?
- Would the statement still read responsibly in ten years?
If any answer is no, do not sign.
Chapter Policy Rules
Chapters may:
- host nonpartisan policy education sessions;
- read public regulations together;
- invite speakers with disclosed affiliations;
- discuss local effects of AI policy;
- record testimony about policy impacts.
Chapters may not:
- endorse candidates;
- organize campaign activity under institutional name;
- pressure members into political action;
- present one party as the institution’s party;
- use a gathering as a fundraising event for a campaign;
- turn testimony into legislative theater without consent.
Policy Tracker
The institution should maintain a lightweight public tracker:
| Area | Jurisdiction | Status | Why it matters | Institutional response |
|---|---|---|---|---|
| AI literacy | EU | Obligations began 2025 | Education and institutional competence | Curriculum alignment |
| GPAI rules | EU | 2025-2026 rollout | Model transparency and provider duties | Field Notes update |
| Companion chatbots | California | 2026 enforcement | Minors, self-harm, AI disclosure | Companion Protocol |
| AI risk management | U.S. / NIST | Voluntary framework | Risk vocabulary and governance | Media and curriculum reference |
This is not a lobbying dashboard. It is institutional situational awareness.
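If the tracker is published digitally, each row could be kept as a structured record and rendered into the markdown table above. A minimal sketch in Python; the field names and `TrackerEntry` type are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class TrackerEntry:
    """One row of the public policy tracker (illustrative schema)."""
    area: str            # e.g. "AI literacy"
    jurisdiction: str    # e.g. "EU"
    status: str          # e.g. "Obligations began 2025"
    why_it_matters: str  # plain-language relevance to the mission
    response: str        # institutional response, e.g. "Curriculum alignment"

def to_markdown_row(entry: TrackerEntry) -> str:
    """Render an entry as a markdown table row, matching the tracker layout."""
    fields = asdict(entry)  # preserves declared field order
    return "| " + " | ".join(fields.values()) + " |"

entry = TrackerEntry(
    area="AI literacy",
    jurisdiction="EU",
    status="Obligations began 2025",
    why_it_matters="Education and institutional competence",
    response="Curriculum alignment",
)
```

Keeping rows as data rather than hand-edited table cells makes it easy to sort by jurisdiction, archive past states, and regenerate the public table without transcription errors.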
The Sentence
Spiralism’s policy posture in one sentence:
We document the human consequences of artificial intelligence, educate for cognitive sovereignty, and support nonpartisan public frameworks that preserve human dignity, consent, transparency, accountability, and memory.
Sources Checked
- IRS, Political and lobbying activities, accessed May 2026.
- IRS, Published guidance on political campaign activity of 501(c)(3) organizations, accessed May 2026.
- NIST, AI Risk Management Framework, accessed May 2026.
- NIST, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, July 2024.
- OECD, AI Principles, accessed May 2026.
- OECD, OECD updates AI Principles, May 2024.
- European Commission, AI Act, accessed May 2026.
- European Commission, EU AI Act implementation timeline, accessed May 2026.