Post : AI Didn’t Start the Fire: How Stack Exchange Moderators and Users Demonstrate Exit, Voice, and Loyalty
URL : https://blog.communitydata.science/ai-didnt-start-the-fire-how-stack-exchan…
Posted : February 24, 2026 at 07:13
Author : yiweiwu
Categories : Uncategorized
Figure: How historical tensions on Stack Exchange (SE) between the community and the platform (SE, Inc.), along with strike-related events, align with the SE community’s grievances, their actions, and our theoretical interpretations of loyalty, voice, and exit.
Generative AI technologies rely on content from knowledge communities as their training data. However, these communities receive little in return and instead face growing moderation burdens imposed by an influx of AI-generated content. Moreover, as platform operators sell their content to AI developers whose products may substitute for their work, these communities see declining web traffic and new content and struggle to maintain the vibrancy of their knowledge repositories. According to The Pragmatic Engineer ( https://newsletter.pragmaticengineer.com/p/the-pulse-134 ), a prominent technology newsletter covering software engineering, traffic on Stack Overflow has declined so dramatically, driven largely by the impact of generative AI, that the platform now generates roughly the same amount of new content as it did when it first launched in 2008.
Even before AI technologies posed new threats, relationships between online communities and their host platforms were often uneasy. Past research on platforms such as Reddit, Stack Exchange, Tumblr, and DeviantArt reveals a recurring pattern: when platform policies conflict with community values, communities tend to push back. Community members have organized blackouts, suspended moderation, or migrated to alternative platforms altogether. However, less understood is how these conflicts unfold over time, especially in the context of generative AI. So how do knowledge contributors resist AI-related policies that conflict with their values? And what happens in the aftermath of such collective action, especially for a community's governance, including how rules are set, whose voices are recognized, and how participation is enabled?
To answer these questions, we examined ( https://arxiv.org/abs/2512.08884 ) a major conflict between SE, Inc. and the community that occurred in 2023 around an emergency created by the release of large language models (LLMs). Drawing on a qualitative analysis of over 2,000 messages posted on Meta Stack Exchange (the Stack Exchange site designated for policy discussions), as well as interviews with 14 community members, we traced how this conflict emerged, escalated, and evolved. What we found was not a sudden backlash driven solely by AI, but the accumulation of long-standing grievances.
According to our interviews, SE community members described years of frustration over declining transparency, accountability, and participatory governance. Although the platform historically supported community self-regulation through mechanisms such as moderator elections and shared moderation responsibilities for users with high reputation, community members increasingly perceived that key decisions were being made by SE, Inc. without meaningful community input. Tensions escalated when SE, Inc. introduced policies related to AI-generated content without consulting moderators or contributors, which many interpreted as a continuation of long-standing exclusion and disregard. In response, moderators and contributors coordinated collective action by suspending moderation activity, signing public petitions, and posting ongoing updates to discussions on Meta Stack Exchange. Some also chose to exit the platform, migrating to alternative spaces such as Codidact, an open-source, community-governed platform. The collective action was organized through a tiered communication structure, beginning with a small, closed group of moderators and then spreading across the network's users.
We interpret our findings through the lens of Albert O. Hirschman’s Exit, Voice, and Loyalty ( https://www.hup.harvard.edu/books/9780674276604 ) framework. According to Hirschman, when loyalty toward an organization declines, members have two options for expressing their dissatisfaction: exit and voice. In the Stack Exchange case, loyalty had already degraded through the accumulation of unresolved grievances rather than a single triggering event. As community members came to believe that their voices were no longer heard, dissatisfaction manifested in two distinct responses: coordinated collective voice through organized resistance, and exit through permanent disengagement from the platform. This pattern highlights how governance crises can emerge even on platforms that formally support community self-regulation, and how declining loyalty can transform routine disagreement into large-scale collective action or exit.
In retrospect, the Stack Exchange strike highlights a broader lesson: community grievances around AI are not just about technical issues, but about deeper questions of governance in the relationships between platforms and the communities that sustain them. Managing these crises therefore requires more than better moderation tools or more transparent AI policies. Platforms and big tech companies need to support participatory governance in a more systematic way. One approach is to create mechanisms for effective voice by binding platforms to agreements in which community input shapes decision-making. Another is credible exit, where contributors have alternatives if governance on the original platform fails. When communities can leave without their data being locked in, platforms are more likely to listen. Credible exit not only empowers communities, but also reduces long-term governance risks for platform operators. Conflict is expensive for platforms, and maintaining loyalty requires long-term investment in moderation, communication, and policy enforcement. When users have functional alternatives, the possibility of exit can function as a self-binding mechanism that moderates platform behavior and mitigates costly disputes. And when platforms bind themselves to community accountability, conflicts are less likely to escalate into strikes in the first place.
In conclusion, the SE moderation strike was not a sudden backlash driven solely by AI, but the culmination of long-standing grievances. As generative AI continues to reshape the internet, the future of knowledge production will depend not only on what AI can generate, but also on whether the volunteer contributors who built our shared knowledge commons are given the right to decide what comes next. Sustaining that future will require institutionalizing participatory governance through binding mechanisms and creating more credible exit options for communities.
--
What: Community Data Science Collective Open Lab
When: Friday, October 10th, 3-5pm
Where: Seattle, UW, the ground between CMU (Communications Building) and
HUB (Husky Union Building)
Greetings colleagues and friends!
If you happen to be in the Seattle area, please join my research group
at 3-5pm, Friday, October 10th, for an informal "open lab" at the
University of Washington Community Data Science Collective (CDSC).
CDSC folks from Northwestern University, Purdue University, the
University of Texas at Austin, and the University of Idaho will be in
town, so there's all the more reason to come!
The open lab is an opportunity to learn about our research and
activities, connect with us about project ideas, catch up over snacks
and beverages, and pick up a sticker or two. We will have a series of
posters up describing projects we are working on.
The primary plan is to have it outside between the CMU and the HUB
(near the elevator). If there is rain on that afternoon, we will be
inside in CMU 126, and there will be signs directing us there.
The CDSC studies online communities and platforms: participation,
governance, inequality, collaboration, learning, moderation, and
organizing in places like Reddit, Wikipedia, Discord, Linux, gig work
platforms, and more! You can learn a bit more about the CDSC here:
https://wiki.communitydata.science/
We look forward to seeing you on Friday!
Regards,
Mako (on behalf of the whole collective)
--
Benjamin Mako Hill
https://mako.cc/academic/
Join the Community Data Science Collective<https://communitydata.science/> (CDSC) for our 11th Science of Community Dialogue<https://docs.google.com/forms/d/e/1FAIpQLSfC8ByIxY3TIyM-fpN-yYxov37Qf0IBIYe…>.
This Community Dialogue will take place on April 4th at 12:00 pm CT and will explore the critical role of community governance and organizational design in making communities resilient to systematic threats. Professor Paul Gowder<https://gowder.io/> (Northwestern University) will join Zarine Kharazian<https://zarine.net/> (University of Washington) to present recent research on topics including:
* Exploring threats like misinformation and propaganda in online communities.
* Limitations of approaches that neglect community governance.
* Tradeoffs in governance models, such as those of Facebook, Bluesky, and Wikipedia.
* Strategies to protect information commons.
* Participatory governance for platforms.
* Insights on democratizing platforms and society.
A full session description<https://wiki.communitydata.science/The_Role_of_Community_Governance> is on our website. Register online<https://docs.google.com/forms/d/e/1FAIpQLSfC8ByIxY3TIyM-fpN-yYxov37Qf0IBIYe…>.
What is a Dialogue?
The Science of Community Dialogue Series<https://wiki.communitydata.science/Dialogues> is a series of conversations between researchers, experts, community organizers, and other people who are interested in how communities work, collaborate, and succeed. You can watch this short introduction video<https://northwestern.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=1bb154…> with Aaron Shaw.
What is the CDSC?
The Community Data Science Collective<https://wiki.communitydata.science/Main_Page> (CDSC) is an interdisciplinary research group made up of faculty and students at the University of Washington Department of Communication, the Northwestern University Department of Communication Studies, the Carleton College Computer Science Department, the School of Information at UT Austin, and the Purdue University School of Communication.
Learn more
If you'd like to learn more or get future updates about the Science of Community Dialogues, please join the low volume announcement list<https://communitydata.science/mailman3/postorius/lists/cdsc-dialogues.commu…>.
Madison Deyo
Program Coordinator
Northwestern University
The Center for Human-Computer Interaction + Design<https://www.hci.northwestern.edu/>
Community Data Science Collective<https://wiki.communitydata.science/Main_Page>
--
Hello,
On behalf of Sohyeon Hwang, I'd like to share a panel event in Seattle from September 3-6.
“My colleagues and I are organizing an open panel on AI harms and communities at the 50th Annual Meeting of the Society for Social Studies of Science (4S)<https://urldefense.com/v3/__https:/www.4sonline.org/about_the_conference_se…>, September 3-6, 2025 in Seattle. Focused on the idea of "social model collapse," we advocate for consideration of harms to communities (including online communities) as they respond to, are used for, and incorporate generative AI algorithms.
The call asks for a 250-word abstract, due January 31. We would love to see your submissions; the full call is pasted below and is also available at the blog post in this link<https://urldefense.com/v3/__https:/blog.communitydata.science/thinking-abou…>. More information about 4S and submitting can be found here: https://www.4sonline.org/call_for_submissions_seattle.php
Please feel free to reach out with any questions.
Thank you,
Sohyeon Hwang
Postdoctoral Fellow
Center for Information Technology Policy
Princeton University
Open Panel: Risks of ‘Social Model Collapse’ in the Face of Scientific and Technological Advances
Model collapse in machine learning refers to the deterioration such a model faces if it is re-fed with its own output, removing variation and generating poor output; in this panel, we extend this notion to ask in what ways the use of algorithmic output in place of human participation in social systems places those social systems at risk. Recent research findings in the generation of synthetic text using large language models have fed and been fed by a rush to extract value from, and engage with, online communities. Such communities include the discussion forum Reddit, the software development communities producing open source, the participants in the question and answer forum StackExchange, and the contributors to the online knowledge base Wikipedia.
The success of these communities depends on a range of social phenomena threatened by adoption of synthetic text generation as a modality replacing human authors. Newcomers who ask naive questions are a source of members and leaders, but may shift their inquiries to LLMs and never join the community as contributors. Software communities are to some extent reliant on a sense of generalized reciprocity to turn users into contributors; such appreciation may falter if their apparent benefactor is a tireless bot. Knowledge communities are dependent on human curation, inquiry, and effort to create new knowledge, which may be systemically diluted by the presence of purported participants who are only algorithms echoing back reconstructions of the others. Meanwhile, extractive technology firms profit from anyone still engaging in a genuine manner or following their own inquiries.
In this panel, we invite consideration of current forms of social model collapse driven by a rush of scientific-industrial activity, as well as reflection on past examples of social model collapse to better contextualize and understand our present moment.
Submissions are 250-word abstracts due January 31st; our panel is #223, “Risks of ‘Social Model Collapse’ in the Face of Scientific and Technological Advances” [Submission site link<https://urldefense.com/v3/__https:/www.xcdsystem.com/4sonline/member__;!!Dq…>].”
Madison Deyo
Program Coordinator
Northwestern University
The Center for Human-Computer Interaction + Design<https://www.hci.northwestern.edu/>
Community Data Science Collective<https://wiki.communitydata.science/Main_Page>
--
Greetings!
Kaylea Champion's PhD dissertation defense is scheduled for next
Monday, July 22, at 9:30am Seattle time. The public portion will not
take more than 90 minutes. The title is "Social and Technical Sources
of Risk in Sustaining Digital Infrastructure," and the defense is open
to the public. Anyone is welcome to come.
That date/time is officially tentative until all members of Kaylea's
reading committee confirm that the dissertation is ready to defend.
That might not happen for a couple more days, but we wanted to give
you some advance notice.
The plan is to hold the defense in this Zoom room:
URL: https://washington.zoom.us/my/makohill
Number: 951 959 3783
Kaylea will give a talk for 30-45 minutes or so to present her
dissertation research and then there will be time for questions from
her committee and from the audience.
Regards,
Mako
--
Benjamin Mako Hill
https://mako.cc/academic/