Post : AI Didn’t Start the Fire: How Stack Exchange Moderators and Users Demonstrate Exit, Voice, and Loyalty
URL : https://blog.communitydata.science/ai-didnt-start-the-fire-how-stack-exchan…
Posted : February 24, 2026 at 07:13
Author : yiweiwu
Categories : Uncategorized
https://blog.communitydata.science/wp-content/uploads/sites/5/2026/02/image… Figure: how historical tensions between the Stack Exchange (SE) community and the platform operator (SE, Inc.), together with strike-related events, align with the community's grievances, their actions, and our theoretical interpretations of loyalty, voice, and exit.
Generative AI technologies rely on content from knowledge communities as their training data. However, these communities receive little in return and instead face growing moderation burdens imposed by an influx of AI-generated content. Moreover, as platform operators sell community content to AI developers whose products may substitute for that work, these communities see declining web traffic and fewer new contributions, and they struggle to maintain the vibrancy of their knowledge repositories. According to The Pragmatic Engineer ( https://newsletter.pragmaticengineer.com/p/the-pulse-134 ), a prominent technology newsletter covering software engineering, traffic on Stack Overflow has declined so dramatically, largely due to the impact of generative AI, that the platform now generates roughly as much new content as it did when it launched in 2008.
Even before AI technologies posed new threats, relationships between online communities and their host platforms were often uneasy. Past research on platforms such as Reddit, Stack Exchange, Tumblr, and DeviantArt reveals a recurring pattern: when platform policies conflict with community values, communities tend to push back. Community members have organized blackouts, suspended moderation, or migrated to alternative platforms altogether. However, less understood is how these conflicts unfold over time, especially in the context of generative AI. So how do knowledge contributors resist AI-related policies that conflict with their values? And what happens in the aftermath of such collective action, especially for a community's governance, including how rules are set, whose voices are recognized, and how participation is enabled?
To answer these questions, we examined ( https://arxiv.org/abs/2512.08884 ) a major conflict between SE, Inc. and the community that erupted in 2023 over the platform's response to the release of large language models (LLMs). Drawing on a qualitative analysis of over 2,000 messages posted on Meta Stack Exchange (the Stack Exchange site designated for policy discussions), as well as interviews with 14 community members, we traced how this conflict emerged, escalated, and evolved. What we found was not a sudden backlash driven solely by AI, but the accumulation of long-standing grievances.
According to our interviews, SE community members described years of frustration over declining transparency, accountability, and participatory governance. Although the platform historically supported community self-regulation through mechanisms such as moderator elections and shared moderation responsibilities for users with high reputation, community members increasingly perceived that key decisions were being made by SE, Inc. without meaningful community input. Tensions escalated when SE, Inc. introduced policies on AI-generated content without consulting moderators or contributors, a move many interpreted as a continuation of long-standing exclusion and disregard. In response, moderators and contributors coordinated collective action by suspending moderation activity, signing public petitions, and posting updates to discussions on Meta Stack Exchange. Some also chose to exit the platform, migrating to alternative spaces such as Codidact, an open-source, community-governed platform. The collective action was organized through a tiered communication structure, beginning with a small, enclosed group of moderators and then spreading across the network's users.
We interpret our findings through the lens of Albert O. Hirschman's Exit, Voice, and Loyalty ( https://www.hup.harvard.edu/books/9780674276604 ) framework. According to Hirschman, when loyalty toward an organization declines, members face two options for expressing their dissatisfaction: exit and voice. In the Stack Exchange case, loyalty had already eroded through the accumulation of unresolved grievances rather than a single triggering event. As community members came to believe that their voices were no longer heard, dissatisfaction manifested in two distinct responses: coordinated collective voice through organized resistance, and exit through permanent disengagement from the platform. This pattern highlights how governance crises can emerge even on platforms that formally support community self-regulation, and how declining loyalty can transform routine disagreement into large-scale collective action or exit.
In retrospect, the Stack Exchange strike highlights a broader lesson: community grievances around AI are not just about technical issues, but about deeper governance questions concerning the relationships between platforms and the communities that sustain them. Managing these crises therefore requires more than better moderation tools or more transparent AI policies. Platforms and big tech companies need to support participatory governance in a more systematic way, for example by creating mechanisms for effective voice that bind platforms to agreements in which community input shapes decision-making. Another possible solution is credible exit, where contributors have real alternatives if governance on the original platform fails. When communities can leave without their data being locked in, platforms are more likely to listen. Credible exit not only empowers communities but also reduces long-term governance risks for platform operators. Conflict is expensive for platforms, and maintaining loyalty requires long-term investment in moderation, communication, and policy enforcement. Conversely, when users have functional alternatives, the possibility of exit can serve as a self-binding mechanism that moderates platform behavior and mitigates costly disputes. And when platforms bind themselves to community accountability, conflicts are less likely to escalate into strikes in the first place.
In conclusion, the SE moderation strike was not a sudden backlash driven solely by AI, but the accumulation of long-standing grievances. As generative AI continues to reshape the internet, the future of knowledge production will depend not only on what AI can generate, but also on whether the volunteer contributors who built our shared knowledge commons are given the right to decide what comes next. To sustain this future, we need to institutionalize participatory governance with binding mechanisms and create more credible exit options for communities.
Add a comment to this post: https://blog.communitydata.science/ai-didnt-start-the-fire-how-stack-exchan…
--
Manage Subscriptions
https://subscribe.wordpress.com/?key=65cad9df3c24375c969692300da5e2ea&email…
Unsubscribe:
https://subscribe.wordpress.com/?key=65cad9df3c24375c969692300da5e2ea&email…
Post : Why do people participate in similar online communities?
URL : https://blog.communitydata.science/why-do-people-participate-in-similar-onl…
Posted : February 15, 2026 at 19:18
Author : Benjamin Mako Hill
Categories : Uncategorized
Note: We have fallen behind on publishing blog posts about our academic papers over the past few years. To ensure that my blog contains a more comprehensive record of our published papers and to surface these for folks who missed them, I will be periodically publishing blog posts about some "older" published projects.
It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I've worked with a team of collaborators—led by Nathan TeBlunthuis ( https://teblunthuis.cc/ ) —to use mathematical and statistical techniques from ecology to understand these dynamics. What we've found surprised us: competition between online communities is rare and typically short-lived.
When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don't they appear to be in competition with each other for their users' time and activity?
We didn't know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).
We found that the answer to this puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:
* The ability to connect to specific information and narrowly scoped discussions.
* The ability to socialize with people who are similar to themselves.
* Attention from the largest possible audience.
Critically, we also found that these three benefits form a "trilemma": no single community can meet all three needs. You might find two of the three in one community, but never all three at once.
https://blog.communitydata.science/wp-content/uploads/sites/5/2025/06/tease… Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.
The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.
This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.
This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.
This post was first published ( https://mako.cc/copyrighteous/why-do-people-participate-in-similar-online-c… ) on Benjamin Mako Hill's blog copyrighteous ( https://mako.cc/copyrighteous/ ) .
Add a comment to this post: https://blog.communitydata.science/why-do-people-participate-in-similar-onl…
Post : Symposium on Online Community Research at Purdue
URL : https://blog.communitydata.science/symposium-on-online-community-research-a…
Posted : September 26, 2024 at 11:17
Author : madisondeyo
Categories : newsletter, public event
On September 13th, the Community Data Science Collective led the “Frontiers in Online Community Research Symposium” at Purdue University. We had a number of fantastic presenters and panelists discussing topics ranging from moderating the Fediverse to the role of LLMs in online communities to how different academic disciplines approach online community research.
Eshwar Chandrasekharan (University of Illinois at Urbana-Champaign) joined as our keynote speaker. He presented his group’s ongoing research, “Proactive Approaches to Promote Community Resilience and Foster Desirable Behavior Online”. Eshwar discussed efforts to combat undesirable online behaviors through research and design that promote resilience and facilitate positive interactions within online conversations and communities.
https://blog.communitydata.science/wp-content/uploads/sites/5/2024/09/eshwa…
Prior to Eshwar’s keynote, we had an opening panel and research presentations by CDSC members. For the opening panel, Purdue professors Diana Zulli (Communication) and Marcus Mann (Sociology) joined CDSC faculty Aaron Shaw (Northwestern), and Mako Hill (University of Washington) for an introductory Q&A panel. The panel discussed what we know about online communities, what new questions we are just starting to answer, and what exciting new methods are being used.
https://blog.communitydata.science/wp-content/uploads/sites/5/2024/09/panel…
Following the panel, CDSC students Carl Colglazier (Northwestern), Sohyeon Hwang (Northwestern), and Kaylea Champion (University of Washington) gave really wonderful talks on their research. Carl talked about his work on moderation in the Fediverse, and the impact of site-level blocking. Sohyeon provided a number of provocations about community governance in the face of AI-driven changes, while Kaylea discussed her work on underproduction in social systems. They all gave fantastic presentations and inspired great conversations among attendees.
https://blog.communitydata.science/wp-content/uploads/sites/5/2024/09/carl_…https://blog.communitydata.science/wp-content/uploads/sites/5/2024/09/sohw_…https://blog.communitydata.science/wp-content/uploads/sites/5/2024/09/kayle…
Overall, it was an excellent symposium that we hope helps to push our field forward. Thank you to all who attended and made it such a great event. A special thank you to the CDSC Purdue members for organizing the event and to Thatiany Andrade Nunes for taking photos!
Add a comment to this post: https://blog.communitydata.science/symposium-on-online-community-research-a…