Erin Kissane gives a talk at ATmosphere Conf 2025 in Seattle

I'm so bummed that I can't be there in the room with you. I'm also very excited not to give you the lung crud I have right now. I had to call Ted two days ago and say, "You don't want me in your space." Please bear with me — I took a lot of Benadryl last night and drank a lot of coffee this morning. Not an ideal situation for the flame of cognition, but we're going to give it our best shot.

Thank you for the great intro. Thank you to Ted, to Boris, and to everyone who's made this work. The streaming experience has actually been really good, and it's been fun to see the experiments with streaming, plus having a YouTube channel for when that doesn't work as well. Thank you also to the Bluesky folks who have been army‑crawling along on this work for the two‑year decade you've just lived through. And thank you especially to Ted, who's been making sure I could actually make it to the screen this morning.

I want to situate myself briefly: I'm a writer and researcher working on the internet — both trying to make the internet itself better (advocating early for accessibility, the open web, better networks) and trying to do useful things on the internet, like building more human forms of journalism and doing pandemic work. I'm here specifically, I think, because I'm trying to produce knowledge and thinking that’s useful to people working on atproto, but also across all the decentralized and new network experiments happening right now. And yes, that includes the Fediverse.

I started out somewhat skeptical of atproto, but it's been amazing watching it come together as an open ecosystem over the last couple of years. Yesterday's talks were great.


Giving Up on (and Returning to) the Social Internet

Last summer, I gave a talk at XOXO in Portland about my experience giving up on the social internet because of its predatory structures and incentives — and then getting pulled back in by my pandemic work and becoming convinced that we have to make big‑world networking work, and work much better, for something as basic as collective survival.

I ended that talk by calling for digging into the social and technical aspects of making networks safer, more habitable, and better at helping us find the people we need to work with. That went over well, but people reasonably asked: How?

I’ve been dragging around ideas about this for years — ideas about how our work is like forestry, about Heidegger’s standing reserve, about what we can learn from sustainable traditional practices. I even had my next painting picked out to build a talk around (Bruegel’s The Gloomy Day — your second Bruegel in two days, thank you Blaine).

But then… the election happened. The rapid descent into authoritarian rule. The dismantling of the administrative state. So here we are.

I'm pushing aside the abstract stuff for now to get down to absolute basics — things I think we're all going to need, urgently.

I don’t like offering prescriptions. I came up in early UX culture, and I don’t believe in taking advice from random talkers when you could communicate directly with the people whose needs you’re trying to meet. But there are things that show up repeatedly in my work, and they feel essential right now.

They're coming from three projects that have most shaped my thinking:


Three Projects That Shaped My Perspective

1. The COVID Tracking Project. My experiences there — a project utterly of the network, powered by hundreds of strangers volunteering time, attention, and love — changed how I think about big‑world networking. It also put me in direct conflict with organized hybrid online/offline campaigns designed to lie to people about reality. That rewired my brain.

2. Ethnographic work with Darius Kazemi, interviewing people about the realities of running Fediverse servers.

3. A 40,000‑word incident report on Facebook/Meta's role in the Rohingya genocide in Myanmar. I thought I understood it before I dug in. I did not. It was more terrifying than I imagined: a case study in carelessness, callousness, and refusing to listen to increasingly desperate civil‑society warnings.

I don't do this gruesome work to depress people about the social internet — I do it because I love us, and I don't want us to fall into the same pitfalls again. New ones, fine. But not the same old ones.


Two Short Talks Within This Talk

I want to dig into two things today:

  • Vulnerable people

  • Useful institutions — specifically vernacular institutions


1. Vulnerable People

Listening to vulnerable people has become such a corporate buzzphrase that it triggers eye‑rolling — and with good reason. I have rarely seen it meaningfully operationalized in social‑internet projects, except in communities already composed of heavily targeted people.

But it's still centrally important.

It’s not operationalized because it’s not legible as a practice — especially for small teams. And in an ecosystem like atproto, where many small teams are working at different layers, shared knowledge is crucial.

I don’t mean “social listening” like following the discourse. I mean actively seeking out the intelligent, informed perspectives of the most vulnerable groups you’re building for, before and during rollouts and changes.

Why the most vulnerable? Because:

  • If you can keep them safe, you can keep everyone safe.

  • If you only respond to the loudest voices, you’ll miss a lot.


Social Threat Modeling (Collaborative)

This idea is not new. People like Joan Donovan and Black women on Twitter have been developing versions of this since at least 2017–18.

The Electronic Frontier Foundation has also done foundational work here, particularly on data harms.

Key questions to ask with communities:

  • What do you need to protect, and from whom?

  • What vectors do those actors use? (People know — through lived experience.)

  • What happens if those protections fail?

  • Which design elements make it harder for you to stay safe?

  • What would help?

This needs to happen now because the weaponization of social data is shifting week to week in the U.S.

There’s also a growing sense that this shouldn’t be just threat modeling but collaborative risk–benefit analysis — especially for communities outside the U.S. who already make nuanced trade‑off decisions about which networks they use.

A great recent example: a report by The Engine Room interviewing activists in the Global South about new networks. The researchers went in thinking about onboarding; the activists responded with highly sophisticated risk–benefit frameworks.

There's an enormous amount of community knowledge out there. But to access it, we must be willing to name who is vulnerable — which requires value judgments about whose needs take priority. Avoiding that is impossible; not choosing is choosing.

Ursula Franklin, quoted yesterday, spoke about justifiable fear: fearing you can’t feed your children, or fearing the knock on the door at night. A year ago that might’ve seemed dramatic; now it feels less so.

Design justice (see Sasha Costanza‑Chock’s work) is relevant here: design led by marginalized communities, challenging rather than reproducing structural inequality.

But small teams can't do all this alone. Which brings me to the next part.


2. Useful Institutions — Vernacular Institutions

Some of you hear “vernacular” and think linguistics. Others, architecture. I mean it in the architectural sense: local, emergent, homegrown — as opposed to monumental.

In architecture, vernacular is:

  • a Basque farmhouse,

  • whitewashed Greek houses angled to handle wind and sun,

  • a traditional yurt built with modern materials but ancient logic.

So what are vernacular institutions?

They’re emergent, highly local institutions that serve community needs, not state needs. Examples:

  • social aid and pleasure clubs in New Orleans,

  • free medical clinics in Greece,

  • the Black Panthers’ community clinics,

  • mutual aid projects,

  • indigenous conflict‑resolution systems.

These institutions are more useful than legible — which is part of why they’re undervalued.


The Fediverse Needs Vernacular Institutions

In my work on Fediverse governance, server admins repeatedly brought up collective‑action problems. There are tasks that must be done but cannot be done by individual server teams.

Everyone mentioned IFTAS — Independent Federated Trust & Safety — which developed tools for shared blocklists and the first workable CSAM detection/reporting tool for the Fediverse. Admins saw it as essential, foundational — and a starting point.

But IFTAS is now shutting down. Why? Because it wasn’t legible to funders. Too big for small‑donor funding, too weird for civil‑society grantmakers.

This is a problem of vernacular institutions existing inside non‑vernacular funding ecosystems.

To understand what sustainable support looks like, it’s helpful to look at older vernacular economies.


Vernacular Economies

Ivan Illich wrote about them; Alastair McIntosh described layers of mutuality, reciprocity, and barter; Elinor Ostrom analyzed commons governance.

A specific example I love: a 1990s study by Muhammad Abdullah Evan Salah on a traditional settlement in Saudi Arabia. There was a toolmaking institution called alusame — a social convention regulating work between farmers and toolmakers.

Farmers paid full price for new tools, and maintenance was covered in perpetuity: about 1% of the total agricultural product went to the toolmakers, which kept the toolmaking community viable.

We have an entire human history of sustainable vernacular economies. Meanwhile, in tech, we have… micro‑payments. We can do better.

Which brings me back, briefly, to Bruegel’s The Gloomy Day. In the foreground, people are doing the steady work of managing a sustainable wood harvest — a vernacular practice enabling human survival. In the background, ships are breaking in a storm and a monumental fortress is being crushed by a glacier.

The big systems may be collapsing. But the steady vernacular work continues — and keeps people alive.


The Hidden Truth (Graeber)

David Graeber wrote:

"The ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently."

The last couple decades of tech have been shaped by a conflict between:

  • The belief that we should at least try to be decent, avoid hosting violent hate speech, and avoid being the infrastructure for genocide — and

  • The belief that the point of the internet is the freedom to discard all norms and decencies in pursuit of individual advantage.

It's clear which philosophy is ascendant in much of tech right now. But what I see in the movement toward open protocols and non-centralized networks is a crucial and subversive liberty: the freedom to care as much as we want to about the effects of our work on the world.



People in Protocols — Erin Kissane (Cleaned Transcript)

I’m so bummed that I can’t be there in the room with you. I’m also very excited not to give you the lung crud that I have right now. I had to call Ted two days ago and say, “You don’t want me in your space.”

Please bear with me. I took a lot of Benadryl last night and drank a lot of coffee this morning. It’s not the ideal situation for the flame of cognition, but we’re going to give it our best shot.

Thank you for the great intro. And thank you to Ted and to Boris, and to everybody who has been making this all work. The streaming experience has actually been really good, and it’s been fun to see the experiments with streaming and to have a YouTube channel for when that doesn’t work as well.

Also, thank you to the Bluesky folks, who have been army-crawling through this work for the two-year decade you’ve just been through. And thank you especially to Ted, who made sure I could actually make it to the screen this morning.

I want to situate myself a bit. I’m a writer and researcher working on the internet—both trying to make the internet itself better through early advocacy for accessibility, the open web, and better networks, and also trying to do useful things on the internet like building more human forms of journalism and doing that pandemic work.

I think the reason Boris and Ted asked me here is that I’m trying to come in as someone who’s done both of those things and produce knowledge and thinking that’s useful to people working on atproto and also across all the decentralized and new-network experiments and projects.

Which is a roundabout way of saying that I also work on Fediverse stuff. I was a bit of a skeptic about atproto, but it has been an amazing couple of years watching this come together as an open ecosystem. Yesterday’s talks were great.

I’m going to jump in.

Last summer, I gave a talk at XOXO in Portland about my experience giving up on the social internet because of its predatory structures and incentives—down with all of that. And then being pulled back into that world by my pandemic work, and realizing that we actually do have to make large-scale networking work, and work much better, to accomplish basic goals like collective survival.

I ended that talk with a call to dig into the social and technical aspects of making networks safer, more habitable, and better at helping us find the people we need to work with. People liked the talk, but the question afterward was basically: How?

I’ve had thoughts about this for years. I imagined that my next talk would get much nerdier—forestry, Heidegger’s standing-reserve, lessons from sustainable coppicing and pollarding practices. I even had my painting picked out—Bruegel’s The Gloomy Day (this is your second Bruegel in two days; thank you, Blaine). I had a whole thing planned.

And then the election happened—the rapid descent into authoritarian rule and dismantling of the administrative state.

So here we are.

I want to push aside some of the more abstract things and get down to absolute basics—things I think we are all going to need right now. At the same time, I'm resistant to offering prescriptions. I came up in early user-experience work, and I do not believe in taking advice from random talkers when you could be communicating with the people whose needs you’re actually trying to meet.

But there are things that have come up again and again in my work, and things that are especially essential right now. They’re not fully cooked, but I want to bring them to you so we can think about them together as a community of practice.

They come out of a particular perspective, so let me briefly mention three projects that have shaped my thinking.

One is the COVID Tracking Project. My experiences there—running a project that was so of the network that it could never have happened without a massive outpouring of time, attention, and love from hundreds of strangers online—changed how I think about the necessity of big-world networking. That work also put me in direct contact and conflict with organized, sophisticated hybrid online/offline campaigns designed to lie to people about reality. That changed my brain.

Another is ethnographic work I did last year with Darius Kazemi—interviewing people about the material realities of running Fediverse servers.

And the third was a 40,000-word incident report synthesizing accounts of the role Facebook and Meta played in the genocide in Myanmar in 2017 against the Rohingya people. I had read about it from a distance, but doing the deep work revealed something far more terrifying than I had imagined—a story of carelessness, callousness, and refusal to hear increasingly desperate warnings from civil society.

I work on gruesome parts of the social internet not because I want to dwell on how awful it is but because I love us, and I don’t want us to fall into the same pitfalls. Let’s fall into new potholes and make new mistakes.

These experiences run alongside my decades of being an architecture-brain person watching network dynamics since Twitter was a tiny baby.

That’s the context.

Now I want to get into two things that may be useful for us to think about together. Two short talks. The first is about vulnerable people.

Listening to vulnerable people has been talked about for so long, and with so little effect, that it can induce eye-rolling—corporate social-responsibility vibes. I have rarely seen it operationalized in meaningful, sustained ways in social-internet projects, except for those that emerge from communities who are themselves targeted.

But I’m an optimist. I think it remains centrally important, and the reason it’s not operationalized is that it’s not very legible as a practice—especially for small technology teams. That’s particularly important for an ecosystem like the one atproto is enabling. You’re not a giant monolithic company with institutional capacity; you’re many small teams who could benefit from shared knowledge.

So what do I mean?

Not “social listening”—not following the discourse. That’s fine, but it’s not what I mean.

I mean actively seeking the informed perspectives of the most vulnerable people in the groups you’re building for—before and during rollout and changes.

Why vulnerable people? Because if you can keep the most vulnerable people safe, you can keep everyone safe. But if you build based on vibes without real, ongoing consultation with targeted communities, you can’t do that. And if you only respond to the loudest voices, you will miss a lot.

One useful way to frame this is social threat modeling. This is not new. John Pinkard has been writing about it since 2017 or 2018, working with Black women on Twitter. The idea predates that too.

I want to talk specifically about collaborative social threat modeling, drawing heavily on the work that EFF has been doing around data harms—which overlap with many kinds of social harms.

Imagine that we build a shared body of knowledge in this ecosystem—spaces where we can ask with communities:

  • What do you need to protect?

  • From whom?

  • What vectors do they use?

  • How bad are the consequences if you cannot protect these things?

  • What elements of our design make it harder for you to protect them?

  • What can we do to help?

People know the answers to these questions. They have lived experience.

And the threats are changing week to week right now in the U.S.

There’s a great paper (I'll link it in the notes) about introducing social threat modeling in early CS courses. Students said it would be useful to expand it into something like collaborative risk/benefit analysis—examining tradeoffs, characterizing risks, reducing risk, increasing benefit.

The Engine Room—London-based—released an amazing report last year based on interviews with digital-rights activists in the Global South about their experiences with new networks. The researchers expected to hear “How can we get these people into the Fediverse?” Instead, they found extremely sophisticated understandings of risk/benefit tradeoffs in authoritarian contexts. There is so much knowledge available if you go get it.

But doing this work means acknowledging who is vulnerable. And that involves value judgments about whose needs get prioritized. That’s uncomfortable in tech circles, but if you’re building social tools, you are making these ethical decisions whether you want to or not. Not making a choice is a choice. Letting business logic handle it is a choice with real consequences.

Ursula Franklin—physicist, peace activist, Quaker, German Canadian Holocaust survivor—thought very deeply about technology, justice, and fear. She talked about “justifiable fear”—people worried they cannot feed their children; people who have reason to fear a knock on the door at night.

A year ago it might have sounded dramatic to invoke “a knock on the door at night.” It doesn’t sound dramatic anymore.

If we can dig into this work together, maybe we move closer toward design justice—Sasha Costanza-Chock’s work: design led by marginalized communities that challenges structural inequality rather than reproducing it.

The problem is: this is hard to do when you’re crunching on a technical project. Who’s going to do this work? You probably don’t even have a full-time user-researcher, and now I’m telling you to build a threat-modeling team?

Which brings me to the second piece: useful institutions, and specifically vernacular institutions.

“Vernacular” in the architectural sense: homegrown, emerging from locality, not monumental. A Basque farmhouse. A Greek island home built for extreme wind and sun. Or my favorite: a badass yurt built with modern materials in traditional ways, complete with ram’s horns in the webbing.

What does vernacular mean for institutions?

Think: benevolent societies; the 200-year-old Social Aid and Pleasure Clubs of New Orleans; free medical clinics in Greece; the People’s Free Medical Clinics run by the Black Panthers; freedom schools; local mutual-aid projects; Indigenous conflict-resolution bodies. Institutions that emerge from and serve communities, not the state—or whatever is currently functioning like the state (corporate hegemony on the internet).

Vernacular institutions are more useful than they are legible.

In the Fediverse governance research Darius and I did, people talked constantly about collective-action problems—work that individual server admins cannot do alone. There was one institution almost everyone mentioned: IFTAS—Independent Federated Trust & Safety. They built tools for shared blocklists, and the first viable CSAM detection/reporting system for the Fediverse.

Server admins saw their work as essential—but only the beginning of what’s needed. And yet, IFTAS is now shutting down because it couldn’t secure enough funding. It ran beyond what small crowdfunding could support, and it wasn’t legible to the kinds of funders who could have sustained it.

Vernacular institutions need vernacular economies.

Ivan Illich wrote about this; I prefer Alastair McIntosh’s Soil and Soul, about vernacular economies in the Hebrides—layers of mutuality, reciprocity, and formal barter that sustain communities.

Or consider Muhammad Abdullah Evan Salah’s study of a traditional settlement in Saudi Arabia—Al-Qalaf. He wrote about a tool-making institution called Al-Usama: a social convention regulating the relationship between farmers and toolmakers. Farmers paid full price for tools, but toolmakers maintained them forever. The community shared ~1% of its agricultural output with the toolmakers so that they could always repair the tools that enabled life.

There are thousands of these institutions through history.

Right now, we have a very impoverished understanding of economic exchange. God bless micropayments, but come on—human communities have figured out sustainable systems for millennia. We can, too.

Which brings me back to Bruegel’s Gloomy Day. In the foreground, people do the steady work of coppicing—sustainable woodland management that enables continuous life. In the mid-ground, people eat festival waffles. In the background, a violent sea storm wrecks ships, and a monumental fortress is being encroached on by a glacier.

Human life persists because people keep doing the grounded work of vernacular institutions, even as the monumental systems collapse.

That gives me comfort right now.

I want to land on David Graeber: “The ultimate hidden truth of the world is that it is something we make, and could just as easily make differently.”

The story of the last 20 years in tech has been a conflict between two philosophies:

  • The minimum responsibility not to harm people—not to host violent hate speech or become the infrastructure for genocide.

  • And the ideology that the whole point of the internet is the freedom to discard all norms and decencies in pursuit of individual advantage.

It’s clear which philosophy is ascendant in much of tech right now, in this gonzo moment when both the President of the United States and a billionaire shadow-daddy each own their own social network, aligned with massive hegemonic tech companies controlling public discourse and surveillance surfaces.

But what I see in the movement toward open protocols and non-centralized networks is a crucial and subversive liberty: the freedom to care as much as we want to about the effects of our work on the world.

[Applause]

Thank you.


The videos from ATmosphereConf 2025 held in Seattle, Washington, are being republished along with transcripts as part of the process of preparing for ATmosphereConf 2026, taking place March 26th - 29th in Vancouver, Canada.

Follow the conference updates and register to join us!

ATmosphereConf News — news from the ATProtocol Community Conference, aka ATmosphereConf, March 26th–29th, Vancouver, BC, Canada: https://news.atmosphereconf.org