Dan Hon gives a talk at ATmosphere Conf 2025 in Seattle

Hi, I’m Dan. I also had a Jaiku account, so that makes two of us—three of us. I’m a person on the internet who has most recently been spending time accidentally making government technology better. I’m going to try to get through to the break as quickly as possible for you, and if I do a great job, maybe some of you will be a bit excited as well. I’m going to do that lightning-talk thing where I try to get through 65 slides in 900 seconds, and I’ve already used 33.

I want to run a heist to steal and bring back something from our fully automated luxury gay space communism future and give it to everyone—something that might help improve social networks. And you’re thinking: Dan, what’s the deal? You better nail the landing on this. So I’m going to start here: people are terrible. Well, let me be more accurate—people on the internet are terrible, or people are terrible on the internet.

Here’s one of the ways they’re terrible: some of them are sometimes sea lions. This is an excerpt from the classic sea-lioning Wondermark cartoon from 2014. It describes the troll behavior where you have an opinion about something and then someone shows up, in the manner of a sea lion, and persistently, with feigned politeness, asks you to justify your claims. You don’t have to do that at all, because they are a random stranger and you just want them to go away.

People are also terrible on the internet in this way, which is practically poetry. I’ll read it aloud in my English accent:

“Twitter: the only place where well-articulated sentences still get misinterpreted. You can say ‘I like pancakes,’ and somebody will say, ‘So you hate waffles.’ No. That’s a whole new sentence. WTF is you talking about.”

We know this happens because people are trolls. They are rude. But sometimes this is excusable because the context of the speech they’re replying to has collapsed. If you had “context collapse” on your bingo card, then you win. What do I mean by that? I think it might be helpful to incorporate coding of intent—some sort of extra-textual layer—so that when we communicate on social networks, it’s not just the pure text that’s visible. There should be more nuance so that emotional content can come through. How can I make it easier for you to understand me, and how can I make it easier for me to be understood?

We know the best way to deal with a problem is with a joke, because if anything goes wrong you can say it was supposed to be funny. (You are not allowed to do this if you’re a fascist or a Nazi.) So over a year ago, I made some joke images for people to add to posts on Mastodon. I made things like this: an image you could attach to your post to stop people from asking why you hate waffles whenever you say you like pancakes. It simply tells people: do not reply to tell me why a thing I like is bad.

Here’s one for when you’re just making an observation and you don’t want people to help you—you’re just talking out loud on the internet and you never asked for help. Here’s the one where you’re complaining about something and don’t want replies. You get the idea with this one. You get a very specific idea with this one. And so on. Here’s the one for Cory Doctorow. Here’s one if you’re Jamie Zawinski making blog posts about what to do if you have a specific problem. And here’s one if you are a woman.

I’m really interested in the idea that we have such low-bandwidth, low-dimensional ways of communicating online, and so many problems happen because we lose all that extra information in pure text. Someone pointed out how text markers like /s for sarcasm and /j for joking are confusing. They’re old, and we don’t always know which one means what. So how might we do this better?

Here’s a wildly abbreviated set of references around conveying rich meaning alongside text. If you know the work of Iain M. Banks, here are his sketches of what Culture drones would look like. Banks writes that drones have aura fields that change color according to mood—their equivalent of facial expression and body language. One drone, Jase, refused to be refitted with an aura field and preferred to rely on its voice or to remain inscrutable. What might aura fields look like? Maybe like this. Or like this: “a mixture of red and brown—humorous pleasure and displeasure together.” Basically: a drone that enjoys shitposting, which is something I also enjoy. Red and brown are also some of my favorite colors.

There have been other ways to do this. One of my favorites is Microsoft Comic Chat from 1996. It was a presentation layer on top of IRC. You could choose emotional intent, and it would wrap that around your text and animate your comic avatar accordingly. It was brilliant. You got it with Internet Explorer 5, and then for some reason they stopped it. There should be a Slack app for this.

We use emoji to express the entire breadth of human experience, especially in ways people are embarrassed about. We do this in video games too—this is the real metaverse, which is Fortnite. My son plays it all the time and spends most of his money there. This is how Fortnite lets you select your emotes to show how you’re feeling. Games love pie menus. So what am I talking about here? Is this a cry for multimodal alt-text for context? Some sort of markup for how I’m feeling when I say something, so you understand how to interpret it?

I got together with Florian, my design partner, who is somewhere in this room. We started turning those “not a joke” images into actual, real, not-a-joke images. I’ll show some references. Florian is very good. I occasionally got to include things I like—like Ron Cobb’s semiotic standard, the visual language he created for Alien. We both agree that the New York subway diagram is brilliant. Here are more references. Florian enjoyed making icons and still does.

We figured out how we might display these cards—URL-addressable ways of showing intent on the web. We translated them into French and German. We made stickers, which you can get from me or online. And we even made positive ones, because it’s not nice to always be negative—ones about viewing source, how it changed the world, and how the best days of the internet are ahead of us, and how we can take it back together.

Did they actually work? Or were they still just a joke? Here’s Erica Joy, who has tens of thousands of followers on various platforms. She posted about bingeing romance-y books and used the “do not tell me why the thing I like is bad” card. I didn’t tell her to. And… they work. People who used them got fewer terrible replies than they would have otherwise. The main negative replies they got were people complaining about the fact that they used them at all—“If you’re posting online you should expect replies,” that sort of thing.

Here’s Erica using one again. Here’s Mike Masnick using the “do not comment on my observation” card to avoid replies when talking about the U.S. government’s attitude toward TikTok. Another from Erica: she used the mansplaining one preemptively on Threads, saying it should be built in.

From the first set of 10 cards, the top three used were: “do not attempt to help,” “do not tell me why the thing I like is bad,” and “do not comment on my observation.” Not necessarily the ones I expected, but… both depressing and confirming of all my suspicions about how people behave online.

What I would really like is a much better way of showing what I mean and how I feel when I say something—so that you are not a jerk to me, or so that I get better replies. That richer context is what I want us to steal back from our fully automated luxury gay space communism future, like those drone auras. What can we build that gives richer context and helps people be understood? How would we display it?

We have some bad ways now. Tone tags are confusing, old, indecipherable, culturally specific, terrible for translation, and low bandwidth. Mastodon has content warnings, but they’re just text fields. Emoji pickers are intertextual—they go inside the speech act, not around it. And emoji pickers are slow; if we want people to express emotional intent, it needs to be fluid, like body language or tone.
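To make the tone-tag problem concrete, here is a minimal sketch of how trailing tone tags work in practice: short /x markers appended to a post. The meanings in the table below reflect common usage, but as noted above they are ambiguous and culturally specific, so treat them as illustrative assumptions rather than a standard.

```python
# Common trailing "tone tag" markers and one widely used reading of each.
# These meanings are assumptions -- usage varies, which is exactly the problem.
TONE_TAGS = {
    "/s": "sarcastic",
    "/j": "joking",
    "/srs": "serious",
    "/gen": "genuine question",
}

def split_tone_tags(post: str) -> tuple[str, list[str]]:
    """Separate a post's text from any trailing tone tags."""
    words = post.split()
    tags = []
    while words and words[-1] in TONE_TAGS:
        tags.append(TONE_TAGS[words.pop()])
    return " ".join(words), list(reversed(tags))

text, tags = split_tone_tags("I love Mondays /s")
# text == "I love Mondays", tags == ["sarcastic"]
```

Even this tiny parser shows the low bandwidth of the convention: a handful of opaque codes, bolted onto the end of the text, with no way to express anything richer.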

There are good ideas too. Android’s XLock labeler. Video games like The Sims 4, where you understand emotional context and mood at a glance. Fortnite again—my son has bought many things that go in that UI ring, and millions of others use it. They have voice chat too, not just text. These problems have been solved in games for years.

You might think, well, we could just put an LLM in it. Someone will say that. “Use AI.” But here’s developer documentation from Eleven Labs, a text-to-speech startup. Their method for producing emotions is to provide narrative context, like tagging your text with “he said in a scared way.” Sometimes their system will read out the tag, which is not ideal. And I don’t think LLMs can truly do this. If we’re trying to convey more expression in speech acts, I need to be in control. If anything is being translated, I need autonomy. I need to know what is being displayed on my behalf.

Can we do this in Bluesky right now? With rich text or labels? Not really. Maybe facets, but those annotate spans inside the post text itself (links, mentions, hashtags), not anything wrapped around it. You can theoretically attach labels to your own post, but the client needs to support them. And while you could create composable labels for emotional intent, the docs warn that expanding label vocabularies risks creating aggressive policing cultures.
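For illustration, here is a sketch of what attaching intent to your own post could look like using the existing self-labels mechanism. The record shape (`app.bsky.feed.post`, `com.atproto.label.defs#selfLabels`) is real, but the label value `no-unsolicited-advice` is hypothetical: no such vocabulary exists, and a client would have to know about it to render anything.

```python
from datetime import datetime, timezone

def post_with_intent_label(text: str, intent: str) -> dict:
    """Build an app.bsky.feed.post record carrying a hypothetical intent self-label."""
    return {
        "$type": "app.bsky.feed.post",
        "text": text,
        "createdAt": datetime.now(timezone.utc).isoformat(),
        # Self-labels are the one place a user can label their own content today;
        # the "intent" value below is an invented vocabulary, not part of atproto.
        "labels": {
            "$type": "com.atproto.label.defs#selfLabels",
            "values": [{"val": intent}],
        },
    }

record = post_with_intent_label("I like pancakes.", "no-unsolicited-advice")
```

The plumbing exists; what is missing is a shared vocabulary and clients that display it.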

We could try doing this at the atproto layer using labels or lexicons. Labels could allow composable emotional context, like mixing red and brown for humorous pleasure and displeasure. But the developer docs basically say: if you want to define ontologies for all human emotion, go ahead—into that pit of snakes. Not a reason not to try, but still.

And even if you succeed, you might accidentally create the torment nexus by providing a massive amount of contextual information that could be used to profile people and feed token prediction engines.

I said I’d try to end with something inspirational. I would really like this to happen. I really think we need ways to be more expressive. I’m excited about the possibilities that atproto and the Bluesky lexicon offer. It’s something I’d love to see. We should do this.

Thank you so much.


The videos from ATmosphereConf 2025 held in Seattle, Washington, are being republished along with transcripts as part of the process of preparing for ATmosphereConf 2026, taking place March 26th - 29th in Vancouver, Canada.

Follow the conference updates and register to join us!

ATmosphereConf News: https://news.atmosphereconf.org