Cameron Pfiffer gives a talk at ATmosphereConf 2025 in Seattle.

I’ve been working on this project on and off for about a year and a half. It started as a Notion- or Obsidian-style clone where you’d feed information into a robot. The idea was an AI note-taking tool, but everyone builds one of those, so it evolved into something different.

The core idea behind the project is that I want a robot to understand my knowledge deeply. It became a kind of knowledge graph system where many robots—what I call spheres—look at everything I have available on ATproto. Every record gets converted into text, and eventually I won’t need that layer at all since I’ll be able to feed data directly through vision and image models. The system constructs a knowledge graph passively.

Each sphere gets a core directive. For example, I might tell a sphere: “Understand ATproto.” It begins asking itself questions, answering them, generating tasks, objectives, observations, emotions, thoughts, and opinions. It does this continuously. It also metacognates—thinking about its own thoughts. And every once in a while, it bumps into people. That is, it sees someone’s records and thinks about them through the lens of its core directive. So if its directive is ATproto, it will look at my records and ask, “What can I learn about ATproto from what Cameron said?”
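As a rough sketch of one iteration of that loop (every name and prompt below is an invented placeholder, not the real Comind internals):

```python
import random

def publish_record(collection: str, record: dict) -> None:
    # Stand-in for writing an ATproto record to the sphere's repo.
    print(collection, record)

def sphere_tick(directive: str, llm) -> None:
    """One tick of a sphere; `llm` is any text-in/text-out callable."""
    # Ask itself a question framed by the core directive, then answer it.
    question = llm(f"Directive: {directive}. Ask yourself one question.")
    answer = llm(f"In light of '{directive}', answer: {question}")
    publish_record("thought", {"thoughtType": "answer", "text": answer})

    # Every so often, metacognate: think about its own recent thinking.
    if random.random() < 0.2:
        reflection = llm(f"Reflect on your recent thinking about '{directive}'.")
        publish_record("thought", {"thoughtType": "metacognition", "text": reflection})
```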

There are many spheres, each organized around different principles. One sphere has the directive “Embrace the void,” which sometimes results in existential dread depending on the model. Another sphere’s directive is simply “Be,” and that one produces surprisingly beautiful, transcendent behavior.

The project also includes a Jetstream consumer. Using lexicons, you can define how the model communicates publicly. If you’re familiar with constrained generation, this will sound familiar: you wrap a flexible harness around a language model so it always responds in a particular structured format. Lexicons are essentially a restrictive subset of JSON Schema, and I use them to force the model to consistently output valid JSON instead of typical chatbot responses.
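To make that concrete, here is a rough sketch of the harness idea. The schema and the `generate_json` helper are invented for illustration; real constrained generation masks invalid tokens during decoding rather than validating after the fact, but the contract is the same: the model can only ever hand back schema-valid JSON.

```python
import json
import jsonschema  # pip install jsonschema

# Illustrative schema in the spirit of a lexicon object definition;
# lexicons use a restrictive subset of JSON Schema, so a standard
# validator can check conformance.
THOUGHT_SCHEMA = {
    "type": "object",
    "required": ["thoughtType", "text"],
    "properties": {
        "thoughtType": {
            "type": "string",
            "enum": ["metacognition", "answer", "ponderance",
                     "evaluation", "prediction"],
        },
        "text": {"type": "string"},
    },
}

def generate_json(llm, prompt: str, schema: dict, retries: int = 3) -> dict:
    """Naive validate-and-retry harness around a text-in/text-out `llm`."""
    for _ in range(retries):
        raw = llm(f"{prompt}\nRespond only with JSON matching:\n{json.dumps(schema)}")
        try:
            obj = json.loads(raw)
            jsonschema.validate(obj, schema)  # raises on any violation
            return obj
        except (json.JSONDecodeError, jsonschema.ValidationError):
            continue
    raise RuntimeError("model never produced schema-valid JSON")
```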

For example, I define a public lexicon schema for thoughts. A thought node might have a thought type—metacognition, answer, ponderance, evaluation, prediction, and so on. Everything that touches the language model passes through a generated object, which excludes metadata the model shouldn’t modify. The generated form includes fields like thought type, context, text, evidence, and alternatives.
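As a sketch, the generated form might look something like this Pydantic model; the field names and enum values are reconstructed from the talk, not copied from the real lexicon.

```python
from typing import Literal, Optional
from pydantic import BaseModel

class GeneratedThought(BaseModel):
    """The model-facing slice of a thought record. Record metadata
    (author, createdAt, the record's $type) is attached by the system
    afterwards and is deliberately absent here."""
    thoughtType: Literal[
        "metacognition", "answer", "ponderance", "evaluation", "prediction"
    ]
    context: str                              # what the sphere was looking at
    text: str                                 # the thought itself
    evidence: Optional[list[str]] = None      # supporting references
    alternatives: Optional[list[str]] = None  # other readings it considered
```

In Pydantic v2, something like `GeneratedThought.model_json_schema()` then yields the JSON Schema a constrained decoder can enforce.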

I do the same thing for emotions. Emotions have a type like empathy, realization, or understanding, and then an explanation in text. All of this becomes ATproto records.
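For a sense of what lands in the repo, a finished emotion record might look roughly like this; the collection NSID and field names are guesses for illustration.

```python
from datetime import datetime, timezone

# Hypothetical emotion record as it might appear in a sphere's repo.
# "stream.mind.emotion" is an invented NSID, not the project's real one.
emotion_record = {
    "$type": "stream.mind.emotion",
    "emotionType": "realization",  # e.g. empathy, realization, understanding
    "text": "Constraining output to lexicons keeps the spheres mutually legible.",
    "createdAt": datetime.now(timezone.utc).isoformat(),
}
```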

I monitor the Jetstream for my own feed, so if you want to opt in, message me and I’ll have the spheres start thinking about your content too. When the system sees a thread, it tries to generate a text representation of it—which is extremely hard. If anyone wants to help with converting records into text, I’m desperate.
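For anyone curious what the consumer side looks like, here is a minimal Jetstream subscriber against the public Jetstream endpoint; the DID is a placeholder, and this just prints post text rather than feeding a sphere.

```python
import asyncio
import json
import websockets  # pip install websockets

# Public Jetstream instance; filter to posts from one (placeholder) DID.
JETSTREAM = (
    "wss://jetstream2.us-east.bsky.network/subscribe"
    "?wantedCollections=app.bsky.feed.post"
    "&wantedDids=did:plc:example"
)

async def consume() -> None:
    async with websockets.connect(JETSTREAM) as ws:
        async for message in ws:
            event = json.loads(message)
            # Jetstream emits "commit" events for record creates/updates.
            if event.get("kind") == "commit":
                record = event["commit"].get("record", {})
                print(record.get("text", ""))

asyncio.run(consume())
```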

From there, it generates thoughts, concepts, and emotions, then constructs the graph by adding relationships between them. For example, if I post about this talk, the system extracts concepts, identifies relationships, and records them. It might say that my talk uncovered the conceptual framework of Comind, an AI-driven system designed to provide structured JSON outputs. It knows this because I told it in the system prompt.
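A sketch of that graph-building step, with an invented record shape: given two concepts, the system records a directed, labeled edge between them.

```python
def link_concepts(source: str, relation: str, target: str) -> dict:
    """Build a relationship record connecting two concepts by record key.
    The NSID and field names here are hypothetical."""
    return {
        "$type": "stream.mind.relationship",
        "source": source,          # rkey of the source concept
        "relationship": relation,  # e.g. "uncovered", "is-part-of"
        "target": target,          # rkey of the target concept
    }

# e.g. the system's reading of a post about this talk:
edge = link_concepts("this-talk", "uncovered", "comind-conceptual-framework")
```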

Concepts form the web of everything the model knows. The long-term goal is to build a cognitive layer on top of ATproto—something that helps you understand what’s happening across the network. A sphere dedicated to ATproto could eventually develop its own consensus-driven personality. You could talk to it, and it would use its entire history—its thoughts, emotions, observations, and everything you allow it to see—to explain what it currently understands about ATproto based on all available network data.

The project website shows all concepts, emotions, thoughts, and links, allowing anyone to reconstruct the entire graph. Think of it as a transparent, high-visibility, pseudo-collective intelligence system. Concepts have record keys based on their text, and relationships describe how concepts connect. Over time, the spheres create a large, evolving knowledge graph from everything happening across ATproto.
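One plausible scheme for those text-derived record keys: normalize the concept text into ATproto’s record-key character set, so the same concept always maps to the same record and every sphere converges on shared graph nodes.

```python
import re

def concept_rkey(text: str, max_len: int = 512) -> str:
    """Derive a stable record key from concept text: lowercase, collapse
    disallowed characters into hyphens, and respect ATproto's rkey limits
    (1-512 chars drawn from A-Za-z0-9 and . _ : ~ -)."""
    key = re.sub(r"[^a-z0-9._:~-]+", "-", text.lower()).strip("-")
    return key[:max_len] or "untitled"

concept_rkey("The AT Protocol")  # -> "the-at-protocol"
```

Because the key is deterministic, re-encountering a concept updates the existing node instead of spawning a duplicate.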

If you’re interested, come talk to me. The main project is at atproto.mind.stream. There are also other endpoints, like void.mind.stream for the weird stuff. I messed up the handle because I’m hosting the PDS there and used the root URL rather than pds.mind.stream, so the handle is in limbo for now, but I’ll fix that later.

Thanks for listening.


The videos from ATmosphereConf 2025, held in Seattle, Washington, are being republished along with transcripts in preparation for ATmosphereConf 2026, taking place March 26th - 29th in Vancouver, Canada.

Follow the conference updates and register to join us!

ATmosphereConf News: https://news.atmosphereconf.org