I’ve been building Confer: end-to-end encryption for AI chats. With Confer, your conversations are encrypted so that nobody else can see them. Confer can’t read them, train on them, or hand them over – because only you have access to them.

The core idea is that your conversations with an AI assistant should be as private as your conversations with a person. Not because you’re doing something wrong, but because privacy is what lets you think freely.

I founded Signal with a simple premise: when you send someone a message, only that person should be able to read it. Not the company transmitting it, not the government, not anyone else on the internet. It took years, but eventually this idea became mainstream enough that even Facebook adopted end-to-end encryption.

These days I spend a lot of time “talking to” LLMs. They are amazing. A big part of what makes them so powerful is the conversational interface – so once again I find myself sending messages on the internet. But these messages are very different from the ones that came before.

The medium is the message

Marshall McLuhan argued that the form of a medium matters more than its content. Television’s format – passive, visual, interrupt-driven – shaped society more than any particular broadcast. The printing press changed how we think, not just what we read.

You could say that LLMs are the first technology where the medium actively invites confession.

Search trained us to be transactional: keywords in, links out. Typing something into a search box has the “feeling” of broadcasting to a company rather than communicating in an intimate space.

The conversational format is different. When you’re chatting with an “assistant,” your brain pattern-matches to millennia of treating dialogue as intimate. You elaborate. You think out loud. You share context. That’s a big part of what makes it so much more useful than search – you can iterate, elaborate, change your mind, ask follow-up questions.

One way to think about Signal’s initial premise is that the visual interfaces of our tools should faithfully represent the way the underlying technology works: if a chat interface shows a private conversation between two people, it should actually be a private conversation between two people, rather than a “group chat” with unknown parties underneath the interface.

Today, AI assistants fail this test harder than anything that came before. We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint. An API endpoint to a data lake specifically designed for extracting meaning and context. We are shown a conversational interface with an assistant, but an honest representation would be a group chat with all the OpenAI executives and employees, their business partners and service providers, the hackers who will compromise that plaintext data, the advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.

None of this is entirely new. We went through the same cycle with email. In the early days, people treated email like private correspondence. Then we learned that our emails live forever on corporate servers, that they’re subject to discovery in lawsuits, that they’re available to law enforcement, that they’re scanned for advertising. Slowly, culturally, we adjusted our expectations. We learned not to put certain things in email.

Advertising is coming

What is new is that email and social media are interfaces where we mostly post completed thoughts. AI assistants are a medium that invites us to post our uncompleted thoughts.

When you work through a problem with an AI assistant, you’re not just revealing information – you’re revealing how you think. Your reasoning patterns. Your uncertainties. The things you’re curious about but don’t know. The gaps in your knowledge. The shape of your mental model.

When advertising comes to AI assistants, they will slowly become oriented around convincing us of something (to buy something, to join something, to identify with something) – but they will be armed with total knowledge of our context, our concerns, our hesitations. It will be as if a third party were paying your therapist to convince you of something.

Making the interface match the way the technology works

Confer is designed to be a service where you can explore ideas without your own thoughts someday conspiring against you; a service that breaks the feedback loop of your thoughts becoming targeted ads becoming thoughts; a service where you can learn about the world – without data brokers and future training runs learning about you instead.
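To make the core property concrete, here is a minimal sketch of the client-side half of the idea, written in TypeScript against the Web Crypto API. This is illustrative only – AES-GCM with a locally generated key, and the `encryptMessage` / `decryptMessage` helpers, are my assumptions for the example, not Confer’s actual protocol – but it shows the thing that matters: the key is created on your device and never leaves it, so the service only ever holds ciphertext it cannot read.

```typescript
// Illustrative sketch only (not Confer's actual protocol): a key generated
// on the client and never exported means the server can only store ciphertext.

// Generate a symmetric key locally. Marking it non-extractable means even
// client-side code can't export the raw key material.
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable
  ["encrypt", "decrypt"],
);

// Encrypt one chat message before it is stored or synced.
async function encryptMessage(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext }; // this pair is all the server ever sees
}

// Decrypt on the same device (or another device that holds the key).
async function decryptMessage(key: CryptoKey, iv: Uint8Array, ciphertext: ArrayBuffer) {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new TextDecoder().decode(plaintext);
}
```

(A real system also has to handle key backup and multi-device sync, and an LLM still has to act on your messages somehow – presumably the subject of the technical posts to follow.)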

It’s still an early project, but I’ve been testing it with friends for a few weeks. Keep an eye on this blog for more technical posts to follow. Try it out and let me know what you think!