CyberInsider: Signal President Warns AI Agents Are Making Encryption Irrelevant

Signal Foundation president Meredith Whittaker said artificial intelligence agents embedded within operating systems are eroding the practical security guarantees of end-to-end encryption (E2EE).

https://xcancel.com/vitrupo/status/2015815296272203801

The remarks were made during an interview with Bloomberg at the World Economic Forum in Davos. While encryption remains mathematically sound, Whittaker argued that its real-world protections are increasingly bypassed by the privileged position AI systems occupy inside modern user environments.

Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.

This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O’Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal’s encryption.

Retrieving Signal’s TLS certificate
Jamieson O’Reilly

Signal, a nonprofit organization focused on privacy-preserving communications, is widely used by journalists, activists, and government and military personnel around the world. Its Signal Protocol is considered a gold standard in modern cryptography and is also used by platforms such as WhatsApp and Google Messages. However, Whittaker warned that encryption alone cannot protect users when AI systems operate with near–root-level access on their devices.

During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user’s behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.

Whittaker characterized this architectural shift as “breaking the blood-brain barrier” between applications and the operating system. Once that boundary is crossed, either through compromise or intentional design choices, individual apps can no longer guarantee privacy on their own. She said companies deploying AI agents, particularly at the OS level, must recognize how reckless such designs can be if they undermine secure communications.

In O’Reilly’s Clawdbot research, he identified hundreds of exposed control panels reachable over the public internet, some lacking any authentication. These interfaces provided access to full conversation histories, API keys, OAuth tokens, and command execution features across services including Slack, Telegram, Discord, WhatsApp, and Signal. In several instances, Signal device-pairing data was stored in plaintext, enabling attackers to take over accounts remotely.

According to O’Reilly, the issue extends beyond individual bugs and reflects a broader pattern. AI agents require extensive privileges to function, yet they are frequently deployed without adequate security hardening. Common misconfigurations, such as treating all connections from loopback addresses as trusted when used behind reverse proxies, can expose systems to the internet unintentionally. Even when authentication is enabled, concentrating credentials and conversation history in a single system creates an especially attractive target.

Whittaker emphasized that debates around encryption should not be confined to abstract or academic arguments. Although the Signal Protocol itself remains cryptographically secure, she warned that privacy in practice depends on the security of the entire system. If the layer that processes decrypted messages is compromised, the protections encryption provides become irrelevant.


Thoughts?

2 Likes

She’s totally right of course. AI agents open up whole new vectors for exploits, they need to be heavily restricted in what they can access on your machine. I think on Windows they’re making the AI agents have their own separate user account with more restrictions. It’s definitely something that needs to be very carefully handled. Google also has some research on how it should be handled.

It seems right now that people are just deploying it on their system and giving it full carte Blanche access to everything and just sort of YOLOing it for some reason, I don’t really understand it. I guess that’s just standard on desktop though.

1 Like

I am so glad to have left Windows behind years ago to learn macOS and now even more glad to be learning to use Linux instead seeing how Apple has its QA for their OS this version.

Happy to be enshittification sensitive. Even happier for alternatives existing that I can move to.

Said people literally want all their decision making offloaded to an AI lol.

This is like the professor who wrote a whole paper trying to warn people that letting AI have write privileges is a bad idea. His brain was so degraded he legitimately thought people need to know that the delete button works as intended.

Please only include a couple of relevant snippets from the article in your post. That way, people can quickly get a gist of the contents while still directing traffic to the original source for those who want to read the entire article.

3 Likes

She’s definitely not wrong, but AI does feel like a subset of a larger, systemic problem: normalizing high-level admin access to untrusted apps, often as a prerequisite to run the damn machine

Play Services has been doing this on Android for over a decade; by default, your phone gives Google access to pretty much all of this. imo AI is just the latest, trendy, convenient tool to seduce the masses

4 Likes

I think the difference here is Play Services are a deterministic program. Agentic AI without any restrictions can take any input from anywhere on your system and perform actions based on that input in other parts of the system. It’s almost like if any app was allowed to rewrite Play Services to do whatever it wants, so instead of just trusting Google now you’re having to trust every single app and all possible input from every single app including all messages from random strangers on social media apps not to screw you. It’s almost incomprehensibly insecure lol

3 Likes

I agree, Google Play Services has relatively limited functionality (we don’t really know this because it’s proprietary) and AI agents take compromise of OS-level security to another level. However the more general problem of giving wide admin access to untrusted apps shouldn’t be ignored because agentic AI is threatening to become in fashion.

For people interested, AI agent Wikipedia page.

1 Like

That’s a really strong, pretty terrifying insight. I think you’ve realigned my POV:

The root cause is the same as it ever was: I’ll complacently let this spyware run with admin privileges on my device. But the consequences are growing more severe - while something like Play Services would rigidly export specific data as per its instructions, AI agents can compromise your device in new, creative, unpredictable ways

This blog post by trail of bits is really good, they were able to exploit agentic browser AI using some pretty creative methods:

And that’s just in the browser, imagine what they could do with an OS-wide version.

3 Likes

I’m curious if anyone has insights into the status of the agent running inside the Brave PWA. It is in its own browser.

edit. I read a fria’s post.

Without proper isolation, these agents can be exploited to compromise any data or service the user’s browser can reach.

Tools should not authenticate as the user or access the user data. Instead, tools should be isolated entirely, such as by running in a separate browser instance or a minimal, sandboxed browser engine. This isolation prevents tools from reusing and setting cookies, reading or writing history, and accessing local storage.

This approach is efficient in addressing multiple trust zone violation classes, as it prevents sensitive data from being added to the chat history (CTX_IN), stops the agent from authenticating as the user, and blocks malicious modifications to user context (REV_CTX_IN). However, it’s also restrictive; it prevents the agent from interacting with services the user is already authenticated to, reducing much of the convenience that makes agentic browsers attractive. Some flexibility can be restored by asking users to reauthenticate in the tool’s context when privileged access is needed, though this adds friction to the user experience.

These recommendations require further research and careful design, but offer flexible and efficient security boundaries without sacrificing power and convenience.

> Please only include a couple of relevant snippets from the article in your post. That way, people can quickly get a gist of the contents while still directing traffic to the original source for those who want to read the entire article.

Also, it's more convenient if you are browsing from the wider fediverse with Fedilab as it doesn't collapse long posts.

This seems tragic for people who depend on normie computers and phones in the handicapped community.

A real world example of this. My supervisors Grandma is blind and has weak bones. He wasn’t home and she fell in the bathroom and she broke her hip.

Her phone (Iphone) was clear in the other room and she shouted out for Siri to call 911 and contact her emergency contacts.

Had it not been for that IPhone Lord knows how bad that could have went.

He was telling me that Iphones are top notch and popular with blind people because voiceover is wayyyyyy ahead of voice assistants in Android it it’s functionally less confusing.

Also many blind folks rely on windows and Mac os for services and software either unavailable or completely useless in Linux.

If I had deep pockets I would join the graphene project if they let me and delegate and pay for a team to focus on things like a voiceover for android that works just as good as on an iPhone and other features the blind rely on that’s just not currently in the scope of what android can do in it’s current form.

Is this something an average user can protect themselves from?

It needs to be carefully considered in the OS when it’s implemented. There’s a few different approaches to isolating and restricting what AI agents can do but it’s early so there’s not a consensus on what the best thing to do is yet.

Right now it’s all opt-in so you have to go out of your way to use it, so as an end user you can just do nothing and you’re protected. It’s the people that are installing and enabling these agents when we’re in this very early stage that are taking a big risk here.

1 Like

I’m not a developer so this may be a stupid question, but is there a way for an app like Signal to know if agentic AI is actively running on a device and therefore be programmed to warn user (on both ends?) that security may be compromised?

I think AI is the biggest risk for humanity since nuclear weapons, maybe its even more dangerous.

1 Like