Can Bots Read Your Encrypted Messages? Encryption, Privacy, and the Emerging

Tech Policy Press published a preprint paper examining whether AI assistants could break end-to-end encryption. The authors found that the “Trusted Execution Environments” proposed by Apple and Meta to protect data privacy are not sufficient to fully guarantee confidentiality and security.

AI assistants are programs designed to interpret everyday language and perform computational tasks. Today’s AI assistants are able to handle a wide range of tasks, including text analysis, content creation, code generation, language translation, and more. At the core of these technologies are programs trained on data to identify patterns, e.g., large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama, which process complex inputs (queries) and provide contextually relevant responses.

Putting this together, it seems impossible for an application with no access to message content to initiate AI processing on that same content. To introduce AI into E2EE, the strictness of E2EE is stretched along two dimensions: what counts as an “end,” and where that end is.

It would not be a strict violation of E2EE system design if you were to copy and paste your messages into a chatbot (though you might be violating the norms of confidentiality and privacy in the context of the conversation). If an application were to facilitate that for you, however, it would need to guarantee that the processing happens in a way that preserves E2EE, such as processing on the end—your device—and not on another computer, such as the application servers. Both Apple Intelligence and MetaAI have proposed “trusted execution environments” (TEE) as the way to keep private the data used for AI training and processing.

They recommended the following steps to the public:

Practical Solutions and Recommendations for the Public

Aside from informing technologists, companies and regulators, it’s important to provide actionable steps the public can take to protect their privacy in the era of AI and encryption. Based on what we know about how companies like Apple and Meta plan to integrate AI into applications that have promised privacy, here is what anyone can do to help maintain their privacy and confidentiality:

  • Choose OS-level app permissions carefully: Device-wide AI capabilities like Apple Intelligence mean that you need to be aware of which of your applications interact with AI features.
  • Review App Settings: Regularly check the privacy settings on your applications. If you’re concerned about privacy, turn off AI-based features like message summarization or smart replies.
  • Be Aware of What You’re Sharing: Be mindful of the data you share with AI services, especially personal or sensitive information that could be used for training or other purposes. When applications tell you they might use your data for training AI, believe them. Passwords, contact information, and a wide variety of sensitive information might end up in an AI model and out of your control.
  • Beware of Opt-in Conditions: If you choose to invoke MetaAI or Apple Intelligence features in a private or confidential setting, be sure you understand the limitations: is it for all chats? Is it forever?
  • Talk to Your Contacts: If you are having sensitive conversations over E2EE services, have a conversation with relevant contacts, and make sure they aren’t inviting bots to the conversation.

Personally, I am not a fan of AI assistants running on-device. This could be a good usage of GrapheneOS’s work profiles feature…if for some reason you need an AI assistant installed.


You should always assume (even when it’s not strictly true) that:

  • AI Agents exist to serve corporate interests, not yours.
  • AI Agents will build a profile of you, if you give them sufficient data.
  • AI Agents will share any information they can (especially if they have any access to your input devices, such as your keyboard to see what you’re typing) with hostile governments.
    • Even if you like your government and think they won’t abuse you, governments get hacked. Usually by other nations’ governments.
  • Client-side AI will rat you out if poked by the authorities (and may even hallucinate crimes you didn’t commit).
  • Server-side AI is one subpoena or system compromise away from being weaponized against you.

My advice in general for AI in the context of privacy is simple: Turn it off.


Preach!

Sky write this.

AI is still useful.

Just self-host one, disconnect it from the internet, and you're good!

A sound alternative, but not always a feasible one. It's like telling someone who can't afford something to just get more money.

Point is, self-hosting is not always possible. But yes, local AI/LLMs are always the best way to go.

This is such a well-written article. Thanks for linking to it!

I’ve seen a lot of messages going around on Signal threads warning about this. They often contain inaccurate information, so I wrote up a “fact-checking” style article about it:

I’m going to add in a link to the techpolicy.press article you shared!


No. You can run a smaller, “dumber” model that fits in whatever RAM your consumer GPU has now.

Even if you only have an iGPU, you can still run LLMs on CPU+RAM. It's not going to be the fastest at spitting out words, but at least you can use it for spell checking and rewriting documents.
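As a rough rule of thumb for whether a model fits in your GPU VRAM or system RAM, weight memory is roughly parameter count × bytes per parameter, plus some runtime overhead. This is an illustrative back-of-the-envelope sketch, not a benchmark; the 20% overhead factor is an assumption to cover the KV cache and runtime buffers:

```python
def model_memory_gb(params_billion: float, bits_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough estimate of memory needed to load an LLM's weights.

    params_billion: model size in billions of parameters (e.g. 7 for a 7B model)
    bits_per_param: precision/quantization (16 for fp16, 4 for 4-bit quantized)
    overhead: assumed multiplier for KV cache and runtime buffers
    """
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# A 7B model quantized to 4 bits needs roughly 4.2 GB
print(round(model_memory_gb(7, 4), 1))

# The same model at fp16 needs roughly 16.8 GB
print(round(model_memory_gb(7, 16), 1))
```

So a 4-bit 7B model fits comfortably in 8 GB of RAM or VRAM, while the unquantized version would not, which is why quantized models are the usual choice for consumer hardware.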

Or just use duck.ai with an open-source model and no identifying information.