Introducing Lumo, the AI where every conversation is confidential | Proton

Those are probably worked on by completely different teams.

1 Like

Proton did not create (train) its own models; that would be way out of scope for them. They use 4 different open-weight models, and in the case of OLMo 2 32B even a fully open-source model (its training data is also available).

You are correct that almost all LLMs are trained on copyrighted data, or more generally on internet-crawlable text, with very rare exceptions such as the OLMo model family.

From a user perspective, it isn't true that what you feed it (prompts, files) ends up in the base model. After an LLM has finished pre- and post-training, it is "frozen in time" while it is deployed and used (inference). Typical AI providers do collect usage and interaction data, and use parts of that to train the next model generation.

5 Likes

They are constantly improving their current services, just check out their quarterly roadmaps.

Lumo was made by a separate team. From their release announcement:

Lumo comes from Proton’s R&D lab that has also delivered other features such as Proton Scribe and Proton Sentinel and operates independently from Proton’s product engineering organization.

5 Likes

That’s why it’s very aesthetically pleasing and doesn’t feel like a perpetual beta :joy:

Proton is looking more and more distant from its userbase. Instead of fixing stuff or introducing things that people requested (re: their UserVoice feature-"vote" forum), they gave everyone the Wallet, and now the AI. Incoming Proton self-driving car next?

2 Likes

Until it uses homomorphic encryption, Proton is not doing anything better, and is actually worse with the models they use. If anyone wants better models with local storage of chats and E2EE sync of chats, Brave Leo is miles ahead. Duck.ai is also helpful, but without local storage of chats and E2EE sync, it is not a competitor. Apple Intelligence is the closest to actually privacy-preserving AI; hopefully they make it useful next release.

1 Like

Duck.ai chats are stored locally on your device, not on DuckDuckGo or other remote servers. You can opt out of recent chats at any time in Duck.ai settings. (source)

1 Like

If anyone is wondering about the limits, I just saw this on FB:

[images]

So, an Unlimited sub will not get Lumo.

So that means not even the Family plan at all? Like not even 2 or 3 Lumo Plus seats?
That sucks, but so far it's been working for me as if I have Lumo Plus, no idea why; at least it did the last time I tried, which was of course yesterday.

Only in the sense that the alternatives make no efforts whatsoever. Apple Private Cloud Compute boils down to a monumentally technically complex pinky promise, which is still vastly weaker than encryption.

Even the best homomorphic encryption schemes currently available have little practical application to machine learning, and we are many, many years away from that potentially changing. There is no path forward for private AI compute outside of running it locally yourself.

3 Likes

I wonder why we haven't seen any homomorphic encryption yet; it would solve a ton!
Are there any projects in development?

1 Like

Here’s a question, what about Maple AI?
https://trymaple.ai/
It not only uses a secure enclave but also does end-to-end encryption.

4 Likes

You can really only use modern homomorphic encryption in AI with the tiniest models, i.e. the types of models you could easily run locally on even a weak device.
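To make that gap concrete, here is a toy sketch of Paillier, a classic additively homomorphic scheme. This is not what production private-ML research uses (that would be lattice FHE schemes like BFV/CKKS), and the hardcoded tiny primes make it insecure by construction; the point is just to show what "computing on ciphertexts" means:

```python
# Toy Paillier cryptosystem (additively homomorphic).
# Hardcoded tiny primes -- completely insecure, illustration only.
# Requires Python 3.8+ for pow(x, -1, n) modular inverse.
import random
from math import gcd

p, q = 10007, 10009                 # real keys use ~1024-bit primes
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                # decryption helper

def encrypt(m):
    """E(m) = (1+n)^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: MULTIPLYING ciphertexts ADDS plaintexts.
a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b      # the "server" never saw 1234 or 5678
```

Notice that a single homomorphic addition already costs big-integer modular exponentiations, while an LLM forward pass is billions of multiply-accumulates plus nonlinearities that homomorphic schemes handle very poorly. That is where the "tiniest models only" limit comes from.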

Their privacy/transparency "guarantees" are not guarantees made with encryption, they are similar to Apple Private Cloud Compute.

5 Likes

We have, just not for a full-on chatbot yet; right now it's more limited.

2 Likes

I would urge you to reconsider:
https://trymaple.ai/proof

3 Likes

At the end of the day, Maple, like Apple (in nearly all cases), is performing their compute on plain text queries. Using secure boot and secure enclaves and other techniques to try and prevent adversaries (or themselves) from looking at those plain text queries is simply not (and will never be) the same as mathematically protecting that data with strong encryption.

It is still a worthwhile effort to protect their users that will provide more security than OpenAI and other AI companies just running on GPU servers that any of their employees can access.

It is also still essentially a pinky promise that they have implemented all these security measures correctly (which I would not necessarily take for granted given the complexity required of these setups), and it doesn’t make sense to treat it much differently than VPN companies claiming they’re using disk-less setups and not logging.

6 Likes

This is currently a known 'issue'. Mods on Reddit are saying to contact support so they can help you with the upgrade.

Thanks for the correction. Does the "not e2ee sync" still stand?

Exactly in that sense.

I am bullish on homomorphic encryption. Some of the newer publications show promise.

Jonah is perfectly correct, and maybe slightly generous to Maple (since Apple owns its hardware, Maple does not). Maple uses the same premise as Private Cloud Compute (in fact, anyone can use AWS to build their own Maple, albeit less cost-effectively). E2EE means end-to-end, not end-to-secure-enclave, then secure-enclave-to-end encryption.

3 Likes

What is the source for this understanding?

That conflicts with their own characterization of their service. If they are blatantly misrepresenting the level of trust required to use their service, they should be called out (with specific technical details) and asked to clarify or improve the clarity of their communication. And if you are maybe overstating/exaggerating, that should probably be walked back or toned down. Your comparison to RAM-only servers seems flawed, unless I fundamentally don't understand the benefit of using a secure enclave (which is entirely plausible).

  • All communications are encrypted locally on your device before being transmitted, ensuring your data is secured from the start.
  • Even during AI processing, your data remains encrypted. The entire pipeline through to the GPU is designed with privacy as the priority.
  • Our servers can’t read your data. We use secure enclaves in confidential computing environments to verify our infrastructure integrity.
  • We provide cryptographic proof. Our commitment to transparency means you don’t have to trust us - you can verify it yourself.

(emphasis mine)

I'm currently pretty excited about the potential of services like MapleAI (and my understanding is that it significantly minimizes trust in meaningful ways relative to other options, short of local hosting), but I'll admit to a pretty limited understanding and awareness of TEEs / Confidential Compute. If I'm assuming more technical protection than can actually be provided, I'd like to understand specifically where and how, and have my expectations lowered where needed.

Can you be more specific about the criticism you are making? Given how MapleAI has been designed, at which specific point is there not a technical guarantee of confidentiality, who could potentially exploit that, and how?

Thanks for the correction. Does the "not e2ee sync" still stand?

I'm not positive; I don't need/use sync, so it isn't something I've really looked into. AFAIK, there is no sync whatsoever for AI chats. But DDG sync for their other stuff is apparently E2EE, so I assume that if they add the ability to sync chats in the future, that would be E2EE as well (but that's speculation).

3 Likes

I can find very little technical documentation from Maple AI about how their service actually works, so I was not making any claims based on their claims. My understanding simply stems from knowing what is and isn’t within the realms of technical possibility with modern encryption schemes, not from any knowledge about Maple AI, which I have never heard of before an hour ago.

Performing ML compute on encrypted data is simply not possible with computers today, unless you are willing to use a very tiny model and put up with a ton of latency (i.e. 30 minutes to process a tiny thumbnail).

Therefore, I can confidently say Maple AI is not doing that, even though I don't actually know anything about them.

If you read Apple's article @fria linked above, you'll see that the most complex computation Apple can use homomorphic encryption for is basically a key-value lookup, which is something you could feasibly do on something with the computational power of punch cards.
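That kind of encrypted key-value lookup can be sketched with a plain additively homomorphic scheme: the client sends an encrypted one-hot selector vector, and the server combines it with its database without ever learning which entry was requested. This is a toy illustration using textbook Paillier with insecure hardcoded primes, not Apple's actual protocol (which is built on lattice-based schemes):

```python
# Toy private key-value lookup via additive homomorphic encryption.
# Textbook Paillier with tiny fixed primes -- insecure, illustration only.
import random
from math import gcd

p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# --- client: wants database[2], sends an encrypted one-hot selector ---
selector = [encrypt(1 if i == 2 else 0) for i in range(4)]

# --- server: sees only ciphertexts, never learns the index ---
database = [111, 222, 333, 444]
# c_i^{v_i} scales each encrypted selector bit by v_i; the product sums:
# result = E(sum_i selector_i * v_i) = E(database[2])
result = 1
for c, v in zip(selector, database):
    result = (result * pow(c, v, n2)) % n2

# --- client: decrypts the answer ---
assert decrypt(result) == 333
```

Even this lookup over four entries costs the server one modular exponentiation per database entry, which is why it scales to dictionary lookups but nowhere near a chatbot-sized model.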

I’ll just end with this from Maple:

4 Likes