The privacy-centered or privacy-adjacent cloud-hosted AI options that interest me right now are:
- Providers offering TEE / Confidential Compute
- Non-TEE / trust-based privacy
  - Duck.ai
  - Proton Lumo
  - Brave Leo
  - Whatever Apple’s muddling their way through
On DDG / Lumo / Brave
But as I understand it, isn’t DDG overselling the privacy thing here a bit? They basically act as a proxy to whatever AI service you call. And while maybe that protects you directly, it doesn’t prevent the query itself from being logged by the AI provider.
Maybe slightly, but the contracts they’ve made stipulate that conversations will not be used to train models and no data will be kept beyond 30 days (with exceptions for abuse/fraud and compliance iirc).
Furthermore, the recent NYT case basically means that any query sent through DDG has to be logged by the AI provider anyway, as I understand it.
As I understand it that specifically applies to OpenAI hosted models. It’s a consideration if you use these models, but if you use open models, I don’t think it’s relevant.
Fwiw, Lumo and Brave are not prevented from logging queries either. In all of these cases we are relying on trust in the service provider(s) and whoever is physically hosting the models.
My understanding is that providers who offer TEE implementations can significantly reduce, but not fully eliminate, the trust component. I won’t pretend to have a deep understanding of the technical benefits and limitations of this approach though.
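To make the "reduce but not eliminate" point concrete, here's a toy sketch of the shape of remote attestation. This is illustrative only and not any real provider's API: real TEE attestation (e.g. Intel TDX or AMD SEV-SNP) involves hardware-signed quotes verified against vendor certificate chains, while here the "quote" is just a dict and the hypothetical `EXPECTED_MEASUREMENT` stands in for an audited build hash.

```python
import hashlib
import hmac

# Hypothetical measurement the client expects: a hash of the exact inference
# server build that has been audited (by you or a third party).
EXPECTED_MEASUREMENT = hashlib.sha256(b"inference-server-v1.2.3").hexdigest()

def verify_attestation(quote: dict) -> bool:
    """Return True only if the enclave reports the expected code measurement."""
    reported = quote.get("measurement", "")
    # Constant-time comparison; in a real flow this would be a signature
    # check against the hardware vendor's certificate chain.
    return hmac.compare_digest(reported, EXPECTED_MEASUREMENT)

def send_prompt(quote: dict, prompt: str) -> str:
    # Refuse to talk to an enclave whose measurement doesn't match. This is
    # the step that shrinks (but doesn't remove) the trust placed in the
    # operator: you still trust the CPU vendor and whoever did the audit.
    if not verify_attestation(quote):
        raise RuntimeError("attestation failed; not sending prompt")
    return f"sent {len(prompt)} bytes to attested enclave"

good_quote = {"measurement": EXPECTED_MEASUREMENT}
bad_quote = {"measurement": hashlib.sha256(b"tampered-build").hexdigest()}
```

The residual trust is exactly what the client can't check here: the hardware's signing keys and the correctness of the audited build itself.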
On Nano-GPT's privacy considerations
Just to be clear, Nano-GPT offers both models that run in trusted execution environments and standard models; only a subset (roughly 10-20%) of the models have TEE implementations available. If you are using the standard models, especially the proprietary ones, you can pay privately, but that is just one piece of the puzzle: the chats are not confidential.
I do think that Nano-GPT is a pretty attractive option if you stick with the TEE models. I like that it’s pay-as-you-go (pay per token), I like that they offer various open models, I like that they accept Monero payments, and I like that they offer TEE as an option.
It’s the same basic mechanism as Confer afaict (encrypted in transit + processed in a trusted execution environment).
Have you been able to figure out which models Confer is using and where they are hosted?
Also, is there anything that gives Confer’s approach an edge over others using confidential compute, or is it just the service you chose to mention/are most familiar with? (Others using confidential compute include Maple AI, PrivateMode AI, and Nano-GPT.)
Afaict when I briefly tested, the webapp didn’t specify which model is being used or offer any control over that. It was a super simple UI (pretty similar in look to open-webui if you’re familiar, but a bit more basic): no settings, few to no choices, quite barebones.
I’m having login issues with their passkey implementation, so I can’t log in to refresh my memory at the moment.