Is this a breakthrough?
I'm not really sure how this works exactly. Using a local model was already private, so does this mean that even using a non-local LLM API would keep the requests private thanks to a TEE?
I don't know too much about the subject matter either. Just thought I'd share this anyway.
Essentially, it's supposed to keep your requests isolated and private even from the owners of the infrastructure, and it's supposed to let you verify that the service is actually running in a TEE on secure hardware, and verify what is running in it.
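For anyone curious what "verify what is running on it" could look like in practice, here is a rough sketch of the client-side check, assuming a hypothetical JSON-shaped attestation report signed with an EC vendor key; real quotes from Intel TDX, AMD SEV-SNP, or NVIDIA's confidential-computing GPUs have their own formats and should be checked with the vendor's own verification tooling:

```python
# Hypothetical sketch: verify a TEE attestation report on the client side.
# Assumes the report is JSON and signed with an EC key (real formats differ).
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_pem_public_key

# Hash of the exact inference-server build you expect to be running,
# published by the service or reproduced from source (placeholder value).
EXPECTED_MEASUREMENT = "3f6c..."

def verify_attestation(report_bytes: bytes, signature: bytes,
                       vendor_pubkey_pem: bytes) -> bool:
    """Return True if the report is signed by the hardware vendor's key
    and the code measurement inside it matches the expected build."""
    pubkey = load_pem_public_key(vendor_pubkey_pem)
    # Raises InvalidSignature if the report was not produced by genuine hardware.
    pubkey.verify(signature, report_bytes, ec.ECDSA(hashes.SHA256()))

    report = json.loads(report_bytes)
    return report.get("code_measurement") == EXPECTED_MEASUREMENT
```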
So as I understand this, the request gets processed inside the encrypted TEE, and the API provider can't know what's been discussed anymore.
That totally solves the AI privacy problem!
This NVIDIA GPU-capable hardware is running in the data center, I think, and the software takes care of validating the security of every step?
I'm not super familiar with it, but I think the idea is that you can verify it yourself if you want; I don't know how to do that and would have to look into it. If it works perfectly as advertised, then yes, I believe it essentially prevents anyone from peeking in. However, there have been vulnerabilities discovered in this technology. Something like homomorphic encryption would be better, I think, since attackers would have to break the encryption rather than find vulnerabilities in hardware to get your data. But I think it's definitely a huge improvement over not having it.
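To make the homomorphic-encryption comparison a bit more concrete, here's a toy illustration of "computing on data without decrypting it", using textbook RSA, which happens to be multiplicatively homomorphic (this is not secure and not what real FHE schemes look like, just the basic idea):

```python
# Toy demonstration of (multiplicative) homomorphism using textbook RSA.
# Tiny insecure parameters, purely to show computation on ciphertexts.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

c1, c2 = enc(6), enc(7)
product_ct = (c1 * c2) % n    # multiply the ciphertexts...
assert dec(product_ct) == 42  # ...and the result decrypts to 6 * 7
```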
Not really. In fact, both Apple and Google have announced more comprehensive "private AI" solutions.
In the threat model that the TEEs of the world operate in, local models are not private (since the adversary is the Host OS itself).
Unfortunately, the TEEs, on their own, do no such thing. What they do offer is an isolated compute environment for confidential code/data. The "infrastructure owners" can pretty much run whatever they want in TEEs.
It is the other way around. The hardware here enforces the authenticity of the software & security of the data.
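To illustrate what "the hardware enforces the authenticity of the software" amounts to: at launch, the TEE measures (hashes) the code and configuration it loads, and that measurement later shows up in the signed attestation report. A toy sketch, with a hypothetical file name and a single-file measurement (real measurements cover the whole launched image and configuration):

```python
# Sketch: a "measurement" is essentially a hash of the loaded software,
# computed by the hardware/firmware at launch rather than by the host OS,
# so the infrastructure owner cannot lie about what is running.
import hashlib

def measure(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(measure("inference_server_image.bin"))  # hypothetical image file
```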