For many of the features, I see the statement below. Is there a privacy or security risk context I am missing?
On-device AI
These use small AI models that download to your device if you use the feature. This approach helps protect your privacy.
I am not aware of any specific risks, and local translation is a substantial privacy improvement in my eyes. The rest of the features seem neutral/benign (with one exception).
The only current exception would be the AI Chat in the sidebar, which is benign out of the box (because it is inactive) and isn’t a privacy ‘risk’ per se. It just depends on what model/service provider you choose to use. Unless you are using a local model there will be privacy/trust tradeoffs in one form or another (that’s just the nature of using a chatbot hosted by a 3rd party). Beyond your chosen chatbot, I’m unaware of any specific risks.
I think the people with an aversion to AI in all its forms are not primarily motivated by privacy or security; they have other personal reasons.
Using self-sufficient local models means you are no longer requesting external resources or sending external queries, so there is no network traffic for a man-in-the-middle (MITM) to observe or tamper with.
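To illustrate the point: a locally hosted model is typically reached via a loopback address, and loopback traffic never leaves the machine, so there is nothing on the wire for a MITM to intercept. A minimal sketch (the hostnames here are just illustrative examples, not anything the browser actually uses):

```python
import ipaddress
import socket

def is_loopback(host: str) -> bool:
    """Resolve a hostname and report whether it points at the local machine."""
    addr = socket.getaddrinfo(host, None)[0][4][0]
    return ipaddress.ip_address(addr).is_loopback

# Queries to a local model endpoint stay on-device: no observable traffic.
print(is_loopback("localhost"))    # True
print(is_loopback("127.0.0.1"))    # True
# Any remote chatbot host would resolve to a public address instead,
# meaning every query crosses the network and inherits its trust tradeoffs.
```

This is why a local model sidesteps the privacy tradeoffs of a third-party-hosted chatbot entirely: there is no second party to trust.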
Yeah, the question was a bit rhetorical. It was split off, but wasn’t intended to be its own thread outside of context.
My fault. I took your question at face value, which probably caused it to get split off.
The posts were split into a separate topic because they were not relevant to addressing the host topic’s question.