I was chatting on a privacy subreddit and came across someone’s comment. Their claims about ChatGPT (particularly its use for therapy or discussing personal things) actually motivated me to export my data and delete my accounts (at least I got some insight from them?):
As I just said in another thread: don’t forget that ALL of the data is stored forever, is being used to profile you, and could plausibly be leveraged against you in the future. This huge push to use AI as a therapist is all over Reddit for a reason.
Do you think it’s true? I’m torn between it sounding very plausible, and questioning the reliability of someone privacy-obsessed who thinks anyone who says otherwise is a “bot”.
LLMs need all the training data they can get, it’s basically the fuel that makes them better.
I strongly suspect that almost all of them never delete anything, despite any “do not use my data for training” or deletion requests.
Whether it’s true or not makes little difference, since you’re not supposed to share any personal information (PII) with any AI anyway.
Cautiously use AI for help you actually need, but never for mental health or anything personal like that. Use it as a tool for utilitarian purposes only.
And yes, you should assume any AI service stores all info fed into it. There are (or may be) some exceptions via the API: if you use another front end for the service, you can select the option of not using your queries as training data, only to generate answers. But even then, I wouldn’t trust it fully.
If someone is lacking support, reaching out to an AI chatbot is better than nothing and can help point them in the right direction.
If you can’t run a model locally, at least use anonymous/no-account websites, do so strictly through Tor Browser, and don’t share any identifiers (locations/names/etc).
You can also use and pay for NanoGPT anonymously, and it supports a pay-as-you-go model, so no subscriptions. Any model is available on it. It’s a privacy-focused front end for all the commercial generative AI models.