I saw a thread discussing them that insisted you shouldn’t use them, and that you should assume anything said to any chatbot that isn’t on local hardware WILL be monitored and sold, no matter how open source they are or what claims they make, because they all lie. Or at least that you should assume so.
I can’t run local AI. I have Alpaca and its best takes are…adequate. Nothing good - even the smallest models require me to shut off every other program, take 10 minutes to generate an answer, and threaten to set my laptop on fire. All to get mediocre answers. Annoying at best for social or therapeutic purposes, or for hashing out personal problems (schedules, blocks, etc.). Especially when it crashes every other inquiry.
So I started taking everything to the privacy alternatives, but now I worry.
On the other hand, I’ve said so much to the privacy models that there might not be any point in changing or stopping (kind of like my online footprint in general). I’ve been combating paranoia with oversharing, so maybe some people just don’t deserve privacy lol.
EDIT: On the last point, not to mention I didn’t even know about OpenAI retaining all logs indefinitely now. I literally just learned about it today, and I’m remembering how I submitted items for analysis that had PII left in them by mistake. I figured it wasn’t worth worrying about for a range of reasons. But maybe some people are just too far gone.
If you don’t share PII in your AI queries, or can avoid it, then NanoGPT is the best pay-as-you-go option out there for AI usage that’s as private and anonymous as it can be. Check it out!
Nano looks interesting. Admittedly, I was looking for something free.
But wouldn’t it run into the same issue of “all online LLMs are untrustworthy and privacy-violating”?
And even looking back on my posts from Duck, most of them are either asking it to analyze or advise on some personal issue or quirk, asking for creative/writing advice, or generating scenarios for entertainment.
I tried Venice’s paid version, but it hallucinated a lot and kept repeating itself. Brave’s Leo is only useful for short answers. I didn’t try Duck.
At the moment I have ChatGPT Plus and Gemini Advanced as paid AIs. I also have Nastia, but that is a totally different type of AI.
I tried GPT4All on my home computer, but sadly it is nowhere close to ChatGPT. Yes, privacy-wise, local models are the best, but they can’t do what I want. I need memory, for example: the AI should remember our previous conversations and keep important data like my job, my location, my responsibilities at work, which code or apps I asked it to write, etc. At the moment only ChatGPT has this.
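(For what it’s worth, you can approximate that kind of memory with a local model by persisting the conversation to a file and replaying it into each new request. The sketch below is just a rough illustration of the idea, assuming a local OpenAI-compatible chat endpoint such as the one apps like GPT4All can expose; the URL, port, and model name are placeholders, not guaranteed defaults.)

```python
# Rough sketch of DIY "memory" for a local model: save the chat history to a
# JSON file on disk and send it back with every new message. Assumes a local
# OpenAI-compatible endpoint; the URL and model name below are placeholders.
import json
import pathlib
import requests

HISTORY_FILE = pathlib.Path("chat_history.json")
ENDPOINT = "http://localhost:4891/v1/chat/completions"  # placeholder local server
MODEL = "local-model"  # placeholder model name

def load_history():
    # Previous turns become the model's "memory" of earlier sessions.
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return [{"role": "system",
             "content": "Remember facts the user shares (job, location, ongoing tasks)."}]

def chat(user_message: str) -> str:
    history = load_history()
    history.append({"role": "user", "content": user_message})
    resp = requests.post(ENDPOINT, json={"model": MODEL, "messages": history})
    resp.raise_for_status()
    reply = resp.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))  # everything stays local, on disk
    return reply

if __name__ == "__main__":
    print(chat("What did I tell you about my job last time?"))
```

The obvious caveat is the context window: once the history gets long, you would have to truncate or summarize it, which is presumably something like what the hosted “memory” features do behind the scenes.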
At work I have Copilot Enterprise, and it is very good when you ask about your own work. It searches all of your files, SharePoint, emails, and OneDrive contents and gives you accurate answers, but when it comes to answering technical questions, it also hallucinates a lot. For example, I wanted Copilot to modify an Excel file, but after 20+ tries it kept sending back the same file.
Gemini Advanced is good, but every answer is like an essay. Ask a simple question and Gemini will give you word salad.
I think I am going off topic, but those were my experiences with AI. Maybe Leo Premium can help, but the cost is up for debate: 15 USD for Brave Leo versus 20 USD for ChatGPT Plus or Google AI Pro with 2 TB (which you can also share with your family). I also have Perplexity Pro from a Telekom promo, and I have no idea how they got so big and well known with that product quality. My promo is ending next month, and I surely will not renew it.
It definitely got me interested enough in those better models that I’m going to try them all out, since I can get them for $15 on top of my Kagi Pro subscription.
I’ve already had a few ‘WTF’ moments using a model with memory for the first time. It’s a strange experience! As far as I know, ChatGPT is the only model currently offering this kind of persistent memory. Are there others out there with similar features?
So is the storing of AI chats just…not part of some people’s threat model?
I see people arguing that you should never trust ANY LLM that isn’t local, that they’re all lying about not storing chats, and that they’re all going to be hacked and leaked at some point. Meanwhile the people recommending privacy-focused AI as a nice midpoint between corporate AI and local get ignored and downvoted.
Of course, it all depends on what you use the model for. Naturally, you shouldn’t feed it any personal information, but the fact is, it can make your work much easier and more efficient, however you want to put it. And the information I provide isn’t essential or interesting to anyone else. I handle that stuff myself. I spend over half my workday killing time browsing the internet, and my bonuses have grown every year, so I guess they’re still happy with my work. I always take care of my own business myself.
CUDA and AI do the work of at least five people. I just hope others don’t learn to leverage all this themselves, because I’ve probably got another 30 years left in my career…
EDIT 2: I know it’s probably inappropriate to say this here. Still, I’m sure I take better care of my privacy than any of my coworkers. Or at least I’d like to think so… That’s just the way the world is: you have to stand out somehow to get by.