Please do not spread misinformation.
You can host and run many LLMs on your own hardware with no internet connections. Your claims are baseless.
Of course I meant ONLINE AI services, not offline local models. Offline AI is great. But the problem is, only wealthy people can afford the hardware needed to run good models. How much would I have to pay for a machine that runs Qwen's high-end model? A lot of cash.
That's true in general, but I'm talking about what's happening right now, where the New York Times is using bullshitty excuses to gain access to everyone's personal data.
Of course. Everyone should have seen that coming. The cloud is someone else's computer, and there's no guarantee whatsoever about deletion or retention on someone else's computer. Hence why I'm being very careful: choosing local-first solutions, choosing encryption, etc.
Local solutions are very limited; we need responsible AI companies that actually care about their users.
…so is OpenAI. Even if you pay for OpenAI, they still keep everything, and you agree to their surveillance and data vacuuming in the terms of service.
Limited only by your hardware/budget, competence, and attention span. Local solutions scale from tiny models that run on a smartphone, to large, capable models, and even very large models that require substantial hardware to compete in the same ballpark as the big-name players.
DeepSeek or Qwen 235B would be a couple of examples.
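To put rough numbers on the "substantial hardware" point, here is a back-of-envelope memory estimate for hosting models of that size. The byte-per-weight figures and the ~20% overhead factor for KV cache and activations are assumptions (a common rule of thumb, not exact figures; actual requirements vary with context length and runtime):

```python
# Rough back-of-envelope memory estimate for hosting a model locally.
# Assumption: memory ~= parameter count x bytes per weight, plus ~20%
# overhead for KV cache and activations.

def estimate_gb(params_billions: float, bytes_per_weight: float,
                overhead: float = 0.2) -> float:
    """Return an approximate memory footprint in gigabytes."""
    weights_gb = params_billions * bytes_per_weight  # 1B params at 1 byte ~ 1 GB
    return round(weights_gb * (1 + overhead), 1)

# A 235B-parameter model (Qwen 235B class) at different quantization levels:
for name, bpw in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"235B at {name}: ~{estimate_gb(235, bpw)} GB")

# For contrast, a 7B model (Mistral 7B class) at INT4:
print(f"7B at INT4: ~{estimate_gb(7, 0.5)} GB")
```

Even at aggressive 4-bit quantization, a 235B model lands in the hundred-plus-gigabyte range, while a 7B model fits on a mid-range consumer GPU, which is roughly the gap this thread is arguing about.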
But the user gave consent for them to do that, so that's their problem, and they could always revoke that consent; now they can't.
If you're a normal person and you locally hosted those models (Qwen is pretty good, DeepSeek is garbage imo), it wouldn't come close to what ChatGPT can do now. If you really want to locally host something that can compete with the likes of ChatGPT, Gemini, and Bing, you would probably need to graduate from computer science three times and have a few million dollars up your sleeve, and by then you might as well just start an AI company that is responsible with its data.
Well, like with everything, you need something in between. There's a saying in the community: "If a product is free, you are the product." Sure, you can use online LLMs, but the price is handing all of your data to the company. On the other hand, using offline LLMs requires at least a modern computer with a GPU (one that can run Win11, for example). It won't be as fast as you would like, but at least you can be sure it's private (you can always double-check with sniffers like Wireshark).
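For anyone who wants to do that double-check, a minimal capture recipe might look like the following. The interface name (`eth0`), capture filename, and local subnet are assumptions; adjust them for your machine:

```shell
# Hypothetical sketch: capture traffic while the local LLM runs,
# then inspect the capture for anything leaving the machine.
sudo tcpdump -i eth0 -w llm_session.pcap &   # start capturing in the background
CAP_PID=$!

# ... run your offline LLM session here ...

sudo kill "$CAP_PID"

# Inspect the capture in Wireshark, or with tshark on the command line,
# filtering out local-subnet chatter (example subnet assumed):
tshark -r llm_session.pcap -Y '!(ip.addr == 192.168.0.0/16)'
```

A genuinely offline setup should show no packets to outside hosts during the session.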
You seem to dunk on NYT a lot without really acknowledging that OpenAI is just as greedy. The log harvesting would have started anyway, if it wasn't already happening.
Yes, for short tasks like a simple query I'm using GPT or Claude. I try to prompt in general terms, without precise information. For daily tasks, however, I'm using Mistral 7B. If you don't have the budget to run even the lightest LLM, you might wanna quit it altogether. People used to live without it, you know.
We need responsible AI companies that actually care about their users.
Not happening.
You will never be 100% sure that a company you picked doesn't collect and sell your data behind your back.
There are multiple examples of that. Even Proton, which is a pillar of security and privacy, needs to comply with law enforcement requests (source: their Privacy Policy, section 5, Data Disclosure). So they obviously gather some information.
Next we have VPNs, and don't even get me started on those.
LLMs will always collect logs and data, and in most cases send it over to advertisement companies. That's how the world of IT has been working for the past 20 or so years.