How about we don’t use off-device AI processing in the first place? It is incredibly hard to convince anyone that any method of private processing exists besides doing it locally anyway.
Reminds me of the following: I was looking for a decent way to automate image cropping for a project. I found out about thumbor, but kept digging a bit further and came across some random paid service.
The difference? The second one is probably just a wrapper around the first, open-source project, with a price tag and “AI” slapped on it.
It’s as if nothing is doable without AI nowadays, damn… People forgot that algorithms exist and that you don’t always need approximating black boxes that eat a bazillion watts of electricity to narrow down a solution to a not-so-complex problem.
Best part? It can be done locally in less than 50 ms.
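To illustrate how simple the deterministic core of this kind of cropping is, here is a minimal sketch in Python. This is my own illustration, not thumbor’s actual implementation; the function name and rounding choices are assumptions. It computes a centered crop box that matches a target aspect ratio, which a tool like Pillow could then apply with `Image.crop`:

```python
def center_crop_box(src_w: int, src_h: int, target_w: int, target_h: int):
    """Return a (left, top, right, bottom) crop box, centered in the
    source image, whose shape matches the target aspect ratio."""
    target_ratio = target_w / target_h
    if src_w / src_h > target_ratio:
        # Source is wider than the target: trim the left and right edges.
        new_w = round(src_h * target_ratio)
        left = (src_w - new_w) // 2
        return (left, 0, left + new_w, src_h)
    # Source is taller than the target: trim the top and bottom.
    new_h = round(src_w / target_ratio)
    top = (src_h - new_h) // 2
    return (0, top, src_w, top + new_h)

# Square thumbnail from a 1920x1080 landscape frame:
print(center_crop_box(1920, 1080, 300, 300))  # → (420, 0, 1500, 1080)
# Square thumbnail from a 1080x1920 portrait frame:
print(center_crop_box(1080, 1920, 300, 300))  # → (0, 420, 1080, 1500)
```

This is pure integer arithmetic, so it runs in microseconds; even the subsequent pixel copy and resize fit comfortably inside a 50 ms budget on a laptop.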
Not really possible with advanced LLMs that need a terabyte of RAM and a lot of compute on top. The small 3B on-device LLMs are nice to have, but they don’t compare to the upcoming 1200B model that Apple wants to buy from Google and run on PCC.
I think differently on this.
If I am ever using AI, then I’d like the best of the best available, to get the best possible results. In other words, I value quality above everything else, however infrequently I use AI, and that clearly won’t be possible on any local setup. I also suspect that most people value quality over anything else with AI when they expect the best outputs.
For some basic searches, sure - offline models can work for general-knowledge stuff.
Much like Apple’s solution I suppose this is the best that can be done within the constraints of today’s technology, but I still think we’re in a bad situation if society is going to collectively give up on improving local compute in favor of perpetually renting datacenter compute from our big tech overlords.
Most AI features aren’t that compute-intensive. Things like translation, voice recognition, and image editing can be done locally.
I agree LLMs are still tough to use on a phone, but it’s getting better.
I think a community of people who home-lab, self-host, and prefer better options will always exist. It may stay as niche as desktop Linux is today, but such folks will always be around. Let’s not forget this, and be thankful for the people who spread awareness and continue to showcase these tech hobbies.
But thanks so much for sharing the PDF. I look forward to reading and learning more about this “solution”.
I was referring to the myth of expecting privacy from cloud-based LLMs. I do agree that nobody is running these types of models