I know that some of you are as deep into privacy as I am into tech so I am hoping to get some different perspectives than just my own.
Background: we like local AI because conversations never leave your device, but hardware (how powerful your device is) is a real limitation.
What’s new is that technology has advanced quite a bit in the last 4-6 months and I can now run a good enough LLM locally on my phone to chat with docs, draft emails, summarize websites, etc. Ditto for consumer-grade laptops; just better quality. For image generation, it works on a consumer-grade laptop; haven’t tested anything on phones yet.
The other cool recent-ish development (I think) is that this can now be built as a browser app. There's nothing to install manually (zero setup), which makes it easy for less technical people, and you also get the sandboxing/security that modern browsers provide.
Combined, these two developments seem useful enough that I'm thinking of making this more widely accessible.
So here’s my crazy idea: Build an AI chat app that runs locally on your phone, tablet, or laptop and that has all the features of “commercial-grade” AI apps like ChatGPT or Claude but (1) stores all your data locally, (2) runs the compute locally, (3) only has access to files you actually give it access to, and (4) is open-source so people can inspect what it’s doing.
Another thought that crossed my mind is an opt-in fallback for very low-end phones or really demanding tasks: do the compute remotely. This would mean running inference on private servers with (1) your explicit approval before it happens, (2) no logs, storage, or training on your data, (3) masking of any PII before the request is sent, and (4) the server code being open-source as well.
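To make point (3) concrete: client-side PII masking could, at its simplest, be a regex pass over the prompt before anything leaves the device. This is just a sketch under my own assumptions — the pattern set here is illustrative and nowhere near exhaustive; a real implementation would likely use an NER model or a dedicated library (e.g. Microsoft's Presidio) instead of hand-rolled regexes.

```python
import re

# Illustrative patterns only -- real PII detection needs much broader
# coverage (names, addresses, IBANs, ...) and ideally an NER model.
# Order matters: SSN is checked before PHONE so the more specific
# pattern wins.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace recognized PII spans with typed placeholders
    before the prompt is sent to a remote server."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The server would then only ever see something like `"Email [EMAIL] or call [PHONE]"`, and the placeholders could be mapped back to the originals locally when the response comes in.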
In addition to the above, what would such an app need to have for you to consider giving it a try? (Also, why is this a stupid idea and I should be ashamed of myself for even suggesting it :D)