I came across this feature on this website. It talks about an AI VPN and I’m not sure what to make of it. It seems like a novel idea. This is what it says, “separate your prompt and context of your prompt from your user information. Prevent AI from knowing you better than yourself.”
NanoGPT does this too. You can pay for it and use it anonymously. They have no trackers on their website, and whenever they add an AI model to their offering, they make sure it's configured so the AI companies can't use prompt data for further training. Plus, you can get almost all of the major and some minor AI models on it.
This website/service seems sketchy to say the least.
I went to a Synology event last year and they were demoing AI stuff where basically you would type something into their AI helper, and then it would replace any sensitive data like names, phone numbers, etc. with fake placeholders before sending it to OpenAI. Then it would take OpenAI’s response and substitute your real information back in for the fake placeholders before giving you the answer.
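The core idea, as I understood it, boils down to something like this. This is a rough sketch, not their actual implementation; the name list and phone-number regex below are just stand-ins for whatever detection they really use:

```python
import re

KNOWN_NAMES = ["Alice Example", "Bob Example"]  # hypothetical; stands in for real PII detection
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Swap sensitive substrings for placeholders and remember the mapping."""
    mapping = {}
    counter = 0

    def placeholder(value):
        nonlocal counter
        counter += 1
        key = f"[REDACTED_{counter}]"
        mapping[key] = value
        return key

    for name in KNOWN_NAMES:
        if name in text:
            text = text.replace(name, placeholder(name))
    text = PHONE_RE.sub(lambda m: placeholder(m.group()), text)
    return text, mapping

def restore(text, mapping):
    """Put the real values back into the model's response."""
    for key, value in mapping.items():
        text = text.replace(key, value)
    return text

prompt = "Draft a reply to Alice Example, her number is 555-123-4567."
safe_prompt, mapping = redact(prompt)   # only this redacted version would go to OpenAI
fake_response = f"Sure! Tell {list(mapping)[0]} to call back at {list(mapping)[1]}."
print(restore(fake_response, mapping))  # real name and number substituted back in locally
```

The remote model only ever sees the placeholders; the mapping stays on your machine.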
Anyways… I think AI is pointless and you just shouldn’t enter this sort of information into it at all
Really? Gen AI does have some legitimate use cases (even though one must always be vigilant and not trust it outright with every piece of info it spits out).
There are plenty of use cases, even with a local LLM: transcribing, summarizing, translating, using it as a local knowledge base with RAG (rough sketch below), image generation, etc.
With deep-search-enabled generative AIs, there are even more use cases, such as content creation.
Day-to-day work use is much more constrained due to data ownership issues, but I think it will definitely transform the way we work.
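For the local-KB-with-RAG case, the core idea is just: retrieve the most relevant snippets from your own documents and prepend them to the prompt. Here's a rough sketch; word-overlap scoring stands in for a real embedding model and vector store, and the documents are made up:

```python
from collections import Counter
import math

DOCS = [
    "The VPN config lives in /etc/wireguard and is loaded at boot.",
    "Backups run nightly and are pushed to the NAS over SSH.",
    "The monitoring stack is Prometheus plus Grafana on port 3000.",
]

def vectorize(text):
    # Crude bag-of-words vector; a real setup would use an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, docs, k=2):
    q = vectorize(question)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

question = "Where is the VPN configuration stored?"
context = "\n".join(retrieve(question, DOCS))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` is what gets sent to the local LLM instead of the bare question.
print(prompt)
```

The point is that your documents never leave the machine; only the retrieval quality and the local model's size limit how useful it is.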
A major concern that almost all companies seem to be neither focusing on nor prioritizing is the accuracy of the AI’s work and output (i.e. the quality of its operation). Every now and then I test AI on things I know a lot about, and it still gets about 20% of them wrong or incomplete. I don’t think this should be acceptable; 95% is where I think the threshold should be, given how much dependence companies seem to want.
I’ve been using GitHub Copilot since before its public launch, for all sorts of things (related to codebases, not just coding), and it has been a productivity booster. If I were to hazard an estimate, Copilot authors close to 50% of all code I write these days (and an equal % of very subtle bugs, too… but I digress).
I think … Copilot (now that some nerfed version of it is free for all registered users) is capable enough to help non-technical folks evaluate codebases (ex).
With chatbots specifically, the newer “reasoning” / “thinking” models are a fun web research tool (fun to ask it to “search some more” / “think deeply” / “reason clearly” to make it spend more time on research), if nothing else.
I’ve had the same experience: every few months I try it, and I’ve found it extremely underwhelming.
I do think LLMs can be helpful in general with some questions. I used one a few weeks back for examples of some syscalls and it was useful, but then a day later I asked it to write a simple shell function and it hallucinated curl arguments.
Not sure; it seems only suitable for dedicated applications. I know Google uses it for Safe Browsing, for example. Maybe I misunderstand what this is: it’s not its own AI, it’s an extension that tries to replace personal info when you’re using AI like ChatGPT?
I think trying to use a third-party extension like this to make AI private isn’t the right way to go about it. We need AI companies to start implementing differential privacy, OHTTP, and homomorphic encryption, and to make use of on-device processing to protect user data. Apple has a cool post about it.
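To make the differential privacy part concrete: anything aggregate that leaves the device gets calibrated noise added, so no single user’s contribution can be pinned down. A toy sketch of the Laplace mechanism (epsilon and sensitivity here are arbitrary example values, not anyone’s production parameters):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a count with epsilon-differential privacy by adding calibrated noise."""
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. an on-device counter ("prompts containing a phone number today")
# that gets noised before it ever leaves the device
print(private_count(1234))
```

That only works if the vendor actually builds it in, which is exactly why a bolted-on extension isn’t the right layer.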
A browser extension that tries to catch personal information is inevitably going to miss stuff and also annoy you with false positives. Not to mention, I really don’t want an extension reading personal data off of web pages.
I’m not surprised that it isn’t as good in a very specific context that requires expert coding skills (like implementing a cryptographic library or hardening libc / the kernel, etc.).
As for me, our codebase is tiny, so even the smallest coding model (the last time I ran one locally) turned out to be pretty good at writing tests, docs, or debug statements, or generating code for common problems (like “non-blocking ring buffers” or “lock-free stacks”).
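By “common problems” I mean code on the order of this sketch of a bounded ring buffer. (This version is plain single-threaded Python; the non-blocking variants I actually ask for layer atomic index updates on the same structure.)

```python
class RingBuffer:
    """Fixed-capacity FIFO backed by a circular array."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read
        self.tail = 0  # next slot to write
        self.size = 0

    def push(self, item):
        if self.size == self.capacity:
            return False  # full; a non-blocking caller would retry or drop
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        self.size += 1
        return True

    def pop(self):
        if self.size == 0:
            return None  # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return item

rb = RingBuffer(4)
for i in range(5):
    rb.push(i)                       # the fifth push fails: buffer is full
print([rb.pop() for _ in range(4)])  # [0, 1, 2, 3]
```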
I feed code files (not the entire codebase) and well-written bug reports into Claude Sonnet (Copilot Chat) from time to time, and to my surprise it solves them like 20% of the time.
Also, I’ve found Copilot Chat (on GitHub) pretty decent at finding files I should look at, say, GrapheneOS-specific commits for some feature x across their entire forked codebase. Or looking at the specifics of Android Runtime’s allocator / garbage collector. Or trying to make sense of Project Zero exploits (of the Binder subsystem, say). Hallucination is a problem here, but when it doesn’t hallucinate, it reduces the time & effort required by that much.
Copilot is also very good at autocomplete (and it is uncanny how it autocompletes in exactly the way I would have written it myself). So even when Copilot isn’t otherwise helpful, I need it for autocomplete; I’m not exaggerating when I say I can’t code without it anymore.
I’ve also found AI super useful for debugging code.
Back on topic, the feature seems gimmicky. It would be easier and more efficient to just replace the names yourself, if you really need to include them in the first place.