I have a ChatGPT subscription. I use it for work-related stuff only.
I’ve read here on this forum about Kagi and NanoGPT and I am trying out both now. I sometimes use ChatGPT to upload a PDF file and ask questions about it. It looks like this can’t be done through either Kagi Assistant or NanoGPT?
I’m a little confused about what your intentions are in trying to move to Kagi and NanoGPT.
Are you trying to use ChatGPT without it siphoning your usage data? Are you trying to use the models without them being linked to your identity?
Are you concerned about your usage and/or the PDF you upload being used to train future models?
Regarding Kagi, it actually has strong privacy commitments and may intentionally limit file handling to reduce exposure. Worth checking their docs on PDF processing.
There are business/enterprise tiers for most popular models that promise they won’t use your data to train their models, but your data is still processed in plaintext on their servers, so you’re trusting policy, not cryptography. Plus they charge extra for it.
The only truly private option is investing in locally-run AI like Ollama or GPT4All. The Framework Desktop is clearly designed to run local LLMs given its configurable RAM allocation (you can allocate a large share to VRAM), so that might be the cleanest route (not cheap, but less expensive than most enterprise solutions).
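For anyone curious what "locally-run" looks like in practice, here is a minimal sketch that talks to Ollama's default local HTTP endpoint. It assumes an Ollama server running on localhost:11434 and a model already pulled with `ollama pull llama3`; the model name and prompt are placeholders, not recommendations:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing in this flow leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Example (requires Ollama running): ask_local("llama3", "Summarize this report: ...")
    print(build_payload("llama3", "hello"))
```

You could paste extracted document text straight into the prompt, which is the whole point: the file contents never touch a third-party server.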
While it provides model access, this just adds another intermediary that sees your data.
In the same way any frontend does. The idea is that OpenRouter’s privacy policy is far better than interacting directly with ChatGPT or whatever. Unless you are feeding these models PII, using OpenRouter obscures your identity from the underlying service.
You are correct. If it’s about identity obfuscation, that works as well as NanoGPT. And their privacy policy is probably better than ChatGPT’s, for sure.
Kagi converts PDF files to markdown before passing them to the LLM, so depending on the file it may be unreadable. In addition, it’s up to the model to decide what to request from the “Librarian” tool for the attached document, so it could end up taking just a summary rather than the full contents. On top of that, if you regenerate a response, previously attached files are no longer included, so you might forget to upload them again.
To be frank, no privacy-respecting service currently handles file processing nearly as well as the big names and their native apps/sites (Claude, ChatGPT, Gemini, Grok).
My motivation to try out Kagi and NanoGPT was that Kagi includes good search plus many different AI models. It isn’t about privacy. I use AI strictly for work. If Sex_Tips is right, then I should go back to ChatGPT. Or, if the free version of ChatGPT can read my PDFs and answer my questions, I could keep both. I don’t want to pay for both Kagi and ChatGPT.
Is there a reason for this? Besides coding, 5.2 is objectively worse at everything in practice than GPT 5.0. They focused so much on benchmarks that real-world use suffers.
Yes. It doesn’t take much effort to try it yourself. Whenever a new version is released, the previous one is made faster and less intelligent because people want immediate answers. If you are doing anything more than simple tasks, you should always choose the newest GPT.
The issues I mentioned aren’t related to the model; they’re about how Kagi handles document uploads. They don’t just pass the file along in the API request and let OpenAI deal with it, they have their own file-processing pipeline.
I don’t either, but it’s a common complaint. I do use CSV files a lot for work, and Claude.ai handles them much better because it can repeatedly query the file to find what it needs, whereas Kagi Assistant makes just one pass: either the full contents (which can easily overload the context window) or just the columns and first few rows.
All of the benchmarks I’ve seen are being gamed: LLMs are specifically trained on the benchmark material so they can solve it correctly. OpenAI is particularly egregious about this, although all the big boys do it too. Do you know of any reputable benchmarks that were proven not to be gamed?
Yeah, benchmark gaming is a huge problem, so the current consensus isn’t really “which benchmark is unbiased”; it’s that you need multiple perspectives to get a clearer picture: formal benchmarks (like LiveBench, which updates monthly to stay as clean as possible), human preference (like Chatbot Arena), real task performance (SWE-bench), and production usage metrics (OpenRouter just released a great blog post about their observations).
It’s evolved from “which model is better” to “what are you using AI for?” then looking at which model actually performs the best in that area.
The thing with ChatGPT 5.2 is that it focuses so heavily on professional/formal output that it feels super neutered. BUT I will say it does a great job of doing only what you tell it, unlike other AIs that start adding tangential stuff you didn’t ask for. That’s part of why its creativity feels neutered, but it’s better for coding and getting more consistent results.