I’ll try to be structured about it because there’s a lot to unpack here and I want to be fair and objective. I’m sorry if you think I’m not always achieving that. I like to believe you actually want to help me, and I am thankful for that, even though I have to remark that some of your comments feel a bit immature.
So it’s not specifically about how long you’ve been alive but it’s also about what you did in that timeframe.
Fair point, and I agree. But you’re kind of making my argument for me here? Chutes has a working product on OpenRouter with 19 models available, processing hundreds of billions of tokens monthly (855.1 billion tokens in the last 30 days). That makes them bigger than competitors you probably know, like Mistral, Grok, DeepSeek, Azure, Cerebras, etc. They have a public GitHub org, named team members with defined roles, published architecture docs, and they’ve been shipping consistently since 2024. That’s not “bold claims + abstract futuristic starter pack visuals.” That’s a working product at scale with measurable output. You can disagree with their approach, but pretending they haven’t done anything in their timeframe is just not accurate.
Meanwhile, when I do visit chutes.ai, I am greeted with this amazing stack of tracking in every direction.
I mean, yeah, fair criticism on the tracking. I’d love to see them clean that up, and it’s a very valid thing to point out on a privacy forum. But judging a backend AI inference platform’s privacy posture by their marketing website’s tracker count is like judging the security of Bitwarden’s password manager by checking whether their website includes trackers (spoiler: it does). The product is the API and the infrastructure, not the landing page. Their frontend dev should fix it, sure, but it doesn’t tell you anything about how they handle your inference data.
trust me bro, we really do not train on any personal data fr fr
Look, I get the skepticism toward “trust me bro” claims. But you’re selectively ignoring the actual technical mechanisms I’ve been talking about this entire thread, and I’m starting to wonder if you just don’t understand them. TEE/confidential compute isn’t a tweet. It’s a hardware-enforced isolation boundary that even the provider can’t peek into (if implemented correctly, of course). Chutes themselves literally list “TEE/Secure Compute” as a product category. You can evaluate the implementation, sure, but dismissing the entire concept as “trust me bro” is intellectually lazy when the whole point of a TEE is that you don’t have to trust the operator.
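To make the “you don’t have to trust the operator” point concrete: remote attestation is the mechanism that replaces “trust me bro.” The hardware signs a measurement (a hash) of the exact code running inside the enclave, and the client compares it against a build it has audited. This is only an illustrative Python sketch, not any real TEE SDK: the function names and the `audited_build` value are made up, and the vendor signature-chain verification that a real verifier performs over the quote is omitted.

```python
import hashlib

def measure(enclave_code: bytes) -> str:
    # In real TEEs (Intel SGX/TDX, AMD SEV-SNP) the CPU computes this
    # measurement in hardware; here we just hash the bytes to show the idea.
    return hashlib.sha256(enclave_code).hexdigest()

def verify_attestation(reported: str, expected: str) -> bool:
    # A real verifier would also validate the hardware vendor's signature
    # chain over the attestation quote; that step is omitted in this sketch.
    return reported == expected

# Hypothetical audited build the client expects to be running server-side.
audited_build = b"inference-server v1.2.3"

# Enclave reports a measurement of the code it is actually running.
assert verify_attestation(measure(audited_build), measure(audited_build))
assert not verify_attestation(measure(b"tampered build"), measure(audited_build))
```

The practical upshot: if the reported measurement doesn’t match the audited build, the client refuses to send any data, regardless of what the operator promises in their privacy policy.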
Let’s assume it’s correct, so then…how about their sustainability?
This is actually the first solid point you’ve made, imo. Sustainability and business model matter for any provider you depend on. But Chutes isn’t running on Ko-fi donations. Once again, I don’t know where you got that from… Sometimes it seems like you’re making arguments without even thinking, just for the sake of it. They operate as Bittensor’s top subnet (Subnet 64), with multiple revenue streams: subscription tiers (Base, Plus, Pro, Enterprise), pay-as-you-go per-token billing via TAO or fiat, invoiced enterprise clients, and private compute instances. Revenue is auto-staked back into the network to reward the GPU miners who provide the compute. It’s a decentralized compute marketplace, not a VC-funded “move fast and figure out monetization later” play. You’d know this if you spent 5 minutes looking into how they actually work instead of assuming they’re a scam because their website uses shadcn.

And yes, the site looks like a mass-produced shadcn template. You know what that tells you? That the people building it are probably backend and infra engineers, not designers. Anyone who’s worked in IT knows the meme: when a frontend dev goes fullstack, nothing works; when a backend dev goes fullstack, the website looks like shit or is very basic. Chutes is clearly the latter. The infrastructure works, the API works, the models work. The marketing site looking generic is the most backend-engineer thing imaginable. But I get it: if you judge tools by how pretty their landing page is rather than by what’s running under the hood, I can see how you’d end up skeptical. Not everything that shines is gold, though, and not everything that looks plain is worthless.
I’m not sure how/why you would use GLM-5 in a personal context
You built this entire argument around my GLM-5 example as if I said everyone needs to run it locally??? What’s going on? I was making a cost-analysis point to show that the “just use a VPS” advice you keep casually tossing around falls apart once you need anything beyond a toy model. You yourself acknowledge that “AI is unfair” and “if it’s a need, pay for it.” Great, so then we agree that for capable models, people need cloud providers, and evaluating which cloud provider has better privacy practices is exactly the kind of conversation a privacy forum should be having. Which is literally what I started this thread to do, before you derailed it into “everything is equally bad so why bother.”
And honestly, I’m getting tired of having to argue for a company I’m not even entirely sold on myself. I’m still skeptical about Chutes. I came here to have that conversation, to poke holes, to evaluate them critically. But when the counter-argument is just “everything is a scam, nobody cares, don’t even bother looking” then I end up defending them by default just to push back against the doomerism. That’s not a productive dynamic. I was taught growing up that if you don’t have anything constructive to add, it’s better to just not say anything. I came here looking for actual technical feedback on AI privacy, threat models, provider comparisons, implementation analysis, and instead got paragraphs of philosophical doomerism about how the whole field is a VC scam. That’s a lot of time spent writing to essentially say and do nothing useful (by both of us).
For some reason, people 3 years ago were still able to do their job just fine. What have changed?
According to the U.S. Bureau of Labor Statistics, software developer employment is projected to grow 17.9% through 2033, with AI cited as both a driver of demand and a key productivity tool. The World Economic Forum (small side note: fucking hell what is this timeline where I’m citing WEF) reports 65% of developers expect their role to be redefined in 2026 toward architecture and AI-enabled decision-making. The Stanford AI Index 2025 documents consistent 10-25% performance gains across knowledge tasks like writing, research, and programming. These aren’t marketing claims, they’re labor economics data. Three years ago people did their jobs fine without these tools. Doesn’t mean they’re not genuinely useful now.
Even nowadays, some people still “NEED” to have a Bugatti to show up at a client’s meeting
The Bugatti analogy doesn’t land, mate. A Bugatti and a Ford both get you to the meeting. A 7B quantized model running on a €184/month Hetzner GPU and a SOTA model are not comparable outputs. One regularly hallucinates on basic tasks, and the other can do complex multi-step reasoning, code generation, and analysis that actually matches the quality bar my work requires. This isn’t vanity, it’s a capability gap. You as a developer should understand this better than most (and I’m pretty sure you do, but yeah).
I mostly realized that if you’re exposed for long enough to some stuff, you then kinda start thinking like it is an actual need.
And I mostly realized that the same logic applies to privacy nihilism. If you’re exposed for long enough to “everything is a scam, nobody cares about your privacy, all companies are equally awful,” you start thinking evaluating anything is pointless. That’s not healthy skepticism, that’s learned helplessness. And it’s exactly the mindset that benefits the companies that actually are terrible, because if nobody evaluates the differences, the worst actors face no competitive pressure to improve.
I won’t comment or explain my own choices to a new account, not worth my time. 95% of my thoughts are already online publicly and you can investigate further if you wish
So let me get this straight. You spend multiple posts scrutinizing Chutes’ website trackers, their team size, their business model, their shadcn starter pack, their Vercel hosting, their VC funding, and their Twitter presence. But when someone applies the same level of scrutiny to your publicly available posts, suddenly it’s “not worth my time” and “no need to link my own posts”? Yet you literally tell me to “investigate further if you wish” and that your “answers are out there already.” What I found is a pattern of double standards that I think is worth addressing, because it directly undermines the arguments you’re making in this thread.
You don’t get to put someone else’s entire operation under a microscope and then wave away your own contradictions with “I’m very well aware of what I’m saying.” Being aware of your contradictions doesn’t make them less contradictory (though realizing your mistakes is a good first step). If a company you don’t like did that, you’d probably jump at the chance to criticize them.
Your choices are publicly visible, so I’ll just note once again what anyone can see: you run a Mac Studio, stream on YouTube, speak at Google tech conferences, use Brave (which has Leo AI built in; you might have disabled that, though), and you’re considering Cursor, an AI-powered code editor that sends your code to cloud LLMs for inference. Your previous screenshot was made with CleanShot, which is afaik only available for macOS. All while telling me that everyone who finds AI useful has been brainwashed by marketing, and that Google/Apple/Microsoft are all “equally awful.” You don’t owe me an explanation, but the contradiction speaks for itself.
And the startup double standard is still there, and it’s actually worse than I originally thought after looking more into it. You dismiss Chutes as a “random no-name startup” that can’t be trusted. But I have to say it once again: when you shared urban-privacy.com on this forum, a company selling anti-facial-recognition clothing with zero peer-reviewed testing data and no published validation against modern FR systems, your response to criticism was “let’s be patient and not kill them already” and “let’s assume their intentions are honest.”

This is a company that literally markets their OFLAIN bag to people worried about being tracked at protests, telling them basically “Worried of being trackable – even at protests? Put an end to it!” for €115. They’re selling unproven counter-surveillance products to potentially vulnerable people: protesters, activists, journalists, people in countries where facial recognition is used to identify and arrest dissidents. If that anti-FR clothing doesn’t actually work (and nobody has proven it does), someone at a protest trusting it could face very real consequences. You’ve probably heard what’s happening to protesters in countries like Iran. The stakes for protesters are arguably higher than for a cloud AI inference provider, because the failure mode is physical danger, not data exposure.

And yet that startup gets “let’s be patient” and “assume honest intentions,” while Chutes, which has a working, measurable product processing hundreds of billions of tokens, gets “random no-name scam.” How does that double standard work, exactly?
Oh and since you brought up the chutes.ai tracker screenshot: urban-privacy.com runs on Shopify, which comes bundled with its own analytics tracking, third-party cookies, and Google integrations by default.
Was never my intention.
I believe that. But the end result is the same: multiple long posts that amount to “everything is bad, AI is a scam, you’re brainwashed if you disagree, and nobody has a solution so just accept it.” That’s not actionable. That’s not useful for someone coming to a privacy forum trying to make informed choices.
If you genuinely think everything is equally hopeless, I respect that perspective, but I’m not sure what value it adds to a thread specifically asking “how do I use AI more privately.” That’s like going into a thread about which encrypted email provider to use and writing an essay about how email itself is fundamentally broken. Nobody asked for that.
Nuance over nihilism. That’s all I’m asking for.