Here we go then… age prediction has finally reached LLMs. Even Sam Altman himself admitted the privacy trade-offs here.
OpenAI announced Tuesday that it is rolling out age prediction and identity verification systems in an effort to protect minors who use its services.
The announcement comes weeks after the parents of a teenager who killed himself sued the tech giant for allegedly helping their son draft a suicide note and giving him tips for how to do so most effectively. On Thursday, the Federal Trade Commission launched an inquiry into AI chatbots and child safety.
In a blog post on the OpenAI website, CEO Sam Altman said the company will “prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”
Altman said the firm is creating an age-prediction system that will estimate users’ ages based on how they interact with its ChatGPT chatbot. Identification will be required in some cases where the system cannot definitively determine a user’s age, the blog post said.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” Altman said in the post.
If you do need to use an LLM, consider self-hosting a model or going through a “private” wrapper that does not correlate your age with your identity. Ideally the former, not the latter.
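For the self-hosting route, here is a minimal sketch of what that can look like, assuming Ollama (one popular local runner) is installed and a model has already been pulled with `ollama pull llama3`; the model name and prompt are placeholders:

```python
# Minimal sketch: querying a self-hosted model via Ollama's local HTTP API.
# Assumes Ollama is running locally and a model has been pulled beforehand.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Everything stays on your machine: no account, no age profile, no cloud logs.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the trade-offs of local vs hosted LLMs."))
```

A wrapper service can hide your identity from the upstream provider, but you are still trusting the wrapper; a local model removes that trust requirement entirely, which is why the former is preferable.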
GPT-5’s current performance is superior even to competing models, and over time it will likely become increasingly difficult for dedicated ChatGPT users to give it up.
Given their tendency to sacrifice privacy for safety, I felt this was to be expected. I’m interested to see how transparently they can explain how they achieve age estimation.
If they don’t keep everything a black box for safety’s sake, I’d say they’ve made a slightly better choice. At the very least, I’d like them to clarify what data points they use for estimation, how they store the result (e.g., whether it’s a binary over-18 flag or a specific age estimate), and how they use it (whether it feeds marketing or statistics beyond safety).
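To make that storage question concrete, here is a purely hypothetical sketch (OpenAI has not published its schema; every name and field here is an assumption) contrasting a data-minimizing record with one that would invite reuse beyond safety:

```python
# Hypothetical illustration only: OpenAI's actual schema is not public.
# Two ways a service could persist an age-estimation result.
from dataclasses import dataclass

@dataclass
class MinimalAgeRecord:
    # Data-minimizing: a single over-18 boolean, little to repurpose.
    user_id: str
    is_adult: bool

@dataclass
class InvasiveAgeRecord:
    # More invasive: an exact estimate plus the signals that produced it,
    # which could later be reused for marketing or analytics.
    user_id: str
    estimated_age: int
    confidence: float
    signals_used: list[str]  # e.g. writing style, topics, session times
```

The difference matters: a binary flag answers the safety question and nothing else, while a retained estimate with its input signals is a profile waiting for a second use.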
It may be too good for most folks to switch away from, but if OpenAI annoys them to the point where ID verification is required, that may tip the scales.
I heard that Gemini overtook ChatGPT in app downloads recently, though that could be entirely down to its one-year free promotion for university students.
So… they need to verify whether the user is a kid who’s trying to proceed with self-harm, because when it’s an adult, they don’t care? What’s even the reasoning behind that? Yet another reason to verify, verify, verify.
Don’t spread misinformation. Just because it may sit near the top of a ranking table doesn’t mean it’s superior. It doesn’t have more functionality than other models; it’s just that the reasoning has been improved, and it’s far from being an “intelligent” form of anything.