[…] today in a new blog post, co-founder and CTO Stanislav Vishnevskiy went much further.
Per the post, age verification will no longer be required in March. Instead, it’s being pushed to the second half of 2026.
More significant than the delay, though, are several key changes that should make the eventual rollout far less controversial.
- Adding more verification options. We already had alternatives in development, including credit card verification. We’ll complete and expand those before scaling globally so you have more options you’re comfortable with.
- Vendor transparency. We’ll document every verification vendor and their practices on our website, and make it clear in the product who each vendor is. We’ve also set a new requirement: any partner offering facial age estimation must perform it entirely on-device. If they don’t meet that bar, we won’t work with them.
- A new spoiler channel option. We know many communities use age-restricted channels not for adult content, but for topics people prefer to engage with on their own terms: spoilers, politics, and heavier conversations. We’re building a dedicated spoiler channel option so communities don’t have to age-gate their server just to give members that choice.
- A technical blog post before global launch. We’ll publish a detailed post explaining how our automatic age determination systems work, including the signal categories and privacy constraints, so you can evaluate our approach for yourselves.
- Age assurance data in our transparency reports. We’ll include how many users were asked to verify, what methods they used, and how often our automated systems handled it without any user action.
Wow that’s amazing! Turns out public backlash works.
Still gotta keep the pressure on, though.
Good news indeed. But I sure hope they suffer significant lasting damage, so other companies think twice before attempting such a thing.
They are still not committing to what is, in my opinion, the bare minimum: only enforcing age verification where they are legally required to do so.
At least in such situations they can make a reasonably convincing argument that it’s out of their hands.
I have mixed feelings about this, personally.
I have always been unequivocally opposed to government mandated age verification schemes because I believe, as a matter of principle, that the government should never be able to arbitrate what is or isn’t “adult/obscene content,” or put themselves in the middle of two parties exchanging any sort of information under any circumstance. Mandated verification to access information is always a form of government overreach and censorship.
On the other hand, I think it can certainly be argued that some platforms/services have a justifiable interest in knowing whether their users or customers are adults.
With some platforms, like YouTube, this justification is iffy to me, because they do not host pornography. When it comes to Discord, though, I believe they do allow channels to share pornography and other NSFW content, and I can see why a platform operator in that situation might want something more robust than a “yes, I’m 18” button.
To prevent more robust age verification from happening even voluntarily, we’d need to go beyond our calls to stop mandated age verification, and start calling for all online age verification itself to be outlawed. I’m not sure if that is a justifiable position, honestly.
The biggest argument against voluntary age verification is the massive security risk involved, but from a purely security standpoint those problems are solvable, unlike the unsolvable problem of government censorship inherent in mandated schemes. And from Discord’s announcement, it sounds like they are planning to handle this better than other platforms have.
If we completely outlawed companies from doing any form of age verification, I think we would see many more situations like when Tumblr purged all their NSFW content back whenever that was, which is frankly not an ideal outcome for some communities.
I think it would be more reasonable for the government to outlaw unsafe forms of age verification instead. For example, the government could make it illegal for companies to ever collect images of people’s government IDs; that would be within its rights. But other forms of age verification, like credit card purchases that don’t require logging card numbers indefinitely, digital ID systems that let you share only your age and nothing else, or biometric systems that work entirely on-device and don’t send those scans to a third-party vendor… I’m not sure those should be disallowed.
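To make the “share only your age and nothing else” idea concrete, here’s a rough sketch of what a selective-disclosure digital ID could look like. Everything here is hypothetical and simplified: a real system would use public-key signatures or zero-knowledge credentials rather than a shared HMAC key, and the function names and key are made up for illustration. The point is only that the service verifying the claim never sees a birthdate, just a signed boolean.

```python
import hmac
import hashlib
import json
from datetime import date

# Hypothetical stand-in for the ID issuer's signing key. A real
# deployment would use asymmetric signatures (or a ZK proof), so the
# verifying service couldn't forge claims itself.
ISSUER_KEY = b"demo-issuer-key"

def issue_age_claim(birthdate: date, today: date) -> dict:
    """Issuer side: compute age locally, then sign ONLY a boolean claim.

    The birthdate never leaves this function; the token carries just
    {"over_18": True/False} plus a signature over it.
    """
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    claim = {"over_18": age >= 18}
    sig = hmac.new(ISSUER_KEY, json.dumps(claim).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_claim(token: dict) -> bool:
    """Service side: check the signature and the boolean, nothing more."""
    expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

token = issue_age_claim(date(2000, 5, 1), date(2026, 1, 1))
print(verify_age_claim(token))  # True: adult, and the birthdate was never shared
```

Note this sketch also illustrates the limitation raised elsewhere in this thread: nothing stops a parent from generating a valid token on a child’s behalf, because the verifier only ever sees the signed boolean.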
There’s always a fine line here—at least in the United States—where when the biggest players like Discord, Google, Facebook, etc. voluntarily do something, it can have an effect where the government no longer cares to make that thing mandatory, and that has massive benefits for the many smaller players or independent operators in a space.
If platforms like YouTube and Discord voluntarily implementing age verification in a secure fashion prevents age verification mandates from being applied to services like Signal or decentralized networks like Mastodon and PeerTube, is that worth it?
I’m not totally sure, but I’m leaning towards yes, maybe it is worth it.
Damn, I posted a hot take hoping to get more pushback lol
On this, we agree.
I don’t agree that access to NSFW content should be restricted to 18-year-olds (or whatever arbitrary age your society dictates) by any supposed moral authority, be it a government or a private company. Parents should be responsible for the parenting of their children.
Firstly, NSFW is a completely arbitrary term; sex education, homosexuality, and religious texts can all be, and have all been, interpreted as pornographic or NSFW in nature.
Secondly, where there is content that genuinely is harmful to children, which there is undoubtedly a lot of, that is the parents’ exclusive responsibility. It is bewildering to me that governments cannot get it through their thick skulls that the reason people no longer choose to have children is in no small part due to the complete and total destruction of both the family unit and parental authority.
Also, I am upset by the American defaultism taking place here. In many European countries the age of consent is between 14 and 16, and yet Discord is going around restricting access to NSFW content to those 18 and older out of a misguided desire to play morality police.
I completely disagree. My stance, based on my principles, is that no one should be allowed to process my personal data except on the basis of my explicit and freely given consent. Even if you could convince me consent shouldn’t be needed for data essential to the operation of a service, my age is decidedly not such data.
Robust and private age verification is an unsolvable problem. Even if we accept the false premise that the objective is to “protect the kids,” and we assume there is any desire to implement a privacy-preserving method of age verification, that desire will crumble because the guarantees will not be robust enough. A truly private age verification system could not prevent a parent from verifying their own age for all their children’s accounts. This, mind you, is a good thing, since again it should be up to the parent, together with their child, to set reasonable limits on what they can access.
I don’t think the government should be interfering in these matters at all, and these private companies would not be pushing the issue if not for the pressure from governments. I never had age verification during my childhood, and I think I turned out fine.
No, not at all. Just because you can avoid the issue by using Signal or similar doesn’t mean we shouldn’t fight for the rights of the people who aren’t so well informed and just want to get on with their day-to-day lives. That’s without getting into the fact that even well-intentioned privacy-preserving age verification will inevitably be extended and misused in the future. If you give them an inch, they’ll take a mile. Therefore, we can’t allow it to stand even now, at its least harmful stage.
I think so too. But saying “only on your device” feels a bit like “trust me, bro.” How am I supposed to verify that my face data isn’t being sent somewhere?
Unless an independent organization like the Chaos Computer Club (a German NGO focused on privacy and ethical hacking) audits the app and confirms that it’s trustworthy, I wouldn’t feel comfortable trusting it.
Overall, though, it should be the parents’ responsibility to protect their children. What’s happening right now feels like governments around the world are trying to eliminate anonymous internet accounts altogether, and I strongly disagree with that. I have a right to my privacy.