In April this year, Discord started to test age verification systems using facial or ID scans, as a way to comply with Australia’s and UK’s new laws.
[. . .] but don’t be surprised if it soon gets implemented at the account level for users everywhere.
@em totally called it.
The last thing I want to be is Mr Dystopian, but combining the increasing adoption of ID/age verification for mundane things with the recent events surrounding ICE and their subpoenas[1], I fear that the future is looking very dark and despotic. If age verification is inevitable, which it is looking to be, then the next best thing is to ensure it is implemented privately wherever it appears. So I agree that, if anything, this is a positive.
However, take a look at this paragraph from Em’s article (linked above):
Conducting verification “on-device” offers only a few additional protections considering this information still has to be checked and reported with an external service, somehow.
Moreover, processes used to keep this data “on-device” are often opaque. Taking into account how valuable this information is, it becomes very difficult to trust any for-profit third-party services with such a sensitive task.
There need to be additional protections to ensure privacy, not just at a legal level but at a technical one. The on-device processing also only applies to facial scans. The other verification method, which is more invasive to privacy, is the use of official identity documents. Discord claims this method is “privacy-forward” in the following ways:
Quick deletion: Identity documents submitted to our vendor partners are deleted quickly — in most cases, immediately after age confirmation.
IDs are used to get your age only and then deleted.
Discord only receives your age — that’s it. Your identity is never associated with your account.
Our vendors perform these verifications in a way to minimize the data collected and stored. [. . .] IDs are processed to get your age only and then deleted.
The privacy advocate’s goal is to keep Discord accounts and legal identities decoupled, so this is good if we take it at face value. But it’s unclear to me how that decoupling is guaranteed. Is it merely a privacy-policy promise, or is re-linking identities to accounts actually impossible at a technical level?
Which vendors are they using? They claim that they and their vendors quickly delete any submitted identity documents, yet they also say they only “minimize” stored data. What other data do they store? And could that stored data be traced back to the submitted identity documents, effectively negating the privacy protections?
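To make the policy-vs-technical distinction concrete, here is a hypothetical sketch of what a data-minimizing vendor flow could look like. This is not Discord’s or any vendor’s actual protocol; every name, function, and parameter here is invented. The idea: the vendor signs a claim containing only the boolean age result, so the platform never receives identity data, and nothing identity-linked crosses the boundary:

```python
# Hypothetical sketch -- NOT Discord's or any vendor's real protocol.
# All names and parameters here are invented for illustration.
import hashlib
import hmac
import json
import secrets

# Key shared between the platform and its verification vendor (assumption).
SHARED_KEY = secrets.token_bytes(32)

def vendor_verify_id(document: dict) -> str:
    """Vendor checks the ID, then emits a signed claim containing ONLY the
    boolean age result and a one-time nonce -- no name, DOB, or document data."""
    is_over_18 = document["birth_year"] <= 2007  # placeholder age check
    claim = json.dumps({"nonce": secrets.token_hex(16), "over_18": is_over_18},
                       sort_keys=True)
    tag = hmac.new(SHARED_KEY, claim.encode(), hashlib.sha256).hexdigest()
    # The raw document is discarded here; only claim + tag leave the vendor.
    return claim + "." + tag

def platform_accept(token: str) -> bool:
    """Platform verifies the signature and learns the age result -- nothing
    else about the user's identity ever reaches it."""
    claim, tag = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    return json.loads(claim)["over_18"]
```

Even in this toy model, the decoupling is only as strong as the vendor’s promise to actually discard the document after signing. A real technical guarantee would require something stronger, such as fully on-device or zero-knowledge checks, and that is exactly the gap Discord’s claims leave open.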