The changes – which experts argue will put people’s anonymity and secure encryption at risk – would widen the net of impacted service providers to virtual private networks (VPNs), messaging apps, and social networks, having previously only impacted mobile networks and internet service providers (ISPs).
Well… shit.
(I’ve got nothing else to say for now…)
It’s important to note that the current amendment is not subject to a parliamentary vote or public referendum under Swiss law.
That to me feels like the most concerning part in terms of the ability to fight this.
That pretty much summarizes everyone’s reaction: our last bastion of a privacy-friendly country, now torn to pieces.
We will still have to wait and see what actually happens. But this is terrible news nonetheless.
So much for direct democracy.
I’m pretty sure 404 Media will report on this should anything major happen. At least I hope they do.
Did Switzerland even try to provide a rationalization for this?
Between this, this, and this, all three of PG’s recommendations won’t / don’t meet the minimum criteria for VPNs?
Maybe I’m mistaken, but I can’t find any criteria on PG regarding “secret backdoor mandates” for encryption software like messengers? In fact, this criterion should also apply to web browsers, since (most) email providers & (some) messengers are accessible over the web.
I can see the point. Jurisdictions are important, and almost all the recommendations usually presented come from compromised jurisdictions (US, Sweden, France, India, etc.).
I think excluding all tools based on legislation in their country of origin would mean no tool passes the criteria, and adding warnings everywhere would make the site look ugly. One solution could be to rethink the recommendations from scratch and only suggest tools that have taken precautions to mitigate legislative threats, keeping practicality in mind.
For example, a Tor-based messaging app should have reproducible clients; a decentralized-server one should have reproducible clients and verifiable servers, or a reproducible client plus clear disclaimers about the extent of damage an actively malicious server can do.
Email clients should be reproducible. VPNs should have verifiable servers, and/or multi-provider mixnets/hops, and/or protocols that preserve privacy/security by design, along with reproducible clients and common protocol implementations instead of custom ones.
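For anyone unfamiliar, the “reproducible” check itself is trivial once two independent builds exist: identical source plus a deterministic toolchain should yield byte-identical artifacts, so you just compare digests. A minimal Python sketch (file paths are hypothetical; real projects like Reproducible Builds compare many artifacts and record the build environment too):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(official_build: str, independent_build: str) -> bool:
    """Builds are reproducible iff the artifacts match byte for byte."""
    return sha256_of(official_build) == sha256_of(independent_build)
```

The point is that any third party can run this, which converts “trust the vendor’s binary” into “trust that many independent builders would notice a mismatch.”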
DNS providers should have public, immutable transparency logs to discourage DNS poisoning, similar to Certificate Transparency for CAs.
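Not that PG needs to build one, but the mechanism behind such logs is easy to sketch: each entry commits to the hash of the previous entry, so retroactively editing any record invalidates everything appended after it. A toy Python illustration (the `TransparencyLog` class and record format are made up here; production systems like Certificate Transparency use Merkle trees so auditors can verify inclusion without downloading the whole log):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

class TransparencyLog:
    """Append-only log: every entry's hash covers the previous entry's
    hash, so tampering with any record breaks the whole chain after it."""

    def __init__(self):
        self.entries = []  # list of (record, entry_hash) tuples

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else GENESIS
        payload = prev + json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain from the start; any edit is detected."""
        prev = GENESIS
        for record, entry_hash in self.entries:
            payload = prev + json.dumps(record, sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry_hash:
                return False
            prev = entry_hash
        return True
```

The immutability comes from publishing the latest hash widely: the provider can’t silently rewrite history without every mirror noticing the mismatch.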
Cloud storage should have verifiable servers and reproducible clients, and should also have immutable-by-design logs of file/account access that do not violate the owner’s privacy.
Search engines and online services are easier since you can get away with using better access methods and maintaining anonymity.
Offline tools are easier, since you can just warn users to sandbox/VM/restrict them to contain the damage if they turn out to be malicious. Reproducible builds preferred, of course.
Browsers and operating systems are the hardest to check (although Debian recently became reproducible). I don’t have an approach beyond extensive vendor efforts at reproducibility (which isn’t even that useful given the amount of code to review, making it easier to slip things in).
This is all optimistic to the point of delusion, I know, but projects are taking steps in the right direction. I think PG should follow by keeping its recommendations a moving target as threats evolve. Don’t recommend things that half-ass the protections they claim in their threat models; show such projects only as harm reduction if they aren’t worth recommending outright. The less spaces like PG legitimize half-baked solutions, the more user pressure can build to work on these features.
For VPNs, in the face of secret government mandates, a lot of the guarantees simply fall apart. There’s no need for PG to continue to sell VPNs as things they are not, and no need to relax the criteria just to recommend encryption products whose cornerstone is … encryption. Ditto for identity-masking services.
My point was specifically for e2ee email providers, messengers, & VPNs. And the clients folks use to access them (like Web Browsers).
This should be adopted by PG as minimum criteria (which I don’t think most PG-recommended VPNs / e2ee messengers / email providers currently meet).
Hear, hear.
I agree. Most commercial VPNs only have policy-level protections right now, and effectively act as your ISP. Depending on your threat model, you might prefer your government/ISP to have your data rather than the VPN provider, or vice versa.
I was more thinking out loud. I agree: at the very least, services whose trust is more policy-based [VPNs (trusted not to log), encrypted email (trusted not to read plaintext emails), messaging (trusted not to aggregate metadata)] should be the first movers on this.
How is our current approach in any way misleading? This is exactly how we “sell VPNs” currently:
Can you quote where I said PG is misleading?
I was just suggesting that PG act less as a list and more as a catalyst. By clearly showing that no VPN passes the bar right now, you can create a positive pressure on companies to be the first mover.
This is again my vision of how privacy advocacy should work, the site is free to not do it.
Sorry, I was only replying to @ignoramous, who seems to agree with you but also thinks PG is falsely advertising VPN features.
Ah, my misinterpretation then.
I am a bit confused about how a criterion like this could even be evaluated. If it’s a secret backdoor, PG won’t know about it, and I’m not sure there is any value in PG explicitly saying they would not recommend software with known backdoors.
This is way too broad a characterization.
That solution already exists: people can use the tool-suggestion process to recommend removing tools that no longer fit the criteria.
This topic seems to be getting off course and has become a bunch of criteria suggestions for all manner of categories.
I disagree. The issue is Switzerland trying to make privacy-invasive state surveillance laws; my response is about how this is becoming common and why I think PG should evolve its criteria with that in mind, especially since PG has ditched Five Eyes considerations without good reasons.
Name a country any of the PG recs are based in, and I can point to surveillance and gag-order laws that are already there or in the pipeline. I don’t think the ostrich approach is good here. Most major privacy and security vendors agree and are actively working on mitigations.
So I’m not sure what is “too broad” about saying “no tool will be left if PG only considers country of origin.” There are legitimate issues with the US, France, Sweden, India, China, etc. all extending their hands to break legal and/or technical protections (with technical protections harder to break than policy-based ones).
No, it is actively blocked by PG not considering jurisdiction relevant as of now. Again, using a related discussion to talk about a meta change is nothing out of the ordinary in a forum where topics do not exist in isolation. When news about “XYZ country breaking privsec” comes out, it will lead to discussions about whether countries should be relevant, not just “XYZ is bad, hope they don’t pass the bill.” Otherwise all these topics could just be an RSS feed instead of a discussion.
I’m sorry if you felt that your opinions weren’t being heard in any previous posts.
We don’t have a specific policy against including jurisdiction as a criterion for VPNs. Most of how we build our criteria is based on community consensus, so naturally there will be a lot of opposition or support depending on the topic itself. Feel free to DM me if there is a particular issue you noticed with forum discussion quality, and I’ll look into any past discussions accordingly.
Anyway, this discussion is getting off-topic. I’ll delete most of the ad hominem arguments here but keep the thread open for on-topic discussion.
The solutions proposed here (leaning toward open source and reproducibility) do not seem to fit the threat (legislative harms and access or compromise requests). So that someone can help me understand, the threat example here is: the state presiding over where the project’s representing organisation is located passes a law that disallows registration to an online service without certain personal information.
How does an organisation like the Signal Foundation mitigate this?