Hi @jonah and all.
We are huge fans of your work; you are creating a lot of value and helping to increase the privacy and security of communications.
I 100% agree that the current framework for assessing whether products are good or bad from a privacy and security point of view is not sufficiently robust and can result in some mistakes. The discussion you started, about how to reduce the risk of mistakes in evaluations and avoid recommending projects that stop seeing privacy as a priority, is a really important and timely one.
At the same time, I strongly believe that applying formal filters, without analysing the realities behind them, would also be a mistake, as these formal filters are easy to manipulate and exploit. Withholding recommendations purely on the basis of being VC funded would exclude projects that increase privacy now and in the long term, while recommending projects purely on the basis of being non-profit would equally result in promoting projects that undermine user privacy.
To illustrate how wide the spectrum of possibilities is, here are two notable examples from the rather recent history of Internet evolution:
- Netscape, a VC-funded and successful company. Without Netscape, the open web as we know it would simply not have happened: this company single-handedly evolved a rather immature protocol for online documents (v1 of the web) into a robust online application platform (v2) by adding SSL (security), JavaScript (applications) and cookies (user authentication and authorisation). While third-party cookies are used as a tracking, anti-privacy tool, and are now rightly prohibited in many browsers (or, at least, can be blocked by users), first-party cookies are the essential mechanism without which you simply cannot provide services via the browser; even this forum would not be able to function without the work that Netscape did. Would this have happened if Netscape had not been a VC-funded company? Absolutely not; there would have been no appetite for that level of risk. Instead, we would have had the big tech oligopoly we have today 20 years earlier, just with different players: the ideas that dominated the tech industry at the time were about an “information superhighway” developed and tightly controlled by a few players like Microsoft and IBM. My strong belief, without discounting the risks of the VC-funded model, is that a company with any chance of disrupting today’s tech oligopoly can only come from the venture-funded space; the level of risk and degree of innovation required to get an idea from v1 to v2 is almost never available to a non-profit organisation.
- The opposite example, a non-profit that did its best to undermine people’s privacy, is thorn.org. Funded by big tech companies, it lobbied with the logically flawed narrative that “privacy undermines child safety” and managed to promote substantial legislative advances in various countries that have the potential to undermine privacy irrespective of where a project is located, purely on the basis of where its users live, effectively trying to give national governments the legal right to mass surveillance. It was only thanks to strong opposition from some commercial companies and pro-privacy political groups that the most damaging provisions were removed from the laws that were passed, and both the scope and severity of the measures allowed to achieve child safety were substantially reduced, to the point of having no tangible impact on privacy. We can discuss this in more detail if you are interested; there is a lot of FUD being shared in online forums, stemming from misunderstandings of which laws were passed and what impact they might have.
And there is a huge space between these two extremes. Simply applying a “non-profit good / VC-funded bad” filter will obviously result in really bad mistakes.
So while I do agree with the need to revise the framework to assess, and regularly re-assess, the products, separating the assessment of the current status from future risks and the ability of users to mitigate those risks (e.g. via data sovereignty, as was correctly pointed out by @dngray), as well as licensing constraints, I don’t think the proposed “ban” would be productive. It would have exactly the opposite effect: it would make a repetition of the “Netscape phenomenon”, which we very much need, much less likely, and the “thorn.org” effect more likely, if people who have standing and trust in the user community start indiscriminately recommending non-profits and banning VC-funded projects (which is already happening to some extent).
I have a lot to contribute to this discussion, but it does feel like the forum format might not be the most efficient for reaching consensus. I propose making it a public, live-streamed debate where multiple participants could share their views from both sides of this discussion.
Coincidentally, I am currently working on a proposal about what I believe are the really important criteria for assessing communication products (I am aware, of course, that your scope is wider than that, but this is the space I am interested in) from the perspective of their positive or negative impact on the privacy and security of users, covering technical parameters, operations and distribution parameters, licensing, governance, funding and sustainability. All of these need to be carefully assessed for a correct evaluation of both the current status quo of a project and its future potential to improve, as well as the risk of it becoming worse; such an assessment then has to be repeated annually, or at least every 2 years, similarly to how security assessments are reasonably expected every 2 years.
We could call it a “Privacy impact assessment”. It would result not just in a decision to recommend or not, with a one-paragraph summary, but in a long-form document, similar to a “Technology security assessment”; both should be based on a holistic, multi-faceted framework, not just on formal criteria. I think that, in the same way projects fund their “security assessments”, they would fund “privacy impact assessments” if the criteria were understood and agreed upon in advance.
I’ll share what I think this framework could be in a week or two, but the sad conclusion is that, at least from a technology design point of view, there is currently not a single product that can be seen as truly private and secure; every one falls short on at least some parameters, all require some substantial changes, and everything is a trade-off. If I were an independent reviewer, I would not be able to whole-heartedly recommend any of the existing communication solutions without some disclaimers, including the one we are building, and would only be able to explain pros and cons and help users make an informed choice based on their own circumstances.
While the technical parameters I am applying to communication solutions are unlikely to be relevant to other product categories, the framework for assessing funding and governance models (the source of the risks we are discussing here) would likely apply to all categories. In any case, it is much more complex than the black-and-white VC/non-profit assessment that is often applied in the privacy community.
It’s like buying a car: you cannot just look at the body shape (which in this case would be non-profit vs. VC-funded); you also have to assess the engine (governance model and board composition) and the transmission (control provisions that the investors or sponsors have, or expect, based on their origins).
Let me know if you are interested in having a public panel discussion online, where we could debate some of these questions.
I will share more soon.
In any case, thanks a lot for the work you do and for this discussion - this is very important.