Open source is a complex topic with lots of different takes, depending on the perspective one looks at it from. For some, open source simply means that you can check the source code. For others, the exact license governing what one can do with that code matters (FOSS vs. open source).
Another thing to keep in mind is whether we care about reproducible builds, so a user can verify that the published source code is exactly what was used to build the published application.
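Purely for illustration, here is a minimal sketch of what that verification could look like, assuming a hypothetical project with a deterministic build step; the URL, paths, and build command are placeholders, not a real project.

```python
# Rough sketch of reproducible-build verification (all paths/URLs are hypothetical).
# If the build is deterministic, the binary built from the published source should
# hash to the same value as the binary the developer ships.
import hashlib
import subprocess
import urllib.request

RELEASE_URL = "https://example.org/releases/app-1.2.3.bin"  # hypothetical published binary
LOCAL_BUILD = "build/app-1.2.3.bin"                         # hypothetical local build output


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


# Build from the published source tag (the exact command depends on the project).
subprocess.run(["./build.sh", "--release"], check=True)  # hypothetical build step

with open(LOCAL_BUILD, "rb") as f:
    local_digest = sha256(f.read())

with urllib.request.urlopen(RELEASE_URL) as response:
    published_digest = sha256(response.read())

if local_digest == published_digest:
    print("Reproducible: the published binary matches the published source.")
else:
    print("Mismatch: the published binary was not built from this source,"
          " or the build is not deterministic.")
```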
Seeing as we have multiple categories where open source is a requirement, we would do well to have an exact definition of what Privacy Guides sees as “open source”. I would like both the community's and the @team's opinion on this.
Anyway, I would go with the OSI definition, which is mostly equivalent to the definition of free software.
Grayjay is source-available, so it shares the potential privacy/transparency benefits of FOSS but not the maintainability benefits, because forks cannot be monetized under the noncommercial license.
EDIT: to be clear, my opinion is that Grayjay is fine, but (edit 2: never mind, I don't like it anymore) it's neither free nor open source.
I think licensing doesn't really matter a whole lot in terms of why we care, i.e., verification that a product is what it says it is. That means something under a less-free license whose source is published is technically "open source" in that sense, but not necessarily what we want to see.
What I think matters to us is transparency, so code living in an active source repository is better, as it's unlikely to differ too much from a tagged release.
Well, this was one of my first stances as well, but then something else hit me.
In general, we try our best to recommend products for the long term. Readers shouldn't have to worry whether today's recommendation is gone the next day, to a certain degree at least.
Having a license that allows a project to be forked, whether commercially or not, does allow projects to continue in the case of a sellout or a company going bankrupt; see Simple Mobile Tools/Fossify as an example.
I haven’t made up my mind about this yet, but it does make me wonder if just being able to read the code is all that should matter to us, or if we should think about more.
I don't see PG as having a different definition of Open Source than accepted standards, or at least, I don't think them adopting one would benefit anyone. I think instead the question should be: when PG requires Open Source for a product, what is the benefit they're seeking, and can it be achieved by other licensing options?
As I understand it, the benefits that PG seeks are:
1. Code auditability.
   a. The ability for anyone to go in and verify that nothing fishy is being done privacy-wise, and that security measures are properly implemented.
   b. Not to be confused with getting professional audits, which should still be desired in many cases.
2. Reproducibility.
   a. The ability to build the code yourself, to ensure nothing has been injected into the publicly released binaries.
3. Longevity.
   a. As Niek just pointed out, the ability to fork the project if the original developer goes rogue or abandons the project.
I think for anything to meet points 1 and 2, it would be acceptable to get away with only requiring Source Availability. But indeed, to meet point 3, a true Open Source license OR a special Source Available license with a killswitch that accounts for abandonment or rogue behavior is necessary.
But one could argue that most things only need to meet point 1, with more sensitive things (or situations where you may not trust that additional code isn't being injected during the private build process) also needing to meet point 2.
Meanwhile, point 3 probably only needs to be met for offerings where long-term use is expected and/or use wouldn't be able to continue without the original developers, and largely that means cloud offerings. Though it's worth noting that projects tend to die when the original contributors disappear anyway, so even though the ability to fork may be there, it may not actually extend the life of a product.
Tools like YouTube frontends, Android keyboards, Markdown-compatible note-taking apps, basically any offline tool, etc., can either be easily replaced or just continue to be used without updates.
1. Code-available. You can see the code, but it's difficult to do anything with it due to license restrictions, missing components, or the near-impossibility of trying to host/use it from source. This isn't “open-source” but it might be better than completely non-public code.
2. Open-source without reproducible builds. The code is available with an appropriate license. Due to political disagreements, this includes both permissive licenses (like MIT) and restrictive, copyleft licenses (like GPLv3). It doesn't include other licenses that aren't deemed open-source by a leading organization. In practice, it probably should be this specific list. Open-source is better than code-available for verifiability, since a user is at least allowed to build things themselves, even if most users rely on a binary.
3. Open-source with reproducible builds. The same as #2, except the builds are reproducible for a high degree of verifiability.
For a points evaluation system, 3 > 2 > 1. If open-source is a requirement, then the software should at least qualify for #2. For very competitive ecosystems (such as VPN clients), it’s probably best to push for #3 in the medium-term.
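Purely as an illustration of that ordering, here is a minimal sketch of how such a tiered criterion could be encoded; the names and thresholds are my own invention, not an existing Privacy Guides rubric.

```python
from enum import IntEnum


class SourceTier(IntEnum):
    """Illustrative tiers matching the list above; higher is better (hypothetical names)."""
    CODE_AVAILABLE = 1             # source visible, but restrictive license
    OPEN_SOURCE = 2                # OSI/FSF-approved license, builds not reproducible
    OPEN_SOURCE_REPRODUCIBLE = 3   # approved license plus reproducible builds


def meets_open_source_requirement(tier: SourceTier) -> bool:
    # "If open-source is a requirement, the software should at least qualify for #2."
    return tier >= SourceTier.OPEN_SOURCE


assert meets_open_source_requirement(SourceTier.OPEN_SOURCE_REPRODUCIBLE)
assert not meets_open_source_requirement(SourceTier.CODE_AVAILABLE)
```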
It's not just about being able to fork; it's also about other projects being able to make something new without having to reinvent the wheel.
While I think that apps from FUTO are great alternatives to Big Tech services, they are not open source and FUTO does not call them open source currently. Their business model is great for auditing code, contributing upstream, and trying to keep the business alive since they want people to pay for software. Source available without a killswitch means the software is basically dead in the water if we believe that the company will eventually go under (as many do) or get bought out (unlikely in the case of FUTO as they are funded by a sole multi-millionaire and paying users).
I think the term “open source” muddies, confuses, and brings a false sense of 100% confidence to users. There are simply different licenses which do or do not provide source code. They will mean entirely different things depending on where the application is run and how you interact with it. I find these things important. Different licenses dictate how I use and run the software, and contribute to threat models.
These discussions come alive in how we interact with the software. Linux can run proprietary blobs of software because of GPLv2. I get to access the source code of a website running AGPL software in order to audit it. I get to compile and run MIT software on my machine knowing I can use it however I want, while Microsoft gets to pack telemetry into the binary they provide for the same MIT software by wrapping extra code around it. These are all quite different scenarios which require a basic understanding of what to expect from the licenses.
I don't know if you have ever joined a few Linux-based subreddits, but there are several people/organizations claiming it means something different, which is the entire issue here: we want to specifically define what Privacy Guides means when we say “open source”.
This way we can move forward more easily when we are reviewing new software or services to add.
Edit: once this discussion is said and done, a knowledge base article on the topic with an explanation of our stance might be a good idea.
This is mostly important when there is no company behind the service imho. For products where a paid service is provided this seems less relevant to me.
Here is an interesting article about the difference between FOSS and FLOSS
As a side note, if we decide to follow the OSI definition of open source, we should clarify whether we also incorporate the non-discrimination principle. This principle basically says that you cannot dictate who can and cannot use your software. So you cannot say that companies shouldn't use your software, or that your software shouldn't be used by governments, by the military, or for things you don't like, such as warfare, human rights violations, etc.
In my opinion, it is not within the purview of Privacy Guides to redefine the term Open Source, particularly given the existence of a widely accepted definition established by the OSI. Such redefinition could lead to confusion and misinterpretation. I think the open source ecosystem would be better served by adhering to a unified definition, rather than allowing for individual interpretations.
Should we choose to establish distinct criteria for our conceptualization of Open Source, it would be reasonable for us to adopt a different term. Conversely, if we decide to align with the OSI's definition, we should communicate this alignment.
Well, it's not that we are trying to redefine it; it's that it's currently not clear what is what: do we follow the OSI, the FSF, or make our own definition? The problem, as with most things in software, is that there are multiple people and organizations who claim that they are the de facto standard. What we are trying to do is define what we will adhere to.
I am not sure the OSI definition is reasonable to follow in our context. Technically, any license that states that no entities on the US embargo/blacklist can use/obtain the software is against the OSI principle of non-discrimination.
In practical terms though, we shouldn't expect companies not to follow US law (or to appear not to). I am not sure it's a requirement to bar US-blacklisted entities from using the software, but that's why big companies like NVIDIA have a special custom license.