Here is perhaps a better and more relevant analogy than a doctor, and one I know something about: functional safety engineering.
For machines to be useful they must do work, and work inherently creates hazards, so the goal of industrial safety is to get machinery to a point of "acceptable risk." We explicitly reject utopian visions of a world without risk, but we also reject visions of safety that go against predictable human behavior. In a risk assessment, you account for the inherent risk, but you also account for exceptional situations (cleaning, maintenance, troubleshooting, resetting) and for different kinds of people (colorblind, non-English speaking, no industrial experience, passersby). You also have to account for misuse: the "user shoots own foot" case, not because the user is an idiot, but because their behavior was readily predictable (they took on elevated risk because it made the job easier, faster, more productive, or more comfortable).

Once you have assessed the risk, you go through a risk reduction process where you design away hazards, guard against hazards, warn against hazards, and so on, until you reach acceptable risk. In this process you have to be careful: a guard that makes work frustrating encourages misuse and motivates people to bypass it, and the majority of industrial injuries happen at bypassed safeguards.
By analogy, software that does useful things requires powerful systems holding sensitive information that would be dangerous to exploit. There are known categories of security hazards which you should work against proactively, preferably by designing them away: using a memory-safe language or a hardened memory allocator, for example, eliminates entire classes of security risk. At the same time, if users find that a safeguard makes their lives difficult, they won't use it. Many people shut off their firewall out of frustration while troubleshooting, for example.
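To make the "design away" step concrete in software terms, here is a minimal Rust sketch (my own illustration, not from any particular project). The pattern described in the comments is an ordinary runtime bug in C, but the borrow checker rejects the equivalent Rust at compile time, so the whole hazard class is removed by design rather than merely guarded against:

```rust
fn main() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // hold a reference into the vector

    // In C, freeing `data` here and then reading through the stale
    // pointer would compile cleanly and corrupt memory at runtime.
    // In Rust, the equivalent (calling `drop(data)` while `first`
    // is still borrowed) is a compile error (E0505), so the
    // use-after-free hazard is eliminated by design, not by a
    // guard the user can bypass.

    println!("first element: {first}");
}
```

A hardened allocator plays the complementary "guard" role for existing unsafe code that you can't rewrite.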
So I have a hybrid view between @jonah and @RoyalOughtness. Getting someone from totally hazardous to pretty good is a win, even if long term you should always be pushing for better. A project like secureblue is great for that: forward-thinking users will do the legwork to figure out how to make things as good as possible, and over time it will likely get easier and more user-friendly to manage as these practices get integrated into people's habits, much like Graphene was a learning curve for a lot of people but the ecosystem benefited from it.
Having a project that is totally focused on security also gives more indifferent people (say, those who release AppImages for convenience) some targets for best practices.
To me, it is a no-brainer that we need a project like this, and Jonah is the reason I know about it.