When Privacy and Security Conflict

Something about the online discourse around privacy and security has been bugging me lately; I'll try to articulate it below.

  • Main Point

I think there is an alternative to the black-and-white approach demanded by security experts and security enthusiasts. Their premise is that you cannot have privacy without security. I do not accept this premise wholesale: I think it holds in many cases, but not all of the time.

  • Concession

But first, a concession. I am not nearly as technically adept as many on this forum. Neither my training nor my profession is in tech, or even tech adjacent. I learn what I can when I can to gradually increase my privacy posture as a political protest - fuck big tech, fuck government overreach.

That being said, I’d like to think I’m not wholly incapable of reasoning and thinking critically about the advice given. I have noticed that there are times when advice aimed at /maximizing/ security is directly at odds with increasing privacy. Furthermore, I believe there are times when it is OKAY to trade security for privacy, and that some people might in fact prefer to do so. When making such a trade-off, those doing so should /absolutely/ understand the risks they are accepting. And this is why we /should/ listen to what security experts and enthusiasts have to say, so that we may make informed decisions.

  • The Behavior of End Users & the Responsibility of Security Experts

Security Experts are uniquely responsible for providing accurate information, authoritatively, for the protection of the End Users and/or organizations they serve. They are advocates who must anticipate risk and advise accordingly. This is a good thing, and we love them for it. At the end of the day, they just want us to stay safe.

Occupying this position, however, they need to give advice that accounts for gratuitous stupidity, because they cannot control the behavior of End Users, and End Users as a whole tend to be pretty dumb. In short, they must account for the fact that problematic outcomes are often simply a result of PEBKAC.

However, this denies the End User their agency. At the start of one’s privacy journey, one should absolutely heed the advice of security experts. But once someone has put in effort to understand risks, to develop a threat model, to understand how technology is leveraged against privacy or anonymity, there is SO MUCH that they can accomplish simply through mindful behavior that accounts for /security/ risks.

There’s a saying I’ve seen many times, and while it may be merely “conventional wisdom,” I think it has merit and is relevant to my argument: “the best antivirus is common sense.” Know the risks that come with online behaviors - for example, the obviously risky behaviors of pirating software or clicking on random porn site links - and behave accordingly. With a little research, people can make smarter decisions that achieve the desired outcome without putting themselves in risky situations, or they can simply not engage in those behaviors in the first place.

But that example is easy, so let’s try another. Perhaps your preferred mobile browser lacks per-site process isolation, tabs can snoop on each other, and active exploitation could result in exfiltration of saved passwords or session cookies. That is absolutely a good thing to know. Armed with this knowledge, I can keep it in mind and adjust my behavior accordingly. For example, if I take a /compartmentalized/ approach to privacy - and I believe that everyone should - then I might use this browser with a separate IP, exclusively for browsing activity not tied to important accounts, and simply be mindful of which, and how many, tabs I have open. In short, this very real /security risk/ can easily be mitigated through /behavior/ that also enhances /privacy/.
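
To make that concrete, here is a minimal desktop sketch of the same behavior: a throwaway, proxied browser profile used only for low-stakes browsing, so it never shares cookies, saved passwords, or an IP address with the profile that holds important accounts. The browser binary, the paths, and the local SOCKS proxy address are all assumptions for illustration, not a recommendation of any specific setup.

```python
# Hypothetical sketch: launch a disposable, proxied Firefox profile for
# low-stakes browsing in its own compartment. Assumes Firefox is installed
# and that a SOCKS proxy (e.g. from a VPN client or Tor) listens locally.

import subprocess
import tempfile
from pathlib import Path

PROXY_HOST = "127.0.0.1"  # assumed local SOCKS proxy
PROXY_PORT = 1080         # assumed port; adjust to whatever you actually run

def launch_low_stakes_browser() -> None:
    # Fresh profile directory: no shared cookies, history, or saved logins.
    profile = Path(tempfile.mkdtemp(prefix="lowstakes-"))

    # Firefox reads user.js from the profile at startup; force all traffic
    # (including DNS) through the SOCKS proxy so this compartment leaves
    # from a different IP than your main browsing.
    (profile / "user.js").write_text(
        'user_pref("network.proxy.type", 1);\n'
        f'user_pref("network.proxy.socks", "{PROXY_HOST}");\n'
        f'user_pref("network.proxy.socks_port", {PROXY_PORT});\n'
        'user_pref("network.proxy.socks_remote_dns", true);\n'
    )

    # -no-remote keeps this instance isolated from any already-running Firefox.
    subprocess.run(["firefox", "-no-remote", "-profile", str(profile)])

if __name__ == "__main__":
    launch_low_stakes_browser()
```

The point isn’t the specific tooling; it’s that the isolation comes from behavior - a dedicated profile and a dedicated egress per activity - rather than from the browser’s internal hardening.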

  • The Right Tools for the Job

I think compartmentalization is the only sane approach to privacy. There are simply some instances where it is prohibitively inconvenient to try to maintain privacy by withholding facts about behavior and identity. For instance, I use credit cards and shop online for many things. All of that data is processed by financial institutions with zero privacy (one brick-and-mortar merchant was able to automatically pull up my cell phone number after I used my credit card, so that I could receive texts about the service I was purchasing; I had never patronized that business before).

And yet, there are instances where I would still like to be able to research a topic somewhat privately, or visit a site without that site getting to associate my visit with my ad identity, and so on. So I employ a compartmentalized approach to privacy. I won’t get into the specifics, but to give you an idea, I have purpose-dedicated machines (virtual and physical), browsers, IPs, etc. that I use for a granular approach to identity management.
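
For illustration only, that kind of granular setup can be thought of as a simple mapping from activity to compartment. Everything below - the compartment names, machines, and egress choices - is hypothetical and not my actual setup.

```python
# Hypothetical compartment map: which machine, browser, and network egress
# each kind of activity gets. The entries are illustrative, not a real setup.

from dataclasses import dataclass

@dataclass(frozen=True)
class Compartment:
    activity: str   # what kind of activity lives here
    machine: str    # dedicated physical or virtual machine
    browser: str    # browser/profile used only in this compartment
    egress: str     # how traffic leaves: home IP, VPN exit, Tor, etc.

COMPARTMENTS = [
    Compartment("true-identity accounts (banking, shopping)",
                "phone", "hardened browser, dedicated profile", "home IP"),
    Compartment("general browsing and hobbies",
                "Linux desktop", "everyday browser profile", "VPN"),
    Compartment("sensitive research",
                "disposable VM", "Tor Browser", "Tor"),
]

def compartment_for(activity: str) -> Compartment:
    """Decide where an activity belongs before starting it."""
    for c in COMPARTMENTS:
        if activity.lower() in c.activity:
            return c
    raise KeyError(f"no compartment defined for: {activity!r}")

if __name__ == "__main__":
    print(compartment_for("banking"))
```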

Why is this relevant? This community has the mission of making reliable recommendations for privacy tools and privacy-respecting alternatives. Solutions offered by PG should absolutely meet certain security standards. As such, security experts and enthusiasts often contribute very strong opinions on the /security/ merits, or lack thereof, of any given product. However, in discussions about whether or not a certain product is advisable to use, there’s always this assumption that an End User will use /only/ this tool for all activities, and that this single tool will be trusted with all of the user’s information all of the time. If you’re a security expert, this is an assumption you have to make, since you can’t control the behavior of the End User.

However, I think that assumption is harmful for developing a compartmentalized approach, and I think that we /should/ encourage people to adopt compartmentalized browsing behaviors. No single tool will be the best choice in every scenario. Through compartmentalization, users can create spaces in which security is less of a concern, in order to use products that have better privacy features at the cost of security.

For example, it is widely held among security folks that Linux is woefully insecure compared to proprietary options. Yet I daily drive it on my main machine, which is focused on general browsing and hobby stuff, because Linux is much more private than proprietary options. There is no “valuable” or high-stakes data on this machine, and my browsing on it is limited to non-personally-identifiable activities. I exclusively log in to my financial and true-identity-tied accounts in Vanadium, in a dedicated profile, because for that sort of browsing my threat model wants maximum security: the data in question would be high-stakes if compromised, and I’ve already relinquished most, if not all, privacy for the sake of interacting with those entities.

  • The existence of vulnerabilities != certainty of exploitation

There’s this assumption from security maximalists that the existence of a vulnerability is essentially no different from that vulnerability being actively exploited. I think this is counterproductive. There will always be a vulnerability, or the risk of a 0-day being discovered before you can meaningfully adjust course. True security exists wholly in the realm of theory, not in practice.

The average person continues to use a mobile phone long after its security updates have stopped, and yet the average person is fine. Most people are vulnerable to violent crime, and yet most people do not get murdered. My house doesn’t have an alarm system, so all someone would have to do to break in is smash a window while I’m not home, and they would have free run of my house. But guess what: I don’t sit at home all day every day waiting to apprehend an intruder on the assumption that because it /could/ happen, it /will/ happen.

To me, it seems like FUD to conflate /could happen/ with /certainly will happen/. Risk can never be wholly eliminated. The best one can do is be educated about the risks, take some reasonable measures against the common ones, and go about their daily life.

  • Why it matters

We know that corporations - large, medium, and small - have a financial incentive to violate our privacy. We know for a fact that many of them do. We know that there are whole industries built on violating the privacy of individuals. This is what a lot of people interested in privacy want to escape. People in this class might want to compartmentalize their online life and employ the best privacy-respecting options available, even if that means trading some security for more privacy. They are unlikely to include targeted attacks or active exploitation of vulnerabilities in their threat models.

A lot of people want to use products that are designed around the concept that the End User is the Customer. This is less and less possible with proprietary digital products as time goes on, where shareholders are the true customers. So, many people might want to use as many open-source alternatives as they can. However, security professionals commonly note that just because something is open source doesn’t mean it is secure, and that smaller projects often don’t receive the scrutiny of their code base required to make sure they’re following privacy and security best practices. Does this mean people shouldn’t use open source? Of course not.

All of this to say, there can be other motivations that factor into someone’s decision making besides security. Security is important, but it really is not the prerequisite that security maximalists say it is.

In the same way someone should /not/ be told “yes, using product XYZ will 100% guarantee your privacy and security” because it has better privacy and security features, people should /not/ be told “no, never use product ZYX, it will 100% result in you getting pwned” because it falls short of perfect security.

I’m not asking Privacy Guides to abandon the consideration of security in evaluation of a product for recommendation, but rather to understand that maximizing security may not be the end goal of everyone here. I remain unconvinced that security is a prerequisite for privacy, but maybe I’m just a moron. I’m sure y’all will let me know :wink:


Your post reminds me of a similar thread on the Techlore forum where security (and its importance) is discussed, which you may find germane. I recommend checking out the whole thread.


I think once you begin to learn and gain enough knowledge about privacy and privacy tech, and about security and security tech, it’s fair to conclude that someone who is tech-savvy and discerning, with critical-thinking skills, would know what they want to prioritize with every tool in their privacy and security arsenal (including OSs), and how and for which use case or purpose.

I don’t think there’s an authority here claiming one thing over the other conclusively or with any certainty.

If you ask me, both are needed to varying degrees, depending on the software in question and sometimes the use case too. But I also think it should be balanced pragmatically against the conveniences or inconveniences (depending on how you prefer to see it) if you’re thinking about all this privacy and security stuff from the end-user POV (a.k.a. the user-experience POV).

Make the best decisions you can with the info you have available, or with the info you can obtain. And amend your threat model and your digital setup as and when needed, because technology changes: some things get better, some things get worse, and some new things are born.

I don’t think overthinking is needed here, but it is a good thought experiment to run for yourself, to see where you land and how you like things before settling on anything.


Lol, yeah, I made a very similar argument to that lightningtoaater person.

I get what Ale was saying - everyone deserves max security, and that I agree with. But even he said, and I totally should have used this term in my essay, that there’s so much plain old OPSEC can do to make oneself secure. And that’s what I was driving at in the bit about behavior.
