Off-topic
FTFY. Linux sandboxing is good.
You couldn’t be more wrong.
Ok, will you elaborate? Maybe in a different thread so we don’t derail this one?
If you have an iOS device, enable Lockdown Mode and open the Proton Mail app, and you will see a message saying “Lockdown Mode is enabled for Proton Mail.”
If you have a spare Android device, try disabling the WebView using ADB, and the Proton Mail app will stop functioning.
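If you want to try that, the command looks something like this (the WebView package name varies by device; com.google.android.webview is common, but some devices use com.android.webview instead):

```
adb shell pm disable-user --user 0 com.google.android.webview
```

You can re-enable it afterwards with `adb shell pm enable com.google.android.webview`.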
Why? Because Proton Mail and a lot of other apps use WebView to display remote content downloaded from the web.
Also, how do you think apps do A/B testing? Apps are more than capable of targeting specific users.
One last thing: you can always use Inspect Element on the web page or in the PWA to see the code, but nobody is decompiling native apps to check that every single update is free of anything malicious.
Websites and PWAs are the cream of the crop.
Electron apps are absolute crap.
Meta of all companies offers an extension for Chromium and Firefox to mitigate some of the risks. Source.
Yes, I use a VPN. Someone explained to me that it was the issue. I appreciate your feedback too!
From a bystander’s view, it seems like a lot of development defaults to Electron because it exists and has momentum.
Would advocating for privacy-aware software development be a reasonable initiative for PG to undertake? I can’t find a guide here on this but maybe I’m missing it.
In the context of this thread, would Tauri be something that could be in such a Guide? Perhaps it could include mention of Electron pro/con, Tauri pro/con, PWA pro/con, etc. I suspect such a Guide would have long-term impact. (I have no affiliation with Tauri, btw, it just looks like it has some advantages).
Never said they were not capable. But it’s easier to inject code once so that the user shares their encryption key, than to include it in your open-source app that every user downloads.
It’s easier to leave no traces, but it’s not like everyone is auditing and reverse engineering every single update of an app, so even if you leave traces, the chances of someone discovering them are low.
Displaying remote content downloaded from the web isn’t the same as trusting the web server with handling your encryption.
In practice, the effectiveness of different E2EE implementations varies. Applications, such as Signal, run natively on your device, and every copy of the application is the same across different installations. If the service provider were to introduce a backdoor in their application—in an attempt to steal your private keys—it could later be detected with reverse engineering.
On the other hand, web-based E2EE implementations, such as Proton Mail’s web app or Bitwarden’s Web Vault, rely on the server dynamically serving JavaScript code to the browser to handle cryptography. A malicious server can target you and send you malicious JavaScript code to steal your encryption key (and it would be extremely hard to notice). Because the server can choose to serve different web clients to different people—even if you noticed the attack—it would be incredibly hard to prove the provider’s guilt.
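To make that targeting concrete, here is a minimal sketch of what a compromised or malicious server could do. The Express setup, the header, and the account id are all hypothetical, purely for illustration; in reality the server would identify you from your session cookie or login:

```typescript
// Hypothetical sketch: a compromised server serving backdoored JavaScript
// to one targeted user while everyone else receives the clean bundle.
import express from "express";

const app = express();

const CLEAN_BUNDLE = "/* the normal client-side crypto code */";
const BACKDOORED_BUNDLE =
  "/* the same code, plus one line that POSTs the derived key to the attacker */";

// The server already knows who is asking (session, account, IP),
// so singling out one victim is trivial.
const TARGET_ACCOUNT = "victim@example.com"; // hypothetical

app.get("/app.js", (req, res) => {
  const account = req.header("x-account-id"); // illustrative only
  const bundle =
    account === TARGET_ACCOUNT ? BACKDOORED_BUNDLE : CLEAN_BUNDLE;
  res.type("application/javascript").send(bundle);
});

app.listen(8443);
```

An auditor fetching /app.js would see the clean bundle; only the victim ever receives the malicious one, which is exactly why proving the provider’s guilt is so hard.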
Therefore, you should use native applications over web clients whenever possible.
Different attack vector. It is much more difficult to compromise something like the app’s signing keys to push out a malicious update than to compromise the web server.
I’d say you’re protecting against different adversaries as well. With the web server, your main concern would be a malicious actor, but with A/B testing your main concern is the developer (unless their keys are compromised).
Do you know how much skill it takes to reverse engineer an app, and how hard it is? Nobody is doing that for every update.
I understand. I’m not protecting against the service provider themselves, though. I wouldn’t use them if I didn’t trust them.
My main concern would be a malicious actor (outside the service provider) and it would be easier for a malicious actor to target a web server than push out a malicious update.
Okay, then use a PWA instead of a website.
PWAs don’t solve this issue. IWAs (Isolated Web Apps) do.
Weren’t you the one just suggesting to inspect the code with Inspect Element? ;p
Proper PWAs like crypt.ee are real apps that just run in a browser, so how does that not solve the issue?
You can compare the code that you’re running with the code on Cryptee’s GitHub, for example, before you even type your passphrase.
Cryptee also recommends that.
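If you’d rather script that comparison than eyeball it, a rough sketch follows. The URL and the expected hash are placeholders, not Cryptee’s real values, and the check has to run out-of-band (e.g. from Node, not from the page itself), since code served by a compromised server could lie about its own hash:

```typescript
// Sketch: verify a served bundle against a reference hash published
// out-of-band (e.g. alongside the project's GitHub release).
// The URL and expected hash below are placeholders.
import { createHash } from "node:crypto";

const EXPECTED_SHA256 = "<hash taken from the published release>";

async function verifyBundle(url: string): Promise<boolean> {
  const res = await fetch(url);
  const body = Buffer.from(await res.arrayBuffer());
  const actual = createHash("sha256").update(body).digest("hex");
  return actual === EXPECTED_SHA256;
}

verifyBundle("https://example.com/app.js").then((ok) => {
  if (!ok) console.error("Served code does not match the published release!");
});
```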
This doesn’t work in practice; it is deception.
They look like real apps, but they are not real apps, and they still depend on the web server. Crypt.ee is still susceptible to a malicious attack on its web server. From their threat model documentation:
№ 1 UNAUTHORIZED BACKDOOR
The most relevant attack vector for Cryptee is an attacker somehow gaining access to Cryptee’s servers without us noticing. Such an attacker could conceivably change Cryptee’s code to send malicious pieces of code to the user’s browser, which would either allow the attacker to get users’ unencrypted data directly or have the users’ encryption keys sent for future use in a MITM attack, which we’ll talk about below.
Cryptee has implemented multiple safeguards against this threat on both the organizational, server and application level.
At the organizational level, all our servers are protected with physical & digital cryptographic keys, multi-factor authentication, and biometrics where applicable.
At the server level, we rely heavily on a micro-service driven architecture where possible, reducing the attack vector significantly by not having a single point of failure. In addition, we have independent and distributed monitoring services that constantly scan for our served public code, and notify us immediately should they detect a change in our code.
At the application level, once installed, our apps check for the hashsums of each new version release, and if there is a mismatch for any reason or a release isn’t publicly reported, our apps simply don’t download new updates, and continue using the last safely downloaded update.
An attacker could still hypothetically gain control of our servers, gain control of other independent monitoring tools’ behaviors, and modify the software all without anybody at Cryptee noticing. The odds of this being successfully executed are very very low, as it involves compromising multiple independent servers and monitoring tools in harmony.
This attack will no longer be applicable if Crypt.ee eventually migrates to an IWA.
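For what it’s worth, the application-level safeguard quoted above amounts to hash-pinning updates. Here is a simplified sketch of that idea, with every name invented for illustration; this is not Cryptee’s actual code:

```typescript
// Simplified sketch of hash-pinned updates: reject any new version whose
// hash doesn't match the publicly reported one, and keep running the last
// safely downloaded release. All names here are hypothetical.
import { createHash } from "node:crypto";

interface Release {
  version: string;
  bundle: Buffer;         // the downloaded update payload
  reportedSha256: string; // hash from the public release announcement
}

let currentRelease: Release | undefined; // last known-good release

function tryUpdate(candidate: Release): void {
  const actual = createHash("sha256").update(candidate.bundle).digest("hex");
  if (actual !== candidate.reportedSha256) {
    // Mismatch: possible tampering. Keep using the current release.
    console.warn(`Rejecting ${candidate.version}: hash mismatch`);
    return;
  }
  currentRelease = candidate;
}
```

Note that this only protects against a tampered update as long as the public reporting channel isn’t also compromised, which is the multi-server scenario the quote’s last paragraph describes.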
I’m unaware of any stories about people being affected by Electron vulnerabilities. When has this happened?
Edit: To put this another way, I am happy to encourage the adoption of alternative technologies if we believe they are better. Like if we think IWAs are better then we can publish things like fria’s opinion piece, I think that is very cool to write about. But if nobody has ever been harmed by a technology, then I am against discouraging said technology.