Portmessenger

I got this as a recommendation on YouTube: https://portmessenger.com/
This is basically a messenger app, but the catch is there's no phone number, no email and, unlike Matrix, no username.
https://www.youtube.com/watch?v=7M9Ja43NOvQ - This is the demo video they put out.

Based on the demo video alone, this is as far as I can tell:
- Users connect via QR code or link, which can be one-time or reusable
- No personal identification
- They say they have plans to go open source

I have no idea how this could be used to receive OTPs, and I don't know whether it's fully local or not.

I originally planned to make this post on r/privacy but my comment karma didn’t allow it. Would love to hear your thoughts on this.

Seems similar to SimpleX but with a nicer UI. Still, hard pass right now.

Not open source yet, no audit, backed by venture capital, no clear business model.

They do have a page describing their protocol, and I’d be interested to hear someone more knowledgeable give their take on the protocol, but it’s hard to speak on that alone.

Granted, they are a new product, and stating their intention to go open source is a step in the right direction. I think right now they just need some time before Port is something worth looking at.

Once a few more things are in order on the transparency front, I’d be happy to give it a go.


I'm not a professional cryptographer, but I'll take a look.

  1. Single Curve25519 Diffie-Hellman, no forward secrecy or future secrecy.

  2. Extremely weird endpoint authentication with Random Associated Data (RAD). From the looks of it, it's a secret token that gets encrypted with the X25519 shared secret, and if it comes back unchanged, they assume the channel is authenticated. What this misses is that a man-in-the-middle attacker can obviously just pass the RAD generated by Alice all the way to Bob, first encrypted using SharedKey_1 from X25519(SK_mitm, PK_Alice), and then encrypted using SharedKey_2 from X25519(SK_mitm, PK_Bob).

    The good news is that this is not a practical attack, since the QR code is what actually handles user authentication. The bad news is that the devs don't seem aware this makes the RAD more or less pointless: the potentially_RAD == RAD check detects nothing beyond a completely inept MITM who swaps random data around. Cryptographers attacking protocols for state actors or the offensive-security industry are extremely careful not to mess up like that.

  3. Snake-oil replay attack resistance

    The moment Alice receives the packet from the server indicating that line_link has been used to form a connection, the line_link is locally marked as invalid. This means that Alice will never accept multiple connections from the server that have used the same initial context.

    This isn't how replay resistance is done. You either refresh the key every time with a hash ratchet or a Diffie-Hellman ratchet, or you use a non-repeating message counter. Forcing the SimpleX-like delivery queues to be one-time only doesn't change the fact that the server will always see your IP address, and if it's e.g. a unique IPv6 address, they will know it's you. They can then use the queue to replay a packet, and if your client happily decrypts it thinking it's from the contact, the security fails.

  4. Using CBC without HMAC. The technical documentation talks about using AES-CBC, and it mentions SHA-256 in passing, but funnily enough the hash is never mentioned again. I was mostly expecting it to appear as the primitive in an HMAC-SHA256 construction, since on its own it's not an ideal hash function: the Merkle-DamgĆ„rd construction is vulnerable to length-extension attacks, which is why the recommendation these days is BLAKE2 or SHA3-256, and why HMAC exists (it fixes the length-extension vulnerability). Using AES-CBC without any authentication (usually HMAC-SHA256) makes the system vulnerable to padding oracle attacks.

  5. No mention of X25519 shared-key whitening: the shared key is a point on the curve and can thus be biased, so passing it through a hash function (even their SHA-256 will do for this) to whiten it is mandatory. This might be handled by the library for them, but since the tech docs don't even mention which crypto libraries they are using and, like @bee said above, it's not open source, there's no way to tell whether this is the case.

  6. Keys are generated by the OS-level CSPRNG; that's a good thing.
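The relay attack in point 2 can be sketched with a toy model (plain modular Diffie-Hellman and a hash-keystream XOR standing in for X25519 and the real cipher; all names and parameters here are mine, not Port's): Mallory decrypts Alice's RAD with one shared key and re-encrypts it with the other, so the RAD comparison passes despite the interception.

```python
# Toy sketch (NOT real X25519): classic Diffie-Hellman over a prime field,
# with "encryption" as XOR against a SHA-256 keystream, just to show why
# the RAD check alone cannot detect a MITM.
import hashlib
import secrets

P = 2**127 - 1  # toy Mersenne prime modulus, illustration only
G = 5           # toy generator

def keypair():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared(sk, peer_pk):
    # DH shared secret, hashed down to 32 bytes
    return hashlib.sha256(str(pow(peer_pk, sk, P)).encode()).digest()

def xor_enc(key, data):
    # Toy stream cipher; the same operation encrypts and decrypts
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(a ^ b for a, b in zip(data, stream))

rad = secrets.token_bytes(16)  # Alice's Random Associated Data

a_sk, a_pk = keypair()  # Alice
b_sk, b_pk = keypair()  # Bob
m_sk, m_pk = keypair()  # Mallory, the MITM

# Alice thinks she is talking to Bob but actually to Mallory:
ct_to_mallory = xor_enc(shared(a_sk, m_pk), rad)
# Mallory decrypts with SharedKey_1 = DH(SK_mitm, PK_Alice) ...
rad_seen = xor_enc(shared(m_sk, a_pk), ct_to_mallory)
# ... and re-encrypts with SharedKey_2 = DH(SK_mitm, PK_Bob) for Bob:
ct_to_bob = xor_enc(shared(m_sk, b_pk), rad_seen)
rad_at_bob = xor_enc(shared(b_sk, m_pk), ct_to_bob)

assert rad_at_bob == rad  # the RAD "verifies" despite the MITM
```

The check only proves that whoever you did DH with could decrypt the token, which a MITM on both legs trivially can.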

As its architecture is essentially a networked TCB with content privacy by design and metadata privacy by policy, it's effectively in Signal's league.

The lack of phone numbers might be a privacy benefit over Signal, but the protocol lacks forward secrecy, future secrecy, remote key exchanges and the public-key fingerprints those require, and proper message authentication, and it is thus vulnerable to padding oracle attacks. I'd say these amateurs will have to hire a professional cryptographer before they try again.

DO NOT USE.

EDIT: I sent the devs an email about this post.


Thank you for the detailed response!

Glad you sent the devs an email; here's hoping they address the issues you pointed out and improve the product.

But yeah, as of now, will definitely be staying away.

Let us know if you hear back.

Here’s my email response to their email.

I'm Abhi, the self-proclaimed CTO of Port. Ani forwarded your email about our cryptography to me. Believe me when I say that I appreciate it more than you know. There's a mixture of poor communication and lack of transparency on our part, as well as valid criticisms. There are a few things coming together internally, all of which I hope go towards addressing these.

We’re hoping to publish our source code soon, at least for the important bits.
This will help people like you hold us accountable better as well as help make it easier for us to be transparent.

Unfortunately this takes the wrong steps towards transparent security. It's not about being able to see what goes into the binary and then trusting the binary blindly. It's about being able to build the binary yourself if you need to, ideally in a reproducible way. Without the possibility of compiling the program oneself, all you have is something that claims to be the source code. Sure, it lets you browse around and get a sense of code smell, but that's again not enough. Proprietary security products are not accepted on e.g. Reddit's /r/privacy at all; PrivacyGuides is in that boat too. Also, when everyone and their mom is already on Signal, which is fully open source, you might run into tough competition and critique if you go that route.

After our recent burst of UX fixes, I have been afforded time to go back and work on things needed to improve our core infrastructure, including our security.

OpenSSL, which we use for our cryptographic primitives, added support for a few new NIST-accepted algorithms in 3.5.0 (an LTS release, which I also like). I've been casually observing this unfold for a bit, and it has given me further inspiration to upgrade our protocol and encryption.

We're reaching a point of frustration with React Native's ability to interface with native code, and with some core architectural decisions we've made due to the limitations of our framework.

This is coming together as me rewriting the core of our app to make it easier to publish, audit and contribute to (internally as well as externally). I'm looking to improve our protocol, our responsiveness, and our background/killed-state processing abilities. This is a fairly large undertaking, and I'm sure scope I'm not yet seeing will creep up on me.

So with the upstream changes in OpenSSL, some rework is planned. That’s good to hear.

Now to respond directly to your observations:

Yes. We do not claim to implement forward secrecy, nor do we. This will be changing soon.

Why it wasn't implemented properly from the get-go is a bit strange, but better late than never!

You do rightly point out that RAD’ == RAD =/=> not MITM. To clarify, we assume that Alice and Bob (excuse the overuse of these parties) both have good copies of the app.

Alice creates a Port and shows it to Bob, who scans it, effectively transferring the Port directly from Alice's phone to Bob's. Bob now has a peer_public_key that he will use to derive a shared secret using DH.

Ideally you'd deliver a long-term identity key over the QR key exchange, which is then used to sign ephemeral X25519 public keys. This gives you forward secrecy and, with proper X25519 ratcheting, future secrecy (i.e. break-in recovery).
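The symmetric half of such ratcheting is small enough to sketch (a minimal hash ratchet of my own, for illustration; real designs like Signal's Double Ratchet combine this with a DH ratchet for future secrecy):

```python
# Minimal hash ratchet sketch: each message key is derived from the
# current chain key, then the chain key is advanced and the old one
# discarded. Compromising today's chain key reveals nothing about
# earlier message keys (forward secrecy), and a replayed ciphertext
# can never decrypt again under the current key.
import hashlib

def ratchet(chain_key: bytes):
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

# Stand-in for the secret established during the QR key exchange:
ck = hashlib.sha256(b"initial shared secret").digest()

keys = []
for _ in range(3):
    mk, ck = ratchet(ck)   # fresh key per message, old chain key dropped
    keys.append(mk)

assert len(set(keys)) == 3  # every message gets a distinct key
```

The one-way hash is what makes deleting the old chain key meaningful: there is no way to walk the chain backwards.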

Alice, on the other hand, is the party that relies on the RAD. Bob sends a message to Alice containing his public key (used for DH) as well as the RAD encrypted using the derived (not yet mutually established) secret. Alice can now mix her private key generated for the Port (correlated locally with the line_link) with the peer_public_key from Bob, establishing the shared secret on her side.

Using this shared secret she decrypts the ciphertext and confirms that the RAD matches the RAD she shared with Bob. Since she is certain that Bob is the only one who has a copy of the plaintext RAD she generated for inclusion in her Port, she is confident that she is communicating with Bob. If a man-in-the-middle attempts to connect with Bob to steal the RAD, they would be unable to decrypt the ciphertext sent by Bob, since they don't have the private key corresponding to the public key in the Port.

Oh I see, so the RAD is a challenge whose purpose is to allow a safe key exchange in the opposite direction after a single public-key QR code has been scanned in a trustworthy way. Yeah, that's a valid mechanism. (I'd like to propose opportunistic post-quantum security by also scanning a symmetric key inside the QR code that gets mixed in. If it leaks because the user sends the QR code over an insecure medium, it doesn't compromise all security, and if it doesn't leak, no amount of Shor is going to break the security.)
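The proposed mix-in might look like this (a sketch; the label string and variable names are mine): the session key depends on both secrets, so breaking the X25519 part alone is not enough.

```python
# Opportunistic hybrid sketch: derive the session key from BOTH the
# X25519 shared secret and a random symmetric key carried inside the
# QR code. If the symmetric key never leaks, Shor's algorithm against
# the DH part alone does not recover the session key.
import hashlib
import secrets

dh_shared_secret = secrets.token_bytes(32)  # stand-in for X25519 output
qr_symmetric_key = secrets.token_bytes(32)  # extra secret inside the QR

session_key = hashlib.sha256(
    b"port-hybrid-kdf" + dh_shared_secret + qr_symmetric_key
).digest()

assert len(session_key) == 32
```

A proper KDF (HKDF) would be preferable to a bare hash here, but the principle is the same: concatenate both secrets under a domain-separation label before deriving keys.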

The protocol that you’re referring to only talks about setting up an initial context that peers can then communicate over. We, at this point, just need to invalidate the Port after the first attempted use. To do so we simply mark the Port as consumed or delete it. This results in the line_link being invalidated and disallows any further uses of the public key in said Port.

If the packet(s) to form a new connection are replayed, they are ignored by the client since they will receive a line_link as part of the packet and will note that no such valid line_link exists locally.

So the Port is effectively a ~one-time token allowing the client to accept certain types of key-exchange packets?

If you’re concerned about replayed encrypted messages having un-intended consequences, those are protected against one level up. Every time we decrypt a message, we find the included message_id, a UUID4. We save every message and if we detect a message being re-sent, we ignore it.

Why use UUID4 when you have to store it for every received message and perform a lookup to see if it’s a used one? Wouldn’t a running counter reduce both time and space requirements? Sure, the counter tells exactly how many messages have been sent, but so does a large database of used UUID4s.

This was something we had to build due to the nature of mobile notifications. We are pessimistic about the reliability of delivery, so we overcompensate and tend to deliver more often than we need to (most of the time).

With a running counter you'd also be able to detect packet drops and request retransmission of specific packets. It's hard to request UUIDs that never arrived.

The client has had to become robust at ignoring duplicates. We also can’t be super confident about in-order delivery.

Again, a running counter would work better with ordered packet processing: you can cache too-early packets until the one matching the current counter position arrives.

As a result, we felt uneasy using an incrementing counter. It’s not the most efficient, but it’s efficient enough, especially with the indices we’ve set up on the local database.

I'm really puzzled by the logic here. The UUIDs have O(n) time complexity with linear search and O(1) with hash maps, but always O(n) space complexity. A counter has both space and time complexity at O(1). Disk-based hash maps might work, but relocation is quite expensive, even with SSDs. Why go with the UUIDs?
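For comparison, the counter alternative needs only one integer of persistent state per contact, plus a bounded cache for early arrivals (a sketch; class and method names are mine):

```python
# Counter-based duplicate/replay rejection with out-of-order buffering:
# O(1) persistent state (one integer per contact) instead of a database
# of every UUID4 ever received.
class CounterReceiver:
    def __init__(self):
        self.next_expected = 0
        self.early = {}  # counter -> payload, for packets that arrive early

    def receive(self, counter, payload):
        """Return the payloads that become deliverable, in order."""
        if counter < self.next_expected or counter in self.early:
            return []  # duplicate or replay: ignore
        self.early[counter] = payload
        delivered = []
        while self.next_expected in self.early:
            delivered.append(self.early.pop(self.next_expected))
            self.next_expected += 1
        return delivered

rx = CounterReceiver()
assert rx.receive(0, "a") == ["a"]
assert rx.receive(2, "c") == []          # cached until 1 arrives
assert rx.receive(1, "b") == ["b", "c"]  # in-order delivery resumes
assert rx.receive(1, "b") == []          # duplicate ignored
```

Gaps in the counter sequence also tell you exactly which packets to re-request, which UUIDs cannot.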

Our implementation is gratefully taken from OpenSSL documentation. I can’t claim to know more than the maintainers of the project. I believe we chose an AEAD scheme. Regardless, this will definitely be something we consider in this upcoming enhancement.

I find it strange you know which one you should pick (an AEAD scheme), but not that AES-CBC isn’t one. Not even with HMAC-SHA256. You’d want XChaCha20-Poly1305 or AES-GCM for that. My advice is ChaCha, as unlike AES, it doesn’t have cache timing vulnerabilities on hardware without AES-NI.
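Short of switching to an AEAD, the minimal CBC fix is encrypt-then-MAC. Here's a sketch of the verification side (the CBC step itself is elided, since Python's stdlib has no AES; `ciphertext` is a stand-in for AES-CBC output, and the names are mine): the tag is checked in constant time before any padding is touched, which is what closes the padding oracle.

```python
# Encrypt-then-MAC sketch: HMAC-SHA256 over IV + ciphertext, verified
# with a constant-time compare BEFORE any decryption or padding check.
import hashlib
import hmac
import secrets

enc_key = secrets.token_bytes(32)
mac_key = secrets.token_bytes(32)    # independent key for the HMAC

iv = secrets.token_bytes(16)
ciphertext = secrets.token_bytes(48)  # stand-in for AES-CBC output
tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()

def verify_then_decrypt(iv, ciphertext, tag):
    expected = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None  # reject before the padding is ever examined
    # ... only now hand iv + ciphertext to AES-CBC decryption ...
    return ciphertext

assert verify_then_decrypt(iv, ciphertext, tag) is not None
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
assert verify_then_decrypt(iv, tampered, tag) is None
```

An AEAD like ChaCha20-Poly1305 bakes this pattern into one primitive, which is why it's the safer recommendation.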

As I’ve said before, we use OpenSSL. I’ll be completely honest, I didn’t check that it implemented whitening.

I'm not well enough versed in C to say myself, but given that pyca, which provides OpenSSL bindings for Python, recommends using an HKDF to ā€œdestroy any structure that may be presentā€, it's very likely OpenSSL isn't doing the whitening by default. I wasn't checking OpenSSL before, as your documentation, The Port Protocol, does not mention which libraries you are using. You should add that; it adds to your credibility. I was only checking whether you explicitly mentioned hashing the X25519 shared secret, and it wasn't mentioned.
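For reference, the HKDF pyca recommends is small enough to sketch in stdlib Python (RFC 5869, instantiated with HMAC-SHA256):

```python
# Minimal HKDF-SHA256 (RFC 5869): extract a uniform pseudorandom key
# from possibly-biased input keying material, then expand it to the
# desired length under a context label.
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"",
                length: int = 32) -> bytes:
    # Extract: PRK = HMAC(salt, IKM); empty salt defaults to zeros
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: T(i) = HMAC(PRK, T(i-1) | info | i)
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A raw X25519 output is a curve point and may carry structure; the
# HKDF output is uniform and bound to a purpose via `info`.
raw_shared = bytes(32)  # stand-in for an X25519 shared secret
key = hkdf_sha256(raw_shared, info=b"port session key")
assert len(key) == 32
```

Even a single pass of SHA-256 over the shared secret fixes the bias issue; HKDF additionally gives you salting and domain separation for deriving multiple keys from one secret.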

I will ensure I do this when choosing my next scheme.

Hopefully I’ve answered some of your questions. Please poke more holes in our encryption and our protocol. I hope you will examine our source code when we publish it. If you’re interested, I can give you advanced access to our re-written source once I’ve fleshed it out a bit.

I like your attitude! You should really run the protocol by professional cryptographers, they’ll probably find some more. Trail of Bits and NCC Group are both solid choices.


What does this bring compared to SimpleX, which has a proven history and is also numberless (and much more)?

This, in addition to the absence of company information on the website (except information about their VC backers), makes me doubt the viability. Has a company even been created?

This isn't always necessary for an open-source project (though much better to have), but for a closed-source project a company seems necessary for accountability.