Good to see! I noticed the passkey UI seems to be broken:
And the icons seem really bright for some reason. I’m on Safari 26.2.
I was not able to reproduce the issue, even on my MacBook. This is really strange to say the least.
Happens on my iPhone, so it’s probably on the mobile version. I cleared my cache and restarted Safari and everything but it didn’t fix it.
Thank you for specifying the issue was occurring on your iPhone. The bug is now fixed.
We also pushed out a few other UI and QOL fixes/upgrades. I’m currently thinking about implementing a live chat system alongside the ticket system.
More importantly, I believe making the site work without JavaScript - while being a relatively complex task - would be a significant win.
That would be excellent for Tor browser users. I appreciate how responsive you are to feedback and the ambitious goals.
Yeah, tree shaking is really a build-time static analysis of the code before minification/uglification. Honestly, most people don’t really know how to do this correctly, because it’s a bit more than just flipping a switch. It can definitely help, but it’s not the be-all and end-all.
Perhaps he meant, more specifically, code splitting, where you can dynamically bring in different bundles via import() depending on what needs to be loaded.
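For anyone unfamiliar, a minimal sketch of that pattern; `node:path` here is just a stand-in so the snippet runs anywhere, and in a real app it would be a heavy dependency like a chart or map library:

```javascript
// Dynamic import(): bundlers emit the imported module as a separate
// chunk, and the browser only fetches it when this function actually
// runs, instead of shipping it in the initial bundle.
async function openEditor() {
  const { join } = await import('node:path'); // stand-in for a heavy module
  return join('chunks', 'editor.js');         // placeholder use of the lazy module
}
```

The key point is that the `import()` call is a promise, so the cost of the dependency is deferred until the code path that needs it is hit.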
I still can’t get over the insane cognitive dissonance in
Every company says they “care about your privacy.” It’s in every privacy policy, every marketing page, every investor deck. But if I can reset your password via email, I know who you are. If I log your IP, I know where you are. If I require phone verification, I have leverage over you.
That’s not privacy. That’s performance art.
And much later in the article:
Let’s be crystal clear about what we’re NOT claiming:
Anonymity ≠ Zero Trust Required
You still have to trust that we’re actually doing what we say.
And
Privacy is when they promise to protect your data.
Anonymity is when they never had your data to begin with.
You never designed the system to prevent the user from sending their IP address to you in the first place.
Anonymity where the IP isn’t stripped on the user’s end from the beginning isn’t anonymity. Stripping that information on the server end breaks your own claim that “anonymity is when they never had your data to begin with.”
You’re contradicting yourself.
Again, like I said above: it’s fine to have privacy provided by policy. Signal is fine with it.
But you need to be upfront about it. You don’t want to be labeled as snake oil.
We offer onion and I2P mirrors, we literally are providing architectural anonymity. Users who want true “we never had your data” can access via those routes and the server genuinely never sees their IP.
The clearnet site existing alongside those is just a convenience option - it doesn’t invalidate the anonymous architecture that’s available.
As far as architectural protections go, do you support, or have you looked into supporting, confidential VMs?
A “convenience option” that is the default way to use the internet, that isn’t private, and whose limitations you’re not expressing in your marketing language.
You’re the one who decided to market anonymity, why not be honest about what it actually takes?
Why contradict yourself in the marketing language? You’re a tiny start-up that can still prevent serious harm.
I love that you’re offering the onion service. That’s fucking fantastic! Just make sure the users who need anonymous services know that’s the route they’ll want to take. Give them a tutorial on how to get started in a truly anonymous way. Make that quicker to find than the insecure way. Make your service better. I don’t want to shit on the effort. Think how much better the articles will be when they actually help users get what you seem to market they’ll get.
Something like “Halt, to get truly anonymous hosting you’ll have to take some precautions. Don’t worry, our guide will hold your hand all the way.”
Even with Tor the most it can be is pseudonymous. As soon as you log in they know you’re the same person as all previous sessions.
I will look into this first thing tomorrow.
Sure, that’s a good idea, I’ll sketch up a guide post and point new users to it upon registration.
Indeed. It’s unavoidable that something managed for one entity has some identifier; at the very least it’s a database ID. Payments need to be associated with something for accounting.
Wrt pseudonyms: it’s not just nicknames and user IDs. In practice, there isn’t a magical state of anonymity where some system offers only a rain of singular data points with no links whatsoever. Any time you tie a set of two or more data points about someone to an identifier, you have information associated with a pseudonym.
It’s a matter of “is the set of data associated with a pseudonym enough to identify the individual’s true name (orthonym)?” Until then, that person is anonymous.
Like Wikipedia says:
Many pseudonym holders use them because they wish to remain anonymous and maintain privacy, though this may be difficult to achieve as a result of legal issues.
Anonymity is always relative. A publisher will know who Emil Sinclair is. The reader doesn’t. The NSA probably knows who abc123.onion is, or will if they really, really want to. Does it matter? It depends. But my point is, you can be pseudonymous and anonymous at the same time, and only to some, and extremely rarely if ever to everyone.
Fantastic!
I was specifically talking about tree shaking there because kissu linked two 100 kB network responses. There’s no way that a site as simple as servury needs everything in those files. Tailwind and Font Awesome are typically tree shaken when bundling to avoid such large dependency sizes. Manually breaking up Font Awesome imports or hand-rolling code splitting is not really a thing in modern web dev: it should be handled correctly by your tooling, and you should run bundle analysis to verify that it was done correctly. I’ve worked with Next.js for years, so my general approach is more tooling + JS, unlike what ybceo is trying to do now. I believe most of the site can be pre-rendered, but it can’t fully function without JS when you depend on things like Cloudflare, three.js animations and OSM maps.
I haven’t gone through the servury website to form a technical opinion on how anything should be structured.
Next.js does code splitting on a route level (last time I used it), but you can still handle dynamic imports manually in your JavaScript, and should if you need to. Route-level code splitting is fine and does a lot.
Tree shaking is kinda tricky because it depends on a lot of factors: you need to transpile to ESM or use it outright. Yes, it’s possible with CJS, but it’s not very common, and the libraries you use most likely don’t support it. This comes down to the browser support you want to target, and then whether you’re willing to ship dual bundles, where you detect ESM-compatible browsers with the script tag’s nomodule attribute and module type. So it’s definitely doable, but it adds to what you’re building and deploying, and how.
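For reference, the dual-bundle pattern mentioned above looks roughly like this (the file names are made up for illustration):

```html
<!-- Browsers that understand ESM load this and skip nomodule scripts -->
<script type="module" src="/js/app.esm.js"></script>
<!-- Legacy browsers don't recognize type="module" and fall back here -->
<script nomodule src="/js/app.legacy.js"></script>
```

Modern browsers run only the `type="module"` bundle, while older ones run only the `nomodule` one, so each audience downloads a single bundle.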
You also can’t tree shake code that isn’t written to be tree shaken. If a library doesn’t export ESM and set up its package.json right, you aren’t going to get automated tree shaking. If a package does set up its exports, you can do a file-based import like import { thing } from 'my-package/thing/thing.js' under node16 module resolution, for example.
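For illustration, the kind of package.json setup that makes such a per-file import possible; `my-package` and the paths are hypothetical:

```json
{
  "name": "my-package",
  "type": "module",
  "exports": {
    ".": "./index.js",
    "./thing/thing.js": "./thing/thing.js"
  },
  "sideEffects": false
}
```

With the `"exports"` subpath declared, `import { thing } from 'my-package/thing/thing.js'` resolves, and the `"sideEffects": false` hint lets bundlers like webpack safely drop files whose exports go unused.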
To your point however, it sounds like there are probably some optimizations that can be done, however I haven’t dug through it to detail out what they would be.
pre-rendering is a completely different topic though that typically wouldn’t work in a highly dynamic environment, again this is technical of which I don’t have enough context to form a strong opinion on how they should do it.
Either way, I have maintained and developed libraries in the web ecosystem, I’ve had to set up tree shaking and talk through it at different companies, and my point is just that it’s not the be-all and end-all and can be a bit more of a hassle than it’s worth. It’s a fine optimization, but there are just a lot of factors to it that I think a lot of developers aren’t really focused on.
But based on what you’ve found, I’m not arguing optimizations aren’t available for the taking; weighing those decisions against whatever they’re focused on building could be a later step. Unless it’s drastically affecting results, and based on some of the page scores here, it doesn’t seem like the lowest-hanging fruit.
OP has a PHP website and no FE knowledge, so all the JS nonsense above is definitely above their head (and there’s no need to bring any kind of meta-framework here anyway). I kept it simple and easy for them to fix the low-hanging fruit. They did it well and can now proceed with the rest.
Security / privacy are quite unrelated to Web performance.
Sorry if I side-tracked the topic by being upset at some sub-par performance.
Let’s leave it at that, guys @anon12918199 @Homero, so as not to pollute the topic further.
Something I’m curious about with the website: is it hosted in one place on the globe, or in a cluster spread across the globe to achieve fast delivery without relying on Cloudflare/Bunny CDNs?
Did you use something like Garage to achieve that, or something totally different?
It’s located in one place currently, but having a cluster would indeed be a very nice idea.
Does Garage do exactly that? How would the database part work? Would the database simply be duplicated (replicated) on each server and synced every once in a while?