Testing the fingerprintability of a few different browsers in a few different standard configurations

A conversation here the other day about browser fingerprinting, uBlock Origin modes, and Mullvad Browser got me curious to test for myself. What follows are the results of some semi-unscientific testing using coveryourtracks.eff.org from the Electronic Frontier Foundation.

What I tested

I used the EFF’s Cover Your Tracks fingerprinting test to test fresh, clean installs of Librewolf (LW), Mullvad Browser (MB), and Tor Browser (TBB) in a few different configurations. I encourage everyone to test their own browser and report back.

Takeaways and points to ponder:
  1. This is just a test, and not necessarily representative of reality, or privacy more broadly (fingerprinting is just one aspect, and an aspect that sometimes conflicts with other privacy goals).
  2. Out of all tested configurations, MB and TBB set to “Safest” mode had by far the least identifiable fingerprint (1 in 92 and 1 in 183 respectively).
  3. It is possible that enabling uBO hard mode would be similarly effective (since both uBO’s hard mode and TBB’s ‘safest’ mode block JavaScript), but I can’t say that with any certainty since uBO in hard mode blocks/breaks the test.
  4. Using the ‘safer’ mode probably has security and other privacy benefits, but it performs moderately worse in the fingerprinting test.
  5. Using uBO’s Medium Mode (blocking 3rd-party scripts and iframes) neither helped nor hurt the browser fingerprint in either LW or MB. However, while that is true in the test, Medium mode in the wild might be less (or more) identifiable, since it only affects 3rd parties and this appears to be a 1st-party fingerprinting test.
  6. For anyone not willing to block JavaScript, it looks like the least identifiable option would be MB in standard mode, full screen (1 in 775). At least that is what my test data showed, but it may not be representative for various reasons (including the smallish sample size of ~200k).
  • Librewolf (default): 1 in 40k
  • Librewolf (default with letterboxing): 1 in 12k
  • Librewolf (uBO medium mode): 1 in 40k
  • Librewolf (uBO hard mode): Blocks the test

  • Mullvad Browser (‘standard’, non-fullscreen): 1 in 2900
  • Mullvad Browser (‘standard’, fullscreen): 1 in 775
  • Mullvad Browser (‘standard’, fullscreen, uBO medium mode): 1 in 775
  • Mullvad Browser (‘safer’, fullscreen): 1 in 3100
  • Mullvad Browser (‘safest’): 1 in 92
  • Mullvad Browser (uBO hard mode): Blocks the test

  • TBB (‘standard’, non-fullscreen): 1 in 1700
  • TBB (‘standard’, fullscreen): 1 in 1200
  • TBB (‘safer’, fullscreen): 1 in 2600
  • TBB (‘safest’): 1 in 183

(The “1 in x” numbers refer to uniqueness, i.e. out of the ~200k browsers tested in the past 6 weeks, 1 in x matched my test results. Because of the somewhat small and self-selecting sample, it’s unclear how representative these numbers are; I would trust them enough to make observations about general trends, but not enough to draw specific, concrete inferences from the data.)
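For intuition, the “1 in x” uniqueness figure and the “bits of identifying information” that Cover Your Tracks reports are two views of the same quantity: bits = log2(x). A quick sketch of the conversion (my own illustration, not EFF code):

```python
import math

def bits_from_uniqueness(one_in_x: float) -> float:
    """Surprisal in bits for a fingerprint shared by 1 in x browsers."""
    return math.log2(one_in_x)

def uniqueness_from_bits(bits: float) -> float:
    """Invert: a browser carrying `bits` of info is 1 in 2**bits."""
    return 2 ** bits

# e.g. Mullvad Browser 'safest' at 1 in 92:
print(round(bits_from_uniqueness(92), 2))       # ≈ 6.52 bits
# ~17.61 bits would single out one browser in a 200k sample:
print(round(bits_from_uniqueness(200_000), 2))  # ≈ 17.61 bits
```

So the difference between “1 in 775” and “1 in 92” is only about three bits, which is why small per-attribute leaks add up quickly.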



Having dug a bit deeper into the data, it looks like the subcategory that makes the biggest difference in overall ‘uniqueness’ between the various privacy browsers is how fonts are handled.

  1. Arkenfox: 1 in 4800 browsers tested share these same fonts
  2. Librewolf: 1 in 285 browsers tested share these same fonts
  3. Brave Browser: 1 in 147 browsers tested share these same fonts
  4. Mullvad Browser: 1 in 10 browsers tested share these same fonts
  5. Tor Browser: 1 in 10 browsers tested share these same fonts

It should be noted that (1) these test results reflect the self-selecting group of ~200k people who have tested their browsers (people using privacy-enhancing browsers will almost certainly be overrepresented), and (2) of the 5 browser configurations I tested, combating fingerprinting is not a primary focus for 2 of them (Arkenfox & Librewolf), so it is not unexpected that they perform worse (in general, the further down the fingerprinting rabbit hole you go, the more you impact usability and aesthetics).



By the way, it is not an accurate metric for measuring fingerprintability. As the Tor Project puts it:

The Panopticlick study done by the EFF uses the Shannon entropy - the number of identifying bits of information encoded in browser properties - as this metric. Their result data is definitely useful, and the metric is probably the appropriate one for determining how identifying a particular browser property is. However, some quirks of their study means that they do not extract as much information as they could from display information: they only use desktop resolution and do not attempt to infer the size of toolbars. In the other direction, they may be over-counting in some areas, as they did not compute joint entropy over multiple attributes that may exhibit a high degree of correlation. Also, new browser features are added regularly, so the data should not be taken as final.
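The over-counting concern (summing per-attribute entropies when the attributes are correlated) can be illustrated with a toy example. The data below is made up for illustration, not the EFF’s:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy population: OS and user-agent string are perfectly correlated.
population = [("Windows", "Firefox/115 on Windows")] * 50 + \
             [("Linux",   "Firefox/115 on Linux")] * 50

os_bits = entropy([os for os, ua in population])  # 1.0 bit
ua_bits = entropy([ua for os, ua in population])  # 1.0 bit
joint   = entropy(population)                     # 1.0 bit, not 2.0

# Summing the marginals double-counts the shared information:
print(os_bits + ua_bits, joint)  # 2.0 vs 1.0
```

Here each attribute alone carries 1 bit, but knowing the OS already determines the user agent, so the joint fingerprint carries 1 bit, not 2. Adding per-attribute entropies overstates how identifying the combination is.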


I think the Tor Project is a really good source of info on fingerprinting. I’ve come to feel that these sorts of tests (and others, such as the browser comparison or adblock tests) are useful and have value in particular contexts, but can be very misleading if you try to make generalizations based on them or don’t understand their limitations. If you put too much stock in the results of a particular test, that is a problem. If you treat it as one potential data point that is cross-referenced against others, and spend time understanding the methodology and the limitations of the approach, then I think tools like this can provide value, particularly when they provide more granular data.

Basically, my current point of view is more or less in line with the Tor Project’s. They discuss valid limitations and potentially misleading aspects of the EFF’s methodology, but they also state that:

Their result data is definitely useful, and the metric is probably the appropriate one for determining how identifying a particular browser property is.


Is the accuracy of the detected data important, or only the uniqueness level?
E.g., Tor and Mullvad give an OS, fonts, and other details, but some are “not mine”.
Is “no info” / no JavaScript itself a sign of uniqueness, given that most sites will be unusable?
What about verifiably false info?

e.g., I tested the following.

Browser                Entropy       Uniqueness
Safari                 17.44 bits    1/178,130
Firefox                17.44 bits    1/178,070
Mullvad - defaults     15.86 bits    1/59,390.67
Mullvad - Safest       6.09 bits     1/68.25
Tor Browser            16.44 bits    1/89,041.0
Brave                  17.44 bits    1/178,138
Chromium               17.44 bits    1/178,273

Firefox was configured with the “No History” setting, NoScript, and uBlock Origin.

Only Safari revealed my actual screen depth and dimensions.
Only Brave revealed my exact video card model; Firefox revealed renderer family “or similar”.
TBB and Mullvad reported a UTC/0 timezone; the others were accurate.
All revealed Platform except Mullvad Safest.
Brave was detected as randomized fingerprinting.

Verifiably False
Mullvad default and TBB give an obvious tipoff: they report a false OS on an accurate but incompatible platform.
USER_AGENT “Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0”
PLATFORM, on which Windows would not run.
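A fingerprinting script can flag that kind of user-agent/platform mismatch trivially. A rough sketch, using hypothetical header values (the platform string below is my own illustrative example, not the actual value Mullvad reports):

```python
def ua_platform_mismatch(user_agent: str, platform: str) -> bool:
    """Flag a user agent claiming Windows on a non-Windows platform.
    Crude illustration only; real scripts compare many more attributes."""
    claims_windows = "Windows NT" in user_agent
    is_windows_platform = platform.startswith("Win")
    return claims_windows and not is_windows_platform

# Hypothetical values resembling the spoofed user agent quoted above:
ua = "Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0"
print(ua_platform_mismatch(ua, "Linux x86_64"))  # True: obvious tipoff
print(ua_platform_mismatch(ua, "Win32"))         # False: consistent
```

The mismatch itself becomes an identifying signal, which is presumably why it only blends in among other browsers spoofing the same way.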

In browsers with the NoScript extension (visible), I allowed all scripts temporarily on coveryourtracks.eff.org. In Firefox, I then revoked those permissions, repeated the test, and got the same score. This is puzzling, since the page indicates that it reads most values with JavaScript.

In Mullvad safest, the “test with a real tracking company” checkbox on the EFF’s form could not be enabled. Mullvad safest would appear useful for reading articles that do not depend on JavaScript.

Tested on some other sites – which browser configurations are usable?
I chose popular sites with common rendering/functionality problems. They are not privacy-friendly.

An ad-cluttered “tips and tricks” site - Safari struggled most, ate up CPU, and rendered ad rectangles and some ads.

Amazon - surprisingly, the home page does display products, and buttons work in MB safest.

Paypal - completely broken in MB safest.

Weather.com - Safari struggled with CPU. MB safest rendered overlapping text and failed to load full-size images and videos. TBB rendered a giant NoScript snake logo for video, which is kind of a tipoff to anyone who might glimpse one’s screen in passing. The others with NoScript on displayed a discreet (>) play button in a circle.

Please see post #3 and the bottom link in post #4.