Testing the fingerprintability of a few different browsers in a few different standard configurations

A conversation here the other day about browser fingerprinting, uBlock Origin modes, and Mullvad Browser got me curious to test for myself. What follows are the results of some semi-unscientific testing using coveryourtracks.eff.org from the Electronic Frontier Foundation.

What I tested

I used the EFF’s Cover Your Tracks fingerprinting test to test fresh, clean installs of Librewolf (LW), Mullvad Browser (MB), and Tor Browser (TBB) in a few different configurations. I encourage everyone to test their own browser and report back.

Takeaways and points to ponder:
  1. This is just a test, and not necessarily representative of reality or of privacy more broadly (fingerprinting is just one aspect, and one that sometimes conflicts with other privacy goals).
  2. Out of all tested configurations, MB and TBB set to ‘safest’ mode had by far the least identifiable fingerprints (1 in 92 and 1 in 183 respectively).
  3. It is possible that uBO’s hard mode would be similarly effective (since both uBO’s hard mode and TBB’s ‘safest’ mode block JavaScript), but I can’t say that with any certainty, since uBO in hard mode blocks/breaks the test.
  4. Using ‘safer’ mode probably has security and other privacy benefits, but it performs moderately worse in the fingerprinting test.
  5. Using uBO’s medium mode (blocking 3rd-party scripts and iframes) neither helped nor hurt the browser fingerprint in either LW or MB. However, while that holds in this test, medium mode in the wild might be less (or more) identifiable, since it only affects 3rd parties and this appears to be a 1st-party fingerprinting test.
  6. For anyone not willing to block JavaScript, the least identifiable option looks to be MB in standard mode, fullscreen (1 in 775). At least that is what my test data showed, but it may not be representative for various reasons (including the smallish sample size of ~200k).
Results/Data
  • Librewolf (default): 1 in 40k
  • Librewolf (default with letterboxing): 1 in 12k
  • Librewolf (uBO medium mode): 1 in 40k
  • Librewolf (uBO hard mode): Blocks the test

  • Mullvad Browser (‘standard’, non-fullscreen): 1 in 2900
  • Mullvad Browser (‘standard’, fullscreen): 1 in 775
  • Mullvad Browser (‘standard’, fullscreen, uBO medium mode): 1 in 775
  • Mullvad Browser (‘safer’, fullscreen): 1 in 3100
  • Mullvad Browser (‘safest’): 1 in 92
  • Mullvad Browser (uBO hard mode): Blocks the test

  • TBB (‘standard’, non-fullscreen): 1 in 1700
  • TBB (‘standard’, fullscreen): 1 in 1200
  • TBB (‘safer’, fullscreen): 1 in 2600
  • TBB (‘safest’): 1 in 183

(The “1 in x” numbers refer to uniqueness, i.e. out of the ~200k browsers tested in the past 6 weeks, 1 in x matched my test results. Because of the somewhat small and self-selecting sample, it’s unclear how representative these numbers are; I would trust them enough to make observations about general trends, but not enough to draw specific, concrete inferences from the data.)
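For intuition, a “1 in x” figure can be converted into the “bits of identifying information” metric that the EFF’s own reports use: a fingerprint shared by 1 in x browsers carries log2(x) bits. A minimal sketch in TypeScript (the helper name is mine, the values are from the results above):

```typescript
// Convert a Cover Your Tracks "1 in x" uniqueness result into bits of
// identifying information (self-information of the fingerprint): log2(x).
function bitsOfIdentifyingInfo(oneInX: number): number {
  return Math.log2(oneInX);
}

// Values from the test results above:
console.log(bitsOfIdentifyingInfo(92).toFixed(1));    // ≈ 6.5 bits  (MB ‘safest’)
console.log(bitsOfIdentifyingInfo(775).toFixed(1));   // ≈ 9.6 bits  (MB standard, fullscreen)
console.log(bitsOfIdentifyingInfo(40000).toFixed(1)); // ≈ 15.3 bits (Librewolf default)
```

On that scale, roughly 33 bits are enough to single out one person among ~8 billion, so the gap between ~15 bits and ~6.5 bits is meaningful.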



Having dug a bit deeper into the data, it looks like the subcategory that makes the biggest difference in overall ‘uniqueness’ between the various privacy browsers is how fonts are handled (a sketch of the usual probing technique follows the list below).

  1. Arkenfox: 1 in 4800 browsers tested share these same fonts
  2. Librewolf: 1 in 285 browsers tested share these same fonts
  3. Brave Browser: 1 in 147 browsers tested share these same fonts
  4. Mullvad Browser: 1 in 10 browsers tested share these same fonts
  5. Tor Browser: 1 in 10 browsers tested share these same fonts
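For background on why fonts carry so much signal: a common way sites enumerate installed fonts from JavaScript is by measuring rendered text widths against generic fallback families. The sketch below illustrates that general technique; I’m assuming Cover Your Tracks does something along these lines, this is not their actual script:

```typescript
// Minimal sketch of width-measurement font probing (a common technique,
// not Cover Your Tracks’ exact script). A test string is measured in a
// generic family, then in "candidate, generic"; if the widths differ,
// the browser substituted the candidate font, i.e. it is installed.
const ctx = document.createElement("canvas").getContext("2d")!;
const SAMPLE = "mmmmmmmmmmlli"; // width varies strongly across fonts

function textWidth(font: string): number {
  ctx.font = `72px ${font}`;
  return ctx.measureText(SAMPLE).width;
}

function isFontInstalled(candidate: string): boolean {
  // Check against several generic families to reduce false negatives.
  return ["monospace", "serif", "sans-serif"].some(
    (generic) => textWidth(`"${candidate}", ${generic}`) !== textWidth(generic)
  );
}

// Probing a long list of fonts yields a bitmask that becomes part of
// the fingerprint.
const probes = ["Arial", "Calibri", "DejaVu Sans", "Noto Sans"];
console.log(probes.filter(isFontInstalled));
```

Mullvad Browser and Tor Browser ship a fixed, bundled set of fonts and hide the system’s own fonts from pages, so every MB/TBB user returns the same answer to probes like this, which is presumably why they land at 1 in 10 here.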

It should be noted that (1) these test results reflect the self-selecting group of ~200k people who have tested their browsers (people using privacy-enhancing browsers will almost certainly be overrepresented), and (2) of the 5 browser configurations I tested, combating fingerprinting is not a primary focus for 2 of the 5 (Arkenfox & Librewolf), so it is not unexpected that they perform worse (in general, the further down the fingerprinting rabbit hole you go, the more you impact usability and aesthetics).


btw, it is not an accurate metric for measuring fingerprintability

The Panopticlick study done by the EFF uses the Shannon entropy - the number of identifying bits of information encoded in browser properties - as this metric. Their result data is definitely useful, and the metric is probably the appropriate one for determining how identifying a particular browser property is. However, some quirks of their study mean that they do not extract as much information as they could from display information: they only use desktop resolution and do not attempt to infer the size of toolbars. In the other direction, they may be over-counting in some areas, as they did not compute joint entropy over multiple attributes that may exhibit a high degree of correlation. Also, new browser features are added regularly, so the data should not be taken as final.
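To make the joint-entropy point concrete, here is a small sketch (the counts are invented purely for illustration): summing each property’s Shannon entropy over-counts when the properties are correlated, because the joint distribution carries fewer bits than the sum of the marginals.

```typescript
// Shannon entropy H = -Σ p·log2(p) of a property, estimated from a
// sample of observed value counts.
function entropy(counts: number[]): number {
  const total = counts.reduce((a, b) => a + b, 0);
  return counts
    .map((c) => c / total)
    .reduce((h, p) => (p > 0 ? h - p * Math.log2(p) : h), 0);
}

// Invented counts for two correlated properties over 1000 browsers:
const resolution = [800, 150, 50]; // three resolution buckets
const dpr = [820, 180];            // two devicePixelRatio buckets
// Counts for each (resolution, dpr) pair, consistent with the marginals:
const joint = [780, 20, 10, 140, 30, 20];

console.log(entropy(resolution).toFixed(2)); // ≈ 0.88 bits
console.log(entropy(dpr).toFixed(2));        // ≈ 0.68 bits
console.log(entropy(joint).toFixed(2));      // ≈ 1.12 bits < 0.88 + 0.68
```

Here the two attributes together identify about 1.12 bits, not the 1.56 bits you would get by adding the per-property figures, which is the over-counting the quoted passage describes.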


I think the Tor Project is a really good source of info on fingerprinting. I’ve come to feel that these sorts of tests (and others, such as browser-comparison or adblock tests) are useful and have value in particular contexts, but can be very misleading if you try to make generalizations based on them or don’t understand their limitations. If you put too much weight on the results of a particular test, that is a problem. If you treat it as one potential data point that is cross-referenced against others, and spend time understanding the methodology and the limitations of the approach, then I think tools like this can provide value, particularly when they provide more granular data.

Basically, my current point of view is more or less in line with the Tor Project’s: they discuss valid limitations and potentially misleading aspects of the EFF’s methodology, but they also state that:

Their result data is definitely useful, and the metric is probably the appropriate one for determining how identifying a particular browser property is.
