I was browsing Reddit the other day and came across this post. It shows that it is quite easy to remove the blur from a given image. I'm aware that he did give the "blur value" and the font, but wouldn't this mean blurring is not a good way to hide any info regardless? I know the recommendations section recommends PrivacyBlur (aside from for obscuring text), but with this information in mind, should it be recommended at all? Or any blurring software, for that matter?
Don't blur. Always cut out the part of the image you want to hide, and make sure you're actually cutting and not just adding a second layer of black on top (some software does that).
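To make the difference concrete, here's a minimal sketch (pure Python, with a nested list of numbers standing in for real image data; the `redact` helper is hypothetical, not from any of the tools mentioned). True redaction overwrites the underlying pixels, so once the file is saved flat, the original values are simply gone:

```python
def redact(img, x0, y0, x1, y1, value=0):
    """Destructively overwrite a rectangle of pixels with a solid value.

    Unlike stacking a black layer on top in an editor (which can leave
    the original data intact underneath in layered formats), this
    replaces the pixel data itself.
    """
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = value

# Toy 3x3 "image" containing something sensitive
secret = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

redact(secret, 0, 0, 2, 2)
print(secret)  # [[0, 0, 3], [0, 0, 6], [7, 8, 9]] - the covered values are unrecoverable
```

The point is that the editor must flatten the black box into the pixel data before export; software that merely keeps it as a separate layer (or an annotation, as some PDF tools do) ships the original data alongside the cover.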
Fair enough; why would we be recommending PrivacyBlur on the site at all, then? Also, why is just adding a layer bad?
I don’t know. :-p
If you are using really coarse pixelation, that's probably fine as well, but if the pixels are just a little too small, AI can recover more information from it than most people would expect.
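A rough illustration of why block size matters (a toy sketch on a grayscale grid, not taken from any real tool): pixelation averages each block, so small blocks can leave the original pattern almost untouched, while a block covering the whole region collapses everything to one value.

```python
def pixelate(img, block):
    """Replace each block x block tile with its average value.

    img: 2D list of ints (grayscale pixels), a toy stand-in for an image.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

# A 4x4 checkerboard-like "image"
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]

print(pixelate(img, 2) == img)  # True: 2x2 blocks hide nothing here
print(pixelate(img, 4))         # every pixel becomes 127: the pattern is gone
```

With too-small blocks, each tile still carries an average that constrains the original content, which is exactly the structure deblurring/upscaling models exploit; a big enough block (or cutting the region out entirely) leaves nothing to reconstruct.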
If in doubt, cut out.
Hm, alright. Just thought someone might have more insight as to why it was listed, was all. Thanks for the input.
The app can also be used to block out text, instead of blurring. There was already a discussion on that here: Remove PrivacyBlur - #21 by jerm
Thank you, I'm not sure how I missed that!
I saw how blurring is being used by Hamas and the Israeli government, so I assume blurring is still a great way to hide information.
It’s not.
It’s been removed, it’ll be gone once the next release is pushed.
Nope. Other people doing something and being just fine doesn't mean it's the correct way of doing it.
Example: Fitness app was used to locate the world’s most powerful people
I can't help thinking it would be fun to have an image blur tool which generates an irrelevant but plausible fake over the to-be-redacted data and then blurs it in a just-about-reversible way. So if anyone un-blurs a face they get an AI-generated fake face, or if they un-blur some text they get a "sorry, try harder!" message or something.
No real privacy benefit, but probably no real risk either, and it would waste the time of investigators. I suppose an AI generated fake face would perhaps have some real benefit as misinformation.
For a face, I guess the technology already exists - you’d basically redact with a black square, then hand the image to something like Stable Diffusion and ask it to inpaint, then apply the blur. It would just need wrapping up in a convenient interface.
It should be an image of Rick Astley.