Learning Lens Blur Fields

Optical blur, described by the point spread function (PSF), is an umbrella term for a range of image-degrading effects such as defocus, diffraction, and aberrations. It is hard to calibrate because it varies with sensor position, focus setting, target distance, and location on the image plane. We introduce Lens Blur Fields: tiny MLPs that model this high-dimensional PSF.
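
To make "tiny MLP" concrete, here is a minimal sketch (not the authors' released code): it assumes the field maps a 5-D query of image-plane position (x, y), kernel offset (u, v), and focus setting to a scalar PSF value, and the class name, layer widths, and depth are purely illustrative.

```python
# Hedged sketch of a lens blur field as a small MLP (PyTorch).
# Assumption: a 5-D query (x, y, u, v, focus) -> scalar PSF intensity.
import torch
import torch.nn as nn

class BlurFieldMLP(nn.Module):
    def __init__(self, in_dim: int = 5, hidden: int = 64, layers: int = 4):
        super().__init__()
        blocks = []
        dim = in_dim
        for _ in range(layers):
            blocks += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        blocks.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*blocks)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (N, 5) = (x, y, u, v, focus); returns (N, 1) PSF intensity.
        # ReLU keeps the predicted PSF value non-negative.
        return torch.relu(self.net(query))

# Example query: PSF value at image-plane position (0.2, -0.1),
# kernel offset (0.0, 0.0), focus setting 0.5.
field = BlurFieldMLP()
psf_value = field(torch.tensor([[0.2, -0.1, 0.0, 0.0, 0.5]]))
```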

Two smartphones of the same make can have subtly different PSFs. We demonstrate this by comparing the learned lens blur fields of two iPhone 12 Pros.

We also provide a first-of-its-kind dataset of 5-D blur fields for smartphone cameras, camera bodies equipped with a variety of lenses, and more. Finally, we show that the acquired 5-D blur fields are expressive and accurate enough to reveal, for the first time, differences in the optical behavior of smartphone devices of the same make and model.
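
As an illustration of how two learned fields could be compared, here is a hedged sketch that rasterizes each field into a PSF kernel at the same image-plane location and focus setting, then measures how much the kernels differ. The `rasterize_psf` helper, the grid parameters, and the reuse of the `BlurFieldMLP` class from the sketch above are assumptions for illustration, not the paper's tooling.

```python
# Hedged sketch: comparing two learned blur fields (e.g. two units of the
# same phone model) by evaluating them on an identical grid of kernel offsets.
import torch

def rasterize_psf(field, x, y, focus, kernel_size=21, extent=1.0):
    """Evaluate a blur field on a kernel_size x kernel_size grid of (u, v)
    offsets at a fixed image-plane location (x, y) and focus setting."""
    offsets = torch.linspace(-extent, extent, kernel_size)
    u, v = torch.meshgrid(offsets, offsets, indexing="ij")
    query = torch.stack([
        torch.full_like(u, x), torch.full_like(u, y),
        u, v,
        torch.full_like(u, focus),
    ], dim=-1).reshape(-1, 5)
    psf = field(query).reshape(kernel_size, kernel_size)
    return psf / psf.sum().clamp_min(1e-8)  # normalize to unit mass

# Per-location PSF difference between two devices; a larger value means
# more dissimilar optics at that corner of the frame.
field_a, field_b = BlurFieldMLP(), BlurFieldMLP()
psf_a = rasterize_psf(field_a, x=0.8, y=0.8, focus=0.5)
psf_b = rasterize_psf(field_b, x=0.8, y=0.8, focus=0.5)
difference = (psf_a - psf_b).abs().sum()
```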

Seems like cameras have fingerprint-like characteristics that can tell exactly which camera took a certain photo, just like a permanent watermark.

I mean, the photo metadata will reveal the same. Not sure if this is really a novel approach to fingerprinting. If so, please explain.

It can be easily edited/manipulated.


Ah yes, of course, after the fact. But a raw photo, taken and analyzed, will still show what you've linked.

I wasn’t thinking that far about doctoring photo data.

Won't a "noise" filter sort of mitigate this?