Is it possible to protect a person's voice and appearance from AI training without changing them too much for human ears and eyes?

What I basically want is a way to distort audio, video, and images so that AI can't be trained on them and then generate realistic deepfake audio/video/images from that dataset, while keeping the originals free of changes that are noticeable to human ears and eyes. Is this possible?

I know from art communities that there is the Glaze software, which modifies painting images so that AIs can't copy an artist's style. I'm not sure if it actually works, but I have been thinking about using it on my pictures, especially ones with my face. My threat model is not that high, so I'm okay with occasionally using social media and posting things there, but I still try to be as private as possible without making my life too uncomfortable, and I would like to protect my identity from being used in deepfake attacks or campaigns. I understand that I probably can't protect myself completely this way, but I would like to make it as hard as possible for anyone who might want to do something like this.

Is there something like Glaze for audio and video files? Do you think Glaze is worth using?

I'm not sure it's that easy to poison your own persona with a FOSS tool; it's far easier when it comes to digital art.
Not aware of any besides Fawkes (GitHub - Shawn-Shan/fawkes), a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
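
For face photos specifically, Fawkes is meant to be run locally over a folder of images before you post them. Here is a minimal sketch, assuming the `fawkes` CLI is installed (e.g. via `pip install fawkes`) and that its `-d` (input directory) and `--mode` (low/mid/high) options match what the project's README documents; the folder path is just a placeholder:

```python
# Sketch: cloak every photo in a local folder with Fawkes before uploading.
# Assumes the `fawkes` CLI from the linked repo is on PATH and that its
# `-d` and `--mode` flags are as described in the README; verify against
# the current docs before relying on this.
import subprocess
from pathlib import Path

photo_dir = Path("~/Pictures/to_upload").expanduser()  # placeholder path

# Higher modes add more perturbation: stronger protection, more visible change.
result = subprocess.run(
    ["fawkes", "-d", str(photo_dir), "--mode", "low"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr)
```

Fawkes writes cloaked copies of the images (check the README for the exact output naming), so the idea would be to upload only the cloaked versions and keep the originals offline.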

Also, this should probably go under Questions :hugs: