Not at all. I think most people (myself included) are quite concerned about how this market will develop and what the negative consequences will be — further harm to privacy is just one of many potential risks if the industry is allowed to develop without checks and balances. I don’t trust the big tech conglomerates to develop AI responsibly any more than I trust them to develop web services, social networks, etc.
Basically, I think AI does pose a threat to privacy, but not a fundamentally different risk than the things that came before it. Trying to characterize AI as good or bad is like trying to characterize the Internet as good or bad in the early 90s.
Like what, for example? In my eyes this is a problem that predates AI by some years. It is a problem inherent to both surveillance capitalism (big tech systems of surveillance and data harvesting) and dragnet government surveillance. AI could exacerbate this problem in various ways, but currently I don’t see real-world examples of AI doing so. Before big tech was scraping the internet to train AI models, they were indexing the internet for search engines, hoovering up as much personal and private info as they could to track and target individuals.
Essentially, my 2c is that the technology is evolving, but the bad behavior of the major companies involved and the threats to user privacy are mostly the same as they were pre-AI.