It’s definitely still a trade-off you have to consider. The promise DuckDuckGo makes is that all chats run through their servers before reaching the model hosts (OpenAI, Anthropic, and together.ai as of writing), so those providers only see DuckDuckGo making the requests rather than individual users. Additionally, they have agreements with the model hosts not to use these chats for AI training, along with data retention policies those hosts are expected to adhere to. You can find more information here.

Outside of that, Brave hosts some of its own models through its Leo chatbot, and proxies a few others much like DuckDuckGo does. You can learn about Brave’s privacy practices regarding Leo here.
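To make the proxying idea concrete, here is a minimal sketch of an anonymizing relay: the client talks only to the proxy, and the proxy forwards the chat payload upstream under its own credentials, so the model host never sees who originally asked. This is not DuckDuckGo’s actual implementation; the upstream URL, header names, and key handling are illustrative assumptions.

```python
# Minimal sketch of an anonymizing chat proxy (illustrative only).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"  # assumed model host endpoint

class AnonymizingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the chat payload the client sent to the proxy.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # Forward only the chat payload. Client-identifying details
        # (IP address, cookies, user agent) are simply never passed on,
        # and the request is authenticated with the proxy's own key,
        # so the upstream host only ever sees the proxy.
        upstream = urllib.request.Request(
            UPSTREAM_URL,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer PROXY_API_KEY",  # proxy's key, not the user's
            },
            method="POST",
        )
        with urllib.request.urlopen(upstream) as resp:
            answer = resp.read()

        # Relay the model's reply straight back without logging it,
        # in the spirit of the retention policies described above.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(answer)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AnonymizingProxy).serve_forever()
```

The catch, of course, is that nothing in this picture stops the proxy itself from logging; that part you can only take on trust.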
What it boils down to is whether you trust these claims, knowing you can’t fully verify them since the models run on machines other than your own. And trust isn’t all-or-nothing either: you might trust them with basic research questions but not with personally identifiable information, for example. You can always use both hosted and local models for different purposes, as hosted models can be more powerful than anything you could run on your own machine.