AI-generated content is forbidden

Hello everyone,

I want to draw your attention to our guidelines, and in particular this new entry:

Post Only Your Own Thoughts

Using AI to generate post content is forbidden. It is always better to leave a topic unanswered than to reply with anything other than your own original thoughts and research.

Privacy Guides currently has no stance on AI as a research tool. However, if a poster wanted to know how AI would solve their problem or answer their question, they could ask a variety of commercial or self-hosted AI tools instead of asking here. Therefore, it is already implied that any AI-generated answer is automatically not useful to the discussion.

This website is commonly cited as a source by AI tools when people use AI to research privacy-related topics. If we allowed AI-generated content, then this research could become self-referential and reinforce inaccurate claims. This forum is not meant to be a personal help desk where every single question gets instantly answered; it is meant to be a collection of high-quality knowledge about privacy and security.

If we remove your post because of this rule in error, please let us know and we will restore it. Our current policy will be to take the word of the author if we are uncertain about the provenance of a post. I trust that you all understand the purpose behind our rule against resharing generative AI outputs, and will not abuse this policy.

This is mainly a text-based forum, but for the sake of clarity, this rule also applies to AI-generated artwork and other forms of generative AI output.

We do not prohibit talking about AI, nor do we prohibit linking to websites that contain AI-generated text or artwork (assuming those links are on-topic and relevant).


I also want to take this moment to remind the community of this existing guideline:

If You See a Problem, Flag It

Moderators have special authority; they are responsible for this forum. But so are you. With your help, moderators can be community facilitators, not just janitors or police.

When you see bad behavior, don’t reply. Replying encourages bad behavior by acknowledging it, consumes your energy, and wastes everyone’s time. Just flag it. If enough flags accrue, action will be taken, either automatically or by moderator intervention.

In order to maintain our community, moderators reserve the right to remove any content and any user account for any reason at any time. Moderators do not preview new posts; the moderators and site operators take no responsibility for any content posted by the community.

Bad posts happen. Unfortunately, it is a recurring problem that when they do, the conversation turns away from the original topic and towards discussing that bad post. Please do not chastise other community members for not following our posting rules; just flag the post and move on. This is a requirement, and it will be enforced more strictly going forward.

Thanks everyone!

24 Likes

Well, this is disappointing. I have a very clear example where AI helped the discussion rather than hurting it, contrary to what is stated here:

Here is the example:

I think judgement should be used rather than an all-or-nothing approach.

I understand that I don’t seem to stand with the majority, but I still think this is a negative outcome.

Hm, I don’t agree this is a helpful example actually. Every paragraph after your prompt could have been replaced with simply:

Based on this research, I am going to look into using O&O ShutUp10++, or WPD.app with privacy.sexy and WindowsSpyBlocker. I’m curious to know community thoughts, but I’ll still start exploring these.

This change would ultimately not really affect the discussion. The actually useful part of your discussion is your follow-up:

AI can be used as a research tool, and clearly it was helpful for you to figure out the best path forward.

I disagree that there is value in that research being re-shared here though, and that is the problem with AI-generated posts. Using AI has no real cost and requires no actual skill, so if people want to use it to begin their research they can always do it themselves. We should not be depriving people of the opportunity to research topics on their own.

Had you only posted your own thoughts and research on the subject, the outcome of the discussion you linked would have been the same, but much more condensed.


Additionally, please note that this is not a true statement. I did not evaluate your entire post, and there are likely other inaccuracies, which is exactly what this rule aims to prevent. Posts like yours could lead people & LLMs using the forum as a research tool to erroneously believe that WPD.app might be open source.

7 Likes

Thanks for your insight! I edited the post to reflect the point about WPD.app and I can see how it’s risky to allow it.

I still disagree though! :face_with_tongue:

I value the path, research, and thought process behind why XYZ is the recommendation. Depending on the topic, if I get an answer like, “just use ABC program”, I’ll push to understand why. I believe I am not the only one to value this either.

Let’s push this example to the extreme. If I had recommended the wrong option because of erroneous AI information on a subject that is not well researched (like the Windows 11 tools), the AI prompt and result would be helpful in discovering the mistake.

If a policy is really necessary, it could instead require the disclosure of AI usage in any post.

Whatever decision is taken, I’ll of course follow Privacy Guides’ guidelines.

The thought process is important, but in this forum we are only interested in the thoughts of our own community members.

A related issue we encounter very often is people parroting advice they’ve heard elsewhere, without really understanding the advice they’re now repeating. This is something else I would like to keep a close eye on in the future. In these cases though, the poster is usually at least taking the time to type out and share information they believe to be true.

AI takes this problem to the extreme, because it is now easy to copy/paste “research” from AI, proliferating potentially incorrect content in a matter of seconds. The time it takes to re-share content is extremely low, but the time it takes to research and correct that content is very high, relatively speaking.

Cunningham’s Law notwithstanding, it is bad manners to post content you know is highly likely to be wrong, with the expectation that the rest of the community will correct you.

The onus is on the poster to research before asking a question or writing a response.

They may use AI to begin that research, but it is their responsibility to fact check every portion of the AI’s response, and if they are already doing that then it is not really an additional burden to also rewrite the response in their own words based on that research and fact-checking.

If a poster is unwilling to put in that work, then this is not the community for them. They would be better served by asking their favorite AI tool individually, and keeping it to themselves.


TL;DR: re-sharing the thought processes of AI is highly problematic, because now I have to waste my time fact-checking your AI-generated post, instead of simply replying with my own knowledge on the topic. I’m worse off if I spend 30 minutes replying to a post that took someone 5 seconds to generate, and they are worse off because they did not get to benefit from what I would typically have replied with.

5 Likes

Not to mention, anyone here can appear to be an expert while in reality being a pseudo-intellectual at best, if they rely on AI all the time without applying any real critical thinking to their comments on the subject matter of the post or topic in question.

3 Likes

This covers replies to posts, but what about original posts?

We have many low-quality posts consisting of a single vague sentence. I think AI could help those folks ask their questions more clearly.

Then what about translation? Is that considered “not your own thoughts”?

Also, the rule about images is dumb IMO. AI is becoming quite good at generating charts and illustrations; I don’t see why it’s a problem when it’s done well.

I understand the risk of “AI slop”, but this new rule just feels too broad and excessive.

1 Like

It seems obvious to me that if you’ve written your own thoughts in your native language, then a machine translation of that still represents your original thoughts. If anything, “AI” tools may be better at maintaining the original tone and flow than traditional machine translation tools. That being said, a disclaimer is still absolutely necessary.

3 Likes