The Privacy Guides forum should adopt a blanket policy expressly forbidding the use of generative ‘AI’ except for the purposes of machine translation.
Justification:
‘AI’ generated content arguably already falls foul of many existing rules and their spirit. For example: “Post Only Your Own Stuff”, “Improve the Discussion”, etc.
‘AI’ generated content often contains factual inaccuracies, which introduces an unnecessary additional burden for moderators
‘AI’ generated ‘evidence’ is often used as a substitute for doing sound research and providing verifiable sources
Basically, I’ve recently noticed an uptick in the amount of ‘AI’ generated content, and I cannot recall a single time that content has genuinely been useful. In almost every such situation a human-written response would not only have been more helpful, but would also have contributed far more positively to the forum as a whole.
This one will be tricky. I do believe AI generated comments and info must not exist on this forum; less-than-educated folks can use it to seem more knowledgeable than they really are when it comes to the topics of this forum and PG.
How to go about it is the next big question. Luckily for us, as far as I know, only @jonah and team are responsible for it, so I hope they deliberate and come up with a system that mitigates this issue.
Having recently flagged a handful of these, it’d be nice if the iron fist slammed down and banned AI generated content.
Other forums have already taken this stance.
I totally see where you’re coming from with your concerns about generative AI content in our forum. Here are my thoughts on the matter!
Quality Over Quantity
I agree that human-written responses often provide a deeper understanding and more nuanced perspectives!
On the other hand, humans often struggle to communicate in the tone of a generic upbeat middle-manager at a health and wellness company, which is something that AI excels at. If AI is banned, there will be fewer generically cheerful comments with unnecessary emojis to hold the attention of distractable humans🤔
Factual Accuracy
It’s true that AI can sometimes miss the mark on facts, which can lead to confusion. We definitely want to keep our discussions reliable!
Encouraging Research
Promoting thorough research and verifiable sources is super important for maintaining the integrity of our discussions!
Doing deep research (such as reading and then repeating Reddit comments, watching YouTube videos about the deep state, or reacting to a headline without reading the article) is a crucial part of forming intellectually rigorous viewpoints. To make this bullet point more relatable, here is an emoji:
Community Engagement
Human interactions foster a sense of community that AI just can’t replicate. Let’s keep our forum a place for genuine conversation! With AI generated content, meaningful aspects of human interaction are lost, such as being called a “shill” by someone who disagrees with your point of view, or the gratifying feeling of frustration when someone chooses to respond to a strawman instead of responding to what you’ve said. Here is another emoji
While I see the potential for AI in certain contexts, like machine translation, I think your proposal could help maintain the quality and spirit of our forum. Let’s keep the conversation going!
Cheers!
My actual (initial & conflicted) thoughts
I’d have to think harder about my feelings towards a blanket ban. But, like you, I’ve been feeling that some people’s overreliance on AI has had a small but noticeable negative impact on the forum. In particular I’ve noticed two negative trends:
“I asked ‘AI’ and it told me: ____” (as a response to a question, or as an assertion/statement of “fact”)
Obvious (and not so obvious) copy pastes of AI generated text or “slop” (like my satirical example) without any disclaimer and without putting it in quotes and attributing it to AI.
With that said, I’m not sure that outright banning AI in full is necessarily the right approach (I’m also not sure that it isn’t).
I think straight up copy-pasting AI generated outputs as if they are your own should be banned. But I can also envision some less clear-cut edge cases where AI use, in limited ways, isn’t necessarily a bad thing, or at least doesn’t rise to the level of badness requiring an outright ban (e.g. using AI to correct tone or grammar, to format posts for someone who struggles to communicate their thoughts clearly or writes unformatted walls of text, or to answer the type of concrete, simple technical question that AI tends to be pretty good at, such as using it as a supplement to man when looking for a specific Linux terminal command).
I’m kind of just thinking out loud here, I have a somewhat conflicted point of view with respect to LLMs. LLMs are useful and convenient for various things, but the proliferation of AI generated content is contributing to making the internet a worse place in a lot of ways, and may further threaten our ability to focus, be thoughtful and deliberative, do research, and to verify/check sources (all of which are already growing problems in my eyes, even before AI came around).
At a minimum I think any AI generated text in comments should have a clear disclaimer, be formatted as a quotation, and be kept to a minimum. And I wouldn’t necessarily be opposed to an outright ban, that might be the best approach, I’m just unsure about it.
I was under the impression that AI or LLM-generated content was already banned here. We can do a better job of communicating that in our rules, though.
That holds whether or not our existing rules deserve an overhaul. I am happy to hear any suggestions on specific wording, though.
I agree with what you’re saying, but I disagree that it requires an exception. The reality is that if I can’t tell you’re using ‘AI’ to fix your grammar, then it will never become an issue. Carving out an exception for that is only confusing.
The issue I have is with content that is obviously ‘AI’ generated, and I do think AI-formatted content should be included in that. As with your satirical example, even if my text remains 99% the same, the formatting alone can be extremely obnoxious. See below:
Example
I agree with what you’re saying , but I must respectfully disagree that it requires an exception . The reality is that if I can’t tell you’re using ‘AI’ to fix your grammar , then it will never become an issue . Carving out an exception for that is only confusing —you know?
The issue I have is with content that is obviously ‘AI’-generated , and I do think AI-formatted content should be included in that .
I recently attended a talk for beginners, and the speaker kept repeating something along the lines of “if you need help/elaboration on this, just ask ChatGPT”.
It saddens me to see this become the norm.
AI tools can be useful but they should not be constantly relied upon.
Using them to generate forum responses is utterly useless and brings nothing to the conversation but verbose statements and potential misinformation.
It also doesn’t necessarily help you truly understand the issue/problem at hand.
I kind of was too, but I realised it has become a recurring issue recently and I’ve had to flag/remove many such posts. I personally appreciate the rules being vague-ish guidelines rather than carved in stone, but clearly prohibiting all use of generative ‘AI’ is necessary imo.
You are probably right. (And my recollection is that @KevPham is correct, I think there is technically already an unofficial or official rule against AI generated content in posts/comments. At least I recall @jonah saying something along those lines). (edit: this is the exchange I was remembering)
The reality is that if I can’t tell you’re using ‘AI’ to fix your grammar then it will never become an issue.
Just to be clear, I don’t personally do this (not for tone, not for grammar, not for anything)
My comments are my own (as evidenced by my often embarrassing grammar, meandering comments, and the occasional made up word, + the 13 edits it sometimes takes me to try to clarify what I meant to say)
Like you, I find the generic wishy-washy AI tone, overuse of emojis, and relentless bland cheerfulness to be annoying, distracting, and unhelpful.
I do think AI formatted content should be included in that
While I’m often annoyed by AI generated text, I think decent AI generated text can in some cases clarify poorly formed human text.
We’ve all encountered people online who don’t use paragraphs, don’t use punctuation, are really passionate about a subject but struggle to communicate effectively or calmly, or write in an almost stream-of-consciousness style. That is the sort of edge case I was thinking of. But you are probably right that making explicit exceptions for random edge cases is an unnecessary over-complication.
If I wanted to know what AI has to say I’d just ask AI myself. It sometimes feels similar to people “shilling” for a product they really like so they keep bringing it up in topics remotely related to it. So in that sense a full ban seems too strict, we (luckily) haven’t reached the point yet where one AI user is talking to another AI user.
Telling people why they shouldn’t use AI would hopefully be more useful in the long run, but maybe that’s too optimistic.
Edit: There also won’t be a good way to combat it if it improves to a point where you can’t tell it apart anymore, so people should learn to use it like any other tool instead of overrelying on it and feeling like they are part of a revolution of never-thinkers.
Yeah, I get where you’re coming from, but again, that’s a situation where you could probably get away with discreetly using ‘AI’ to correct your style without ever running into trouble with it. Honestly, though, I would strongly advise such people to take the effort to learn punctuation, paragraphs and general formatting. ‘AI’ can be used to help you learn these concepts, but research already shows how detrimental reliance on ‘AI’ can be for people’s ability to complete simple tasks since they constantly outsource that minor effort.
I’m far from perfect at punctuation, especially commas lol, but I really don’t think it requires an unreasonable amount of effort to format text somewhat coherently.
Honestly though, I would strongly advise such people to take the effort to learn punctuation, paragraphs and general formatting
I think that is good advice most of the time, but often the issue isn’t learning these concepts; it has more to do with mental differences (which doesn’t mean such people shouldn’t still focus on building good, clear communication skills, and isn’t an excuse in cases of apathy/laziness, but in some cases they do face greater hurdles than most with respect to expressing themselves effectively).
I think pointing out mistakes AI makes can be a good opportunity to teach people why it’s bad. If you just hate the writing style but every “fact” it states is perfect, it’s really no different from someone rambling on for multiple pages without making any specific errors in their reasoning.
People are lazy (including me and you, if we’re being honest): they trust their favorite news host, influencer, random people on social media, and finally AI; all it takes is that it feels like a figure of authority on the subject.
A ban will just reinforce their feeling that they are being excluded and they’ll keep going into the cult of stupidity that tells them it’s not cool to be skilled or talented or informed.
Sidenote
The reason I say it like that is because the common sentiment among AI “artists” (whom I hate like the scum of the earth) seems to be that spending time to improve one’s skill is a waste of time when you can just write a prompt and call yourself an artist. Most people don’t care because they honestly cannot tell; they just think it’s kinda mediocre art.
All of that said, I too think consuming AI generated content (so reading text or looking at images) is a complete and utter waste of time: if the poster could not be bothered to spend the time to come up with something of their own, why should I bother to engage with it?
I don’t know if the moderators are just so damn good at detecting AI at posting time, but I seem to be bad at detecting it.
We should outright ban it. The internet is bad enough that if we don’t fight against this form of brain rot (letting AI “think” or “elaborate” for you), it will slowly destroy the foundation of the internet and we will have a dead internet because it is run by AI for AI.
It is silly to have to use an AI to detect other AI but we are here now.
This generally sums up my problem with creating a rule that forbids AI:
I would probably argue that most AI content does fall afoul of these rules, yes, and therefore this additional rule you’re suggesting is probably not necessary.
I am generally happy with how AI content has been moderated so far, and I don’t really think an additional rule will change anything in how we moderate, nor will it realistically deter the type of people who would post AI content in the first place.
Personally, my moderation ethos is basically that too many rules strongly encourage “rules lawyering,” which I don’t really tolerate from people. I prefer to have general expectations of behavior and trust that people can handle themselves; if they can’t, they can find another community.
My final thought: I have never been particularly actively involved in moderation, and I will support the decision that the rest of you come to, but these are my 2 cents on how I like communities to work.
While I agree that it’s probably futile to ban it entirely due to how hard AI content is to detect, it would be nice to have it as a guideline for people who do care about following the rules…
edit: AI is also helpful in obfuscating your writing style against stylometry attacks, so prob don’t ban it