Nick Clegg, an ex-Meta executive and former U.K. Deputy Prime Minister, has come out against any requirement for AI companies to request permission from artists whose content may appear in their models’ training data.
He goes a step further, claiming such a requirement would “kill” the AI industry because of its infeasibility.
Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work.
“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”
Here is another hilarious quote from this man:
“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”
It seems like he has never heard of basic informed consent, or that robots.txt has been around since 1994. But hey, at least we can generate funny videos of a shark wearing sneakers or an anthropomorphic wooden bat based on the effort of artists, without their consent!
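For anyone unfamiliar, robots.txt is a plain-text file served at a site’s root that tells well-behaved crawlers what not to fetch. A minimal opt-out sketch (GPTBot and CCBot are the user-agent strings published by OpenAI and Common Crawl respectively, but which crawler names a given site needs to list will vary, and they change over time):

```text
# robots.txt — served at https://example.com/robots.txt

# Opt specific AI crawlers out of the entire site:
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else may crawl normally:
User-agent: *
Disallow:
```

Compliance is voluntary, which is exactly the point of the consent argument: the mechanism has existed for three decades, and honoring it is a choice.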
It’s posts like this, where people say such things, that make me genuinely feel PG should allow profanity and attacks on the person, not just on the points they are making. Because sometimes, the two go hand in hand.
Every rule has exceptions (even the laws of nature/physics/chemistry break down at quantum scales), and I feel PG’s rules should too.
You may feel free to censor me for saying this but I hope you don’t. I am not violating PG rules in bad faith.
I honestly wonder if these execs go through some brainwashing when they are hired into such high-profile positions in corporate America. I mean, they have to, right? The companies need to ensure the new “talent” is capable of aligning with their own twisted understanding of reality, laws, morality, and what intelligence actually is.
I know several people in big tech who report directly to execs, so not quite as high up, and you get an eerie feeling that they are out of touch with what is good and bad.
Anyway, this is a tangential comment not directly pertinent to the post, but I say it for context.
I 1000% agree with your comments, and I appreciate you sharing Clegg’s ties and background for those who didn’t know.
That being said, it is my understanding that creating and using GIFs is 100% copyright infringement, but for some reason the copyright holders are OK with it. GIPHY and Tenor probably make money from GIFs and yet don’t pay a dime to the copyright holders. And neither do we when we create and use them.
Why is that? Why is it not OK for AI companies to use copyrighted content without consent, but OK for GIF companies and creators to do the same?
There are funny moments from my favorite films and TV shows that I have never found as GIFs and wanted to create myself, but doing so would infringe copyright.
On a very loosely related subject, I don’t understand the internet’s apparent preference for Tenor over GIPHY. I think the latter is a million times superior and has better-quality GIFs. Signal just recently introduced GIFs in the desktop app, and I’m surprised it used Tenor, because on mobile it’s GIPHY. I hope they don’t force Tenor on mobile and that they give us the opportunity to choose on desktop.
The brainwashing is just how businesses run, and the people earning six- and seven-figure salaries don’t question the model.
Investors must see growth. Execs push for that growth. Company must be largest in world. Bottom line goes up. This quarter must grow more than last quarter. Throw everything in the fire to make it grow. Gamify the metrics - even the business does not matter as long as you make shareholders money.
Lie about projected revenues. Lie about the product. Hire thousands and lay them off. All is fair to execs as long as their stocks go up.
I’m starting to get it, and it makes me want to leave tech and buy a farm and get away from it lol.
IDK whether he genuinely believes what he is saying, or is merely propping up some sort of AI propaganda (maybe both).
Firstly, there is indeed an arms race with AI, and thus he is warranted (in a geopolitical context) to want his own country to succeed in this arms race (there’s a lot that can be said about the sociopolitical consequences of this, in which he is thus not warranted, but this is a different subject). However, I think he is greatly misinformed on the utility of the “creative community” in helping with said arms race.
In some instances, the AI arms race is conceived of as ‘who gets to AGI first’. I do not know much about AGI (who does, honestly?), but the consensus is pretty fragmented. Some think earlier forms of AGI are already here; some think it’s decades to centuries down the road. We just do not know what AGI is on a technological level, nor how best to pursue it, and so asserting that the creative community is essential to reaching AGI is not backed up by facts, only by speculation on his end, or his advisors’ end, or whatever.
In terms of geoeconomics, whoever develops any sufficiently advanced narrow AI might hold economic power. What constitutes “sufficiently advanced” is also speculative. We are pretty much guessing at what kind of resource would be useful. For example, what’s useful could be AIs that are narrow and highly advanced in code generation, behavioral prediction/analysis, big-data structuring, cybersecurity, etc. But again, as Kevin says, I highly doubt that image generation (or AI slop in general) is going to be at all useful in this context. AIs that work on mathematics, or chemistry, or whatever, are probably much more useful.
A blanket statement like that doesn’t say much but only raises more questions.
Only non-profits will ever exist if this is the case. This is not realistic, and it is highly debatable with each decision a company makes to run itself. It is also not as simple as you have put it. And I say this because a counter-argument can be made for anything... and since we don’t have an authority on who or what is ethical, every action will remain debatable when it comes to acting “ethically”.
Only non-profits will ever exist if this is the case.
Do you mean in general, or in the field of “AI”?
we don’t have an authority on who or what is ethical
Well, at least some would say that we do: some $DIVINITY-written book. But I’m not saying that. I just hope we can agree that getting rich by repackaging and reselling the creative output of someone more talented (while keeping them poor) is not ethical.
To be clear, he doesn’t minimize sexual abuse, nor does he say that companies and the decision makers behind them are sexual abusers. What he says is that they have the mentality of sexual abusers, precisely because they don’t care about respecting their users’ consent.
Louis also coined the term “EULA roofie” to describe the practice of hiding contentious terms within an end-user license agreement (EULA). I think that is fair too.
Although this forum is about privacy and security, it is clear that the right to privacy is one of the many consumer rights that is being infringed upon, and we must all fight together. There is no doubt that more people around the world are becoming aware of these issues, but I wish there were more fighters with influence outside the US with communities in non-English languages.
Veering off-topic: how do you share articles so that a preview appears?
Well, that’s a new term! EULAs are always confusing by default, so honestly I’m more concerned about whether people read them in the first place. The age-old human-computer interaction question has always been: will the user actually read the information in the dialog box?
Paste the link directly into the post. If the preview doesn’t populate, the website has been configured not to allow previews.
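For background: link previews are typically built from Open Graph meta tags in the linked page’s HTML head, so a site that omits or blocks these won’t produce one. A minimal sketch (the URLs and content values are placeholders, and different forum software may read slightly different tags):

```html
<!-- In the linked page's <head>; these are the standard Open Graph
     tags most forum/chat software reads to build a link preview. -->
<meta property="og:title"       content="Article headline" />
<meta property="og:description" content="One-line summary shown under the title" />
<meta property="og:image"       content="https://example.com/preview.jpg" />
<meta property="og:url"         content="https://example.com/article" />
```

If the page serves these tags and a preview still doesn’t appear, the forum itself may be failing to fetch the page.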