AI companies should be held liable for the materials they are plagiarizing, although in this specific case it’s probably rando pedophiles training custom AI models, and not a product released by a real company
I searched.
Digital health as a tool to amplify the impact. Today, Tdh is one of the first organisations to explore the use of artificial intelligence in its interventions while promoting ethical use and equitable access to digital technologies. By 2024, Tdh plans to become a reference organisation in West Africa in this field. In general, Tdh will continue – thanks to access to digital technologies – to improve the quality of care, in particular the management of malnutrition, to increase the performance of health systems, the accuracy of diagnosis, and the exploitation of data (including the surveillance of epidemics).
Artificial intelligence can be used to monitor the quality of data entered by health workers, and quickly detect the first signs of epidemics.
Using artificial intelligence to save children’s lives? That’s what we’re making possible thanks to our new partnership with the University of Geneva and the Cloudera Foundation.
By integrating artificial intelligence into our e-health project deployed since 2014 in Burkina Faso, we will be able to improve the individual monitoring of more than 4,000 health workers and carry out epidemiological surveillance, thanks to intelligent and predictive models based on massive data.
Not a Facebook document, just a two-page document describing their project.
Where would they exactly get that from?
Anything on that topic has neither funding nor any interest from the public. It's generally a problem with other mental health issues too. I seem to remember one AMA on Reddit where researchers said that it's so underfunded and taboo that research is pretty rare.
Likely they used something like stable diffusion. All you need is a decent graphics card and you can make your own. More regulation is precisely not what we need, as that will have chilling effects particularly in regard to privacy.
It's already likely that, to combat spam, search engines will basically be replaced by AI, and then those will lock down and require a phone number. How would you feel if every internet search required a phone number and a logged-in account, with virtual numbers blocked? That's generally the direction when you argue that regulation is required. Regulation requires compliance.
That's not technically true; they could have trained it on drawings. LoRA models can then further help with accuracy.
To me it seems like optics. Likely these people didn't think they were doing anything illegal, and hence didn't hide very well, as opposed to, you know, people doing it with real kids. Law enforcement is always looking for an opportunity to demonstrate their usefulness, especially in relation to some new law being passed. What will be interesting is whether they continue to devote police resources to this, or whether it was a one-off to show that the law was useful.
I am clearly not describing new regulation. Possession of such material is already very illegal.
From that I thought you meant some software-as-a-service AI generator. I doubt these people would have been using that, and in the case of Stable Diffusion it can be run locally on a machine with no internet connection.
There are safe harbor laws so that companies can provide services with user content and not be immediately disconnected from the internet. Presumably companies would terminate accounts which repeatedly do this kind of stuff. I doubt further regulation will help anything.
Not in my country. It's also legal in many of the countries where the operation was carried out.
Material
This is maybe nitpicking, but I don't think it makes sense to call AI images (or any images) "material", since it is ultimately information that is forbidden by the government.
It's the concept of it, and not the physical placement of 0s and 1s or electrons in the DRAM cells itself. If it were, encrypted art, which is essentially random data that must be rearranged through a set of mathematical operations to reach its "illegal" form, would have to be legal. Otherwise any high-entropy random data, which can technically be rearranged into whatever you want, would have to be banned too.
When data is transmitted over the internet, it typically travels over fiber-optic cables, and light isn't even matter in the first place. Quantum computers can store information with photons, though I'm sure a British man in the year 2125 watching the fanservice scenes in the anime Sword Art Online on his home quantum PC build would still get arrested despite the anime literally not being material. It's the idea that counts.
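The "high-entropy data can be rearranged into anything" point can be made concrete with a one-time pad: for any ciphertext, there exists a key that "decrypts" it into any plaintext of the same length, so the stored bytes carry no inherent meaning without a chosen interpretation. A minimal Python sketch (the messages here are arbitrary placeholders):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad encrypt/decrypt)."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext_a = b"a perfectly legal file"
key_a = secrets.token_bytes(len(plaintext_a))
ciphertext = xor_bytes(plaintext_a, key_a)   # looks like pure random noise

# For ANY other message of the same length, a key exists that maps
# the very same ciphertext onto it:
plaintext_b = b"some forbidden content"      # same length, 22 bytes
key_b = xor_bytes(ciphertext, plaintext_b)

assert xor_bytes(ciphertext, key_a) == plaintext_a
assert xor_bytes(ciphertext, key_b) == plaintext_b
```

So banning a particular arrangement of bits is really banning an interpretation of them, which is the commenter's point.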
Yep, and they can also just get trained on legal pictures of naked children, or of short adults, or of kids in swimwear, etc.
Appearance can sometimes be a really inaccurate indicator of someone's age, especially after puberty. I say this as someone who is frequently mistaken for a child.
On a lighthearted note, if you want to get a laugh at some extreme examples, check out the “13 or 30” subreddit.