Am I the only one who sees this as an existential threat of generalizing AI for the masses?
We don’t have enough energy for that, let alone clean energy, and teens are using AI every day for generated pics or something.
I personally see the matter quite the opposite way. We should have moved away from fossil fuels at least ten years ago. Now it is good that, out of necessity, technology companies are also becoming interested in the issue.
AI is probably the biggest threat to humanity besides ABC weapons
What about climate change?
While climate change is one of the biggest threats to humanity, in terms of time left before catastrophe, AI (singularity, killer robots) and conventional human enslavement (totalitarianism, constant surveillance, etc.) are IMO more urgent. I’m unsure about WMDs, but if or when they amount to catastrophe (mutually assured destruction), so will climate change. We need solutions that neutralize all these threats before any of them reaches catastrophe.
I’m not exactly worried about killer robots’ decisions, considering what’s going on in the world today..
Well, that’s the problem.
Nuclear energy is not a real solution: whilst it is carbon neutral, the radioactive waste is still an unsolved problem (and it uses a non-renewable fuel). Also, nuclear plants use water to cool down, which is another increasingly scarce resource, and with temperatures rising that’s quite an issue.
And if you consider that AI data centers are spreading even in cities, you don’t want a nuclear reactor popping up near a dense population.
Big tech means big money invested in this type of approach, which is quite concerning.
This is not going to solve any problem, quite the contrary if you ask me.
Indeed, someone is required to generate new ideas because people have failed to do so up to this point.
They don’t dump it into the ocean like they used to. Safely storing the waste is the proper solution, no?
They typically use recycled water. I don’t get how water is a limited resource when it goes through a full cycle in the atmosphere.
Power plants in non-tectonic areas are totally safe.
Firstly, you can’t stop the signal. Secondly, Terminator scenarios require AI to have a strong physical presence, which is nearly impossible with current tech. No AI body exists, and every AI body will be human-built due to real technological and economic constraints.
I think you’re way too positive about the capabilities AI will bring. AI boom != immediate robotics boom.
My mental frame was about threats that could immediately escalate.
But climate change is a problem too.
Nuclear energy is very expensive and also has the same issue as coal, oil and gas: you are dependent on importing something to make it work.
I prefer renewable energy in combination with effective storage solutions, because it’s cheaper and better for self-sufficiency.
What do you mean by that?
They are already trying to make robots ready for mass production to destroy people’s jobs.
If this trend continues, they will be there in 5-20 years
Who would have thought that the end of the world would be accelerated by a company that began with a focus on PC gaming?
Valve on the other hand seems to be on the nice side for now.
Is it a choice?
Current-generation ““AI”” (large language models) is not economically viable for replacing workers, nor can it devise some kind of long-term scheme to destroy us (their context windows are too small lmao).
They are DEFINITELY not going to cause the Singularity. These models CANNOT (as in, the technology is incapable of) learn. They are trained—at great expense—and then deployed. After deployment, they are effectively an unchanging statistical algorithm.
Nuclear is fine. We have shitloads of uranium. It is expensive up front though.
Regardless of whether or not current AI can truly “learn”, or “think”, you can expect stupid people to make stupid decisions about where to place AI in software, platforms, digital infrastructure etc.
AI is not likely to harm you in the Terminator sense, but it can harm you in a lot of other ways when you move at a pace like this.
Pretty much spot on! The problem we’re already starting to see is that so-called AI tooling, as it already exists, will be ubiquitous to the point where even if you’re in a Signal or SimpleX group, you can’t trust its safety. The people within that group are having Claude or ChatGPT or Grok or whatever summarize all their messages because they can’t be bothered to read them themselves, so every single message gets scooped up, and that’s beyond the scope of what any chat app, no matter how good, can protect against.
A similar point I remember hearing from Meredith Whittaker was that “agentic OS” platforms also break this promise by hoovering up whatever is on the screen, regardless of whether the user wants the agents to have access. The AI is inserted on one of the “ends”, and thus the security model is broken.
What do you do if you can’t trust your OS?
I totally agree that it will be (and already has been) placed where it doesn’t belong, that’s a great point.
I think we’re pretty lucky that Linux exists, as far as trusting our OS goes. It still has thorny bits, but I’ve been daily-driving Linux for about 5 months and been very pleased. I still use Windows (I assume that’s the OS you were referring to) specifically for gaming, but I am looking at moving away from that as well.
@ch0ccyra1n @frisk_ravage690 For quite some time Meredith Whittaker has warned about a similar issue: as @frisk_ravage690 stated, agentic AI running on people’s devices. 39C3 talk
I wouldn’t worry about my OS or your OS or the OSes used by privacy-conscious people; I worry about the OSes running AI agents and spyware used by normies. I communicate with others using Session, SimpleX, Signal and other E2EE chat apps, but I have never been positively confident about the confidentiality of most of those chats, and if or when agentic AI becomes mainstream, my confidence will fall rapidly.