To be fair, that is not an entirely accurate argument, because many experts of similar caliber disagree.
Here is an overview provided by AI:
Divergent expert positions
• Risk-focused: Names include Yoshua Bengio, Stuart Russell, Geoffrey Hinton (recently), the late Stephen Hawking, Nick Bostrom, numerous alignment researchers at OpenAI, DeepMind, Anthropic, etc.
• Risk-skeptical / downplayers: Rodney Brooks, Yann LeCun, Andrew Ng, François Chollet, many robotics engineers and commercial AI practitioners. They emphasize present-day benefits and argue that AGI is distant or that alignment is solvable with ordinary engineering.
But an important thing to say is that experts disagree like this in every field.
For example, I asked about cyber security. The list of disagreements where the warnings later turned out to be true is extremely long. There are always people who are ‘risk-focused’ and people who are ‘risk-sceptical’, until reality proves who is correct.
Again, this is a super long list; I only quote the beginning:
Cyber-security has its own long record of “that will never happen” moments. Below is a non-exhaustive chronology of cases in which senior practitioners, vendors, standards bodies, or government officials dismissed or minimised a technical risk that later proved real—and sometimes catastrophic.
A. Historic episodes (problem denied, then materialised)
- Morris Internet Worm (1988)
– Prior warnings about self-propagating code were called “academic.” After the worm hit, roughly a tenth of the machines then on the Internet were disabled, and many sites stayed offline for days while cleaning up.
- Buffer-Overflow Exploits (early 1990s)
– Many Unix vendors said overflows were only “local nuisances.” They became the vector for thousands of remote-code-execution attacks and worms (e.g., Code Red, Slammer).
- Weak LANMAN / NTLM Hashing (1990s)
– Microsoft claimed the hashes were “good enough”; rainbow-table cracking made them trivial to break.
- WEP Wi-Fi Encryption (1999-2001)
– IEEE 802.11 designers argued 40-bit RC4 with CRC-32 was sufficient. Graduate-student papers showed it could be cracked in minutes.
- SQL Injection (early 2000s)
– Many developers called it an “edge case.” It became the #1 cause of web breaches (Heartland, Sony PSN, etc.); see the sketch after this list.
- SCADA / Industrial-Control Vulnerabilities (2000s)
– Plant operators insisted “air-gapping” protected them. Stuxnet (2010) and Ukraine grid hacks (2015) proved otherwise.
…
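To make the SQL-injection entry concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. The table, column names, and hostile input are made up purely for illustration; the point is the contrast between the dismissed-as-an-edge-case pattern (string concatenation) and the secure-by-design pattern (parameterized queries).

```python
import sqlite3

# Toy in-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # hostile input a developer "never expected"

# Dismissed-as-an-edge-case pattern: building the query by concatenation.
# The quote inside user_input changes the meaning of the SQL,
# so this returns every row instead of just alice's.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returns:", unsafe)

# Secure-by-design pattern: parameterized query.
# The driver treats user_input strictly as data, never as SQL,
# so the injection attempt simply matches no rows.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returns:", safe)
```

The fix is structural, not a patch: with parameters, user input can never change the shape of the query, which is exactly the “secure by design” idea below.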
That is why it is important to make things “secure by design” (the GrapheneOS approach), instead of “it works now, let’s hope for the best” (the /e/OS approach).