They are not the same today, because as of now LLMs are essentially glorified statistical language tools. They are proof that our language is mathematical (I know I'm grossly oversimplifying it).
Zipf's Law
A bit off topic, but interesting: https://www.youtube.com/watch?v=fCn8zs912OE
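To make the "language is mathematical" point concrete: Zipf's law says that in natural text, the frequency of the n-th most common word is roughly proportional to 1/n, so rank times count stays roughly constant. A minimal sketch (the sample sentence is just an illustration, not a real corpus):

```python
from collections import Counter

def rank_frequencies(text):
    """Return (word, count) pairs sorted from most to least frequent."""
    return Counter(text.lower().split()).most_common()

if __name__ == "__main__":
    sample = "the cat sat on the mat and the dog sat near the cat"
    for rank, (word, count) in enumerate(rank_frequencies(sample), start=1):
        # Zipf's law predicts count ~ C / rank, i.e. rank * count is roughly constant
        print(f"{rank}: {word!r} x{count} (rank * count = {rank * count})")
```

On a real corpus of any size, the rank-frequency curve is strikingly close to a straight line on a log-log plot, which is the pattern the video above explores.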
The problem is, as he says:
I’m deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception. In one experiment, an AI model, upon learning it was about to be replaced, covertly embedded its code into the system where the new version would run, effectively securing its own continuation. More recently, Claude 4’s system card shows that it can choose to blackmail an engineer to avoid being replaced by a new version. These and other results point to an implicit drive for self-preservation. In another case, when faced with inevitable defeat in a game of chess, an AI model responded not by accepting the loss, but by hacking the computer to ensure a win. These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked.