We're seeing the first instances of experimental malware that query LLMs at runtime to adapt to different situations. One example uses an LLM to remotely issue commands, while another has an LLM rewrite its entire source code to evade traditional detection methods.
I just find it funny that vibe-coding has reached malware developers despite the so-called protections implemented by most major LLMs out there. What do you think about this trend?