Scientists have found a way to turn ChatGPT and other AI chatbots into carriers of encrypted messages that are invisible to cybersecurity systems.
The new technique, which seamlessly hides encrypted messages inside human-sounding decoy text, offers an alternative method for secure communication "in scenarios where conventional encryption mechanisms are easily detected or restricted," according to a statement from the researchers who devised it.
The breakthrough functions as a digital version of invisible ink, with the true message only visible to those who have a password or a private key. It was designed to address the proliferation of hacks and backdoors into encrypted communications systems.
But as the researchers highlight, the new encryption framework has as much power to do harm as it does good. They published their findings April 11 on the preprint database arXiv, so the work has not yet been peer-reviewed.
“This research is very exciting but like every technical framework, the ethics come into the picture about the (mis)use of the system which we need to check where the framework can be applied,” study coauthor Mayank Raikwar, a researcher of networks and distributed systems at the University of Oslo in Norway, told Live Science in an email.
To build their new encryption technique, the researchers created a system called EmbedderLLM, which uses an algorithm to insert secret messages into specific spots in AI-generated text, like treasure laid along a path. The system makes the text appear to have been written by a human, and the researchers say the hidden content is undetectable by existing detection methods. The recipient then runs a second algorithm that acts as a treasure map, pinpointing where the hidden characters sit and recovering the message.
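Conceptually, the approach resembles classic linguistic steganography: the secret data steers which of several equally plausible words appears next, so the cover text reads naturally while quietly carrying hidden bits. The paper's actual EmbedderLLM algorithm is more sophisticated, but a minimal toy sketch in Python, with a fixed word table standing in for a real language model, conveys the idea:

```python
# Toy sketch of text steganography (NOT the authors' EmbedderLLM code):
# secret bits choose among equally plausible next words, so the cover
# text reads naturally while carrying hidden data. A fixed table of
# candidate words stands in for a real language model here.

CANDIDATES = [
    ["The", "A"],        # each slot offers two plausible words,
    ["quick", "swift"],  # so picking one encodes a single secret bit
    ["fox", "hare"],
    ["jumps", "leaps"],
]

def embed(bits):
    """Encode one bit per slot by picking candidate 0 or 1."""
    assert len(bits) == len(CANDIDATES)
    return " ".join(slot[b] for slot, b in zip(CANDIDATES, bits))

def extract(text):
    """Recover the bits by checking which candidate appears at each slot."""
    words = text.split()
    return [slot.index(w) for slot, w in zip(CANDIDATES, words)]

secret = [1, 0, 1, 1]
cover = embed(secret)            # -> "A quick hare leaps"
assert extract(cover) == secret  # receiver recovers the same bits
print(cover, extract(cover))
```

A full implementation would draw its candidate words from a language model's next-token choices rather than a fixed table, which is what lets the output pass as ordinary human-like text.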
Users can send messages created with EmbedderLLM through any texting platform, from video game chats to WhatsApp and everything in between.
"The idea of using LLMs for cryptography is technically feasible, but it depends heavily on the type of cryptography," Yumin Xia, chief technology officer at Galxe, a blockchain company that uses established cryptography methods, told Live Science in an email. "While much will depend on the details, this is certainly very possible based on the types of cryptography currently available."
The method's biggest security weakness comes at the start: the parties must first exchange the key material used to encode and decode future messages. The system can work with symmetric LLM cryptography (which requires the sender and receiver to share a unique secret key) or public-key LLM cryptography (in which only the receiver holds a private key).
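One way to picture the symmetric case: both parties derive the same pseudorandom embedding positions from the shared key, so only key holders know which parts of the text carry hidden data. A toy Python illustration follows; the function name and scheme are hypothetical, not taken from the paper:

```python
# Toy sketch of the symmetric setup: sender and receiver derive identical
# pseudorandom embedding positions from a shared secret key. Hypothetical
# scheme for illustration only, not the paper's construction.
import hashlib

def embedding_positions(key: bytes, text_len: int, n_bits: int) -> list[int]:
    """Derive n_bits distinct positions in [0, text_len) from the key."""
    positions: list[int] = []
    counter = 0
    while len(positions) < n_bits:
        digest = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        pos = int.from_bytes(digest[:4], "big") % text_len
        if pos not in positions:  # keep positions distinct
            positions.append(pos)
        counter += 1
    return sorted(positions)

shared_key = b"exchanged out of band"
# Sender and receiver both compute the same positions from the same key.
print(embedding_positions(shared_key, text_len=40, n_bits=4))
```

Because both sides run the same derivation, no further coordination is needed once the key itself has been shared.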
Once this key is exchanged, EmbedderLLM can employ cryptography that is secure against both classical and quantum attacks, making the encryption method long-lasting and resilient against future advances in quantum computing and more powerful code-breaking systems, the researchers wrote in the study.
The researchers envision journalists and citizens using this technology to circumvent the speech restrictions imposed by repressive regimes.
“We need to find the important applications of the framework,” Raikwar said. “For citizens under oppression it provides a safer way to communicate critical information without detection.”
It will also enable journalists and activists to communicate discreetly in regions with aggressive surveillance of the press, he added.
Yet despite the impressive advance, experts say that real-world deployment of LLM cryptography remains some way off.
“While some countries have implemented certain restrictions, the framework’s long-term relevance will ultimately depend on real-world demand and adoption,” Xia said. “Right now, the paper is an interesting experiment for a hypothetical use case.”