Now robots are doing the human.
Artificial intelligence has become so sophisticated that it’s apparently no longer distinguishable from its human counterparts. The newest generation of ChatGPT has ironically devised a way to pass the online verification tests designed to stop bots from accessing the system.
The assistant, dubbed ChatGPT Agent, was designed to navigate the internet on the user’s behalf, handling complex tasks from online shopping to scheduling appointments, per an OpenAI blog post announcing the robot’s capabilities.
“ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings,” they wrote. Yes, apparently these omnipresent bots are even replacing us in the internet surfing sector.
However, this online autopilot function appears to be a bit too good at its job, as it paradoxically bypassed Cloudflare’s two-step anti-bot verification — the ubiquitous security prompt created to confirm that a user is human and block automated spam.
Per a dystopian screenshot shared to Reddit, Agent reportedly clicked the “I am not a robot” button to infiltrate the bot-bouncing system.
“I’ll click the ‘Verify you are human’ checkbox to complete the verification on Cloudflare,” Agent hilariously wrote in a text bubble narrating its actions in real-time. “This step is necessary to prove I’m not a bot and proceed with the action.”
Then, after clearing the virtual checkpoint, the cybernetic secretary announced, “The Cloudflare challenge was successful. Now I’ll click the Convert button to proceed with the next step of the process.”
The redditariat found Agent’s system infiltration equal parts humorous and frightening. “That’s hilarious,” exclaimed one bemused commenter, while another wrote, “The line between hilarious and terrifying is… well, if you can find it, please let me know!”
“In all fairness, it’s been trained on human data why would it identify as a bot?” quipped a third. “We should respect that choice.”
Others felt the incident highlighted the risks of websites using the “I’m not a robot” checkbox in lieu of the more complicated CAPTCHA test.
Incidentally, OpenAI’s GPT-4 reportedly figured out how to game this system in 2023 by tricking a human into thinking it was blind so they’d complete the test for it — perhaps proving that AI has mastered our powers of manipulation as well.
However, OpenAI assured users that Agent will always request permission before taking any actions of consequence, such as making purchases.
Like a driving instructor with an emergency brake, human users can also monitor and override the robot’s actions at any time.
Meanwhile, OpenAI added that they’ve strengthened “the robust controls… and added safeguards for challenges such as handling sensitive information on the live web, broader user reach, and (limited) terminal network access.”
Despite the contingency measures, the AI firm acknowledged the hazards of giving the bots greater autonomy.
“While these mitigations significantly reduce risk, ChatGPT agent’s expanded tools and broader user reach mean its overall risk profile is higher,” they wrote.
This isn’t the first time this chameleonic tech has displayed some uncannily human-like qualities.
This spring, AI bots were credited with passing the Turing Test, a tech-istential exam that gauges machine intelligence by determining whether a machine’s digital discourse can be differentiated from that of a human.