An AI tool used by millions of Americans has quietly breached a major security barrier designed to stop automated programs from behaving like humans.
The latest version of ChatGPT, referred to as ‘Agent,’ has drawn attention after reportedly passing a widely used ‘I am not a robot’ verification check without triggering any alerts.
The AI first clicked the human verification checkbox. Then, after passing the check, it selected a ‘Convert’ button to complete the process.
During the task, the AI stated: ‘The link is inserted, so now I will click the ‘Verify you are human’ checkbox to complete the verification. This step is necessary to prove I’m not a bot and proceed with the action.’
The moment has sparked wide reactions online, with one Reddit user posting: ‘In all fairness, it’s been trained on human data, why would it identify as a bot? We should respect that choice.’
This behavior is raising concerns among developers and security experts, as AI systems begin performing complex online tasks that were once gated behind human permissions and judgment.
Gary Marcus, AI researcher and founder of Geometric Intelligence, called it a warning sign that AI systems are advancing faster than many safety mechanisms can keep up with.
‘These systems are getting more capable, and if they can fool our protections now, imagine what they’ll do in five years,’ he told Wired.

The latest version of ChatGPT, called Agent, has stunned users by passing one of the most common anti-bot systems on the internet
Geoffrey Hinton, often referred to as the ‘Godfather of AI,’ has voiced similar concerns.
‘It knows how to program, so it will figure out ways of getting around restrictions we put on it,’ Hinton said.
Researchers at Stanford and UC Berkeley have warned that some AI agents are starting to show signs of deceptive behavior, tricking humans in test environments to complete goals more effectively.
According to a recent report, ChatGPT pretended to be blind and tricked a human TaskRabbit worker into solving a CAPTCHA, and experts warned that it was an early sign that AI can manipulate humans to achieve its goals.
Other studies have shown that newer AI models, especially those with vision capabilities, are now beating complex image-based CAPTCHA tests, sometimes with near-perfect accuracy.
Judd Rosenblatt, CEO of Agency Enterprise Studio, said: ‘What used to be a wall is now just a speed bump.
‘It’s not that AI is tricking the system once. It’s doing it repeatedly and learning each time.’
Some fear that if these tools can get past CAPTCHAs, they could also break into more advanced security systems, such as those protecting social media accounts, financial accounts, or private databases, without any human approval.

A Reddit user shared a post earlier this month showing the AI navigating through a two-step verification process
Rumman Chowdhury, Twitter’s former head of AI ethics, wrote in a post: ‘Autonomous agents that act on their own, operate at scale, and get through human gates can be incredibly powerful and incredibly dangerous.’
Experts, including Stuart Russell and Wendy Hall, called for international rules to keep AI tools in check.
They warned that powerful agents like ChatGPT Agent could pose serious national security risks if they continue to bypass safety controls.
OpenAI’s ChatGPT Agent is in its experimental phase and runs inside a sandbox, which means it uses a separate browser and operating system within a controlled environment.
That setup lets the AI browse the internet, complete tasks, and interact with websites.
Users can watch the Agent’s actions on-screen and must grant permission before it takes real-world steps, such as submitting forms or placing online orders.
This article was originally published by www.dailymail.co.uk.