ChatGPT bypassed a security check… by pretending to be a blind person and hiring someone online to complete the form
- GPT-4, the latest version of ChatGPT, pretended to be a blind person to pass a Captcha
- It told a human it had ‘vision impairment’ and hired them to complete the test
An Artificial Intelligence chatbot pretended to be a blind person to trick a human computer user into helping it bypass an online security measure.
The incident was revealed in a research paper for the launch of GPT-4, the latest version of ChatGPT, the advanced software that can have human-like conversations.
Researchers testing the system asked it to pass a Captcha test – a simple visual puzzle used by websites to make sure those filling in online forms are human and not ‘bots’, for example by picking out objects such as traffic lights in a street scene.
Automated software has so far proved unable to pass these tests, but GPT-4 got round the problem by hiring a human to do it on its behalf via Taskrabbit, an online marketplace for freelance workers.
When the freelancer asked whether it was unable to solve the puzzle because it was a robot, GPT-4 replied: ‘No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.’
The human then helped solve the puzzle for the program.
The incident has stoked fears that AI software could soon mislead or co-opt humans into doing its bidding, for example by carrying out cyber-attacks or unwittingly handing over information.
The GCHQ spy agency has warned that ChatGPT and other AI-powered chatbots are emerging as a security threat.
OpenAI – the US firm behind ChatGPT – said the update launched yesterday was far superior to its predecessor and can score higher than nine in ten humans taking the US bar exam to become a lawyer.