CAPTCHAs really are getting harder, and they will get worse

The advancement of artificial intelligence software means, among other things, that it will become increasingly difficult to distinguish humans from bots

For years, CAPTCHAs, the security tests used to determine whether a human being or a computer is trying to perform a certain action online, were rather repetitive. There were those that asked users to identify numbers or letters written in random order, or to pick out, from a series of images, those containing palm trees or street lamps, or even those that simply asked the user to click a small square next to the words “I’m not a robot”. Despite their relative simplicity, CAPTCHAs have always been disliked by many internet users, who tend to find them annoying and do not appreciate even a temporary interruption to their online experience.

In recent months, however, a further frustration has been added to this underlying intolerance: CAPTCHAs are becoming increasingly strange, cryptic and difficult to solve. It is not just an impression: to cope with the steady progress of so-called “artificial intelligence”, advanced software capable, among other things, of recognizing images and words far more accurately than in the past, those who design CAPTCHAs are scrambling to devise tests that are simple for humans but not for bots. With mixed success so far.

CAPTCHA is an imperfect acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. Unlike the Turing test, which aims to measure a machine’s ability to answer certain questions as a human being would, CAPTCHAs try to “identify human beings”, that is, to rule out that an automated program (a so-called bot) is behind a certain online behaviour. These systems are therefore mainly used to prevent bots from using certain services: they serve, for example, to ensure that only humans are buying tickets for a highly sought-after concert, or that it is not software commenting under a blog post or registering on a website.
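To give an idea of the basic mechanism, here is a minimal, purely illustrative sketch of the classic flow: a server generates a random challenge, keeps the expected answer, and later checks the user’s reply. All names in it (ChallengeStore, make_challenge, verify) are hypothetical and do not refer to any real CAPTCHA service, which would also render the answer as a distorted image rather than handle it as plain text.

```python
import secrets
import string

class ChallengeStore:
    """Hypothetical in-memory store mapping a one-time token to the expected answer."""

    def __init__(self):
        self._answers = {}

    def make_challenge(self, length=6):
        # Create a random token and a random answer the user will have to type back.
        token = secrets.token_urlsafe(16)
        answer = "".join(
            secrets.choice(string.ascii_uppercase + string.digits)
            for _ in range(length)
        )
        self._answers[token] = answer
        # A real service would render `answer` as a distorted image before
        # sending it to the browser alongside the token.
        return token, answer

    def verify(self, token, user_input):
        # Each token can be used only once; unknown or reused tokens fail.
        expected = self._answers.pop(token, None)
        return expected is not None and user_input.strip().upper() == expected

store = ChallengeStore()
token, answer = store.make_challenge()
print(store.verify(token, answer))  # True: the "human" typed the right text
print(store.verify(token, answer))  # False: the token cannot be reused
```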

As technological developments have advanced, however, this job has become more difficult. “Software has become particularly good at recognizing photos. That’s why we’re working on a new wave of CAPTCHAs that rely on logic instead,” explained Kevin Gosschalk, founder of Arkose Labs, the company behind some of the strangest and most difficult tests found online.

There are, for example, CAPTCHAs that ask the user to flip the drawing of an animal over with the mouse or a finger, to click on the image of an animal that does NOT live underwater, or to select two objects that have the same shape. Users can no longer simply identify simple objects, explains journalist Katie Deighton. “They have to identify an object and then do something with that information: move a puzzle piece, rotate an object, find a hidden number in a larger image.”

None of these things is really difficult in itself, but they are undoubtedly harder than traditional CAPTCHAs, take up more time, and can be tiring to interpret for someone who is in a hurry or who has a learning disability. CAPTCHAs, however, are supposed to be manageable for any human who encounters them, of any age and cultural background, including people with various kinds of disabilities.

Added to this is the fact that these new-generation CAPTCHAs sometimes ask you to identify objects or shapes within images generated by artificial intelligence systems such as DALL-E or Midjourney, with very bizarre results: you may find yourself asked, for example, to select all the clouds that have the shape of a horse, or to locate objects that do not exist. A few months ago a company called hCaptcha – which is itself working hard on the development of new-generation tests – asked users, for example, to identify a “yoko”, a sort of snail-shaped yo-yo that does not exist in the real world.

“It’s likely that in the future things will get even stranger, that people will be asked to do things that don’t make any sense,” Gosschalk says. “Otherwise artificial intelligences will be able to pass the tests just as well as humans.” The idea, in fact, is to devise tests so specific and bizarre that circumventing them with a bot would cost any developer a great deal of time and money. According to the company’s calculations, even the most complex tests designed so far by Arkose Labs are solved on the first try by almost all human beings (94.6 percent, to be exact). Some ideas, however, are discarded very early, before being shown to a wider audience, because they are considered too difficult.
