‘Visual’ AI models might not see anything at all

Image Credits: Bryce Durbin / TechCrunch

The latest language models, like GPT-4o and Gemini 1.5 Pro, are touted as “multimodal,” able to understand images and audio as well as text. But a new study makes clear that they don’t really see the way you might expect. In fact, they may not see at all.

To be clear at the outset, no one has made claims like “This AI can see like people do!” (Well, perhaps some have.) But the marketing and benchmarks used to promote these models rely on phrases like “vision capabilities,” “visual understanding,” and so on. They talk about how the model sees and analyzes images and video, so it can do anything from solving homework problems to watching the game for you.

So although these companies’ claims are artfully couched, it’s clear that they want to express that the model sees in some sense of the word. And it does — but kind of the same way it does math or writes stories: matching patterns in the input data to patterns in its training data. This leads to the models failing in the same way they do on certain other tasks that seem trivial, like picking a random number.

A study — informal in some ways, but systematic — of current AI models’ visual understanding was undertaken by researchers at Auburn University and the University of Alberta. They tested the biggest multimodal models on a series of very simple visual tasks, like asking whether two shapes overlap, or how many pentagons are in a picture, or which letter in a word is circled. (A summary micropage can be perused here.)

These are the kind of questions even a first-grader would get right, yet they gave the AI models great difficulty.

“Our seven tasks are extremely simple, where humans would perform at 100% accuracy. We expect AIs to do the same, but they are currently NOT,” wrote co-author Anh Nguyen in an email to TechCrunch. “Our message is, ‘Look, these best models are STILL failing.’”

Image Credits: Rahmanzadehgervi et al

The overlapping shapes test is one of the simplest conceivable visual reasoning tasks. Presented with two circles either slightly overlapping, just touching or with some distance between them, the models couldn’t consistently get it right. Sure, GPT-4o got it right more than 95% of the time when they were far apart, but at zero or small distances, it got it right only 18% of the time. Gemini 1.5 Pro did the best, but it still got only about 7 out of 10 right at close distances.

(The illustrations do not show the exact performance of the models but are meant to show the inconsistency of the models across the conditions. The statistics for each model are in the paper.)
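
To make the setup concrete, here is a minimal sketch of how circle-pair images like these might be generated and labeled. It is an illustration, not the researchers’ actual code; the circle_pair helper, its parameters and the choice of Pillow are my own assumptions. The key point is that the ground-truth label is pure geometry: two circles overlap exactly when their centers are closer together than the sum of their radii.

```python
# Hypothetical sketch, not the researchers' code: generate a two-circle test
# image and its ground-truth label for the overlap task.
from PIL import Image, ImageDraw


def circle_pair(gap: float, radius: int = 60, size: int = 400):
    """Draw two circle outlines whose edges are `gap` pixels apart.

    gap < 0 -> slightly overlapping, gap == 0 -> just touching, gap > 0 -> separated.
    Returns the image and whether the circles actually overlap.
    """
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    cy = size // 2
    center_dist = 2 * radius + gap          # distance between the two centers
    cx1 = int(size / 2 - center_dist / 2)
    cx2 = int(size / 2 + center_dist / 2)
    for cx in (cx1, cx2):
        draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius],
                     outline="black", width=4)
    # Two circles overlap exactly when their centers are closer than the
    # sum of their radii -- the label is pure geometry, no vision required.
    return img, center_dist < 2 * radius


img, overlaps = circle_pair(gap=-10)        # a slightly overlapping pair
img.save("pair.png")
print("ground truth: do the circles overlap?", overlaps)
```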

Or how about counting the number of interlocking circles in an image? I bet an above-average horse could do this.

Image Credits: Rahmanzadehgervi et al

They all get it right 100% of the time when there are five rings, but then adding one ring completely devastates the results. Gemini is lost, unable to get it right a single time. Sonnet-3.5 answers six … a third of the time, and GPT-4o a little under half the time. Adding another ring makes it even harder, but adding another makes it easier for some.
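
The counting images are just as mechanical to construct. Below is another hypothetical sketch, again not the paper’s generator: it draws n interlocked rings in a row, Olympic-logo style, with the ring_row function and its spacing values invented for illustration. Nothing about the image gets harder between five rings and six; only the answer changes.

```python
# Hypothetical sketch, not from the paper: draw n interlocked rings in a row,
# Olympic-logo style, so the ground truth for "how many rings?" is simply n.
from PIL import Image, ImageDraw


def ring_row(n: int, radius: int = 50, overlap: int = 20, canvas=(800, 200)):
    w, h = canvas
    img = Image.new("RGB", (w, h), "white")
    draw = ImageDraw.Draw(img)
    cy = h // 2
    step = 2 * radius - overlap           # neighboring centers closer than 2r, so rings interlock
    x0 = (w - (n - 1) * step) // 2        # center the row horizontally
    for i in range(n):
        cx = x0 + i * step
        draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius],
                     outline="black", width=5)
    return img


for n in (5, 6, 7):
    ring_row(n).save(f"rings_{n}.png")    # same generator; only the count changes
```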

The point of this experiment is simply to show that, whatever these models are doing, it doesn’t really correspond with what we think of as seeing. After all, even if they saw poorly, we wouldn’t expect six-, seven-, eight- and nine-ring images to vary so widely in success.

The other tasks showed similar patterns; it wasn’t that the models were seeing or reasoning well or poorly, but that some other factor seemed to determine whether they could count correctly in one case and not in another.

One potential answer, of course, is staring us right in the face: Why should they be so good at getting a five-circle image correct, but fail so miserably on the rest, or when it’s five pentagons? (To be fair, Sonnet-3.5 did pretty well on that.) Because they all have a five-circle image prominently featured in their training data: the Olympic Rings.

Image Credits: IOC

This logo is not just repeated over and over in the training data but likely described in detail in alt text, usage guidelines and articles about it. But where in their training data would you find six interlocking rings? Or seven? If their responses are any indication: nowhere! They have no idea what they’re “looking” at, and no actual visual understanding of what rings, overlaps or any of these concepts are.

I asked what the researchers think of this “blindness” they accuse the models of having. Like other terms we use, it has an anthropomorphic quality that is not quite accurate but hard to do without.

“I agree, ‘blind’ has many definitions even for humans and there is not yet a word for this type of blindness/insensitivity of AIs to the images we are showing,” wrote Nguyen. “Currently, there is no technology to visualize exactly what a model is seeing. And their behavior is a complex function of the input text prompt, input image and many billions of weights.”

He speculated that the models aren’t exactly blind, but that the visual information they extract from an image is approximate and abstract, something like “there’s a circle on the left side.” The models have no means of making visual judgments beyond that, so their responses are like those of someone who is informed about an image but can’t actually see it.

As a last example, Nguyen sent this, which supports the above hypothesis:

Image Credits: Anh Nguyen

When a blue circle and a green circle overlap (as the question prompts the model to take as fact), there is often a resulting cyan-shaded area, as in a Venn diagram. If someone asked you this question, you or any smart person might well give the same answer, because it’s totally plausible … if your eyes are closed! But no one with their eyes open would respond that way.

Does this all mean that these “visual” AI models are useless? Far from it. Not being able to do elementary reasoning about certain images speaks to their fundamental capabilities, but not their specific ones. Each of these models is likely going to be highly accurate on things like human actions and expressions, photos of everyday objects and situations, and the like. And indeed that is what they are intended to interpret.

If we relied on the AI companies’ marketing to tell us everything these models can do, we’d think they had 20/20 vision. Research like this is needed to show that, no matter how accurate the model may be in saying whether a person is sitting or walking or running, they do it without “seeing” in the sense (if you will) we tend to mean.
