NOV 11, 2021 6:46 AM PST

Chatty Chatbots Learn to Cope with Figurative Language

WRITTEN BY: Mia Wood

Chatbots. Increasingly ubiquitous and (for this writer, anyway) unfailingly annoying, these computer systems can make or break a user’s experience. Sadly, perhaps, FAQ pages aren’t enough anymore. When we land on a website, we apparently need “interaction.” (Whatever that means, since a chatbot is just an algorithm designed to stand in for an actual human.) Enter the chatbot and other dialog systems.

Pre-populated with answers to standard questions, chatbots have a closed and narrow list of possible responses, which means the human user has to adjust. That adjustment, in turn, restricts communication in ways that often feel unnatural. Indeed, the experience often feels as if the human must accommodate the chatbot, rather than the other way ‘round. You know you’re in trouble when you long for the days of interminable hold times followed by an unhelpful, but still human, customer service representative.

The general problem with the chatbot seems to be that it can’t think. Consider the Turing test. In his 1950 paper, “Computing Machinery and Intelligence,” mathematician Alan Turing proposed the Imitation Game as a test of machine intelligence: if, during a text-based chat, a human cannot reliably tell that they’re talking to a machine rather than another human, then the machine can be said to have exhibited thought or intelligence.

But let’s suppose the human types this joke: A vacuum cleaner salesman turns up on a person’s doorstep. “I’ve got a vacuum that sounds too good to be true,” he says, “but it’s real. This vacuum is so effective, it’ll cut your housework by half!” The person thinks for a moment and says, “One vacuum cuts my housework in half, you say?” The salesman nods vigorously. “Why, yes. Yes, it does!” After a moment’s more thought, the person says, “Okay, I’ll take two!”

An ordinary chatbot won’t get why it’s supposed to be funny — not unless it’s programmed to detect the ambiguity at the heart of the joke. Such detection would be no small feat for a program limited to queries about things like product returns. Chatbots require an enormous amount of conversational data as training material, so expanding the program’s repertoire might be more trouble than it’s worth, particularly when things like jokes are obviously outside the scope of, for example, an ordinary customer service exchange. (Siri and Alexa have apparently learned to engage with the ha-has, but we’re still left to wonder if they have a sense of humor — that’s a topic for another article!)

Ordinary language is replete with ambiguities that elude dialog systems such as chatbots. In a recent study, “Investigating Robustness of Dialog Models to Popular Figurative Language Constructs,” computer scientists from Carnegie Mellon University and UC San Diego found that computer dialog systems, like virtual personal assistants and chatbots, aren’t proficient with figurative language. Such systems stumble over idioms like “He got cold feet when it came time to parachute from the plane.” Confronted with conversations that emphasized this sort of language, dialog system performance suffered.

Lead authors Harsh Jhamtani and Taylor Berg-Kirkpatrick set out to document when and how much dialog systems stumble — and to test a candidate method for helping the systems improve. First, they found that dialog system performance dropped considerably when conversation data sets emphasized figurative language. Then, after the researchers added a step that consults dictionaries to translate figurative language into literal expressions, dialog system performance improved by up to 15%. While there is more work to be done to preserve, for example, the full meaning of a figurative expression, the hope is that such improvements will make dialog systems more engaging for human users.
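The dictionary-lookup idea described above can be sketched as a simple preprocessing step. The tiny idiom dictionary and the `literalize` function below are illustrative assumptions for this article, not the authors’ actual data or code:

```python
import re

# Tiny illustrative idiom-to-literal dictionary (hypothetical entries,
# not the dictionary resources used in the actual study).
IDIOM_DICT = {
    "got cold feet": "became too frightened to proceed",
    "cut your housework by half": "reduce your housework by half",
    "break the ice": "ease the initial tension",
}

def literalize(utterance: str) -> str:
    """Replace known idioms with literal paraphrases before the
    utterance is handed to a dialog model."""
    result = utterance
    for idiom, literal in IDIOM_DICT.items():
        # Case-insensitive whole-phrase substitution.
        result = re.sub(re.escape(idiom), literal, result, flags=re.IGNORECASE)
    return result

print(literalize("He got cold feet when it came time to parachute."))
# The dialog model now sees a literal paraphrase instead of the idiom.
```

A real system would need far richer resources — idioms vary by inflection and context, and a naive string swap can distort meaning — which is exactly the “more work to be done” the researchers note.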

Sources: Neuroscience News; arXiv (Cornell University)

About the Author
PhD, Philosophy
I am a philosophy professor and writer with a broad range of research interests.