How conversation works – and why people with hearing loss rely more on their powers of prediction


“Ultimately, the bond of all companionship, whether in marriage or friendship, is conversation,” wrote Oscar Wilde.

We often think of conversation as effortless. But beneath its apparent ease lies an extraordinary feat of coordination – a finely tuned dance of listening and speaking.

Summoning a single word in your mind and then saying it takes at least 600 milliseconds. Yet the most common gap between one person finishing a speaking turn and the other beginning is around 200 milliseconds, regardless of the language they are speaking.

This means the gap is usually far too short for us to have started planning our response only after the other person finished. Somehow, our brains are always ahead of the conversation.
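The arithmetic behind this claim is simple. Taking the figures above at face value, planning must begin well before the other person stops speaking:

```python
planning_ms = 600  # minimum time to summon and say a single word
gap_ms = 200       # typical gap between one speaking turn and the next

# If planning only began once the other person finished, the gap
# would be at least planning_ms. Since it is much shorter, planning
# must start before the turn ends, by at least this head start:
head_start_ms = planning_ms - gap_ms
print(head_start_ms)  # 400
```

In other words, listeners are already preparing their reply roughly 400 milliseconds before the current speaker falls silent.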

How do we manage this? As we listen, our brains operate like a sophisticated version of predictive text. Instead of waiting for a sentence to finish, we continuously predict how it is likely to end.

In a study with colleagues in the UK and Germany, we found that people with some hearing loss often rely more heavily on these predictive cues to keep conversations flowing. But over time, the effort this requires can have other negative effects.

While smartphones rely on simple word-to-word probabilities, human prediction is far richer. We combine these probabilistic cues with knowledge about the speaker (who they are, what they like, how they usually talk) as well as the surrounding environment and broader topic of conversation.
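The difference is easy to see in a toy model. Below is a minimal sketch of the kind of word-to-word (bigram) prediction a phone keyboard uses; the tiny corpus and its probabilities are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for a keyboard's training data.
corpus = (
    "i would like to wear the nice tie . "
    "i would like to wear the nice dress . "
    "i would like to wear the nice tie ."
).split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, given only the previous word."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("nice"))  # "tie", the more frequent continuation
```

A model like this can only look one word back. A human listener combines such frequency information with who the speaker is, the setting and the topic of conversation, which is exactly what the one-word lookback cannot capture.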

If someone says, “I’d like to wear the nice …”, your brain immediately narrows the possibilities to things that can be worn — perhaps a tie or a dress. And prediction doesn’t stop there. If the speaker sounds male, listeners may be more likely to predict “tie”; if the speaker sounds female, “dress”.

Prediction also helps us determine when we can speak. As a sentence unfolds, we predict its structure, rhythm, melody and likely final words. These subconscious timing predictions allow us to enter the conversation with remarkable precision, enhancing social connections by avoiding talking over someone or leaving awkward pauses.

A neuroscientist explains human communication. Video: TED.

How hearing loss affects this process

The delicate coordination of conversation relies on our brain having enough cognitive resources to support prediction, response planning and timing. But when hearing becomes more difficult, the brain has to work harder to identify sounds and words, stretching these resources.

For around half of people over 55, hearing loss makes everyday conversation harder work for the brain. Fewer resources are available for higher-level conversational processes, making the roughly 200-millisecond rhythm of turn-taking harder to maintain. This can lead to longer, more disruptive gaps in the conversation.

Until recently, it has been unclear exactly why these longer gaps arise. To what degree do people with hearing loss find it harder to predict when someone will finish speaking? And how much does the extra effort to hear words restrict their ability to plan what to say next?

Our study disentangled these possibilities by testing people aged 50 to 80 years old, some of whom had mild-to-moderate hearing loss. We tested them under listening conditions that ranged from comfortable, clear speech to situations where speech was only just intelligible.

This allowed us to separate the effects of hearing loss from those of more demanding listening conditions. This distinction matters because while both increase listening effort, they may disrupt different aspects of conversation.

Our results revealed a clear pattern. When listening conditions were comfortable, people with hearing loss relied more heavily on predictions of what the other person would say next than those with typical hearing. Prediction acted as a compensatory strategy, helping them maintain conversational coordination at a level very similar to that of people without hearing loss.

However, when listening became more effortful because speech was presented at the quietest level participants could understand, this predictive advantage disappeared. The additional effort needed for those with hearing loss appeared to leave them too little cognitive capacity to support their previously compensatory powers of prediction.

This helps to explain why people with hearing loss can appear to be perfectly fluent conversational partners in quiet, one-to-one settings, yet struggle in noisy environments where listening becomes much more effortful. Of course, people with full hearing also start to experience this effect in noisy bars or crowded restaurants.

Illustration of two people having an intense conversation. Benjavisa Ruangvaree Art/Shutterstock

Losing the skill of conversation

Conversation is a high-speed cognitive skill and, like any other skill, it benefits from regular use. When conversation becomes exhausting owing to hearing loss, people may withdraw from social interaction to avoid the effort of staying in sync. Greater social isolation is associated with poorer mental, physical and cognitive health.

But a reduction in how often someone converses may also weaken the cognitive mechanisms that support conversation, just as a muscle weakens from lack of use. This could add to their reluctance to talk to people. We hope to explore this "use it or lose it" effect in our future research.

Already, we have been surprised by just how much subconscious coordination goes into everyday conversation. Recognising the particular needs – and skills – of people with hearing loss is an important part of maintaining “this bond of all companionship”.

The Conversation

Ruth Corps has received funding from the ESRC and the Leverhulme Trust.
