How software that embeds emotional cues in audio helps treat patients

The power of music to evoke strong emotions is known to anyone who hears it – but the underlying mechanism remains unknown. Using sound manipulation tools that could elicit emotional responses, the EU-funded CREAM project has yielded insights into the brain. The work is also already inspiring novel clinical applications, from diagnosing speech problems to assessing brain surgery patients.



Whether it is the triumphant crescendo of a horn section or the melancholy tones of a lone cello, music can always set the mood. 

But while science has discovered that music activates the same brain pathways as food or sex, research has so far mostly focused on simply observing its impact, not experimenting with it.

“Experiments typically played happy music, then recorded how this triggered happy emotions,” explains neuroscientist Jean-Julien Aucouturier, coordinator of the CREAM project. “Questions remained about the underlying mechanisms responsible, such as: is the music evoking memories, or changing physiology?”

Just as therapeutic drug researchers investigate how active molecules target physiological pathways, Aucouturier used audio processing techniques to change the emotional register of sounds, then tested their impact on listeners’ emotions.

He says: “While our findings and techniques could help answer many open research questions in areas such as speech and music cognition or linguistics, I’m most excited about the potential clinical applications we are now exploring.”

A smile for a smile

A multidisciplinary project, CREAM borrowed speech and music technology methods to work with over 600 participants across four countries: France, Japan, Sweden and the United Kingdom.

“I was stunned by the sound transformation toys available to my computer-music colleagues,” remarks Aucouturier, a neuroscientist with the French National Centre for Scientific Research. “Simulating the sound of moving instruments, or blending animal roars with stringed instruments, were neuroscience experiments-in-waiting!” 

While a neuroscience consensus already existed about how to define and categorise emotions, CREAM contributed significant insights about how emotional states are denoted by subtle sound ‘signatures’.

A key milestone was the patenting of the project’s SMILE software. This tool acoustically simulates the change in timbre that occurs in a speaker’s voice when they smile. 

“Simulating the sound of smiling let us create an algorithm that can be applied in real time to any voice, making it sound happier,” explains Aucouturier. Similar tweaks can make the voice sound more trustworthy or authoritative. 
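The article does not describe the patented algorithm itself, but the general idea of a lightweight, real-time-capable timbre transformation can be sketched. The Python snippet below is a loose, hypothetical illustration only, not the SMILE software: it simply ‘brightens’ a recorded voice by blending in a high-pass-filtered copy of the signal, and the file names, cutoff frequency and gain are arbitrary choices made up for the example.

```python
# Toy illustration only: crudely "brighten" a voice recording by blending in
# a high-pass-filtered copy of the signal. This is NOT the patented SMILE
# algorithm; the cutoff frequency, gain and file names are arbitrary.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

def brighten(voice: np.ndarray, sr: int,
             cutoff_hz: float = 2500.0, boost: float = 0.3) -> np.ndarray:
    """Emphasise the upper spectrum of a mono signal, a very rough stand-in
    for the brighter timbre of a smiling voice."""
    b, a = butter(2, cutoff_hz / (sr / 2), btype="high")
    treble = lfilter(b, a, voice)
    out = voice + boost * treble
    return out / max(1e-9, float(np.max(np.abs(out))))  # avoid clipping

voice, sr = sf.read("speech.wav")          # hypothetical input file
if voice.ndim > 1:                         # fold stereo down to mono
    voice = voice.mean(axis=1)
sf.write("speech_bright.wav", brighten(voice, sr), sr)
```

A genuine smiling-voice effect works on much finer spectral cues than this crude treble boost; the sketch is only meant to show the kind of signal-level manipulation involved.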

The impact of these sound simulations on listeners was measured using electrodes fitted to their scalps, chests and faces to record brain, cardiac and facial muscle changes, respectively.

“When listeners heard voices manipulated by our SMILE algorithm, not only did they report the speaker as friendlier than before, they also started smiling themselves,” remarks Aucouturier. 

A spin-out company, AltaVoce, is currently commercialising the SMILE software as a communications enhancement tool, especially for phone-based customer relations.

Techniques transferable to music

While it is well known that musical instruments can be as emotionally expressive as the voice, most past experiments changed only basic parameters, such as tempo and volume.

When the project applied its smiling, vocal tremor and vocal ‘roughness’ sound signatures to music samples, test participants showed emotional responses similar to those triggered when the same manipulations were applied to voice samples. 

This held true even when playing purely instrumental music, leading to some interesting results. “We found that liking death metal music required more brain power than disliking it, because fans have to override their brain’s innate association of the guttural sounds of guitars with fear,” says Aucouturier. 

These sound transformation techniques raise obvious ethical concerns. Perhaps most topical is the potential for criminals and bad actors to produce more convincing and compelling audio fakes. 

While Aucouturier is actively engaged in discussions about containing these risks, he also points to clinical trials currently under way that highlight the opportunities on offer.

The project’s freely available open-source tools are currently being used in several French hospitals for a range of applications. 

DAVID is a free, real-time voice transformation tool able to change the emotion of recorded speech, while ANGUS can simulate cues of arousal and roughness on arbitrary voice signals. CLEESE is a Python toolbox for performing random or deterministic pitch, timescale, filtering and gain transformations on a given sound.
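As one loose illustration of the kind of manipulation CLEESE performs, the hypothetical sketch below cuts a voice recording into short segments and pitch-shifts each one by a random amount, using generic Python audio libraries (librosa and soundfile) rather than CLEESE’s own interface; the file names, segment length and pitch spread are assumptions made up for the example.

```python
# Hypothetical sketch of a CLEESE-style random pitch manipulation, written
# with generic libraries (librosa, soundfile, numpy) rather than CLEESE's
# own interface; file names, segment length and pitch spread are assumptions.
import numpy as np
import librosa
import soundfile as sf

def random_pitch_segments(y, sr, segment_s=0.1, spread_semitones=1.0, seed=0):
    """Cut the signal into short segments and pitch-shift each one by a
    random amount drawn from a normal distribution, then reassemble."""
    rng = np.random.default_rng(seed)
    hop = int(segment_s * sr)
    shifted = []
    for start in range(0, len(y), hop):
        chunk = y[start:start + hop]
        n_steps = rng.normal(0.0, spread_semitones)   # shift in semitones
        shifted.append(librosa.effects.pitch_shift(chunk, sr=sr, n_steps=n_steps))
    return np.concatenate(shifted)

y, sr = librosa.load("voice.wav", sr=None, mono=True)   # hypothetical input
sf.write("voice_random_pitch.wav", random_pitch_segments(y, sr), sr)
```

The real toolbox applies its transformations far more smoothly, interpolating between randomly drawn breakpoints rather than cutting the signal into hard-edged chunks, so this sketch only conveys the principle of randomised, parameterised sound manipulation.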

Applications include: diagnosing speech problems (aphasia) in stroke survivors, testing for consciousness in coma patients, exploring social cognition (especially the capacity to imitate ‘heard’ emotions) in congenitally blind patients, and identifying vocal markers of anxiety before patients are anaesthetised for surgery. 

Work is also under way in emergency medicine triage, for autism spectrum disorder and post-traumatic stress disorder, and to assess the impact of brain glioma surgery. 

“We are developing techniques that can control and measure emotional properties in the healthy brain using sounds, to diagnose a wide range of neurological and psychiatric disorders,” adds Aucouturier.

Now based at the FEMTO-ST Institute, a joint research institute in France, Aucouturier is branching out into new fields. Working in the EU-funded Lullabyte project, he is collaborating with 10 European laboratories to investigate how the brain processes sound during sleep. 

“We could use what we learn to develop creative applications that improve sleep quality, memory consolidation and remembrance of dreams – or even to modify the contents of dreams,” says Aucouturier.

A playlist to program your dreams? Music to our ears. 


Project details

Project acronym
CREAM
Project number
335536
Project coordinator
France
Project participants
France
Total cost
€ 1 499 992
EU Contribution
€ 1 499 992
Project duration
-

See also

More information about project CREAM
