Two posters at LSA 2021

Come see us present two of our projects tomorrow, Saturday, January 9th at the Linguistic Society of America (LSA) 2021 Annual Meeting!

Prosodic focus in human- versus voice-AI-directed speech (11:00am–12:30pm)
Eleonora Beier, Michelle Cohn, Fernanda Ferreira, Georgia Zellou

In this study, we test whether speakers differ in how they prosodically mark focus in speech directed toward an adult human versus a voice-activated artificially intelligent (voice-AI) system (here, Amazon’s Alexa). Overall, we found that speakers prosodically mark focus similarly for both types of interlocutors; this suggests that speakers may view voice-AI (e.g., Alexa) as a rational listener who will benefit from prosodic focus marking. At the same time, there were several targeted differences by focus type, which suggests that speakers can adjust their use of prosodic focus marking based on the perceived properties of the listener.

The Interaction between Phonological & Semantic Usage Factors in Dialect Intelligibility in Noise (2:00pm–3:30pm)

Riley Stray, Michelle Cohn, & Georgia Zellou

This study examines how an “American” or “British” meaning of a word (e.g., “chips”) spoken in different accents (GB, US) can affect speech-in-noise intelligibility. Overall, we found the British speaker was more intelligible producing British sentences, but also that intelligibility decreased as sentences became more stereotypically British. Results suggest that both phonological and semantic properties of phrases impact speech intelligibility of words across dialects, and that a particular semantic usage in a less familiar dialect can decrease intelligibility as sentences become less predictable.

New paper in Cognition!

Our paper, Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech, was accepted to Cognition today!

My co-authors, Anne Pycha (University of Wisconsin-Milwaukee) and Georgia Zellou (UC Davis), and I had a blast working together on a new project: how wearing a fabric face mask (as is common these days) affects speech intelligibility.

[Take away: masks don’t simply reduce intelligibility! The speaker plays an important role]

Click here to read the paper

LSA 2021

Two upcoming presentations at the 2021 Linguistic Society of America (LSA) Annual Meeting:

  • Eleonora Beier, Michelle Cohn, Fernanda Ferreira, & Georgia Zellou: Prosodic focus in human- versus voice-AI-directed speech (Saturday, January 9, 2021 – 11:00am to 12:30pm)
  • Riley Stray, Michelle Cohn, & Georgia Zellou: The Interaction between Phonological and Semantic Usage Factors in Dialect Intelligibility in Noise (Saturday, January 9, 2021 – 2:00pm to 3:30pm)

New UCD HCI Research Group

In Fall 2020, I launched the UC Davis HCI Research Group: a collective of professors, postdocs, graduate students, and undergraduate students across campus investigating the dynamics of human-computer interaction. 

We have a quarterly talk series (on Zoom):

Fall Quarter 2020

Jorge Peña
Associate Professor, Dept. of Communication (UCD)

Dr. Peña specializes in computer-mediated communication, new media, communication in video games and virtual environments, and content analysis of online communication. 

Friday, November 13th, 10:00am–11:00am (on Zoom)

To join the mailing list and receive updates and the Zoom links, please email Michelle Cohn.

Interspeech 2020

We are thrilled to have several papers accepted to the 2020 Interspeech conference:

  • Cohn, M., & Zellou, G. “Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes”
  • Cohn, M., Sarian, M., Predeck, K., & Zellou, G. “Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits”
  • Zellou, G., & Cohn, M. “Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors”

Including one for the new collaboration between UC Davis and Saarland University:

  • Cohn, M., Raveh, E., Predeck, K., Gessinger, I., Möbius, B., & Zellou, G. “Differences in Gradient Emotion Perception: Human vs. Alexa Voices”

CogSci 2020 Papers

We are thrilled that three of our papers have been accepted to the 2020 Cognitive Science Society Meeting!

  • Cohn, M., Jonell, P., Kim, T., Beskow, J., & Zellou, G. Embodiment and gender interact in alignment to TTS voices.

While at the KTH Royal Institute of Technology (Stockholm, Sweden) in September 2019, I met up with Dr. Jonas Beskow (pictured in the center), co-founder of Furhat Robotics, and Ph.D. student Patrik Jonell (pictured on the right). Together with Georgia Zellou and Taylor Kim, we’re conducting a study to test the role of embodiment and gender in humans’ voice-AI interactions across three platforms: Amazon Echo, Nao, and Furhat.
  • Zellou, G., & Cohn, M. Top-down effects of apparent humanness on vocal alignment toward human and device interlocutors.
  • Zellou, G., Cohn, M., & Block, A. Top-down effect of speaker age guise on perceptual compensation for coarticulatory /u/-fronting.