New paper in Cognition!

Our paper, Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech, was accepted to Cognition today!

My co-authors, Anne Pycha (University of Wisconsin-Milwaukee) and Georgia Zellou (UC Davis), and I had a blast working together on a new project: how wearing a fabric face mask (as is common these days) affects speech intelligibility.

[Takeaway: masks don’t simply reduce intelligibility; the speaker plays an important role.]

Click here to read the paper

LSA 2021

Two upcoming presentations at the 2021 Linguistic Society of America (LSA) Annual Meeting:

  • Eleonora Beier, Michelle Cohn, Fernanda Ferreira, & Georgia Zellou: Prosodic focus in human- versus voice-AI-directed speech (Saturday, January 9, 2021 – 11:00am to 12:30pm)
  • Riley Stray, Michelle Cohn, & Georgia Zellou: The Interaction between Phonological and Semantic Usage Factors in Dialect Intelligibility in Noise (Saturday, January 9, 2021 – 2:00pm to 3:30pm)

New UCD HCI Research Group

In Fall 2020, I launched the UC Davis HCI Research Group: a collective of professors, postdocs, graduate students, and undergraduate students across campus investigating the dynamics of human-computer interaction. 

We have a quarterly talk series (on Zoom):

Fall Quarter 2020

Jorge Peña
Associate Professor, Dept. of Communication (UCD)

Dr. Peña specializes in computer-mediated communication, new media, communication in video games and virtual environments, and content analysis of online communication. 

Friday, November 13th 10am-11am (on Zoom)

To join the mailing list and receive updates and the Zoom links, please email Michelle Cohn (mdcohn@ucdavis.edu).

Interspeech 2020

We are thrilled to have several papers accepted to the 2020 Interspeech conference:

  • Cohn, M., & Zellou, G. “Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes”
  • Cohn, M., Sarian, M., Predeck, K., & Zellou, G. “Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits”
  • Zellou, G., & Cohn, M. “Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors”

Including one for the new collaboration between UC Davis and Saarland University:

  • Cohn, M., Raveh, E., Predeck, K., Gessinger, I., Möbius, B., & Zellou, G. “Differences in Gradient Emotion Perception: Human vs. Alexa Voices”

CogSci 2020 Papers

We are thrilled that three of our papers have been accepted to the 2020 Cognitive Science Society Meeting!

  • Cohn, M., Jonell, P., Kim, T., Beskow, J., & Zellou, G. “Embodiment and gender interact in alignment to TTS voices”

While at the KTH Royal Institute of Technology (Stockholm, Sweden) in September 2019, I met up with Dr. Jonas Beskow (pictured in the center), co-founder of Furhat Robotics, and Ph.D. student Patrik Jonell (pictured on the right). Together with Georgia Zellou and Taylor Kim, we’re conducting a study testing the role of embodiment and gender in humans’ voice-AI interactions across three platforms: Amazon Echo, Nao, and Furhat.
  • Zellou, G., & Cohn, M. “Top-down effects of apparent humanness on vocal alignment toward human and device interlocutors”
  • Zellou, G., Cohn, M., & Block, A. “Top-down effect of speaker age guise on perceptual compensation for coarticulatory /u/-fronting”