I’m happy to announce that my paper with co-authors Georgia Zellou and Aleese Block, “Partial compensation for coarticulatory vowel nasalization across concatenative and neural text-to-speech”, has been accepted for publication in the Journal of the Acoustical Society of America (JASA) today!
Recent posts
New Frontiers socialbot paper!
We are thrilled that our paper, Speech rate adjustments in conversations with an Amazon Alexa socialbot, has been accepted to Frontiers in Communication, in the special issue ‘Towards Omnipresent and Smart Speech Assistants’.
2021 UC Davis Award for Excellence in Postdoctoral Research
Yesterday, I was thrilled to be awarded the UC Davis Award for Excellence in Postdoctoral Research!
2021 Picnic Day Booth: Speech Science
Come learn about an interdisciplinary research project exploring how adults and kids talk to Amazon’s Alexa, compared to how they talk to a human. You’ll see an example of the experiment, meet the team, and get a behind-the-scenes look at the research process!
Interested in participating? http://phonlab.ucdavis.edu/participate
CBS-13 Interview & Press release
Today, UC Davis published a press release and we did an interview with CBS-13 Sacramento on our project, Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech.
My co-authors, Anne Pycha (University of Wisconsin-Milwaukee) and Georgia Zellou (UC Davis), and I had a blast working together on a new project: how wearing a fabric face mask (as is common these days) affects speech intelligibility.
[Takeaway: masks don’t simply reduce intelligibility! The speaker plays an important role.]
Click here to read the paper in ‘Cognition’

Two posters at LSA 2021
Come see us present two of our projects tomorrow, Saturday, January 9th at the Linguistic Society of America (LSA) 2021 Annual Meeting!
Prosodic focus in human- versus voice-AI-directed speech (11:00am to 12:30pm)
Eleonora Beier, Michelle Cohn, Fernanda Ferreira, Georgia Zellou
In this study, we test whether speakers differ in how they prosodically mark focus in speech directed toward an adult human versus a voice-activated artificially intelligent (voice-AI) system (here, Amazon’s Alexa). Overall, we found that speakers prosodically mark focus similarly for both types of interlocutors; this suggests that speakers may view voice-AI (e.g., Alexa) as a rational listener who will benefit from prosodic focus marking. At the same time, there were several targeted differences by focus type, suggesting that speakers can adjust their use of prosodic focus marking based on the perceived properties of the listener.
- Selected as a newsworthy project and included in the LSA Press Release
The Interaction between Phonological & Semantic Usage Factors in Dialect Intelligibility in Noise (2:00pm to 3:30pm)
Riley Stray, Michelle Cohn, & Georgia Zellou
This study examines how an “American” or “British” meaning of a word (e.g., “chips”) spoken in different accents (GB, US) can affect speech-in-noise intelligibility. Overall, we found that the British speaker was more intelligible when producing British sentences, but also that intelligibility decreased as sentences became more stereotypically British. Results suggest that both phonological and semantic properties of phrases impact the speech intelligibility of words across dialects, and that a particular semantic usage in a less familiar dialect can decrease intelligibility as sentences become less predictable.
New paper in Cognition!
Our paper, Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech, was accepted to Cognition today!
New Frontiers paper on voice-AI!
We’re thrilled that our paper, Age- and gender-related differences in speech alignment toward humans and voice-AI, was accepted at Frontiers in Communication: Language Sciences today!
https://www.frontiersin.org/articles/10.3389/fcomm.2020.600361/abstract
Amazon Research Grant Awarded!
I’m thrilled that our project, “Speech entrainment during socialbot conversations”, has been funded with an Amazon Research Grant ($46,485). PI: Georgia Zellou, co-PI: Michelle Cohn.
LSA 2021
Two upcoming presentations at the 2021 Linguistic Society of America (LSA) Annual Meeting:
- Eleonora Beier, Michelle Cohn, Fernanda Ferreira, & Georgia Zellou: Prosodic focus in human- versus voice-AI-directed speech (Saturday, January 9, 2021 – 11:00am to 12:30pm)
- Riley Stray, Michelle Cohn, & Georgia Zellou: The Interaction between Phonological and Semantic Usage Factors in Dialect Intelligibility in Noise (Saturday, January 9, 2021 – 2:00pm to 3:30pm)
