Anne Pycha, Georgia Zellou, and I have a new paper on face-masked speech, in the Frontiers special issue on Language Development Behind the Mask.
Recent posts
New JASA-EL paper on intelligibility
We have a new paper led by grad student Nick Aoki and co-authored by Dr. Georgia Zellou that was accepted to the Journal of the Acoustical Society of America (JASA) Express Letters on April 1st!
“The clear speech intelligibility benefit for text-to-speech voices: Effects of speaking style and visual guise”
Started as a Visiting Researcher @ Google (via ProUnlimited)
In April 2022, I started as a Visiting Researcher with the Responsible AI (RAI) Human-Centered Technology (HCT) UX Team at Google (via ProUnlimited).
Siri-DS Paper: Top 3 Downloaded Article (JPhonetics)
We are excited to see that our paper (Acoustic-phonetic properties of Siri- and human-directed speech) is one of the top 3 downloaded articles from the Journal of Phonetics in the past 90 days!

Invited talk: EU COST Action “Language in the Human-Machine Era”
I’m looking forward to giving a talk on Friday, March 25, 2022 for the EU COST (European Cooperation in Science and Technology) Action network “Language in the Human-Machine Era”.
Speech interactions with voice assistants: a window to human language and cognition

Millions of people now regularly interface with technology using spoken language, such as with voice assistants (e.g., Alexa, Siri, Google Assistant). Yet, our scientific understanding of these interactions is still in its infancy. My research explores the cognitive factors shaping speech production, perception, and learning in interactions with voice assistants, using methods from psycholinguistics and acoustic-phonetics.
Guest Lecture (Boston University)
I’m excited to give a guest lecture on Thursday, November 18th at Boston University for Dr. Kate Lindsey’s Intro. to Linguistics course.
Hey [Siri|Google|Alexa], How do we talk to voice assistants?
For a PDF of slides, please email me directly (mdcohn @ ucdavis . edu)
Keynote presentation (@SpoHuMa ’21)
I was honored to give a keynote today at the inaugural 2021 Human Perspectives on Spoken Human-Machine Interaction (SpoHuMa) FRIAS Junior Researcher Conference, hosted by the Freiburg Institute for Advanced Studies.
See below for slides!
New paper on emotional alignment in Speech Communication!
My co-authors, Kris Predeck, Melina Sarian, and Georgia Zellou, and I are thrilled that our paper “Prosodic alignment toward emotionally expressive speech: Comparing human and Alexa model talkers” is now in press in Speech Communication. https://doi.org/10.1016/j.specom.2021.10.003
Siri- vs. human-DS paper accepted to Journal of Phonetics!
I’m thrilled that my paper with Georgia Zellou and Bruno Ferenc Segedin has been accepted to the Journal of Phonetics!
The paper examines acoustic-phonetic adjustments when people talk to Siri vs. a naturally recorded human voice. We find prosodic differences (e.g., increased intensity and a smaller f0 range in Siri-DS) as well as some targeted adjustments (vowel hyperarticulation in response to an error made by Siri). Across two experiments varying in error rate, we see differences in how these register adaptations emerge.
[[Update: Now available online! https://doi.org/10.1016/j.wocn.2021.101123]]
Invited keynote at Human Perspectives on Spoken Human-Machine Interaction
I’m excited to be a keynote speaker at FRIAS Junior Researcher Conference – Human Perspectives on Spoken Human-Machine Interaction this November!
- If you’re interested in submitting a paper to the conference, note the submission deadline of September 3, 2021.