We’re excited that our paper, “Effects of Emotional Expressiveness on Voice Chatbot Interactions”, was accepted to the 2022 Conversational User Interfaces (CUI) conference. This project was led by UC Davis grad students Qingxiaoyang Zhu and Austin Chau. Our other co-authors include Kai-Hui Liang, Georgia Zellou, Hao-Chuan Wang, and Zhou Yu.
We’re hosting a virtual event at the UC Davis Phonetics Lab (Dept. of Linguistics) on Thursday, April 28th.
Come learn about speech science with Siri, Alexa, and Google Assistant! Kids (ages 7-12) can participate in a real science experiment with a voice assistant (note that a parent must be present to consent). You will need a computer that can play sound and allow you to type/click (no other devices are needed). The experiment will take about 5 minutes. Afterward, you’ll see a short presentation about our research, including an overview of the lab.
Anne Pycha, Georgia Zellou, and I have a new paper on face-masked speech in the Frontiers special issue on Language Development Behind the Mask.
We have a new paper led by grad student Nick Aoki and co-authored by Dr. Georgia Zellou that was accepted to the Journal of the Acoustical Society of America (JASA) Express Letters on April 1st!
“The clear speech intelligibility benefit for text-to-speech voices: Effects of speaking style and visual guise”
In April 2022, I started as a Visiting Researcher with the Responsible AI (RAI) Human-Centered Technology (HCT) UX Team at Google (Provided by ProUnlimited).
I’m looking forward to giving a talk Friday, March 25, 2022 for the EU COST (European Cooperation in Science and Technology) Action network for “Language in the Human-Machine Era”.
“Speech interactions with voice assistants: a window to human language and cognition”
Millions of people now regularly interface with technology using spoken language, such as with voice assistants (e.g., Alexa, Siri, Google Assistant). Yet, our scientific understanding of these interactions is still in its infancy. My research explores the cognitive factors shaping speech production, perception, and learning in interactions with voice assistants, using methods from psycholinguistics and acoustic-phonetics.
I’m excited to give a guest lecture on Thursday, November 18th at Boston University for Dr. Kate Lindsey’s Intro. to Linguistics course.
Hey [Siri|Google|Alexa], How do we talk to voice assistants?
For a PDF of slides, please email me directly (mdcohn @ ucdavis . edu)
I was honored to give a keynote today at the inaugural 2021 Human Perspectives on Spoken Human-Machine Interaction (SpoHuMa) FRIAS Junior Researcher Conference, hosted by the Freiburg Institute for Advanced Studies.
See below for slides!
My co-authors, Kris Predeck, Melina Sarian, and Georgia Zellou, and I are thrilled our paper “Prosodic alignment toward emotionally expressive speech: Comparing human and Alexa model talkers” is now in press in Speech Communication. https://doi.org/10.1016/j.specom.2021.10.003
I’m thrilled that my paper with Georgia Zellou and Bruno Ferenc Segedin has been accepted to the Journal of Phonetics!
The paper examines acoustic-phonetic adjustments when people talk to Siri vs. a naturally recorded human voice. We find prosodic differences (e.g., increased intensity and a smaller f0 range in Siri-directed speech) as well as some targeted adjustments (vowel hyperarticulation in response to an error made by Siri). Across two experiments varying in error rate, we see differences in the way these register adaptations emerge.
[Update: Now available online! https://doi.org/10.1016/j.wocn.2021.101123]