I’m excited to share that our paper, “Cross-Cultural Comparison of Gradient Emotion Perception: Human vs. Alexa TTS Voices”, was accepted to Interspeech 2022! This project is a collaboration with Iona Gessinger (project lead; University College Dublin), Bernd Moebius (Saarland University), and Georgia Zellou (UC Davis).
We’re excited that our paper, “Effects of Emotional Expressiveness on Voice Chatbot Interactions”, was accepted to the 2022 Conversational User Interfaces (CUI) conference. This project was led by UC Davis grad students Qingxiaoyang Zhu and Austin Chau. Our other co-authors include Kai-Hui Liang, Georgia Zellou, Hao-Chuan Wang, and Zhou Yu.
We’re hosting a virtual event at the UC Davis Phonetics Lab (Dept. of Linguistics) on Thursday, April 28th.
Come learn about speech science with Siri, Alexa, and Google Assistant! Kids (ages 7-12) can participate in a real science experiment with a voice assistant (note that a parent must be present to consent). You will need a computer that can play sound and allow you to type/click (no other devices are needed). The experiment will take about 5 minutes. Afterward, you’ll see a short presentation about our research, including an overview of the lab.
Anne Pycha, Georgia Zellou, and I have a new paper on face-masked speech in the Frontiers special issue on Language Development Behind the Mask.
We have a new paper, led by grad student Nick Aoki and co-authored by Dr. Georgia Zellou, that was accepted to JASA Express Letters (Journal of the Acoustical Society of America) on April 1st!
“The clear speech intelligibility benefit for text-to-speech voices: Effects of speaking style and visual guise”
In April 2022, I started as a Visiting Researcher with the Responsible AI (RAI) Human-Centered Technology (HCT) UX Team at Google (Provided by ProUnlimited).
We are excited to see that our paper, “Acoustic-phonetic properties of Siri- and human-directed speech”, is one of the top three most-downloaded articles from the Journal of Phonetics over the past 90 days!
I’m looking forward to giving a talk on Friday, March 25, 2022 for the EU COST (European Cooperation in Science and Technology) Action network “Language in the Human-Machine Era”.
“Speech interactions with voice assistants: a window to human language and cognition”
Millions of people now regularly interface with technology using spoken language, such as with voice assistants (e.g., Alexa, Siri, Google Assistant). Yet, our scientific understanding of these interactions is still in its infancy. My research explores the cognitive factors shaping speech production, perception, and learning in interactions with voice assistants, using methods from psycholinguistics and acoustic-phonetics.
I’m excited to give a guest lecture on Thursday, November 18th at Boston University for Dr. Kate Lindsey’s Introduction to Linguistics course.
Hey [Siri|Google|Alexa], how do we talk to voice assistants?
For a PDF of slides, please email me directly (mdcohn @ ucdavis . edu)
I was honored to give a keynote today at the inaugural 2021 Human Perspectives on Spoken Human-Machine Interaction (SpoHuMa) FRIAS Junior Researcher Conference, hosted by the Freiburg Institute for Advanced Studies.
See below for slides!