I’m looking forward to giving a talk on Friday, March 25, 2022 for the EU COST (European Cooperation in Science and Technology) Action network for “Language in the Human-Machine Era”.
Speech interactions with voice assistants: a window to human language and cognition
Millions of people now regularly interface with technology using spoken language, such as with voice assistants (e.g., Alexa, Siri, Google Assistant). Yet, our scientific understanding of these interactions is still in its infancy. My research explores the cognitive factors shaping speech production, perception, and learning in interactions with voice assistants, using methods from psycholinguistics and acoustic-phonetics.
My co-authors, Kris Predeck, Melina Sarian, & Georgia Zellou, and I are thrilled our paper “Prosodic alignment toward emotionally expressive speech: Comparing human and Alexa model talkers” is now in press in Speech Communication. https://doi.org/10.1016/j.specom.2021.10.003
I’m thrilled that my paper with Georgia Zellou and Bruno Ferenc Segedin has been accepted to the Journal of Phonetics!
The paper examines the acoustic-phonetic adjustments people make when talking to Siri versus a naturally recorded human voice. We find prosodic differences (e.g., increased intensity and a smaller f0 range in Siri-directed speech) as well as some targeted adjustments (vowel hyperarticulation in response to an error made by Siri). Across two experiments varying in error rate, we see differences in how these register adaptations emerge.