On April 10th from 1:30-2:30pm (PT), I’ll be giving a talk for the UC Davis Cluster on Language.
Voice assistant- vs. human-directed speech? Speech style differences as a window to social cognition.
Individuals of all ages increasingly use spoken language to interface with technology, such as voice assistants (e.g., Siri, Alexa, Google Assistant). In this talk, I will present some of our recent research examining speech style adaptations in controlled psycholinguistic experiments with voice assistant and human addressees. Our findings suggest that both a priori expectations of communicative competence and the actual error rate in the interaction shape acoustic-phonetic adaptations. I will discuss these findings in terms of the interplay of anthropomorphism and mental models in human-computer interaction, and raise the broader implications of voice technology for language use and language change.