Talking to voice assistants: Cross-disciplinary and industry collaborations
Michelle Cohn & Georgia Zellou
Millions of people now talk to voice assistants (e.g., Siri, Alexa, Google Assistant) to complete daily tasks. These everyday interactions raise novel questions for our understanding of human communication and cognition. Do people differentiate how they talk to a human versus a device? Do people learn linguistic patterns from devices? Will talking to technology shape our language in the long term? In this talk, we present several of our cross-disciplinary and industry collaborations exploring speech interactions with voice technology (e.g., robots, socialbots, voice assistants).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Equitable automatic speech recognition (ASR): Collaboration with Google Research
Michelle Cohn, Zion Mengesha, Michal Lahav, Courtney Heldreth
While interactions with language technology can be helpful for a range of use cases (e.g., setting timers, reading labels aloud), work in responsible AI has shown that this technology does not work equally well for all individuals. In particular, speakers of underrepresented language varieties can face additional barriers. In this academic-industry collaboration, we investigate disparities in whose speech language technology understands and work toward more equitable automatic speech recognition (ASR).
