Recent posts

2 talks @ LSA Annual Meeting (LEXING: Linguists in Industry, Non-profits, and Government Organized Session)

Talking to voice assistants: Cross-disciplinary and industry collaborations
Michelle Cohn & Georgia Zellou

Millions of people now talk to voice assistants (e.g., Siri, Alexa, Google Assistant) to complete daily tasks. These everyday interactions raise novel questions for our understanding of human communication and cognition. Do people differentiate how they talk to a human versus a device? Do people learn linguistic patterns from devices? Will talking to technology shape our language in the long term? In this talk, we present several of our cross-disciplinary and industry collaborations exploring speech interactions with voice technology (e.g., robots, socialbots, voice assistants).

Equitable automatic speech recognition (ASR): Collaboration with Google Research
Michelle Cohn, Zion Mengesha, Michal Lahav, Courtney Heldreth

While interactions with language technology can be helpful for a range of use cases (e.g., setting timers, reading labels aloud), work in responsible AI has shown that it does not work equally well for all individuals. In particular, speakers of underrepresented language varieties can face additional barriers. In this academic-industry collaboration, we investigate disparities in whom language technology understands and work toward more equitable automatic speech recognition (ASR).

UC Davis Alzheimer’s Disease Research Center REC Scholar

I’m thrilled to be one of two postdocs selected for the UC Davis Alzheimer’s Disease Research Center (ADRC) REC Scholar Program. I’ll be working with neuropsychologist Dr. Alyssa Weakley on acoustic analyses of speech by older adults with cognitive decline in her iCare system. Specifically, we will examine speech directed to the Apple Watch in voice commands, extending my work on technology-directed speech adaptations to a clinical context.

Paper accepted to the Journal of the Acoustical Society of America

We’re thrilled that our paper was accepted to the Journal of the Acoustical Society of America (JASA).

Comparing perception of L1 and L2 English by human listeners and machines: Effect of interlocutor adaptations, by grad students Jules Vonessen and Nick Aoki, Georgia Zellou, and me.

The paper is part of a special issue: Acoustic Cue–Based Perception and Production of Speech by Humans and Machines.