I’m also currently a Visiting Researcher with the Google Responsible AI and Human-Centered Technology (HCT) User Experience (UX) team (via Magnit).
My research program aims to uncover the cognitive sources of variation in how people produce and perceive phonetic details in speech.
Broadly, I am interested in three main areas:
- Intelligibility: How do people adapt their speech across presumed and actual communication barriers, such as talking to automatic speech recognition (ASR) systems or while wearing face masks?
- Social connection / sociophonetics: What details do people perceive, mirror, and learn from human or text-to-speech (TTS) voices?
- Individual differences: What cognitive factors shape individual variation in how people produce and perceive speech?
Types of methods I use:
- experiments in speech perception / production
- psycholinguistic paradigms
- acoustic-phonetic analyses
- user interaction studies (human-computer interaction)
mdcohn at ucdavis dot edu