Hello!


I’m a postdoctoral fellow at UC Davis, affiliated with the Linguistics, Psychology, and Computer Science Departments. [bio]

I’m a Principal Investigator on a National Science Foundation (NSF) Training Fellowship, with co-PIs Dr. Georgia Zellou (Linguistics), Dr. Zhou Yu (Computer Science), and Dr. Katharine Graf Estes (Psychology), to explore human interaction with voice AI.

I received my Ph.D. in Linguistics from UC Davis in 2018, working with Dr. Georgia Zellou, Dr. Santiago Barreda, and Dr. Antoine Shahin. My dissertation tested the ‘musician’s advantage’ for speech-in-noise perception, examining whether musicians and nonmusicians differ in the acoustic cues they use to separate competing talkers.

My research program focuses on cognitive factors in speech communication in both human-human and human-computer interaction.

In 2020, I launched the UC Davis Human-Computer Interaction Research Group: a collective of faculty, postdocs, graduate students, and undergraduates across campus interested in the dynamics of human-computer interaction. Our goal is to form a broader community of scientists, where we can share our work and forge connections across disciplines.

Contact: mdcohn at ucdavis dot edu


Research Interests

Talking to Tech

How do people talk to, perceive, and learn from voice-AI assistants (e.g., Siri, Alexa) compared to real human talkers? …read more!

Intelligibility

How do people tailor their speech to improve intelligibility across novel barriers (e.g., face masks)? …read more!

Music/Speech

Is individual variation in speech perception shaped by a person’s musical experience? …read more!


Publications

  1. Cohn, M., Ferenc Segedin, B., & Zellou, G. (Accepted). The acoustic-phonetic properties of Siri- and human-directed speech (DS): Differences by error type and rate. Journal of Phonetics.
  2. Cohn, M., & Zellou, G. (2021). Prosodic differences in human- and Alexa-directed speech, but similar error correction strategies. Frontiers in Communication. [Article]
  3. Cohn, M., Liang, K., Sarian, M., Zellou, G., & Yu, Z. (2021). Speech rate adjustments in conversations with an Amazon Alexa socialbot. Frontiers in Communication. [Article]
  4. Zellou, G., Cohn, M., & Kline, T. (2021). The influence of conversational role on phonetic alignment toward voice-AI and human interlocutors. Language, Cognition and Neuroscience. [Article]
  5. Zellou, G., Cohn, M., & Block, A. (2021). Partial compensation for coarticulatory vowel nasalization across concatenative and neural text-to-speech. Journal of the Acoustical Society of America. [Article]
  6. Cohn, M., Pycha, A., & Zellou, G. (2021). Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech. Cognition. [Article]
  7. Zellou, G., Cohn, M., & Ferenc Segedin, B. (2021). Age- and gender-related differences in speech alignment toward humans and voice-AI. Frontiers in Communication. [Article]
  8. Cohn, M., & Zellou, G. (2020). Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes. Interspeech. [pdf] [Virtual talk]
  9. Cohn, M., Raveh, E., Predeck, K., Gessinger, I., Möbius, B., & Zellou, G. (2020). Differences in gradient emotion perception: Human vs. Alexa voices. Interspeech. [pdf] [Virtual talk]
  10. Zellou, G., & Cohn, M. (2020). Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors. Interspeech. [pdf]
  11. Cohn, M., Sarian, M., Predeck, K., & Zellou, G. (2020). Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits. Interspeech. [pdf] [Virtual talk]
  12. Cohn, M., Jonell, P., Kim, T., Beskow, J., & Zellou, G. (2020). Embodiment and gender interact in alignment to TTS voices. Cognitive Science Society. [pdf] [Virtual talk]
  13. Zellou, G., & Cohn, M. (2020). Top-down effects of apparent humanness on vocal alignment toward human and device interlocutors. Cognitive Science Society. [pdf]
  14. Zellou, G., Cohn, M., & Block, A. (2020). Top-down effect of speaker age guise on perceptual compensation for coarticulatory /u/-fronting. Cognitive Science Society. [pdf]
  15. Yu, D., Cohn, M., Yang, Y.M., Chen, C., … Yu, Z. (2019). Gunrock: A social bot for complex and engaging long conversations. EMNLP-IJCNLP. [pdf] [System demonstration]
  16. Cohn, M., Chen, C., & Yu, Z. (2019). A large-scale user study of an Alexa Prize chatbot: Effect of TTS dynamism on perceived quality of social dialog. SIGDial. [pdf]
  17. Cohn, M., & Zellou, G. (2019). Expressiveness influences human vocal alignment toward voice-AI. Interspeech. [pdf]
  18. Snyder, C., Cohn, M., & Zellou, G. (2019). Individual variation in cognitive processing style predicts differences in phonetic imitation of device and human voices. Interspeech. [pdf]
  19. Ferenc Segedin, B., Cohn, M., & Zellou, G. (2019). Perceptual adaptation to device and human voices: Learning and generalization of a phonetic shift across real and voice-AI talkers. Interspeech. [pdf]
  20. Cohn, M., Zellou, G., & Barreda, S. (2019). The role of musical experience in the perceptual weighting of acoustic cues for the obstruent coda voicing contrast in American English. Interspeech. [pdf]
  21. Cohn, M., Ferenc Segedin, B., & Zellou, G. (2019). Imitating Siri: Socially-mediated vocal alignment to device and human voices. ICPhS. [pdf]
  22. Brotherton, C., Cohn, M., Zellou, G., & Barreda, S. (2019). Sub-regional variation in positioning and degree of nasalization of /æ/ allophones in California. ICPhS. [pdf]
  23. Cohn, M. (2018). Investigating a possible “musician advantage” for speech-in-speech perception: The role of f0 separation. Linguistic Society of America. [pdf]

Public Outreach

2021 Picnic Day Booth: Speech Science

Come learn about an interdisciplinary research project exploring how adults and kids talk to Amazon’s Alexa, compared to how they talk to a human. You’ll see an example of the experiment, meet the team, and get a behind-the-scenes look at the research process! Interested in participating? http://phonlab.ucdavis.edu/participate