EMNLP 2019 Paper

Congrats to the Gunrock team, led by Prof. Zhou Yu, for our demo paper acceptance at the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Hong Kong!

Gunrock: A Social Bot for Complex and Engaging Long Conversations
Dian Yu, Michelle Cohn, Yi Mang Yang, Chun Yen Chen, Weiming Wen, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, Giritheja Sreenivasulu, Sam Davidson, Ashwin Bhandare and Zhou Yu [pdf]

You can see our system demonstration in this 2-minute video:

The Gunrock team at the 2018 reception after winning the Alexa Prize!

CAMP3 Talks

Great job to Tyler Kline and Aleese Block, who presented two of our projects at the California Meeting on Psycholinguistics (CAMP3) this weekend at UC Santa Cruz!

  • Speech Alignment of Females toward Voice-AI and Human Voices: Conversational Role Influences Phonetic Imitation in a Map Task (Tyler Kline, Bruno Ferenc Segedin, Michelle Cohn & Georgia Zellou) 
  • California listeners’ patterns of partial compensation for coarticulatory /u/-fronting is influenced by the apparent age of the speaker (Aleese Block, Michelle Cohn & Georgia Zellou)
Tyler Kline presenting

Aleese Block presenting via handout during the power outage

Interspeech 2019

Along with Georgia Zellou & Bruno Ferenc Segedin (UC Davis Phonetics Lab), I traveled to Graz, Austria to present some of our research at the 2019 Interspeech Conference!

Georgia Zellou presenting our research exploring individual variation in speech toward Siri vs. human voices
Presenting our project looking at alignment toward emotionally expressive productions by Amazon Alexa

Bruno Ferenc Segedin presenting our research exploring phonetic adaptation to human vs. Amazon Alexa voices
Presenting research on acoustic cue weighting for musicians / nonmusicians

See below for links to the papers:

Sigdial 2019

Dr. Zhou Yu, Arbit Chen (UCD Computer Science), and I are thrilled to have a paper accepted to the Special Interest Group on Discourse and Dialogue (SIGDIAL) meeting in Stockholm, Sweden.

Our paper explores how different text-to-speech (TTS) modifications to Gunrock, the 2018 Alexa Prize-winning chatbot, affect user ratings.

Cohn, M., Chen, C., & Yu, Z. (2019). A Large-Scale User Study of an Alexa Prize Chatbot: Effect of TTS Dynamism on Perceived Quality of Social Dialog. (In press). 2019 Special Interest Group on Discourse and Dialogue, SIGDIAL.

Interspeech 2019

We are excited that several papers have been accepted for the Interspeech 2019 meeting in Graz, Austria!

Papers on human-voice AI interaction

Cohn, M., & Zellou, G. (2019). Expressiveness influences human vocal alignment toward voice-AI. (In press). 2019 Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH.

Snyder, C., Cohn, M., & Zellou, G. (2019). Individual variation in cognitive processing style predicts differences in phonetic imitation of device and human voices. (In press). 2019 Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH.

Ferenc Segedin, B., Cohn, M., & Zellou, G. (2019). Perceptual adaptation to device and human voices: learning and generalization of a phonetic shift across real and voice-AI talkers. (In press). 2019 Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH.

Paper on musical training & speech perception

Cohn, M., Zellou, G., & Barreda, S. (2019). The role of musical experience in the perceptual weighting of acoustic cues for the obstruent coda voicing contrast in American English. (In press). 2019 Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH.

UCD Language Symposium

‘Most Innovative Research’ Panel

Undergraduate researcher Melina Sarian did a fantastic job presenting her research project at the ‘Most Innovative Research’ Panel. Her work extends our project exploring device expressiveness to human voices.

Sarian, M., Cohn, M., & Zellou, G. Human vocal alignment to voice-AI is mediated by acoustic expressiveness. [Talk]. UC Davis Symposium on Language Research. Davis, CA.

Melina Sarian presenting at the ‘Most Innovative Research’ Panel


Dynamics of Voice-AI Interaction Panel

Bruno Ferenc Segedin and I also presented two talks in our ‘Dynamics of Voice-AI Interaction’ panel:

Cohn, M., Ferenc Segedin, B., & Zellou, G. Differences in cross-generational prosodic alignment toward device and human voices. [Talk]. UC Davis Symposium on Language Research. Davis, CA.

Ferenc Segedin, B., Cohn, M., & Zellou, G. Perceptual adaptation to Amazon’s Alexa and human voices: asymmetries in learning and generalization of a novel accent across real and AI talkers. [Talk]. UC Davis Symposium on Language Research. Davis, CA.