We are thrilled to have several papers accepted to the 2020 Interspeech conference:
Cohn, M., & Zellou, G. “Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes”
Cohn, M., Sarian, M., Predeck, K., & Zellou, G. “Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits”
Zellou, G., & Cohn, M. “Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors”
Including one for the new collaboration between UC Davis and Saarland University:
Cohn, M., Raveh, E., Predeck, K., Gessinger, I., Möbius, B., & Zellou, G. “Differences in Gradient Emotion Perception: Human vs. Alexa Voices”
Embodiment and gender interact in alignment to TTS voices (CogSci 2020)
UC Davis–KTH Collaboration: Michelle Cohn (UCD), Patrik Jonell (KTH), Taylor Kim (UCD), Jonas Beskow (KTH), Georgia Zellou (UCD)
We are thrilled that three of our papers have been accepted to the 2020 Cognitive Science Society Meeting!
Cohn, M., Jonell, P., Kim, T., Beskow, J., Zellou, G. Embodiment and gender interact in alignment to TTS voices.
While at the KTH Royal Institute of Technology (Stockholm, Sweden) in September 2019, I met up with Dr. Jonas Beskow (pictured in the center), co-founder of Furhat Robotics, and Ph.D. student Patrik Jonell (pictured on the right). Together with Georgia Zellou and Taylor Kim, we’re conducting a study to test the role of embodiment and gender in humans’ voice-AI interaction with three platforms: Amazon Echo, Nao, and Furhat.
Zellou, G., & Cohn, M. Top-down effects of apparent humanness on vocal alignment toward human and device interlocutors.
Zellou, G., Cohn, M., Block, A. Top-down effect of speaker age guise on perceptual compensation for coarticulatory /u/-fronting.
Picnic Day is going 100% digital in light of the COVID-19 pandemic. Fortunately, a small team of talented RAs (Patty Sandoval, Julian Rambob, Mia Gong, and Marlene Andrade) helped me create Virtual Booth videos!
See the other Virtual Picnic Day events
We’ll present two projects at the annual Linguistic Society of America (LSA) meeting in January:
California listeners’ patterns of partial compensation for coarticulatory /u/-fronting is influenced by the apparent age of the speaker (Aleese Block, Michelle Cohn, Georgia Zellou)
Conversational role influences speech alignment toward digital assistant and human voices (Georgia Zellou, Michelle Cohn, Tyler Kline, Bruno Ferenc Segedin)
Congrats to the Gunrock team, led by Prof. Zhou Yu, for our demo paper acceptance at the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Hong Kong!
Gunrock: A Social Bot for Complex and Engaging Long Conversations
Dian Yu, Michelle Cohn, Yi Mang Yang, Chun Yen Chen, Weiming Wen, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, Giritheja Sreenivasulu, Sam Davidson, Ashwin Bhandare, and Zhou Yu [pdf]
You can see our system demonstration (2-minute video):
The Gunrock team at the 2018 reception after winning the Alexa Prize!
Great job to Tyler Kline and Aleese Block, who presented two of our projects at the California Meeting on Psycholinguistics (CAMP3) this weekend at UC Santa Cruz!
Speech Alignment of Females toward Voice-AI and Human Voices: Conversational Role Influences Phonetic Imitation in a Map Task (Tyler Kline, Bruno Ferenc Segedin, Michelle Cohn & Georgia Zellou)
California listeners’ patterns of partial compensation for coarticulatory /u/-fronting is influenced by the apparent age of the speaker (Aleese Block, Michelle Cohn & Georgia Zellou)
Tyler Kline presenting
Aleese Block presenting via handout during the power outage
Along with Georgia Zellou & Bruno Ferenc Segedin (UC Davis Phonetics Lab), I traveled to Graz, Austria to present some of our research at the 2019 Interspeech Conference!
Georgia Zellou presenting our research exploring individual variation in speech toward Siri vs. human voices
Presenting our project looking at alignment toward emotionally expressive productions by Amazon Alexa
Bruno Ferenc Segedin presenting our research exploring phonetic adaptation to human vs. Amazon Alexa voices
Presenting research on acoustic cue weighting for musicians / nonmusicians
See below for links for the papers:
While at the KTH Royal Institute of Technology (Stockholm, Sweden) this September, Michelle Cohn met up with Dr. Jonas Beskow, co-founder of Furhat Robotics, and Ph.D. student Patrik Jonell. Together with Georgia Zellou, they are conducting a study to test the role of embodiment and gender in humans’ voice-AI interaction with three platforms: Amazon Echo, Nao, and Furhat.
Michelle Cohn, Jonas Beskow, & Patrik Jonell at the KTH Studio
We are thrilled that, together with Dr. Zhou Yu and Arbit Chen (UCD Computer Science), we have a paper accepted to the Special Interest Group on Discourse and Dialogue (SIGDIAL) meeting in Stockholm, Sweden.
Our paper explores how different text-to-speech (TTS) modifications to the 2018 Alexa Prize-winning chatbot, Gunrock, impact user ratings.
Cohn, M., Chen, C., & Yu, Z. (2019). A Large-Scale User Study of an Alexa Prize Chatbot: Effect of TTS Dynamism on Perceived Quality of Social Dialog. In press, Proceedings of the 2019 Special Interest Group on Discourse and Dialogue (SIGDIAL).