We’re thrilled that our paper, “The effect of listener beliefs on perceptual learning: Comparing adaptation to a vowel shift across device and human guises”, by Georgia Zellou, me, and Anne Pycha, has been accepted for publication in Language.
Interspeech 2023 Paper on Cross-Cultural Emotion Perception
We have a new paper accepted to Interspeech 2023, “Cross-linguistic Emotion Perception in Human and TTS Voices”, led by Iona Gessinger with co-authors Georgia Zellou, Bernd Möbius, and Benjamin Cowan.
Take our Children to Work Day @ PhonLab (4/27)
Participate in a real speech science experiment!
Have you ever wondered how you’re able to understand speech? Or how your mouth and tongue coordinate to produce it? Come to the UC Davis Phonetics Lab (Department of Linguistics), 251 Kerr Hall, to take part in a real speech science experiment (ages 7-12).
The appointment lasts 45 minutes; the experiment itself takes about 5 minutes. Afterward, you’ll see a short presentation on our research, and kids (and adults!) will have time to ask questions and get a tour of the lab.
Timeslots: 10am, 11am, 1pm [RSVP required]. Note that we can accommodate a maximum of 5 children per time slot.
Participation in the experiment is voluntary; the study has been approved by the UC Davis Institutional Review Board (IRB). For more information about consent, go to: https://phonlab.ucdavis.edu/child-consent-participate-experiment-volunteer.
To sign up and to learn more about the Take our Children to Work events, please go to: https://hr.ucdavis.edu/departments/worklife-wellness/events/tocs.
Talk @ Acoustical Society of America (ASA)
Georgia Zellou and I will give a talk at the Acoustical Society of America meeting in Chicago on May 9th:
- “Clear speech in the new digital era: Speaking and listening clearly to voice-AI systems”
- We’ll present an overview of our work on speech production/perception with voice technology
Started as a Senior Scientist Consultant @ Neurobehavioral Systems
This week I started as a consultant for Neurobehavioral Systems, a Bay Area start-up. I’m working on the California Cognitive Assessment Battery (CCAB; https://www.ccabresearch.com/), which uses remote, speech-based interactions for neuropsychological testing. We will be producing original research on the acoustic-prosodic features associated with various cognitive measures and tracking how they change over time.
Speech Science Booth @ Picnic Day
Sat, Apr 15, 2023 @ 10:00am – 1:30pm
Hosted by the UC Davis Phonetics & Language Learning Labs!
We will be presenting speech science to children in a fun and accessible way. Kids can take part in samples from real experiments (hearing sounds over headphones and making decisions on an iPad). We will also have low-tech activities where kids can learn about speech data and draw where they think the different sounds are on laminated spectrograms.

Invited Language Cluster Talk
On April 10th, 1:30-2:30pm (PT), I’ll be giving a talk for the UC Davis Cluster on Language:
“Voice assistant- vs. human-directed? Speech style differences as a window to social cognition”
Individuals of all ages increasingly use spoken language to interface with technology, such as voice assistants (e.g., Siri, Alexa, Google Assistant). In this talk, I will present some of our recent research examining speech style adaptations in controlled psycholinguistic experiments with voice assistant and human addressees. Our findings suggest that both a priori expectations of communicative competence and the actual error rate in the interaction shape acoustic-phonetic adaptations. I discuss these findings in terms of the interplay of anthropomorphism and mental models in human-computer interaction, and consider the broader implications of voice technology for language use and language change.
Collaborative paper accepted to AMPPS!
I’m thrilled that our multi-institutional project led by Stefano Coretta and Timo Roettger (with 151 co-authors) was accepted to Advances in Methods and Practices in Psychological Science: “Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses”.
New JASA Paper!
Today we had a new paper accepted to the Journal of the Acoustical Society of America (JASA): “The perception of nasal coarticulatory variation in face-masked speech” (Zellou, Pycha, & Cohn, accepted).
