This week I started as a consultant for Neurobehavioral Systems, a start-up in the Bay Area. I’m working on the CCAB Project (‘The California Cognitive Assessment Battery’ https://www.ccabresearch.com/), which uses remote and speech-based interactions for neuropsychological testing. We will be producing original research on the acoustic-prosodic features associated with various cognitive measures, and tracking changes over time.
Have you ever wondered how you’re able to understand speech? Or how your mouth and tongue coordinate to produce it? Come to the UC Davis Phonetics Lab (Department of Linguistics), 251 Kerr Hall, to participate directly in a real speech science experiment (ages 7-12).
The appointment is for 45 minutes; the experiment itself takes about 5 minutes. Afterward, there will be a short presentation on our research, with time for kids (and adults!) to ask questions and get a tour of the lab.
Timeslots: 10am, 11am, 12pm, 1pm, 2pm, 3pm, 4pm [RSVP required]. Note that we can accommodate a maximum of 5 children for each time slot.
We will be presenting speech science to children in a fun and accessible way. Kids can take part in a sample from real experiments (hearing sounds over headphones and making decisions on an iPad). We will also have low-tech activities where kids can learn about speech data and draw where they think the different sounds are on laminated spectrograms.
Voice assistant- vs. human-directed? Speech style differences as a window to social cognition.
Individuals of all ages increasingly use spoken language to interface with technology, such as voice assistants (e.g., Siri, Alexa, Google Assistant). In this talk, I will present some of our recent research examining speech style adaptations in controlled psycholinguistic experiments with voice assistant and human addressees. Our findings suggest that both a priori expectations of communicative competence and the actual error rate in the interaction shape acoustic-phonetic adaptations. I discuss these findings in terms of the interplay of anthropomorphism and mental models in human-computer interaction, and raise the broader implications of voice technology for language use and language change.
I’m thrilled that our multi-institutional project, led by Stefano Coretta and Timo Roettger (with 151 co-authors), was accepted to Advances in Methods and Practices in Psychological Science: “Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses”.
Today we had a new paper accepted to the Journal of the Acoustical Society of America (JASA): “The perception of nasal coarticulatory variation in face-masked speech” (Zellou, Pycha, & Cohn, accepted).
I’m thrilled that, along with my co-authors Santiago Barreda & Georgia Zellou, I have a paper accepted to the Journal of Speech, Language, and Hearing Research (JSLHR): Differences in a musician’s advantage for speech-in-speech perception based on age and task.
Along with my co-authors Santiago Barreda, Katharine Graf Estes, Zhou Yu, & Georgia Zellou, I’ll be presenting our research, Register adaptations toward Alexa: Comparing children and adults.
I’m excited to share that our paper, Cross-Cultural Comparison of Gradient Emotion Perception: Human vs. Alexa TTS Voices, was accepted to Interspeech 2022! This project is a collaboration with Iona Gessinger (project lead; UC Dublin), Bernd Moebius (Saarland University), and Georgia Zellou (UC Davis).