I’m thrilled that my co-authors Santiago Barreda & Georgia Zellou and I have a paper accepted to the Journal of Speech, Language, and Hearing Research (JSLHR): “Differences in a musician’s advantage for speech-in-speech perception based on age and task.”
Recent posts
Talk at 2023 LSA Annual Meeting
Along with my co-authors Santiago Barreda, Katharine Graf Estes, Zhou Yu, & Georgia Zellou, I’ll be presenting our research, “Register adaptations toward Alexa: Comparing children and adults.”
New Interspeech paper! Cross-cultural emotion perception of TTS voices
I’m excited to share that our paper, “Cross-Cultural Comparison of Gradient Emotion Perception: Human vs. Alexa TTS Voices,” was accepted to Interspeech 2022! This project is a collaboration with Iona Gessinger (project lead; University College Dublin), Bernd Moebius (Saarland University), and Georgia Zellou (UC Davis).
Emotional Expressiveness Paper @ CUI 2022
We’re excited that our paper, “Effects of Emotional Expressiveness on Voice Chatbot Interactions”, was accepted to the 2022 Conversational User Interfaces (CUI) conference. This project was led by UC Davis grad students Qingxiaoyang Zhu and Austin Chau. Our other co-authors include Kai-Hui Liang, Georgia Zellou, Hao-Chuan Wang, & Zhou Yu.
Public Outreach: UC Davis Take Our Children to Work (TOC) Day
We’re hosting a virtual event at the UC Davis Phonetics Lab (Dept. of Linguistics) on Thursday, April 28th.
https://hr.ucdavis.edu/departments/worklife-wellness/events/tocs
Come learn about speech science with Siri, Alexa, and Google Assistant! Kids (ages 7-12) can participate in a real science experiment with a voice assistant (note that a parent must be present to consent). You will need a computer that can play sound and allow you to type/click (no other devices are needed). The experiment takes about 5 minutes. Afterward, you’ll see a short presentation about our research, including an overview of the lab.
New paper in Frontiers on masked speech!
Anne Pycha, Georgia Zellou, and I have a new paper on face-masked speech in the Frontiers special issue on “Language Development Behind the Mask.”
New JASA-EL paper on intelligibility
We have a new paper led by grad student Nick Aoki and co-authored by Dr. Georgia Zellou that was accepted to the Journal of the Acoustical Society of America (JASA) Express Letters on April 1st!
“The clear speech intelligibility benefit for text-to-speech voices: Effects of speaking style and visual guise”
Started as a Visiting Researcher @ Google (via ProUnlimited)
In April 2022, I started as a Visiting Researcher with the Responsible AI (RAI) and Human-Centered Technology (HCT) UX team at Google (via ProUnlimited).
Siri-DS Paper: Top 3 Downloaded Article (JPhonetics)
We are excited to see that our paper, “Acoustic-phonetic properties of Siri- and human-directed speech,” is one of the top 3 most-downloaded articles from the Journal of Phonetics over the past 90 days!

Invited talk: EU COST Action “Language in the Human-Machine Era”
I’m looking forward to giving a talk on Friday, March 25, 2022, for the EU COST (European Cooperation in Science and Technology) Action network “Language in the Human-Machine Era”.
Speech interactions with voice assistants: a window to human language and cognition

Millions of people now regularly interface with technology using spoken language, such as with voice assistants (e.g., Alexa, Siri, Google Assistant). Yet, our scientific understanding of these interactions is still in its infancy. My research explores the cognitive factors shaping speech production, perception, and learning in interactions with voice assistants, using methods from psycholinguistics and acoustic-phonetics.