I’m thrilled that two of my undergraduate research mentees, Prati Mehta and Ben Getz (whom I’m also co-advising for his Senior Honors Thesis), presented posters at the UC Davis Undergraduate Research Conference on Friday, April 25th!
Prati Mehta: “Do Wav2vec Embeddings Track Phonetic Differences in Duration?”
Ben Getz: “Silent Center Syllables: A Study in Coarticulation Across Speech Style and Word Type”
Ages 7–12, Lottery Tours, 10:00am, 11:00am, 12:00pm, 1:00pm, capacity 5 per tour.
Participate in a real speech science experiment! Have you ever wondered how you’re able to understand speech? Or how your mouth and tongue coordinate to produce it? Come participate in a real speech science experiment. The appointment is 45 minutes; the experiment itself takes about 5 minutes. Afterward, you’ll see a short presentation on our research, and there will be time for kids (and adults!) to ask questions and get a tour of the lab. Participation in the experiment is voluntary; the study has been approved by the UC Davis Institutional Review Board (IRB) ethics committee. For more information about consent, go to: https://phonlab.ucdavis.edu/child-consent-participate-experiment-volunteer. Sign up at https://hr.ucdavis.edu/departments/worklife-wellness/events/tocs.
We had a wonderful time hosting our Speech Science booth at UC Davis Picnic Day! Adults and kids could learn about spectrograms and try a “Bot or Not” experiment.
I was thrilled to be invited as this year’s Yvonne Becker Colloquium Speaker in the Department of Linguistics at Simon Fraser University (May 30, 12:30pm – 2:00pm).
Title: Impact of AI on human language
Abstract: Millions of people now talk to voice-activated artificially intelligent (voice-AI) systems (e.g., Siri, Alexa, Google Assistant, ChatGPT) to complete daily tasks. My research program tests how people (1) talk to, (2) perceive, and (3) learn language from voice-AI. At its core, I ask: is communication with voice-AI similar to or distinct from communication with another human? I design experiments to probe behavior, combining methods from psycholinguistics, human-computer interaction, and phonetics. Thus far, I have found that while people produce a distinct technology-directed register, they also attribute human social qualities to the systems (e.g., gender, emotion) and learn speech patterns from them. I discuss these findings in terms of their implications for linguistic diversity and language change.
I’ll be presenting a poster at the REC Scholar session at the National Alzheimer’s Coordinating Center (NACC) Spring Meeting in San Francisco: “Speech biomarkers of cognitive impairment in technology-directed tasks.”
We’ll be hosting our “Speech Science” booth at the Children’s Discovery Fair again this Picnic Day (April 12th) from 10:30am to 1pm. This booth is co-hosted by the Phonetics Lab and the Language Learning Lab.
We will have spectrograms / waveforms that adults & kids can circle to learn about the components of speech, as well as sample experiments:
‘Bot or Not’: try to determine if speakers were text-to-speech (TTS) or human voices
‘Statistical Learning’: try to learn new words from an alien language
I’m looking forward to presenting my poster with Alyssa Lanzi (U of Delaware) and Alyssa Weakley (UC Davis) at AAIC this July: “Speech rate as a biomarker of cognitive impairment in technology-directed tasks.”
I’m thrilled that our paper, “Prosodic variation between contexts in infant-directed speech”, led by psychology PhD student Jenna DiStefano, has been accepted to the Journal of Child Language.
DiStefano, J., Cohn, M., Zellou, G., & Graf Estes, K. Prosodic variation between contexts in infant-directed speech. Journal of Child Language.
My co-authors, Kevin Lilley, Ellen Dossey, Cynthia Clopper, Laura Wagner, and Georgia Zellou, and I are thrilled that our paper has been accepted and is now in press.
Lilley, K., Dossey, E., Cohn, M., Clopper, C., Wagner, L., & Zellou, G. (in press). Social evaluation of text-to-speech voices by adults and children. Speech Communication. https://doi.org/10.1016/j.specom.2024.103163