Research Projects

My research focuses on developing deep learning models for speech data and on using well-understood dependencies in speech to interpret the internal representations of deep neural networks. More specifically, I build models that learn representations of spoken words from raw audio inputs. I combine machine learning and statistical models with neuroimaging and behavioral experiments to better understand how neural networks learn internal representations of speech and how humans learn to speak. I have worked on and published about the sound systems of various language families, including Indo-European, Caucasian, and Austronesian languages.

Understanding how AI learns

Building artificial baby language learners

Comparing the brain and AI

More

YouTube

🔊The sound played to humans and machines: link

🔊What this sound sounds like in the brain: link

🔊What this sound sounds like in machines (1): link
🔊What this sound sounds like in machines (2): link

Analyzing large language models

Using Generative AI to decode whale communication 



Unnatural phonology

Indo-European linguistics