To recognize speech accurately and communicate with one another, humans must pick out distinct sound categories, such as words, from a continuous acoustic stream. The task is complicated by variability across speakers, who differ in accent, pitch, and intonation. In a new paper, researchers present a computational model that explores how the auditory system solves this complex problem.