Interhemispheric connectivity important for integration of speech sound information, study shows

When we listen to speech sounds, the information that enters our left and right ears is not exactly the same. This may be because acoustic information reaches one ear before the other, or because the sound is perceived as louder in one ear. Information about speech sounds also reaches different parts of our brain, and the two hemispheres are specialized in processing different types of acoustic information. But how does the brain integrate auditory information from these different areas?

To investigate this question, lead researcher Basil Preisig from the University of Zurich collaborated with an international team of scientists. In an earlier study, the team discovered that the brain integrates information about speech sounds by 'balancing' the rhythm of gamma waves across the two hemispheres, a process called 'oscillatory synchronization'.