The Pre-Conscious Brain

This project aims to investigate the neural and biological underpinnings of depression and suicidal ideation. We will record participants’ brain activity with electrophysiology and eye-tracking measures during simple language tasks involving word and sentence processing. Participants will also complete several questionnaires, providing a more holistic view of this area of mental health.

Music, Arts, & Child Development

Beginning in 2019, in partnership with the Youth Orchestra of Los Angeles at Camino Nuevo Charter Academy, we are investigating the effects of in-school group music training on children, starting in the second and fifth grades. Music participants will be compared with children not involved in a systematic extracurricular program. In this study, we hope to further investigate the effects of music training on child and adolescent development in the context of the public school system.

YOLA @ HOLA

Beginning in 2012, in partnership with the Youth Orchestra of Los Angeles at the Heart of Los Angeles, we have been investigating the effects of community-based group music training in 80 children, starting at age six. Music participants were compared with an active and a passive control group of peers. We recently completed the 7th year of this longitudinal investigation, and thus far our findings support a positive impact of music training on auditory processing development, evidenced by better pitch perception and more mature auditory evoked potentials. Our findings also indicate increased engagement of the cognitive control network during a Stroop task and earlier behavioral development of inhibition skills. In addition, our music participants demonstrated higher fractional anisotropy in the corpus callosum.

Music and Hearing

The USC Music and Hearing study is an interdisciplinary project investigating the possible impact of group singing and music listening on hearing abilities and well-being in adults aged 50–65. Specifically, we are interested in whether short-term participation in a weekly community choir can improve speech-in-noise perception and its neural substrates, as measured by auditory event-related potentials (ERPs) to speech stimuli, in older adults with mild to moderate subjective hearing loss.

To the right, see the BCI’s “Neurophonics” choir, directed by Andrew Schultz with accompanist Barry Tan, rehearsing in Cammilleri Hall at the Brain and Creativity Institute.

Neural dynamics of emotions in response to naturalistic music listening

How do we come to experience intense and complex feelings and what can studying the brain tell us about these experiences? To study emotional responses to music in a more ecologically valid manner, we collect neural activity continuously while participants listen to a full-length piece of music, and use multivariate, data-driven analytical techniques to quantify the patterns of response.

Currently, we use dynamic functional connectivity analyses to investigate how time-varying patterns of neural activity relate to changes in emotional experience in response to music. Specifically, this project focuses on the complex feeling of pleasurable sadness, the paradoxical phenomenon in which we enjoy stimuli that convey negatively valenced emotions. The fact that emotions have clearly defined evolutionary functions, and yet humans routinely seek out music that communicates pain and suffering, calls into question our current understanding of human reward and motivation, and therefore warrants further empirical investigation. By combining data-driven, whole-brain analytical techniques with continuous data collected in response to a single emotional stimulus, the findings will give a more detailed picture of the experience of pleasurable sadness as well as the interplay between emotional and reward processing pathways in the brain.
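As a rough illustration of the sliding-window approach commonly used for dynamic functional connectivity (a minimal sketch, not our actual pipeline; the window length, step size, and simulated data below are arbitrary assumptions), one can compute a sequence of region-by-region correlation matrices over a moving window of the BOLD time series:

```python
import numpy as np

def sliding_window_fc(ts, win=30, step=5):
    """Sliding-window functional connectivity.

    ts: (timepoints, regions) array of BOLD signals.
    Returns a (n_windows, regions, regions) stack of
    correlation matrices, one per window.
    """
    n_t, n_r = ts.shape
    mats = []
    for start in range(0, n_t - win + 1, step):
        window = ts[start:start + win]
        # np.corrcoef treats rows as variables, so transpose
        mats.append(np.corrcoef(window.T))
    return np.array(mats)

# Toy example: 200 timepoints from 4 simulated regions
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 4))
fc = sliding_window_fc(ts)
print(fc.shape)  # → (35, 4, 4)
```

Changes in emotional experience can then be related to how these matrices evolve over the course of the piece.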

Music-evoked emotions and machine learning

In addition to assessing the neural signatures of complex, dynamic feelings, we are interested in understanding how distinct emotional states are represented by patterns of bodily and neural activity. Machine learning classification algorithms can help address these questions because they involve assessing spatially distributed patterns of activity at once, and can greatly increase predictive power.

One such method, termed multivoxel pattern analysis (MVPA), involves decoding representational content from patterns of fMRI signal distributed across the brain. Using MVPA, we were able to uncover fundamental neural systems responsible for representing emotions conveyed across sound sources with different acoustical properties, namely music and the human voice. The finding that emotions in sound can be decoded from patterns of activity in the emotional and sensory processing regions of the brain extends previous knowledge of how these networks extract meaningful affective information from auditory stimuli.
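A minimal sketch of the decoding logic behind MVPA-style analyses, using simulated data and a generic linear classifier from scikit-learn (the classifier choice, data dimensions, and effect size here are illustrative assumptions, not our actual analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 120, 500

# Simulated single-trial voxel patterns for two emotion categories
X = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)
# Add a weak, spatially distributed signal to one category
X[y == 1, :50] += 0.7

# Cross-validated decoding: above-chance accuracy implies the
# emotion category is represented in the distributed pattern
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

The key idea is that information invisible to any single voxel can still be read out reliably from the joint pattern across many voxels.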

Emotions are not solely a neurological phenomenon; they are often identified by the accompanying set of bodily state changes. Recently, we conducted a study to assess how music-evoked emotions, as well as the perception of those emotions in others, are represented on the human body. We used a topographical self-report in which participants drew on a silhouette of a human body where they felt an emotion in response to music clips, and we trained a classifier to predict the intended emotions of the clips from the drawings. We then compared these bodily maps of music-evoked emotions with those of a group of children performing the same task.
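The classification step can be sketched as follows (a toy illustration with simulated drawings on a coarse silhouette grid; the grid size, blob locations, and nearest-centroid classifier are assumptions for demonstration, not the study's actual method):

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
H, W = 40, 20          # hypothetical body-silhouette grid resolution
n_per_emotion = 30

def simulate_drawing(hotspot_row):
    """Simulate one participant's coloring: background noise plus a
    blob of activation centered on an emotion-specific body region."""
    img = rng.random((H, W)) * 0.2
    img[hotspot_row:hotspot_row + 8, 5:15] += 1.0
    return img.ravel()

# Two hypothetical emotions with distinct bodily topographies
# (e.g., upper-body vs. lower-body activation)
X = np.array([simulate_drawing(r)
              for r in [2] * n_per_emotion + [25] * n_per_emotion])
y = np.array([0] * n_per_emotion + [1] * n_per_emotion)

scores = cross_val_score(NearestCentroid(), X, y, cv=5)
print(scores.mean())
```

If the classifier predicts emotion labels above chance, the bodily maps carry emotion-specific information.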