Numerous studies have linked poverty with impaired cognitive development, but little is known about how socioeconomic status relates to the physical development of the brain. In a recent study published in Nature Neuroscience, Columbia University researchers Kimberly Noble, Suzanne Houston, and their colleagues completed the largest investigation to date of socioeconomic status and children’s brain structure.
The researchers found that both parental education and family income were associated with a child’s brain structure, particularly in regions critical to language, executive function, and memory. To explore the link between socioeconomic factors and brain structure, they examined cortical surface area in more than 1,000 youths, ages 3 to 20, collected as part of the multi-site Pediatric Imaging, Neurocognition and Genetics study, in relation to family income and parental education, while controlling for age, sex, and genetic ancestry.
Children whose parents had spent more years in high school or college showed greater brain surface area than children whose parents had fewer years of education. The relationship between parental education and surface area was linear, “implying that any increase in parental education, whether an extra year of high school or college, was associated with a similar increase in surface area over the course of childhood and adolescence.”
Similarly, children from higher-income families also tended to have greater brain surface area. The relationship between income and surface area, however, was not linear but logarithmic, meaning the increase in brain surface area per dollar earned was comparatively greater for children from lower-earning families than for those from more financially well-off households.
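The shape of a logarithmic relationship can be illustrated with a toy calculation. The model form below follows the study’s description, but the coefficients are purely hypothetical, chosen only to show why the same dollar gain predicts a larger change at the low end of the income scale:

```python
import math

# Hypothetical logarithmic model: surface_area = a + b * ln(income).
# The coefficients a and b are illustrative, NOT values from the study.
a, b = 1000.0, 50.0

def surface_area(income):
    """Predicted cortical surface area (arbitrary units) for a given family income."""
    return a + b * math.log(income)

# The same extra $1,000 predicts a larger gain for a lower-earning family:
gain_low = surface_area(26_000) - surface_area(25_000)     # ~1.96 units
gain_high = surface_area(151_000) - surface_area(150_000)  # ~0.33 units
print(gain_low > gain_high)  # True
```

Because the logarithm flattens as income grows, each additional dollar is associated with a smaller and smaller predicted difference, which is exactly the diminishing-returns pattern the authors describe.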
Understanding the link between socioeconomic status and experience-dependent brain development is an important step for both researchers and policy makers looking for ways to close the achievement gap. However, it should be noted that the present study does not identify a causal link between socioeconomic status and brain structure; it does not identify which socioeconomically linked experiences shape brain development. The question remains how factors such as stress, nutrition, exposure to environmental toxins, or cognitive stimulation affect a child’s brain in her prenatal and postnatal environments.
Finally, the authors make a point of stating, “Our results should in no way imply that a child’s socioeconomic circumstances lead to an immutable trajectory of cognitive or brain development.” They point out there are a number of other factors that explain differences in brain development. Nevertheless, they conclude, “Many leading social scientists and neuroscientists believe that policies reducing family poverty could have meaningful effects on children’s brain functioning and cognitive development.”
- Noble, K. G., Houston, S. M., Brito, N. H., Bartsch, H., Kan, E., Kuperman, J. M., … Sowell, E. R. (2014). Family income, parental education and brain structure in children and adolescents. Nature Neuroscience. Advance online publication. doi: 10.1038/nn.3983
Many scientists view a deficit in the ability to recognize faces as a major component of social interaction disorders, such as those on the autism spectrum. Previous research has shown that mammals identify members of their own species using social recognition cues: odor cues for rodents, visual cues for primates. Previous research has also pointed to a specific receptor, the oxytocin receptor, as key to social recognition in rodents. Now, in a study published in Proceedings of the National Academy of Sciences, Skuse and colleagues implicate the oxytocin receptor as critical for face recognition in humans. The authors recruited 198 Finnish and British families, each with at least one child diagnosed with high-functioning autism, and tested each family member’s ability to remember faces, discriminate facial emotions, and detect “direction of gaze.” Then, using saliva samples, the authors analyzed genetic variation in the oxytocin receptor for each participant to determine whether genetic differences in the receptor were associated with diminished social recognition ability. They found that high performance on the social recognition tasks was associated with the common form of the oxytocin receptor gene, while a specific genetic variant, carried by one-third of the participants, was associated with decreased performance. These findings implicate a specific genetic variant of the oxytocin receptor in social recognition disorders, in this case autism, and suggest that the gene encoding the oxytocin receptor plays an important role in human face recognition.
- Skuse, D. H., Lori, A., Cubells, J. F., Lee, I., Conneely, K. N., Puura, K., … & Young, L. J. (2013). Common polymorphism in the oxytocin receptor gene (OXTR) is associated with human social recognition skills. Proceedings of the National Academy of Sciences, 201302985.
In our last Research Lead, we described how a human was able to move the tail of a rat through a brain-to-brain interface. Now, Rajesh Rao and Andrea Stocco at the University of Washington have performed what they believe is the first non-invasive human-to-human brain interface. Their setup worked as follows: a “sender” wore a headset that read the electrical waves along his scalp. His brain waves were interpreted by computer software, and when he produced a certain type of brain wave (by entering a focused, relaxed state), the computer sent a signal, via transcranial magnetic stimulation (TMS), to the “receiver’s” brain, causing the receiver to involuntarily press a button on a keyboard. The key component of this interface was TMS: a small TMS machine was positioned over the receiver’s motor cortex so that, when activated, it triggered an involuntary motor movement. While this type of research is billed as a human brain-to-brain interface, it might be better described as a brain-to-computer-to-TMS-to-brain interface. Regardless, Rao and Stocco’s work represents a significant step toward more direct brain-to-brain interaction.
In our brains, information in the form of electrochemical signals is processed and passed from one neuron to the next, at speeds of up to 250 miles per hour, across junctions called synapses. We are information processors, which is why computer metaphors are sometimes apt for describing our brains. But how true is this metaphor? The “K computer” (currently the fourth-fastest computer on Earth), used in a simulation involving researchers at the Okinawa Institute of Science and Technology Graduate University, is getting closer to answering that question. With the processing power of roughly 250,000 high-speed PCs, the K computer has performed the largest simulation of a neural network ever: the researchers simulated the activity of 1.73 billion neurons connected by 10.4 trillion synapses. Using 82,944 processors, the K computer took 40 minutes to simulate one second of random brain activity. While this is nowhere near as fast as a human brain, and the network is far smaller (human brains are estimated to contain about 100 billion neurons), it is an important step in understanding neural networks. It also opens up vast research space for testing the limits and boundaries between neural and computer networks. Is this a science fiction writer’s dream slowly, very slowly, becoming a reality?
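The reported figures imply a simple slowdown factor relative to real time. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the reported figures:
# 40 minutes of wall-clock computation for 1 second of simulated activity.
compute_seconds = 40 * 60   # 2,400 seconds of computation
simulated_seconds = 1       # 1 second of simulated network activity

slowdown = compute_seconds / simulated_seconds
print(slowdown)  # 2400.0 -> the simulation ran about 2,400x slower than real time
```

In other words, even one of the world’s fastest supercomputers, simulating a network roughly 1/60th the size of a human brain, runs thousands of times slower than biology.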
Who moved my cheese? How about who moved my tail? This past spring, a team of researchers successfully linked the brains of a human and a rat, such that a human participant was able to move a rat’s tail with only his thoughts. Through a noninvasive approach, Yoo et al. used an image of a flashing strobe light to prompt specific brain signals in the human participant, which were then translated into the appropriate neural stimulus for the rat, thus invoking movement of the rat’s tail. Methodologically, Yoo et al. achieved this brain-to-brain interface by translating computer-recorded EEG signals from the human brain into a transcranial focused ultrasound (FUS) burst, which stimulated the area of the rat’s motor cortex corresponding to tail movement. Though Yoo and colleagues acknowledge the potential for brain-to-brain communication between humans, they approach further development of mind control technology with caution, as continued advancements pose ethical questions that remain unanswered.
- Yoo, S.-S., Kim, H., Filandrianos, E., Taghados, S. J., & Park, S. (2013). Non-invasive brain-to-brain interface (BBI): Establishing functional links between two brains. PLoS ONE, 8(4), e60410.
Unable to overcome the gauntlet of cravings and withdrawal, alcohol abusers often succumb to relapse. However, a recent article in Nature Neuroscience describes a potential avenue of treatment that may aid recovery. According to Barak et al., the tastes and smells associated with alcohol cue memories that evoke cravings, and thus spur relapse. Working with alcohol-dependent rats, Barak et al. found that inhibiting a memory-related pathway mitigated the rats’ cravings for alcohol. In the study, the researchers first identified activation of the mTORC1 neural pathway as part of the memory reconsolidation process, and then hypothesized that inhibiting this pathway could disrupt alcohol-related memories and ultimately suppress relapse. As predicted, Barak and his team found that the mTORC1 inhibitor rapamycin effectively suppressed relapse in alcohol-dependent rats that had been prompted with alcohol-related taste and smell cues. This finding, that disrupting a neural pathway related to memory reconsolidation can yield clinical benefits, has implications for the treatment of alcohol and substance abuse, as well as for clinical conditions involving recurrent memories, such as PTSD.
- Barak, S., Liu, F., Hamida, S. B., Yowell, Q. V., Neasta, J., Kharazia, V., … & Ron, D. (2013). Disruption of alcohol-related memories by mTORC1 inhibition prevents relapse. Nature Neuroscience, 16, 1111–1117.
Two wrongs don’t make a right, but in cognitive neuroscience, two simultaneous impairments can counterintuitively improve performance. Previous research has established that decisions made either under stress or while multitasking tend to be impaired. Extending this research in the June issue of Behavioral Neuroscience, Pabst and colleagues examined the combined effect of these two factors on decision making and uncovered a paradoxical result. A simple prediction would hold that the impairments compound, resulting in even worse decision making; puzzlingly, however, Pabst et al. found that stressed, multitasking individuals made better decisions than a control group subject to neither factor. The most likely explanation, according to the authors, is that the combination of stress and a concurrent executive task induces a cognitive switch from serial to parallel goal monitoring, potentially by means of elevated dopamine concentrations in brain areas implicated in goal monitoring.
- Pabst, S., Schoofs, D., Pawlikowski, M., Brand, M., & Wolf, O. T. (2013). Paradoxical effects of stress and an executive task on decisions under risk. Behavioral Neuroscience, 127(3), 369–379.