The Research Lead: September-October 2013


“Researcher controls colleague’s motions in 1st human brain-to-brain interface”

[Neuroscience]

In our last Research Lead, we described how a human was able to move the tail of a rat through a brain-to-brain interface. Now, Rajesh Rao and Andrea Stocco at the University of Washington have performed what they believe is the first non-invasive human-to-human brain interface. Their interface was set up as follows: a “sender” wore a headset that read the electrical activity along his scalp. Computer software interpreted his brain waves, and when he produced the target signal (by entering a focused and relaxed brain state), the computer activated a small transcranial magnetic stimulation (TMS) machine positioned over the “receiver’s” motor cortex. The TMS machine was the key component of the interface: when activated, it caused the receiver to experience an involuntary motor movement, in this case pressing a button on a keyboard. While this type of research is billed as a human brain-to-brain interface, it might be better described as a brain-to-computer-to-TMS-to-brain interface. Regardless, Rao and Stocco’s work represents a significant step toward more direct brain-to-brain interaction.
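To make the pipeline concrete, here is a minimal sketch of the sender-to-receiver loop in Python. The simulated signals, the simple variance-threshold “classifier,” and every function name below are our own illustrative assumptions standing in for the EEG headset, decoding software, and TMS hardware the researchers actually used.

```python
import random

def read_eeg_window(n_samples: int = 64) -> list[float]:
    # Stand-in for the sender's EEG headset: returns synthetic samples.
    # Roughly 1 in 5 windows simulates the high-amplitude target state.
    in_target_state = random.random() < 0.2
    scale = 3.0 if in_target_state else 1.0
    return [random.gauss(0.0, scale) for _ in range(n_samples)]

def detect_target_state(samples: list[float], threshold: float = 4.0) -> bool:
    # Stand-in classifier: flags the target brain state when signal power
    # (variance of the window) exceeds a fixed threshold.
    mean = sum(samples) / len(samples)
    power = sum((s - mean) ** 2 for s in samples) / len(samples)
    return power > threshold

def fire_tms_pulse() -> None:
    # Stand-in for the TMS coil over the receiver's motor cortex; in the
    # actual study, the pulse evoked an involuntary key press.
    print("TMS pulse -> receiver involuntarily presses the key")

# One detection-stimulation cycle per polling window.
for trial in range(10):
    if detect_target_state(read_eeg_window()):
        fire_tms_pulse()
```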

“Largest neuronal network simulation achieved using K computer”

[Computational Neuroscience]

In our brains, information in the form of electrochemical signals travels along neurons at speeds of up to 250 miles per hour, passing from one neuron to the next across junctions called synapses. We’re information processors, which is why computer metaphors are sometimes apt for describing our brains. But how true is this metaphor? The K computer (currently the 4th fastest computer on Earth), used by researchers at the Okinawa Institute of Science and Technology Graduate University, is getting closer to answering that question. With the processing power of roughly 250,000 high-speed PCs, the K computer has performed the largest simulation of a neural network ever: the researchers simulated the activity of 1.73 billion neurons connected by 10.4 trillion synapses. Using 82,944 processors, the K computer took 40 minutes to simulate 1 second of random brain activity. While this is nowhere near as fast as a human brain, nor is the simulated network nearly as large (human brains are estimated to have about 100 billion neurons), it is an important step in understanding neural networks, and it opens up a vast research space for testing the limits and boundaries between neural and computer networks. Is this a science fiction writer’s dream, slowly (very slowly) becoming a reality?
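The reported figures invite some back-of-the-envelope arithmetic on how far the simulation sits from real time and from full human scale. The short script below simply restates the numbers from the article:

```python
# All inputs come straight from the figures reported above.
neurons = 1.73e9            # simulated neurons
synapses = 10.4e12          # simulated synapses
processors = 82_944         # K computer processors used
wall_time_s = 40 * 60       # 40 minutes of compute time...
simulated_s = 1.0           # ...per 1 second of simulated activity
human_neurons = 100e9       # rough estimate for a human brain

print(f"slowdown vs. real time:   {wall_time_s / simulated_s:,.0f}x")  # 2,400x
print(f"synapses per neuron:      {synapses / neurons:,.0f}")          # ~6,000
print(f"neurons per processor:    {neurons / processors:,.0f}")        # ~20,900
print(f"human brain larger by:    {human_neurons / neurons:,.0f}x")    # ~58x
```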

“Wild Orangutan Males Plan and Communicate Their Travel Direction One Day in Advance”

[Animal Behavior]

Research by van Schaik et al., published in PLOS ONE, provides new evidence of long-term planning behavior by an animal in the wild. In a study of Sumatran orangutans, van Schaik and colleagues found that the direction of long calls made by flanged* males predicted their travel direction up to 22 hours in advance, even when a night of sleep intervened between the call and the travel. In contrast to migrating birds, the orangutans adjusted their travel plans along the way, often signaling the change with new, spontaneous long calls that predicted travel direction better than the initial ones. The authors point to female orangutans’ interest in mating with, and being protected by, the dominant male as a reason why communicating travel information in advance makes sense from a reproductive point of view. They posit that orangutans likely make and adjust plans using features of episodic memory (the ability to recall specific events), though more research is required to understand the mechanism behind this behavior. Overall, the authors suggest that this type of long-term planning is unlikely to be limited to orangutans, and they expect it to exist in other apes and large-brained animals in the wild.

*“Sexually mature males may, after highly variable periods of time, grow cheek flanges (wide cartilaginous pads at the sides of their face).”

“Language can boost otherwise unseen objects into visual awareness”

[Language & Perception]

Giving objects a name, besides being an effective memory tool, serves to direct our attention toward familiar things in our visual environment. For example, while walking through a forest, knowing the linguistic labels for oak, maple, and birch can help us recognize and differentiate trees we might otherwise not have noticed. Along these lines, researchers Lupyan and Ward found that activating the linguistic label for an object is enough to propel an otherwise unseen object into visual awareness. In their experiment, participants indicated whether or not they saw familiar objects on a screen. The catch? The objects were suppressed from conscious visual awareness using a method called Continuous Flash Suppression, in which one eye views the object while the other views rapidly changing visual noise, keeping the object from reaching awareness. Lupyan and Ward found that participants detected the suppressed image faster and more often when it was paired with the appropriate linguistic label (i.e., they heard the word right before the image appeared) than when they heard an invalid label or nothing at all. The authors conclude that simply hearing the appropriate label for an object can bring that object to attention when it would otherwise have gone unseen.
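As a rough schematic of this design (not the authors’ actual materials or code), the sketch below lays out the three cue conditions being compared; the object list, function names, and trial counts are hypothetical placeholders.

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    target: str     # the suppressed object, e.g. "zebra"
    condition: str  # "valid", "invalid", or "none"

def cue_for(trial: Trial, objects: list[str]) -> str | None:
    # Valid: the spoken label matches the upcoming suppressed object.
    # Invalid: a label for a different object. None: silence.
    if trial.condition == "valid":
        return trial.target
    if trial.condition == "invalid":
        return random.choice([o for o in objects if o != trial.target])
    return None

objects = ["zebra", "pumpkin", "kangaroo", "chair"]
trials = [Trial(random.choice(objects), c)
          for c in ("valid", "invalid", "none") for _ in range(2)]
random.shuffle(trials)

for t in trials:
    # In the real task the cue is spoken aloud, then the object is shown
    # under Continuous Flash Suppression and detection plus response time
    # are recorded for each condition.
    print(f"cue: {cue_for(t, objects)!s:>8}  ->  suppressed object: {t.target}")
```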
