Turning brain signals into useful information

Although algorithms are getting better, there is still a lot of room for improvement, not least because data remain thin on the ground. Despite claims that smart algorithms can make up for bad signals, they can do only so much. “Machine learning does nearly magical things, but it cannot do magic,” says Dr Shenoy. Consider the use of functional near-infrared spectroscopy (fNIRS) to identify simple yes/no answers given by locked-in patients to true-or-false statements: the answers were correctly identified 70% of the time. That is a huge advance on not being able to communicate at all, but nowhere near enough to have confidence in a patient’s responses to, say, an end-of-life discussion. More and cleaner data are required to build better algorithms.
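
A rough calculation shows how far 70% is from real confidence. As an illustrative sketch only (not the study’s actual protocol), suppose each reading of a yes/no answer is independently correct with probability 0.7, the same question is put n times, and a majority vote decides:

```python
from math import comb

# Illustrative arithmetic only; this is not the study's actual protocol.
# Assume each fNIRS reading of a yes/no answer is independently correct
# with probability p, the question is asked n times (n odd), and a
# majority vote decides. Then
#   P(majority correct) = sum over k > n/2 of C(n, k) * p**k * (1-p)**(n-k)
def majority_accuracy(p: float, n: int) -> float:
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 21, 51):
    print(f"n = {n:>2}: P(correct) ~ {majority_accuracy(0.70, n):.3f}")
# With 70%-accurate readings, confidence climbs only slowly:
# ~0.84 after 5 repetitions, ~0.97 after 21, ~0.999 after 51.
```

Even under these generous independence assumptions, approaching certainty takes dozens of repetitions of every single question, which is no way to hold a conversation. Cleaner signals, not just more votes, are what is needed.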

It does not help that knowledge of how the brain works is still so incomplete. Even with better interfaces, the organ’s extraordinary complexities will not be quickly unravelled. The movement of a cursor has two degrees of freedom, for example; a human hand has 27. Visual-cortex researchers often work with static images, whereas humans in real life have to cope with continuously moving images. Work on the sensory feedback that humans experience when they grip an object has barely begun.

And although computational neuroscientists can piggyback on broader advances in the field of machine learning, from facial recognition to autonomous cars, the noisiness of neural data presents a particular challenge. A neuron in the motor cortex may fire at a rate of 100 action potentials a second when someone thinks about moving his right arm on one occasion, but at a rate of 115 on another. To make matters worse, neurons’ jobs overlap. So if a neuron has an average firing rate of 100 to the right and 70 to the left, what does a rate of 85 signify?
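
To make that ambiguity concrete, here is a minimal sketch of how a decoder might weigh such evidence. The tuning means (100 Hz for right, 70 Hz for left) come from the example above; the Gaussian noise model, its 15 Hz spread and the equal prior odds are assumptions for illustration:

```python
import math

# Hypothetical tuning for a single motor-cortex neuron, taken from the
# example above: ~100 Hz mean firing for rightward intent, ~70 Hz for
# leftward. Trial-to-trial noise is modelled as Gaussian with a 15 Hz
# standard deviation; the noise model and its spread are assumptions.
MEAN_RIGHT, MEAN_LEFT, NOISE_SD = 100.0, 70.0, 15.0

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def direction_posterior(rate_hz: float) -> dict[str, float]:
    """P(direction | observed rate), assuming equal prior odds."""
    like_right = gaussian_pdf(rate_hz, MEAN_RIGHT, NOISE_SD)
    like_left = gaussian_pdf(rate_hz, MEAN_LEFT, NOISE_SD)
    total = like_right + like_left
    return {"right": like_right / total, "left": like_left / total}

for rate in (100.0, 115.0, 85.0):
    post = direction_posterior(rate)
    print(f"{rate:5.1f} Hz -> P(right) = {post['right']:.2f}, P(left) = {post['left']:.2f}")
# 85 Hz lies exactly between the two tuning means, so the posterior is
# 50/50: from this one neuron alone, the intent is genuinely ambiguous.
```

Real decoders resolve the ambiguity by pooling hundreds of such noisy, overlapping neurons, which is why more and cleaner recordings translate directly into better predictions.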

At least the activities of the motor cortex have a visible output in the form of movement, yielding correlations between neural activity and behaviour from which predictions can be made. But other cognitive processes lack obvious outputs. Take the area that Facebook is interested in: silent, or imagined, speech. It is not certain that the brain’s representation of imagined speech is similar enough to actual (spoken or heard) speech to be used as a reference point. Progress is hampered by another factor: “We have a century’s worth of data on how movement is generated by neural activity,” says BrainGate’s Dr Hochberg dryly. “We know less about animal speech.”

Higher-level functions, such as decision-making, present an even greater challenge. BCI algorithms require a model that explicitly defines the relationship between neural activity and the parameter in question. “The problem begins with defining the parameter itself,” says Dr Schwartz of the University of Pittsburgh. “Exactly what is cognition? How do you write an equation for it?”
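
What such an explicit model looks like is easiest to see in the motor case, where the parameter is well defined. Below is a minimal, synthetic-data sketch, assuming a simple linear mapping from firing rates to two-dimensional cursor velocity fitted by least squares; everything in it (rates, weights, noise levels) is made up for illustration:

```python
import numpy as np

# A toy version of an explicit decoding model: two-dimensional cursor
# velocity as a linear function of firing rates, v = W r, fitted by
# least squares. All numbers are synthetic; real decoders are richer,
# but they share this basic shape: neural activity in, parameter out.
rng = np.random.default_rng(seed=0)
n_neurons, n_trials = 50, 500

W_true = rng.normal(size=(n_neurons, 2))             # pretend ground truth
rates = rng.poisson(lam=20, size=(n_trials, n_neurons)).astype(float)
velocity = rates @ W_true + rng.normal(0, 5, size=(n_trials, 2))

# Fitting is only possible because 'velocity' is directly measurable.
W_fit, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W_fit

residual = ((velocity - decoded) ** 2).sum()
total = ((velocity - velocity.mean(axis=0)) ** 2).sum()
print(f"decoding R^2: {1 - residual / total:.3f}")
```

The fit converges only because there is a measurable column of target values to regress against. For “cognition” there is no equivalent column, which is exactly the missing equation Dr Schwartz describes.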

Such difficulties suggest two things. One is that a set of algorithms for whole-brain activity is a very long way off. Another is that the best route forward for signal processing in a brain-computer interface is likely to be some combination of machine learning and brain plasticity. The trick will be to develop a system in which the two co-operate, not just for the sake of efficiency but also for reasons of ethics.
