Brain Computer Interfaces: Current Research and Future Perspectives
By Fouzia Raza
The increased attention that brain-computer interfaces (BCIs) have received from media outlets, prominent CEOs, and tech companies has left many, if not most, of the general population puzzled about what the technology can actually do. From the first successful recording of human brain activity by Hans Berger in 1924 to the first human use of a surgically implanted BCI to control an artificial limb in 2005, BCI technology has garnered support from researchers in a multitude of disciplines across both academia and industry.
But what exactly is a BCI? To put it simply, a BCI is any computer-based system that collects signals from the brain, analyzes and processes them, and produces commands that are relayed to an output device to achieve a desired outcome (Shih et al., 2012). BCIs are currently used to restore, replace, enhance, and supplement bodily functions such as speech, movement, and sleep, among other activities of daily living.
To better understand the technology, I interviewed Kevin Davis, an MD-PhD candidate at the University of Miami Miller School of Medicine and a researcher in the Neural Interfaces Lab, whose goal is to restore communication and control in paralyzed individuals. Davis and his colleagues have developed a Python-based computer application that uses electrocorticography (ECoG) to actuate generalized end-effector devices, such as a soft robotic glove, for at-home use by a spinal-cord injury patient. Currently, the subject is able to adjust the system over Bluetooth Low Energy using a custom-designed mobile phone application. The team is also working with Emery Brown’s Neuroscience Statistics Research Laboratory at MIT to improve the machine learning algorithms used to decode the subject’s brain signals.
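To make the signal-to-command pipeline concrete, here is a minimal Python sketch of the three stages such a system chains together: collecting a window of neural signals, decoding movement intent, and driving an end effector. Every name in it, from the mock acquisition class to the toy band-power decoder, is a hypothetical illustration under stated assumptions, not the lab’s actual application.

```python
# A minimal, self-contained sketch of the acquire -> decode -> actuate loop
# described above. MockEcogSource, decode_intent, and the toy band-power rule
# are hypothetical and for illustration only; this is not the Neural
# Interfaces Lab's actual application.
import time
import numpy as np

class MockEcogSource:
    """Stands in for an ECoG acquisition device; yields one window of samples per read."""
    def __init__(self, n_channels=8, window_len=256):
        self.n_channels = n_channels
        self.window_len = window_len

    def read_window(self):
        # A real system would read from amplifier hardware; here we return noise.
        return np.random.randn(self.n_channels, self.window_len)

def decode_intent(window, threshold=1.0):
    """Toy decoder: maps mean signal power in the window to a grasp/rest command."""
    power = float(np.mean(window ** 2))
    return "grasp" if power > threshold else "rest"

def send_to_end_effector(command):
    """Placeholder for driving an end effector such as a soft robotic glove."""
    print(f"end-effector command: {command}")

if __name__ == "__main__":
    source = MockEcogSource()
    for _ in range(5):                      # a few iterations of the control loop
        window = source.read_window()       # 1. collect neural signals
        command = decode_intent(window)     # 2. decode movement intent
        send_to_end_effector(command)       # 3. actuate the device
        time.sleep(0.1)
```

In a deployed system each of these stages would be far more involved, but the overall loop structure is the same as the one the definition above describes.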
Their groundbreaking research arguably puts Davis and his colleagues at the forefront of the field. Here are his thoughts on BCI devices as well as the field’s current efforts:
The scientific community has come a long way in developing robust BCI technology used for communication, rehabilitation, leisure activities, recovery of function after injury or disease, and, often, understanding the complexities of the human nervous system. What led you to pursue research in using BCIs to treat movement disorders over other applications?
There are numerous approaches currently used to treat movement disorders, including stem cell therapies, which would ideally serve as the most robust solution in the future. But the reason there is a big push for brain-computer interfaces is their utility. Currently, there are three main methods used to obtain information from the brain for developing BCIs: noninvasive means such as electroencephalography (EEG), electrocorticography (ECoG), and intracortical, or more invasive, measures. And of course, unlike the invasive method that we are researching, noninvasive BCIs have been extensively studied because they don’t require surgery.
I am interested in BCI technology because of the opportunity to restore independence in those who have lost it. The economic and public health impact that BCIs can have on individuals with spinal cord injury, such as decreasing the cost of living, as well as on society as a whole, is tremendous. This is, of course, considering the theoretical idea that these devices can actually restore full independence. I also have a profound interest in computer programming and signal processing, and research in BCI technology has allowed me to find a marriage between my interests in computer technology and medicine.
What sort of skills do you think are essential in developing this kind of technology? Would you place more weight on software development? Mechanical design? Electrical engineering and electrode design? Neuroscience? Clinical experience? Or something else altogether?
If I had to pick just one, I would say electrical engineering and systems design and architecture, mainly because those skills are critical in developing something that is robust, low-powered, and able to distribute power efficiently regardless of the environment. The long answer: you need a team with skills in all of those specialties. You can probably build a rudimentary system with some skill in each of those areas, or perhaps expertise in just one, but in order to build more robust systems, all of those areas are crucial to developing a full end product.
Did you feel like you were missing those skills, given your background in computing?
As I mentioned earlier, I have some software background and have been very interested in neuroscience, having done wet lab neuroscience research. When you pursue an MD-PhD, you have to submit a fellowship grant outlining the areas you want to gain more exposure to. For me, those are signal processing and machine learning. Do I feel like I am missing skills in those areas? Yes. But do I feel like I will improve in those with my research? Absolutely. You enter the program with certain skill sets but with the mindset to learn and gain experience in even more areas.
So yes, there are definitely some things that I might be short on; in fact, when I first joined the program, I was told I couldn’t focus on biomedical engineering since I had never been formally trained in engineering. But the policy recently changed, and the program director for the biomedical engineering department at Miami thought my experience in software development was sufficient for pursuing the things I want to learn.
So this next question concerns your most recent paper. What was the most difficult challenge that you and your research team faced in developing and deploying what is reportedly the first at-home BCI system for patient use?
I can think of two things, and both of them are still ongoing challenges. The first involves ensuring the physical components attached to the wheelchair remain portable. Currently, they are housed in a rudimentary 3D-printed casing, a problem that could be easily solved if we were able to move to a smaller system. It may seem minor, but I think that's one of the biggest challenges in deploying it to the patient.
The other, and more difficult, challenge is performance, given the limited number of channels our device uses to collect neural signals. This is actually why we reached out to a team at MIT, who helped us develop more robust machine learning algorithms and obtain better performance, with about 90% accuracy. But we're always looking to improve that number because, at the end of the day, if the device cannot accurately detect the motion intent of the patient, it's not going to be useful. And that's a particular challenge that we continue to try to address.
On the topic of improving performance: how much of that do you think depends on improving the electrode technology versus the algorithms decoding the signals?
This is a hard question to answer. With our technology, which uses ECoG, it has to be machine learning, simply because ECoG has not been used widely in research. But then I would push back on that a little bit and say the sensor technology is incredibly important as well. As I mentioned earlier, there are three major ways of collecting neural signals: from the surface of the scalp via EEG, from the surface of the brain via ECoG, and from deeper regions of the brain via spikes and local field potentials. A lot of what we are trying to do with ECoG data can easily be done with more invasive measures that collect higher-resolution neural signals. The less invasive your electrode technology is, the more you will have to rely on strong machine learning algorithms.
So what sort of classifiers has your lab used and which have been the most successful in correctly identifying the movement intent of the patient?
One of our lab members, Ian Cajigas, actually focused on this in the earlier stages of the project. Some of the algorithms he looked at included support vector machines, neural networks, and decision trees and bagged trees, of which an online bagged tree classifier performed the best. However, once we deployed the system for at-home use and started collaborating with Emery Brown’s lab at MIT, we chose to implement a two-part decoder consisting of a Hidden Markov Model and a Linear Discriminant Analysis classifier. The combination of the two has been the most robust at not only classifying the patient’s movement intent but also training the algorithm, which becomes a more challenging problem in the home setting, where distractions such as a running TV or someone conversing in the background can distort the training.
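To illustrate how a two-stage decoder of this kind can fit together, here is a hedged Python sketch: an LDA classifier scores each window of features, and a simple two-state HMM forward filter smooths those scores over time so that a few noisy windows do not flip the decoded intent. The features, labels, and transition probabilities below are invented for illustration; this is not the lab’s actual decoder.

```python
# Sketch of an LDA + HMM-style decoder: LDA provides per-window class
# probabilities, and a two-state forward filter with a sticky transition
# matrix smooths them across windows. All data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Fake training data: 200 windows x 16 features, labels 0 = rest, 1 = move.
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Assumed transition matrix: intent states tend to persist between windows.
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
belief = np.array([0.5, 0.5])  # initial state probabilities

def hmm_step(belief, evidence, A):
    """One forward-filtering step: propagate the belief through A, reweight by the evidence."""
    predicted = A.T @ belief
    updated = predicted * evidence
    return updated / updated.sum()

# Stream of new windows: combine per-window LDA evidence with temporal smoothing.
X_stream = rng.normal(size=(10, 16))
for window in X_stream:
    evidence = lda.predict_proba(window.reshape(1, -1))[0]  # per-class scores
    belief = hmm_step(belief, evidence, A)
    intent = "move" if belief[1] > 0.5 else "rest"
    print(f"P(move)={belief[1]:.2f} -> {intent}")
```

The "sticky" transition matrix encodes the assumption that movement intent persists across consecutive windows, which is what makes the smoothed estimate more tolerant of the occasional misclassified window than the raw classifier output.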
What is your perspective on how rapid the development of BCI technology has been these past couple of decades? Why aren’t we seeing every single ALS or Parkinson’s patient with an implanted BCI device? Are we moving too slowly when it comes to treating patients with movement disorders with the technology? And if so, what are the major speed bumps?
I love this question because this is exactly what I was diving into over the last couple of weeks. Papers have been published demonstrating what our lab has developed; in fact, the same system was used by ALS patients to communicate at home back in the early 2010s. So the question is, why isn’t it being adopted in the field?
A couple of researchers have conducted surveys of individuals with movement disorders on the topic of BCI technology, and from what I gathered, one of the biggest challenges is ensuring adoption of these devices once they reach the patient. This means asking ourselves: what is going to make patients keep such technology and actually want it? The key answers are performance and ease of setup. Because our system is invasive, we are able to set it up within five minutes, which is especially important for decreasing the burden on the caretaker. Therefore, to adopt this technology on a wider scale, we need devices that not only increase the independence of the patient but also help the caretaker.
From a research perspective, we need to be able to show that such technology can be successfully used by a vast number of patients, to the point where physicians can prescribe BCI technology upon a diagnosis of ALS, spinal cord injury, stroke that leads to paralysis, or other movement disorders and injuries.
So what is the current process for prescribing a BCI to a patient? Would it come up as a treatment option from the doctor, or is it something that the patient would have to push for, especially since a lot of it is still in the research and development phase?
That’s right - this is very much in the research phase and isn’t presented to patients as a possible treatment. However, once it becomes more accessible, I think it will be pretty much the same as, if not very similar to, how patients obtain a deep brain stimulator (DBS) for treating Parkinson’s disease. Currently, once a neurologist authorizes a patient to receive a DBS, a functional neurosurgeon implants it into the specified region of the brain. For post-op care and follow-up visits, the neurologist usually uses an iPad app to monitor measurements and metrics collected from the device.
For invasive BCI devices, I can imagine the same concept: a neurologist would refer the patient to a functional neurosurgeon, who would implant the device, followed by post-visits with the neurologist. For noninvasive models, the neurologist could directly suggest available options, such as a noninvasive helmet, before modulating the functionality of the end effector. Our lab is actually using an FDA-approved DBS device for our BCI, so you could imagine the process slowly transitioning over in a very similar fashion.
Alright, you know I have to talk about futuristic things whenever we’re on the topic of BCIs. Say it’s 2050 and we have multiple options for intracortical BCI devices - almost as many as the varieties of corn flakes at the supermarket. And perhaps you are a practicing doctor who can prescribe these devices to your patients. How would you even go about doing that? Perhaps with improved electrode technology we might see a large gamut of intracortical BCI devices, some of which would fit one patient better than another because of the type of injury, disease, or residual function in peripheral nerves. Or is it the opposite, and we might see a one-size-fits-all device that can be used for multiple types of neuromuscular disorders and injuries?
I would say the former mainly because a one-size-fits-all model is difficult to implement especially if you are placing all of the different causes of paralysis in the same boat. Take stroke and spinal cord injury, for example. The former directly affects a specific area of the brain while the latter leaves the brain totally unaffected. As a result, you can imagine the system has to differ according to the condition.
Now if you look at the different components of the BCI - the electrodes or device reading the signals, the system processing the recorded neural signals using machine learning, and the end effector - then perhaps the device capturing the neuronal signals could be one-size-fits-all, especially the electrodes interfacing with the region of interest. Its implementation and how it’s placed in the body would nonetheless be subject-specific, of course.
The other major concern for physicians is insurance. In the future, if these devices truly restored independence for the patient, I would be surprised if they weren’t covered, especially considering the amount of money insurance companies would save from such a device. But if they were very expensive, then perhaps cost would ultimately determine whether the patient can afford a device tailored to their needs as opposed to a one-size-fits-all device.
Now there has been exponential growth in the number of startups that aim to bring BCI technology to the general public, especially since Elon Musk’s venture into the field with his startup Neuralink. As an MD-PhD candidate in the field, how do you respond to companies such as Neuralink that aim to commercialize more invasive BCI technology by convincing the general public that it can serve equally as a consumer device and a medical device?
The commercialization of BCI technology actually reminds me of hearing aids. These are really interesting devices because ideally, you could simply pick one off the shelf, but you do need to visit an audiologist to obtain access to the right model for proper use.
I think it’s great that some of these companies are promoting information on what this technology really is and what it can do, regardless of its accuracy. These mission-driven companies can certainly add to the hype, as was the case with machine learning and 3D printed devices. However, if you follow the hype cycle, you often start to realize the limitations after the hype declines, which is something to keep in mind. The promotion of these devices would nonetheless inform individuals who could benefit from BCIs on their potential use in the future and perhaps incite enough curiosity for them to appreciate their benefits.
Now if it provides more benefit to the patient as a commercial device than a medical device then I would support that. If it provides more benefit to the patient as a medical device instead, then I would support that as well -- whichever method will truly help the patient regain their independence.
As far as promoting the use of the technology for the general population goes, you cannot leave out discussions regarding ethics including how this will affect humanity and -- perhaps the real elephant in the room -- how it will impact politics, government, and warfare. That’s a whole hour and a half discussion in and of itself. Nonetheless, I say go for it. Science is a frontier that needs to be discovered but keep the ethics in mind.
That’s actually a good segue to my final question. As with any technology that policymakers have a hard time wrapping their heads around, it becomes extremely difficult to set forth regulatory measures that aim to protect the well-being of citizens, especially when the ethicists evaluating the technology are writing almost exclusively for other ethicists. What sort of ethical concerns have you and your colleagues raised from working with patients who now have years of their brain signal data stored on a cloud somewhere?
Our BCI data is stored on our university-supplied Box drive. But the question of how to ensure that data is HIPAA-compliant certainly arises. Ideally, in a commercial device, you wouldn’t need external storage and could store data locally to prevent it from being stolen, which is unfortunately quite common.
There’s no natural scientific method for determining what is ethical and unethical - it’s a human-defined concept, which makes arriving at a consensus quite difficult. As our views on ethics shift, however, we need to keep our eyes open for signs that something needs to be addressed as we push forward the boundary of what we consider ethical. This is especially important for researchers: it’s crucial to always obtain the perspective of the patients, because they are the ones who will be affected by the technology. So even though we are the ones driving this technology forward, the patients play a key role in guiding it.
About the Author
Fouzia Raza is a sophomore at Harvard College concentrating in Bioengineering.