Brain-Computer Interface and Convolutional Neural Networks

Can the mind🧠 connect directly with artificial intelligence, robots🤖, and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?

What is a Brain-Computer Interface (BCI)?

A BCI is a device that enables its users to interact with computers by means of brain activity alone. This activity is generally measured by ElectroEncephaloGraphy (EEG).

It has been known since 1875 that the brain produces electrical signals that can be measured by placing electrodes around a person's head. By analyzing the electrical impulses picked up by these electrodes, one can record the brain waves. This technique is called ElectroEncephaloGraphy.

The output of the EEG can be displayed on a computer screen, which the subject watches. With practice and feedback, the person learns to move the cursor by thought alone.

[Image: bci2.gif]

With BCI, the mind can communicate silently with a smartphone or other devices. Recent advances in neuroprosthetics are linking the human nervous system to computers, providing unprecedented control of artificial limbs and restoring lost sensory function.

BCI establishes two-way communication between the brain and the machine: one direction is the brain-computer interface proper, and the other is called the computer-brain interface (CBI). BCI research hopes to create new communication channels for disabled or elderly persons using their brain signals.

BCI research has also created hopes of relieving fear, disturbing thoughts and feelings, and bad dreams in ordinary people. With AI, a BCI could send signals to the brain that combine the rapport-building skills of a human caregiver with feelings of love, care, protection, forgiveness, and safety. By sending signals of a welcoming expression and posture, and being attentive, loving, and responsive, BCI can help many patients and people who are suffering from mental disorders. AI-based BCI has tremendous scope to reduce human suffering.

With the advancement of new mobile bio-monitoring devices, earphones, neuroprosthetics, and wireless wearable intelligent sensors, it is becoming possible to monitor the activity of neurons in the human brain.

Small motion sensors will pick up the brain's activity and sync with your mobile devices. Noninvasive devices are used to measure brain activity, people's emotions, their movements, their interactions with others, and their biometric changes under different conditions.

[Image: bci1.gif]

The role of AI is to help the sick and the physically and mentally challenged. With BCI, intelligent sensors capture the brain signals, and compassionate AI interprets them and takes suitable, kind action to relieve the pain.

Types of Brain Computer Interface

Brain-computer interfaces can be classified into three main groups: non-invasive, semi-invasive, and invasive.

In invasive techniques, special devices have to be used to capture the brain signals; these devices are inserted directly into the human brain through critical surgery.

In semi-invasive techniques, devices are inserted inside the skull but rest on the surface of the brain. In general, non-invasive devices are considered the safest and lowest-cost type. However, they can only capture "weaker" brain signals because of the obstruction of the skull; detection is achieved through electrodes placed on the scalp.

Brain Signal Acquiring Instruments:

Functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), and scalp electroencephalography (EEG) are commonly used for non-invasive BCI studies.

[Image: bci5.jpg]

The brain signal:

Brain signals are generated by differences in electric potential carried by ions across the membrane of each neuron. There is a plethora of signals that can be used for BCI.

[Image: bci6.jpg]

These signals are divided into two classes: spikes and field potentials. Although the paths the signals take are insulated by a substance called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean, and use them to direct a device of some kind.

Convolutional Neural Networks and BCI:

A CNN is a type of AI neural network inspired by the visual cortex. It can learn the appropriate features from the input data automatically by optimizing the weight parameters of each filter through forward and backward propagation so as to minimize classification error.

[Image: bci3.gif]

The human auditory cortex is arranged in a hierarchical organization, similar to the visual cortex. In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. Earlier regions, such as the primary visual cortex, react to simple features like color or orientation; later stages enable more complex tasks such as object recognition.

One advantage of deep learning techniques is that they require minimal pre-processing, since optimal settings are learned automatically. In CNNs, feature extraction and classification are integrated into a single structure and optimized jointly.
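To make the idea concrete, here is a minimal NumPy sketch of the building blocks a CNN applies to a one-dimensional EEG window: convolution with a filter, a nonlinearity, and pooling. The filter here is hand-fixed for illustration (tuned to a 10 Hz alpha rhythm); in a real CNN its weights would be learned by back-propagation, and a deep-learning framework would be used instead. The data is synthetic.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel along the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Nonlinearity applied after the convolution."""
    return np.maximum(0.0, x)

# Synthetic 1-second EEG window: 10 Hz alpha rhythm plus noise (256 Hz sampling).
rng = np.random.default_rng(0)
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)

# One convolutional filter. In a trained CNN these weights are learned;
# here it is fixed to a 10 Hz template purely for illustration.
kernel = np.sin(2 * np.pi * 10 * np.arange(32) / fs)

feature_map = relu(conv1d(eeg, kernel))           # convolution + nonlinearity
pooled = feature_map.reshape(-1, 25).max(axis=1)  # max-pooling over windows of 25
```

A classifier layer would then map `pooled` to class probabilities; training adjusts the kernel and classifier weights together, which is what "feature extraction and classification in a single structure" means.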

However, one of the biggest issues in BCI research is the non-stationarity of brain signals. This makes it difficult for a classifier to find reliable patterns in the signals, resulting in poor classification performance.

Starting with BCI:

For direct “brain” interfaces, you need a set of EEG electrodes, and for peripheral nervous system interfaces, you need EMG electrodes.

Once you get that data into your computer, you'll need to do some signal conditioning: filtering for the frequency band of the signal you're looking for, and filtering out environmental noise.
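As a rough sketch of that conditioning step, the snippet below band-passes a signal by zeroing FFT bins outside the band of interest. This is a crude illustration, not production practice: real pipelines typically use proper FIR/IIR filters (e.g. a Butterworth band-pass plus a mains notch filter); the sampling rate and band edges here are made up for the example.

```python
import numpy as np

def bandpass_fft(x, fs, low, high):
    """Crude band-pass: zero out frequency bins outside [low, high] Hz.
    Real EEG pipelines usually use an FIR/IIR filter instead."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

# Example: keep the 8-12 Hz alpha band, suppressing 50 Hz mains interference.
fs = 256                      # assumed sampling rate (Hz)
t = np.arange(2 * fs) / fs    # 2 seconds of signal
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
alpha = bandpass_fft(raw, fs, 8.0, 12.0)
```

After filtering, the 10 Hz component survives while the 50 Hz interference is removed, leaving a cleaner signal for the classifier.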

After that, you need to think about what you're actually trying to have the system do.

Do you need it to detect a particular change in your EEG patterns when you think about any color?

Or do you need it to detect a change in your EMG when you’re moving a finger? What about the computer? Should it run a program? Type some text?

Think about how you’re going to label your data. How will the computer know initially that a particular signal is meaningful?

This is supervised learning. Choose your preferred classification method, get lots of labeled data, and train your system. You can use methods like cross-validation to check if your trained models are doing what you think they’re supposed to.
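The labeling-and-training loop above can be sketched in a few lines. This toy example uses a nearest-centroid classifier and hand-rolled k-fold cross-validation on made-up feature vectors; in practice you would use a proper library classifier and real labeled EEG features, so treat everything here as a stand-in.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Classify each test sample by the label of the closest class centroid."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def k_fold_accuracy(X, y, k=5, seed=0):
    """Shuffle, split into k folds, train on k-1 folds, test on the held-out one."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = nearest_centroid_predict(X[train], y[train], X[test])
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# Toy "labeled EEG features": two well-separated Gaussian clusters,
# standing in for features extracted from two mental states.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = k_fold_accuracy(X, y, k=5)
```

Cross-validation like this tells you whether the model generalizes to data it was not trained on, which is exactly the check the paragraph above recommends before trusting a trained BCI classifier.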

After all of this, you might have something that looks like a brain-computer interface.

Where can I find datasets for machine learning on brain-computer interfaces?

You can find several publicly available EEG datasets on the following websites:

Free EEG database (publicly available ERP data)

Berlin Brain-Computer Interface

Outro:

Recent advances in artificial intelligence and reinforcement learning with neural interfacing technology and the application of various signal processing methodologies have enabled us to better understand and then utilize brain activity for interacting with computers and other devices.

For more information:

Thanks for reading😀

Happy Learning

-JHA
