How I Built a Car I Could Control with my Brain using my Muse Headband as a Brain-Computer Interface.

By Shifra Khan, TKS Innovator

Spoiler – this is the end result.

This is Ray Kurzweil. ⤵️

Or as media outlets like to call him: “Google’s AI guru.”

He’s an insanely smart person who’s made some scarily accurate predictions in the past.

In the ’90s, he made 147 predictions about the year 2009. When he reviewed them in 2010, 86% had come true. (He gave himself a “B” grade.)

He thinks we’re on our way to becoming gods.

One of his current predictions is that we’re going to have brain-chips by the year 2030, and our brains will be connected to the cloud. He puts it more eloquently than I do:

“We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human — we transcend our limitations.”

Ray Kurzweil

So, we’re going to have “synthetic neocortices”?

Look, I know what you’re thinking: Shifra, our world would fall apart if we uploaded our brains to a computer. Have you even SEEN Black Mirror/The Matrix/Altered Carbon?

I’ve seen all three.

Seriously though, this technology is more cool than it is terrifying. Once it becomes mainstream, it will be the single most significant paradigm shift in our understanding of humanity since… forever.

Whether you like it or not, there’s some super insane stuff happening in Human-Brain Interfaces RIGHT NOW:

  • Typing with your brain. Instead of having to click away at a keyboard, what if you could just think, and the words would appear on your computer screen? Sound cool?
    • Engineers at Facebook are working on letting you do this right now. They’re also using the same technology to allow people to hear with their skin.
  • Brain-to-brain communication. This technology would mark the biggest shift in our ability to communicate since the Cognitive Revolution 70,000 years ago. And it’s actually possible…
    • A team at the University of Washington enabled subjects to play a game of “20 questions” using only their brainwaves and phosphenes: spots of light produced by stimulating the visual system with something other than light (the UW team used transcranial magnetic stimulation). In simpler terms, it’s like the colours you see when you press on your eyeball.
  • Brain chips for soldiers? That is legit a Black Mirror episode, but DARPA (the mad-science wing of the US military) recently invested $65 million in developing this technology.

Do you hear that? 🤔

This tech is going to shape your future — pretty much whether you like it or not.

But what is this tech I speak of? Brain-computer interfaces (BCI). 🧠

At a very high level, a BCI system is a direct communication channel between a brain and some external device.

You can break a brain-computer interface system into three parts:

  1. Signal Acquisition: capturing electrical signals from the brain, which are then amplified and digitized.
  2. Signal Processing: analyzing the acquired signals to extract control signals.
  3. Data Manipulation: mapping the control signals onto the output device (e.g., a laptop screen).

A BCI extracts your brain signals, sends them over to a computer to try and make sense of your brain data, and then enables you to do all sorts of cool stuff.
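
To make that pipeline concrete, here’s a bare-bones sketch of the loop in Python. Everything in it is a stand-in (random numbers instead of real electrodes, a print instead of a real output device); it just shows how the three parts fit together.

```python
import random
import time

def acquire_signal(n_samples=256):
    """1. Signal acquisition: grab a chunk of digitized samples."""
    # Stand-in for reading from a real headset's amplified, digitized stream.
    return [random.gauss(0, 1) for _ in range(n_samples)]

def process_signal(samples):
    """2. Signal processing: reduce raw samples to a control signal."""
    # Stand-in for a real focus detector (filtering, FFT, thresholding).
    energy = sum(s * s for s in samples) / len(samples)
    return energy > 1.0

def manipulate_output(is_focused):
    """3. Data manipulation: map the control signal onto the output device."""
    print("FORWARD" if is_focused else "STOP")

while True:
    manipulate_output(process_signal(acquire_signal()))
    time.sleep(0.5)
```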

If this tech is going to change life — and humanity — as we know it, it only makes sense to me to do one thing: get my hands dirty and start building something.

Now, I’ve built a simple BCI-system before, but that system was built around artifacts. An artifact is anything that isn’t of cerebral origin, so it’s not really brain activity. You can think of it as spikes in the signal caused by things like blinking, chewing, or moving your head around. It’s just noise.

Raw EEG signal with eye-blink artifact vs processed EEG signal

This time, I built a system that relies solely on brain activity.

I used the Muse headband, which uses a bunch of electrodes to pick up on brain activity and sends the data over to my Mac. From there, it was just a matter of programming an algorithm that could detect whether I was focused or not.
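
I won’t pretend my exact code is reproduced here, but if you want to try the data-collection step yourself, the open-source muselsl and pylsl Python libraries are one common way to get raw Muse EEG onto your laptop. A rough sketch:

```python
# Sketch: pull raw EEG from a Muse headband over Lab Streaming Layer (LSL).
# Assumes `muselsl stream` is already running in another terminal.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'EEG', timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found. Is the Muse streaming?")

inlet = StreamInlet(streams[0])

samples = []
for _ in range(256 * 5):                     # the Muse streams EEG at 256 Hz, so ~5 seconds
    sample, timestamp = inlet.pull_sample()   # one value per Muse channel (TP9, AF7, AF8, TP10, AUX)
    samples.append(sample)

print(f"Collected {len(samples)} samples of {len(samples[0])} channels each")
```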

Simple enough? Kind of 🤷🏽‍♀️

EEG signals are complicated electrical waveforms, and when it comes to quantifying them, raw EEG is a pain to work with.

The FFT

Applying the Fast Fourier Transform (FFT) simplifies this process. Basically, it shows which frequencies make up a signal, and “how much” of each frequency is present.

What does this mean?

The best explanation I’ve found is: Given a smoothie, it finds a recipe.

At a high level, it helps break down our overall brain activity into different frequency bands, all of which possess unique features. This is possible because all waveforms can be re-written as the sum of sinusoidal functions.

Conceptualizing the Discrete Fourier Transform

After this segmentation, we can extract whichever frequency band we find most interesting or provides us with the most information.

A Fast Fourier Transform converts a signal from the time domain to the frequency domain. Basically, any time-domain signal can be broken down into a sum of sine waves, and looking at it that way lets us visualize features that would otherwise be hidden.
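
Here’s a tiny NumPy demo of the smoothie-to-recipe idea (a made-up two-ingredient signal, not real EEG): mix two sine waves together, run the FFT, and the two ingredient frequencies fall right out.

```python
import numpy as np

fs = 256                                  # sampling rate in Hz (same as the Muse)
t = np.arange(0, 2, 1 / fs)               # 2 seconds of time

# The "smoothie": a 10 Hz wave plus a weaker 25 Hz wave.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

spectrum = np.abs(np.fft.rfft(signal))        # "how much" of each frequency is present
freqs = np.fft.rfftfreq(len(signal), 1 / fs)  # which frequency each bin corresponds to

# The two tallest peaks sit at the ingredient frequencies.
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(top_two)   # -> [10. 25.]
```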

Here’s an example:

The top blue trace shows an EEG reading for a person who’s awake.

The bottom red trace displays an EEG reading for a person who has been sedated.

Clearly, the traces are different, but we can’t really quantify how different they are.

Here’s the same data after it’s been filtered in the delta band and gone through FFT:

The frequency spectrum (right panel) shows that both traces peak at 0.2 Hz, but that the peak for the sedated state is twice as prominent as the one for the awake state.

This means we can now tell that the anesthetized brain shows more low-frequency activity.
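
You can do the same kind of comparison in a few lines: estimate each trace’s power spectrum and add up the power in the band you care about. The signals below are made-up stand-ins (I don’t have the data behind that figure), but the recipe is the same.

```python
import numpy as np
from scipy.signal import welch

fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Stand-ins for the two traces: the "sedated" one has a stronger slow oscillation.
awake   = 1.0 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 1, t.size)
sedated = 2.0 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 1, t.size)

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs * 8)   # power spectral density
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()                     # total power inside the band

print("awake low-frequency power:  ", band_power(awake, 0.1, 4))
print("sedated low-frequency power:", band_power(sedated, 0.1, 4))   # noticeably larger
```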

For my BCI-system, I just needed to be able to detect which waves would showcase whether I was focusing or not.

After I accomplished that, I’d be able to optimize my outputs and send commands (in this case: making the car move).

But HOW do you enable a brain-computer interface to detect concentration?

The answer: figuring out your concentration threshold.

Now, I have a well-documented history of my struggle with finding optimal thresholds, but this process was surprisingly easy.

Different types of brain waves correlate to different types of brain activity.

Concentration is usually seen as increased activity in the higher, faster EEG frequencies: the beta waves (13–30 Hz). I estimated that concentrating would show up in my data as increased EEG energy above 22 Hz.

All that was left was filtering my EEG data to assess the intensity of activity above 22 Hz. That would let me pick a threshold to measure the EEG intensity against.
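
In code, “filter above 22 Hz and assess the intensity” can be as simple as a high-pass filter plus a windowed RMS. This is a sketch with settings I’d consider reasonable, not a record of my exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256   # Muse sampling rate, Hz

def focus_intensity(eeg, cutoff=22.0, window_s=1.0):
    """High-pass the signal above `cutoff` Hz and return its RMS in each window."""
    b, a = butter(4, cutoff / (fs / 2), btype='highpass')
    fast = filtfilt(b, a, eeg)                      # keep only the fast (beta-ish) activity
    win = int(window_s * fs)
    n_windows = len(fast) // win
    chunks = fast[: n_windows * win].reshape(n_windows, win)
    return np.sqrt((chunks ** 2).mean(axis=1))      # one intensity value per window

# Picking the threshold: record a relaxed baseline, then set the bar a bit above it.
# (Random noise stands in for the baseline recording; the margin is my own tuning knob.)
baseline = focus_intensity(np.random.default_rng(1).normal(0, 1, fs * 30))
THRESHOLD = baseline.mean() + 2 * baseline.std()
```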

If my recorded activity was stronger than my specified threshold, the system would detect that I was concentrating and manipulate an output. In this case: move the car forward.

On the flip side, if my recorded activity was weaker than the threshold, the system would detect that I was not concentrating and stop the car accordingly.
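
Wired together, the decision itself is just a comparison in a loop. The send_to_car function below is a pure placeholder; how you actually drive the motors depends on your hardware, so treat this as a sketch of the logic, not of my car.

```python
def send_to_car(command):
    # Placeholder: replace with whatever actually drives your car
    # (a serial write to a microcontroller, a Bluetooth message, etc.).
    print(command)

def control_step(intensity, threshold):
    """Compare the latest focus intensity against the threshold and act on it."""
    if intensity > threshold:
        send_to_car("FORWARD")   # concentrating: drive
    else:
        send_to_car("STOP")      # not concentrating: stop

# Usage idea, pairing it with the intensity sketch above:
# for value in focus_intensity(latest_eeg_chunk):
#     control_step(value, THRESHOLD)
```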

I also closed my eyes briefly when I started and stopped concentrating, generating alpha waves that would serve as markers in my data.

In the end, I could look for my activity between the two alpha-wave spikes, which would be the window during which I was concentrating.
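
Finding that window afterwards is another small filtering job: track alpha-band (8–13 Hz) power over time and look for its two biggest spikes. Again, a sketch rather than my exact code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256

def alpha_power_per_second(eeg):
    """Band-pass 8-13 Hz (alpha) and return the mean power in one-second windows."""
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype='bandpass')
    alpha = filtfilt(b, a, eeg)
    n = len(alpha) // fs
    return (alpha[: n * fs].reshape(n, fs) ** 2).mean(axis=1)

def concentration_window(eeg):
    """Return (start_second, end_second) of the stretch between the two biggest alpha spikes."""
    power = alpha_power_per_second(eeg)
    first, second = sorted(np.argsort(power)[-2:])   # seconds where alpha spiked the most
    return first + 1, second                         # the window between the spikes
```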

Concentration or artifacts?

It would be fair to say that by closing my eyes, I was just making my brain activity exceed a noise threshold rather than a concentration threshold. That would be a pretty legitimate assumption to make.

Thankfully, EEG Hacker runs an awesome blog that answers this question and proves otherwise. (Seriously, this dude is so cool. He even measured his concentration while going through his morning routine — and found that he’s more concentrated when staring at birds as opposed to surfing the internet.)

Anyways, here’s a demo (ft. my dad) 😀

And I get it. I haven’t built some sort of nanobot you can inject in your brain that also brings you closer to being God-like (yet), but I did manage to move a piece of plastic with my mind. That would’ve been impossible years ago.

Hell, most of us still think it’s impossible.

Here’s how I justify my endeavours: I truly, truly hope Ray Kurzweil is right, and by 2030 we’ve expanded our neocortex and become closer to God. But here’s the thing: we don’t understand shit about the brain. And the only time this technology is going to become mainstream is when we do.

BCI-systems like these are what keep me on my journey to learn more about the brain, meet smart people, and eventually impact billions. It’s what has me excited to wake up in the morning.

shifrakhancah@gmail.com

Author: Shifra Khan

16 y/o. BCI developer. Nanotech enthusiast. On a mission to solve some of humanity’s biggest problems :)