
Scientists invent machine that can (sort of) read your mind

The human brain is still brimming with mystery, but powerful instruments like functional MRI (fMRI) are beginning to offer a glimpse at how things work up there. Neuroscientists have now devised a way to use fMRI to read a person's mind… sort of. The study, led by researchers Brice Kuhl and Hongmi Lee of the University of Oregon, used an AI program that matched brain activity to a set of variables in order to recreate the faces study participants were looking at. It's not perfect, but it's a big step forward.

The process of recreating faces from brain activity started with a training phase. The team showed several hundred faces to study participants while they were in an fMRI scanner. The program had access to real-time fMRI data from the machine, as well as a set of 300 numbers describing each face, covering everything from skin tone to eye position. fMRI detects changes in blood flow around the brain, which serve as a proxy for neural activity; the program uses this data to learn how a particular brain reacts to known stimuli.

With a few hundred examples incorporated into its algorithm, the AI was put to the test. Participants were again shown a face, but this time the program knew nothing about the numbers describing it. The only thing it had to go on was the fMRI data describing brain activity as the person viewed the face. From this, the AI attempted to reconstruct the face. Here's what it managed.
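Continuing the sketch above, the test phase is just the learned mapping applied to brain activity the model has never seen. This self-contained toy (all sizes and the ridge penalty are assumptions) fits on training faces and then predicts the 300 numbers for a held-out face from its brain response alone:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: voxel activity and 300-number face descriptions,
# linked by a hidden linear relationship plus noise.
n_train, n_voxels, n_features = 300, 200, 300
true_map = rng.normal(size=(n_voxels, n_features))
train_activity = rng.normal(size=(n_train, n_voxels))
train_features = train_activity @ true_map + 0.1 * rng.normal(size=(n_train, n_features))

# Training phase: fit the voxel-to-feature map (ridge regression).
lam = 10.0
weights = np.linalg.solve(
    train_activity.T @ train_activity + lam * np.eye(n_voxels),
    train_activity.T @ train_features,
)

# Test phase: a new face the model never saw. Only the brain response is known.
test_activity = rng.normal(size=(1, n_voxels))
test_features = test_activity @ true_map  # ground truth, hidden from the model

# Reconstruct the face's 300 numbers from brain activity alone.
guess = test_activity @ weights
corr = np.corrcoef(guess.ravel(), test_features.ravel())[0, 1]
```

In the toy setup the predicted numbers correlate strongly with the hidden ones; with real, noisy fMRI data the match is much rougher, which is why the reconstructions below look the way they do.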


So, these aren't the best guesses, but they aren't awful either. The two sets at far right are the worst guesses; the rest are the best. The top row of images is what the study participant actually saw, and the bottom two rows are guesses based on two different areas of the brain. OTC is the occipitotemporal cortex, which processes visual input. ANG is the angular gyrus, a region that shows high activity when we experience vivid memories.

Kuhl and Lee showed the reconstructed faces to a separate group of people and asked basic questions about gender, emotion, and skin tone. Respondents guessed correctly at a rate higher than random chance, indicating the AI renderings do preserve useful information about the original image.
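"Higher than random chance" is usually established with a significance test. The article doesn't give the study's numbers, so here's a hedged illustration with made-up figures: if 60 of 100 viewers correctly pick, say, the gender from a reconstruction when guessing would succeed 50% of the time, an exact one-sided binomial test tells you how surprising that is.

```python
from math import comb

# Hypothetical forced-choice results (not from the study): 60 of 100 viewers
# answer correctly; pure guessing would be right 50% of the time.
n, correct, p_chance = 100, 60, 0.5

# One-sided binomial test: probability of getting >= 60 correct by luck alone.
p_value = sum(comb(n, k) * p_chance**k * (1 - p_chance)**(n - k)
              for k in range(correct, n + 1))
```

A p-value below the usual 0.05 threshold would support the claim that the reconstructions carry real information about the original faces.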

Taking things a step further, the team wanted to see what would happen if the program had only memory to work with instead of live visual data. Again, participants in the scanner were shown a face the AI could not see, but this time they were asked to think about the face after it had been hidden. Based only on that memory, the AI constructed a version of the face from the fMRI data. You can see those guesses at right, and they're not as good. Most of the variables are right, but enough are wrong that we don't perceive them as the same faces.

This is being presented as a mind-reading program (which should have been named Xavier, by the way), but it's mostly a tool for better understanding how the brain operates. Researchers want to know what certain brain activity means, and this research improves that understanding. It may eventually be possible to pull genuinely useful information out of the brain this way, but the team thinks that would currently require several days of continuous training in the scanner, and finding willing participants for that seems unlikely.
