Is Mind-Reading the Future of BCI Technology?

Sai Mannam

What if computers could decode what people think and dream? Recent studies and advances in artificial intelligence suggest that computers may soon be able to do just that. In a 2014 study, Dr. Rajesh Rao, a neuroscientist at the University of Washington, and Dr. Jeff Ojemann, a neurosurgeon at the University of Washington Medical Center, asked seven people with severe epilepsy to watch images on a screen after electrodes had been surgically implanted in their temporal lobes (the region of the brain that coordinates sensory input and recognition). Patients with epilepsy were recruited because they already had electrodes implanted so that doctors could pinpoint where their seizures originated. The electrodes were connected to a computer program that processed and analyzed brain signals roughly 1,000 times per second. An algorithm allowed the computer to distinguish the brain signals evoked when someone viewed a house from those evoked when they viewed a face. After training on the signals from two-thirds of the pictures, the computer predicted what each person saw in the final third with 96 percent accuracy, deciphering each image within only 20 milliseconds.
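The study's actual decoding pipeline is not public here, but its train-on-two-thirds, test-on-the-rest protocol can be illustrated with a toy sketch. Everything below is invented for illustration: synthetic feature vectors stand in for electrode recordings of "face" and "house" trials, and a simple nearest-centroid decoder (not the researchers' algorithm) labels the held-out trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for electrode recordings: each trial is a feature
# vector, and "face" vs. "house" trials differ slightly in their means.
def make_trials(n, offset):
    return rng.normal(offset, 1.0, size=(n, 16))

faces = make_trials(60, 0.8)
houses = make_trials(60, -0.8)

X = np.vstack([faces, houses])
y = np.array([1] * 60 + [0] * 60)  # 1 = face, 0 = house

# Train on two-thirds of the trials, test on the final third,
# mirroring the study's protocol.
idx = rng.permutation(len(X))
split = 2 * len(X) // 3
train, test = idx[:split], idx[split:]

# Nearest-centroid decoder: label a trial by whichever class mean
# (computed from training trials only) it lies closer to.
mu_face = X[train][y[train] == 1].mean(axis=0)
mu_house = X[train][y[train] == 0].mean(axis=0)

def predict(trial):
    return int(np.linalg.norm(trial - mu_face) < np.linalg.norm(trial - mu_house))

preds = np.array([predict(t) for t in X[test]])
accuracy = (preds == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the decoder scores near-perfectly; real intracranial recordings are far noisier, which is what makes the study's 96 percent figure notable.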

Entrepreneur Outlook

As scientists make breakthroughs in thought identification by artificially intelligent machines, entrepreneurs are investing in the field. In 2016, Elon Musk founded Neuralink, a neurotechnology company reported to be developing brain-computer interface (BCI) technology that can be safely implanted in humans. The company's short-term goal is to facilitate the treatment of patients suffering from debilitating brain diseases.

Facebook founder and CEO Mark Zuckerberg is also foraying into brain-computer technology, as he described in a public discussion at Harvard on the future of technology in society. He claims to be working toward an augmented reality in which people will have enhanced access to information using only their thoughts. Zuckerberg sees applications ranging from driving directions superimposed on a driver's field of vision to AR modeling that shows what furniture would look like in your apartment.

These ideas may sound like a fantasy of the distant future, but the integration of artificial intelligence into human minds seems closer than ever to becoming reality. Researchers at the University of California, San Francisco recently developed an implantable device that can translate brain activity into strikingly accurate synthetic speech. This technology aims to provide an artificial voice for individuals who cannot speak as a result of conditions like ALS. Not only can the device translate thoughts into audible speech, but it can also imitate the person's manner of speaking by simulating a virtual vocal tract.

BCI Signal Detection

There are also other kinds of brain activity that can be detected and recorded using BCI technology. One special type of signal used by BCI researchers is the event-related potential, specifically the P300 signal. A P300 is elicited roughly 300 milliseconds after something out of the ordinary is noticed, such as finding the pencil you always use in your pencil pouch or spotting the term you were searching for in a glossary. The P300 signal is unique to every individual, so researchers need training data (previous recordings that contain definite P300s) specific to the person they are analyzing. These person-specific P300 signals are used to calibrate the computer, which can then detect P300s outside of the control condition, much as a polygraph is calibrated to detect lies.
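One common way such calibration works (a simplified sketch, not any particular lab's method) is to average a person's labeled target epochs into a P300 template, then score new epochs by how well they correlate with that template. The signal shapes, amplitudes, and sample rate below are all synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                       # assumed sample rate (Hz)
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch after each stimulus

# Model a P300 as a positive deflection peaking ~300 ms post-stimulus.
p300 = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def epoch(has_p300):
    noise = rng.normal(0, 1.0, t.size)
    return noise + (2.0 * p300 if has_p300 else 0.0)

# Calibration: averaging labeled target epochs cancels uncorrelated
# noise and leaves the person-specific P300 template.
template = np.mean([epoch(True) for _ in range(40)], axis=0)
template -= template.mean()

def score(x):
    # Normalized correlation between a new epoch and the template.
    x = x - x.mean()
    return float(np.dot(x, template) / (np.linalg.norm(x) * np.linalg.norm(template)))

# Unseen epochs: targets should correlate with the template
# far more strongly than non-targets.
target_scores = [score(epoch(True)) for _ in range(30)]
nontarget_scores = [score(epoch(False)) for _ in range(30)]
print(np.mean(target_scores) > np.mean(nontarget_scores))  # True
```

Because the template is built from that individual's own recordings, the same detector would perform poorly on anyone else, which is exactly why P300 systems need person-specific training data.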

As exciting as new advancements in BCI technology may seem, skeptics worry about its potential downsides. For example, P300 signals could be used to derive information about what month an individual was born or which bank they use without their consent, a violation of computing ethics. The ethical implications of this technology have not been fully explored, and it is uncertain where lines will be drawn legally if people begin to use BCI for malicious purposes.

Fortunately, this invasion of privacy is impossible right now due to two major limitations of P300 technology. First, commercial devices use dry electrodes, which require direct contact with the individual's skin to produce readable signals; secretly reading another person's mind would therefore be a difficult feat. Second, the P300-detecting electrodes have a low signal-to-noise ratio, making it difficult to separate true P300s from other environmental signals that cannot be filtered out. BCI devices currently on the market have not yet overcome these challenges.
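The low signal-to-noise ratio is also why P300 systems need many repeated trials: averaging N recordings of the same evoked response shrinks uncorrelated noise by roughly the square root of N. This standard trade-off (not specific to any device mentioned above) can be demonstrated with synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 200
# A sine wave stands in for the true evoked response.
signal = np.sin(np.linspace(0, 2 * np.pi, n_samples))

def snr(n_trials):
    # Average n_trials noisy recordings of the same response,
    # then compare signal power to the residual noise power.
    trials = signal + rng.normal(0, 5.0, size=(n_trials, n_samples))
    avg = trials.mean(axis=0)
    noise = avg - signal
    return float(np.sqrt(np.mean(signal**2) / np.mean(noise**2)))

for n in (1, 16, 64):
    print(n, round(snr(n), 2))
```

A single trial here is buried in noise, while averaging dozens of trials recovers a usable signal, which is why covertly extracting a P300 from one stray glance is not currently feasible.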

Buyer Beware

Despite the current limitations of BCI, cruder forms of "mind-reading" technology are already being used on the public to influence our decisions. The Facebook-Cambridge Analytica scandal is one example of corporate misuse of consumer data to manipulate people. It was revealed in early 2018 that Cambridge Analytica, a British political consulting firm, had targeted political ads at individual users based on psychological profiles generated from their Facebook data. Personal information from over 50 million profiles was used without the users' consent. Unethical privacy breaches by corporations have accelerated as investors funnel large sums of money into private research. As mentioned previously, Elon Musk and Mark Zuckerberg are already investing in BCIs; given Facebook's involvement in privacy scandals and the distinctive secrecy of Musk's businesses, this may be cause for concern.

Also alarming is the lack of established ethical guidelines for corporate investment in artificial intelligence at academic institutions. Stephen Schwarzman, a business mogul with a contentious political background, recently donated $350 million toward the new Schwarzman College of Computing at MIT. Many MIT faculty, students, and alumni have opposed accepting the donation and naming the college after Schwarzman, as the terms of the donation are unknown to the public and could include research contracts with his private equity firm, the Blackstone Group. The legal implications of artificial intelligence, including BCI mind-reading technology, have not yet been fully sorted out, leading some researchers to urge caution and "neuromodesty": approaching neurological developments with ethics in mind.

Future Implications

With continuing advances in artificial intelligence and the decoding of thoughts, there are growing opportunities to improve society through the treatment of neurological disease. However, these technologies could also be used for unethical purposes, such as harvesting thoughts the way corporations now harvest personal data. Ultimately, the future implications of BCI technology are hard to predict, as too little is known about how it will be used. According to Stephen J. Morse, the Associate Director of the Center for Neuroscience and Society at Penn Law, "As more data accumulate, the ethical issues specific to DBS (deep brain stimulation) will emerge more clearly."


  1. Ghose, T. (2016, January 29). Mind-Reading Computer Instantly Decodes People's Thoughts. Retrieved from

  2. Marsh, S. (2018, January 1). Neurotechnology, Elon Musk and the goal of human enhancement. Retrieved from

  3. Cohen, N. (2019, March 11). Zuckerberg Wants Facebook to Build a Mind-Reading Machine. Retrieved from

  4. Nelson, B. (2019, April 25). 'Mind-reading' device can translate your brain activity into audible sentences. Retrieved from

  5. Klein, E., Pratt, K., & Oregon Health & Science University. (2019, May 14). Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology. Retrieved from

  6. Mind-reading technology was once part of a dystopian future. Now it exists. (2018, August 20). Retrieved from

  7. Greenfield, P. (2018, March 25). The Cambridge Analytica files: The story so far. Retrieved from

  8. Morse, S. (2012, January 1). New Therapies, Old Problems, or, A Plea for Neuromodesty. Retrieved from

  9. Cho, S., & Cho, S. (2019, February 20). Schwarzman gift to MIT draws criticism. Retrieved from