Researchers claim fMRI can probe the workings of the brain as never before—revealing everything from when you tell a lie (read: interrogations) to how you fall in love (read: divorce court)—while critics counter that reports of digital mind readers are premature, and we should think twice before using fMRI in our public and private lives.
Like it or not, the new brain science is here, and the world inside our heads is never again going to be completely private.
Popular Mechanics | Nov. 2007 issue
By Jeff Wise
Frank Tong is peering into another man’s mind. The Vanderbilt University neuroscientist is sitting in front of a bank of monitors inside a dimly lit room. On the other side of a plate-glass window, an undergraduate lies immobile, his legs protruding from a functional magnetic resonance imaging (fMRI) scanner. A display unit above the young man’s eyes flashes a picture of a pigeon or a penguin—at this point Tong doesn’t know which. A low roaring reverberates through the room as the scanner sends powerful waves of magnetic energy cascading through the subject’s cranium.
On Tong’s screens a series of images appears: black-and-white cross sections of the living brain, with small fluctuations in brightness that indicate regions of increased activity. Tong leans closer to the pixelated images. The complex patterns look nothing like a bird, of course, but hidden within them lie clues to the student’s thoughts. He cannot tell what the undergrad is looking at just by peering at the dance of neuronal firings inside his head. So Tong extracts the data from the scanner, takes it back to his lab and runs it through his processing software. After several hours he has a prediction: The test subject was looking at a penguin. As it turns out, Tong was right. His accuracy for this kind of mind reading is 70 to 80 percent. “Our ability to guess what a person is thinking about binary decisions is not super dramatic,” he says. “But we’re doing it with really crude image resolution of samples from the brain. If we could access every neuron, and spent long enough analyzing the data, we could figure out in great detail what a person is seeing or thinking.”
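Stripped to its essentials, the decoding Tong describes is pattern classification: learn a characteristic voxel-activation pattern for each stimulus, then match a new scan against those templates. The sketch below is purely illustrative — the data is synthetic, and the voxel count, noise level, and nearest-template classifier are assumptions, not a description of Tong's actual software.

```python
# Illustrative sketch of binary fMRI "decoding": classify which of two
# images a subject viewed from a noisy voxel-activation pattern.
# All data here is synthetic; real pipelines are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50  # hypothetical number of voxels sampled from visual cortex

# Synthetic ground truth: each stimulus evokes a characteristic pattern.
pattern = {"penguin": rng.normal(0, 1, N_VOXELS),
           "pigeon":  rng.normal(0, 1, N_VOXELS)}

def simulate_scan(stimulus, noise=2.0):
    """Return one trial's voxel activations: true pattern plus noise."""
    return pattern[stimulus] + rng.normal(0, noise, N_VOXELS)

# "Training": average many labeled trials into a template per stimulus.
templates = {s: np.mean([simulate_scan(s) for _ in range(40)], axis=0)
             for s in pattern}

def decode(scan):
    """Nearest-template classifier: pick the best-correlated stimulus."""
    return max(templates, key=lambda s: np.corrcoef(scan, templates[s])[0, 1])

# Evaluate on fresh trials.
trials = [(s, simulate_scan(s)) for s in pattern for _ in range(100)]
accuracy = np.mean([decode(scan) == truth for truth, scan in trials])
print(f"decoding accuracy: {accuracy:.0%}")
```

Even this toy version shows why accuracy falls short of perfect: the per-trial noise swamps the underlying pattern, and only averaging across many trials makes the two stimuli separable.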
It’s as if Tong has removed one small brick in the wall between the outside world and our inner lives. And he’s not alone. In the past decade, a wave of researchers using scans has laid bare the rough schematics of how our brains handle fear, memory, risk-taking, romantic love and other mental processes. Soon, the technology could go even further, pulling back the curtain guarding our most private selves. Indeed, boosters say, a nearly foolproof lie detector based on brain scanning is just around the corner.
If they’re right, then there may come a day when others—the government, employers, even your spouse—might turn to technology to determine whether you are a law-abiding citizen, a promising new hire or a faithful partner. But skeptics say that talk of mind-reading machines is nothing more than hype. “They’re marketing snake oil,” says Yale University psychiatry professor Andy Morgan. “We’ve been really skeptical of the science. But even if it works, it raises interesting questions about Fourth and Fifth Amendment rights. Is [an involuntary fMRI scan] illegal search and seizure since something was taken from you without your permission? And how do you protect your right not to incriminate yourself if people have a way of asking your brain questions, and you can’t say no or refuse to answer? These are some serious questions we have to begin to ask.”
In the wake of Sept. 11, the potential for fMRI to distinguish liars from truth-tellers generated particular interest as the U.S. government sought more reliable ways to extract information from detainees in the global war on terror. The Pentagon’s Defense Academy for Credibility Assessment at Fort Jackson, S.C., formerly the Polygraph Institute, has financed over 20 projects aimed at developing improved lie detectors. DARPA, the Pentagon’s high-tech research arm, also jumped into fMRI work. “Researchers, funded by the Department of Defense,” a recent article in the Cornell Law Review noted, “have developed technologies that may render the ‘dark art’ of interrogation unnecessary.”
Entrepreneurs, meanwhile, are looking for civilian applications. In 2006, a California company called No Lie MRI, which had conducted a DARPA-funded study, began touting its commercial lie-detection services, offering $10,000 brain scans that it says can determine whether subjects are telling the truth. Among the first customers: an arson suspect who wanted to establish his innocence. (The case was eventually dropped.) More than 100 other potential clients have since expressed interest.
Even some of fMRI’s most enthusiastic supporters recognize that using the technology in this way could pose gigantic risks to civil liberties. Joel Huizenga, chief executive officer of No Lie MRI, says he anticipates a potential backlash against his firm—and welcomes it. “There should be controversy,” he says. “If I were the next Joe Stalin, I could use this technology to figure out who my friends and enemies are very simply, so I’d know who to shoot.” To allay concerns, No Lie only scans those who ask to be scanned: “We will only test individuals who come forward of their own free will,” Huizenga says. “We don’t want to be forcing anyone’s head into the machine.”
Huizenga’s firm may advocate strict limits on the technology, but there’s no reason to expect that other companies will. What if employers want to use this technology as part of a standard job interview? How about a classroom scanner to detect plagiarism and other forms of cheating? What if airport security agents could screen our state of mind along with our luggage?
Such applications are wildly hypothetical, of course, but their implications are already being hotly debated by bioethicists and legal scholars. The Cornell Law Review article asserts that “fMRI is one of the few technologies to which the now clichéd moniker of ‘Orwellian’ legitimately applies.” The article goes on to conclude that “fMRI’s use remains legally questionable” and that “the involuntary use of fMRI scanning in interrogation most likely violates International Humanitarian Law.”
Since 2001, several companies have sprung up offering to decode thoughts for the benefit of retailers. One pioneering firm, The BrightHouse Institute for Thought Sciences, in Atlanta, claims to be the first neuromarketing research firm to land a Fortune 500 client—though it wouldn’t identify the company.
Consumer advocates worry that corporations will use fMRI to hone ever more insidiously effective marketing campaigns. In 2004, the executive director of Commercial Alert, a group co-founded by Ralph Nader, sent a letter to members of the U.S. Senate committee that oversees interstate commerce, noting that marketers were using fMRI “… not to heal the sick but rather to probe the human psyche for the purpose of influencing it … in a democracy such as ours, should anyone have such power to manipulate the behavior of the rest of us?”
Do we really need to start worrying—yet? For all the promise of fMRI, some critics think the technique is critically flawed. For one thing, though neurons typically fire on a scale of milliseconds, the blood-flow changes that fMRI measures lag by about 5 seconds, so fast, complex neurological events may be lumped together. Others worry that the algorithms needed to reconstruct images from complex, noise-ridden data leave ample room for scans to be misinterpreted.
William Uttal, a professor emeritus of psychology at the University of Michigan who has written a book about fMRI’s potential shortcomings, points out that researchers don’t know how brain activity correlates to the mechanisms of thought. “The big problem is that the brain is far more complex than we understand at the present time,” he says. “With this MRI stuff, it’s very, very easy to misunderstand and to simplify things that are much more complicated.”
The most withering criticism centers on using fMRI scans as lie detectors. “Some people claim they can show you pictures of suspected terrorists, and even if you say you don’t know them, they can tell by looking at an fMRI scan whether you know them or not,” says Yale’s Andy Morgan. “Well, a positive result doesn’t necessarily mean you’re lying, because no one’s done studies involving faces that look alike. A familiar-seeming face may give you the same response as one you actually know.” And, regardless of Huizenga’s promise that his No Lie staffers won’t force anyone’s head into an fMRI machine, such assurances might not be necessary: Current scanning technology does not work with nonconsenting subjects. In fact, even tiny movements inside the scanner can negate results.
Unfortunately, doubts about fMRI accuracy hardly lessen its potential for misuse. For decades, polygraph tests have been widely used despite their flaws. (Even proponents of polygraphy admit a 10 percent failure rate.) And junk science has long been rife in the courtroom. Earlier this year, law professor Brandon L. Garrett of the University of Virginia published a study analyzing 200 cases in which innocent people were wrongly convicted of a crime. In 55 percent of the cases, he found that jurors had been presented with faulty forensic evidence. “I personally am quite concerned,” says Vanderbilt’s Frank Tong. “If brain scans were admissible in court, and became popular enough, then even if they were not mandatory they would become in a sense obligatory. Because if you didn’t voluntarily undergo it, then there would be the question, ‘Why didn’t you take the test?’”
No doubt many brain-scan applications that critics most fear will never come to pass, and others as yet unseen will arise. What’s certain is that the technology will be transformative, with hardly an area in the public or private spheres that won’t be affected. Like it or not, the new brain science is here, and the world inside our heads is never again going to be completely private.