Mind-reading devices to end wrongful convictions: Future of law P2


    The following is an audio recording of a police interrogation using thought-reading technology (starts 00:25):

     

    ***

    The recording above outlines a future scenario where neuroscience succeeds in perfecting the technology of reading thoughts. As you might imagine, this technology will have an outsized impact on our culture, especially on how we interact with computers, with each other (digital telepathy), and with the world at large (thought-based social media services). It will also have a range of applications in business and national security. But perhaps its biggest impact will be on our legal system.

    Before we dive into this brave new world, let's take a quick look at the past and present use of thought-reading tech in our legal system. 

    Polygraphs, the scam that fooled the legal system

    The idea of a machine that could read minds was first introduced in the 1920s. That machine was the polygraph, developed by Leonarde Keeler, who claimed it could detect when a person was lying by measuring fluctuations in their breathing, blood pressure, and sweat gland activity. As Keeler would testify in court, his invention was a triumph of scientific crime detection.

    The wider scientific community, meanwhile, remained skeptical. A variety of factors can affect your breathing and pulse; just because you're nervous doesn't necessarily mean you're lying. 

    Because of this skepticism, the polygraph's use within legal proceedings has remained controversial. In particular, the Court of Appeals for the District of Columbia (US) created a legal standard in 1923 stipulating that any novel scientific evidence must have gained general acceptance in its scientific field before being admissible in court (a ruling now known as the Frye standard). This standard was later superseded in the 1970s with the adoption of Rule 702 of the Federal Rules of Evidence, which allowed the use of any type of evidence (polygraphs included) so long as it was backed up by reputable expert testimony. 

    Since then, the polygraph has become widely used in a range of legal proceedings, as well as a regular fixture in popular TV crime dramas. And while its opponents have gradually become more successful in advocating for an end to its use (or abuse), various studies continue to show that people hooked up to a lie detector are more likely to confess than those who aren't.

    Lie detection 2.0, the fMRI

    While the promise of the polygraph has worn off for most serious legal practitioners, that doesn't mean the demand for a reliable lie-detecting machine has ended with it. Quite the opposite. Numerous advances in neuroscience, combined with elaborate computer algorithms running on monstrously expensive supercomputers, are making surprising headway in the quest to scientifically spot a lie.

    For example, research studies where people were asked to make truthful and deceitful statements while undergoing functional MRI (fMRI) scans found that people's brains generated far more mental activity when telling a lie than when telling the truth. Note that this increased brain activity is measured independently of a person's breathing, blood pressure, and sweat gland activity, the simpler biological markers that polygraphs depend on. 

    While far from foolproof, these early results are leading researchers to theorize that telling a lie requires first thinking of the truth and then spending extra mental energy manipulating it into another narrative, as opposed to the single step of simply telling the truth. This extra activity directs blood flow to the frontal brain region responsible for creating stories, an area that's rarely engaged when telling the truth, and it's this blood flow that fMRIs can detect.
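
    To make this logic concrete, here is a minimal sketch of the kind of comparison such studies rest on, assuming hypothetical activation readings from a frontal region of interest; the simple threshold classifier is purely illustrative and is not the actual analysis the researchers used.

        import numpy as np

        # Hypothetical BOLD-signal activation values (arbitrary units) recorded
        # from a frontal region of interest during truthful and deceptive trials.
        truth_trials = np.array([0.8, 1.1, 0.9, 1.0, 0.7, 1.2])
        lie_trials = np.array([1.9, 2.3, 2.0, 1.7, 2.4, 2.1])

        # The studies report that lying produces measurably higher frontal activity.
        # A naive classifier could place a threshold halfway between the two group
        # means and label any new trial above it as deceptive.
        threshold = (truth_trials.mean() + lie_trials.mean()) / 2

        def classify_trial(activation: float) -> str:
            """Label a single trial as deceptive or truthful using the threshold."""
            return "deceptive" if activation > threshold else "truthful"

        print(f"threshold = {threshold:.2f}")
        print(classify_trial(2.2))  # -> deceptive
        print(classify_trial(0.9))  # -> truthful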

    Another approach to lie detection involves software that analyzes video of someone talking, measuring subtle variations in their tone of voice, facial expressions, and body gestures to determine whether the person is lying. Early results found the software was 75 percent accurate in detecting deception, compared to about 50 percent for human observers.
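
    As a loose illustration of how such multimodal scoring could work in principle, here is a short sketch; the feature names, weights, and threshold are invented for the example and are not taken from any real lie-detection product.

        # Illustrative multimodal deception scoring. All feature names, weights,
        # and the decision threshold are hypothetical.

        # Hypothetical per-interview scores already extracted from a video:
        # vocal pitch variability, facial micro-expressions, gesture-speech mismatch.
        features = {"pitch_variability": 0.7, "micro_expressions": 0.4, "gesture_mismatch": 0.8}
        weights = {"pitch_variability": 0.3, "micro_expressions": 0.3, "gesture_mismatch": 0.4}

        # Weighted sum of cues; a score above the threshold flags probable deception.
        score = sum(weights[name] * value for name, value in features.items())
        print(f"deception score = {score:.2f}")
        print("flagged as deceptive" if score > 0.5 else "no deception flagged")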

    And yet, as impressive as these advances are, they pale in comparison with what the late 2030s will introduce. 

    Decoding human thoughts

    First discussed in our Future of Computers series, a game-changing innovation is emerging within the bioelectronics field: the brain-computer interface (BCI). This technology uses an implant or a brain-scanning device to monitor your brainwaves and associate them with commands to control anything that's run by a computer.
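
    As a rough sketch of that core idea, the snippet below maps a simplified set of brainwave readings to device commands; the band names, thresholds, and command strings are hypothetical stand-ins for whatever signal processing and device API a real BCI system would use.

        # Illustrative only: map simplified EEG band-power readings to device
        # commands. The feature names, thresholds, and command set are hypothetical.

        def decode_command(band_power: dict) -> str:
            """Translate EEG band-power readings into a single device command."""
            # A strong motor-imagery signal (beta band) might mean "move forward",
            # while a relaxed, alpha-dominant state might mean "stop".
            if band_power["beta"] > 0.6:
                return "wheelchair.forward"
            if band_power["alpha"] > 0.6:
                return "wheelchair.stop"
            return "wheelchair.hold"

        # Example readings from a (hypothetical) headset driver.
        print(decode_command({"alpha": 0.2, "beta": 0.8}))  # -> wheelchair.forward
        print(decode_command({"alpha": 0.7, "beta": 0.1}))  # -> wheelchair.stop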

    In fact, you might not have realized it, but the early days of BCI have already begun. Amputees are now testing robotic limbs controlled directly by the mind, instead of through sensors attached to the wearer’s stump. Likewise, people with severe disabilities (such as quadriplegics) are now using BCI to steer their motorized wheelchairs and manipulate robotic arms. But helping amputees and persons with disabilities lead more independent lives isn’t the extent of what BCI will be capable of. Here’s a short list of the experiments now underway:

    Controlling things. Researchers have successfully demonstrated how BCI can allow users to control household functions (lighting, curtains, temperature), as well as a range of other devices and vehicles.

    Controlling animals. A lab successfully tested a BCI experiment where a human was able to make a lab rat move its tail using only his thoughts.

    Brain-to-text. Teams in the US and Germany are developing a system that decodes brain waves (thoughts) into text. Initial experiments have proven successful, and they hope this technology could not only assist the average person but also give people with severe disabilities (like the renowned physicist Stephen Hawking) the ability to communicate with the world more easily. In other words, it's a way to make a person's inner monologue audible. 

    Brain-to-brain. An international team of scientists was able to mimic telepathy by having one person in India think the word “hello”; through BCI, that word was converted from brainwaves into binary code, emailed to France, and then converted back into brainwaves to be perceived by the receiving person (a sketch of the serialization step in this pipeline appears after this list). Brain-to-brain communication, people!

    Decoding memories. Volunteers were asked to recall a favorite film. Then, using fMRI scans analyzed by an advanced algorithm, researchers in London were able to accurately predict which film each volunteer was thinking about. Using this technique, the machine could also identify which number the volunteers had been shown on a card and even the letters a person was planning to type.

    Recording dreams. Researchers at the University of California, Berkeley, have made unbelievable progress converting brainwaves into images. Test subjects were shown a series of images while connected to BCI sensors, and from the recorded brain activity those same images were reconstructed on a computer screen. The reconstructed images were grainy, but given about a decade of development time, this proof of concept could one day allow us to ditch our GoPro cameras or even record our dreams. 
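
    As noted in the brain-to-brain item above, the middle of that pipeline is a conventional serialization step: once the word has been decoded from the sender's brainwaves, it is just ordinary data. The sketch below illustrates only that step, converting "hello" to binary and back; the brainwave decoding and neural stimulation stages at either end are, of course, far beyond a few lines of code.

        def word_to_binary(word: str) -> str:
            """Serialize a word into a space-separated string of 8-bit values."""
            return " ".join(f"{byte:08b}" for byte in word.encode("ascii"))

        def binary_to_word(bits: str) -> str:
            """Reverse the serialization back into the original word."""
            return bytes(int(chunk, 2) for chunk in bits.split()).decode("ascii")

        payload = word_to_binary("hello")
        print(payload)                  # 01101000 01100101 01101100 01101100 01101111
        print(binary_to_word(payload))  # hello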

    By the late 2040s, science will have achieved the breakthrough of reliably converting thoughts into electronic ones and zeros. Once this milestone is achieved, hiding your thoughts from the law may become a lost privilege, but will it really mean the end of lies and mistruths? 

    Funny thing about interrogations

    It may sound counterintuitive, but it’s possible to tell the truth while also being completely wrong. This happens regularly with eyewitness testimony. Witnesses to crimes often fill in missing pieces of their memory with information they believe is entirely accurate but that turns out to be entirely false. Whether it’s confusing the make of a getaway car, the height of a robber, or the time of a crime, such details can make or break a case but are also easy for the average person to confuse.

    Similarly, when the police bring in a suspect for interrogation, there are a number of psychological tactics they can use to secure a confession. However, while such tactics have been shown to double the number of pre-courtroom confessions from criminals, they also triple the number of non-criminals who falsely confess. In fact, some people can feel so disoriented, nervous, afraid, and intimidated by police and by advanced interrogation tactics that they will confess to crimes they did not commit. This scenario is especially common when dealing with individuals who suffer from one form of mental illness or another.

    Given this reality, even the most accurate future lie detector may not be able to determine the whole truth from a given suspect's testimony (or thoughts). But there's a concern even greater than the ability to read minds, and that's whether it's even legal. 

    Legality of thought reading

    In the US, the Fifth Amendment states that "no person ... shall be compelled in any criminal case to be a witness against himself." In other words, you’re not obligated to say anything to the police or in a court proceeding that could incriminate you. This principle is shared by most nations that follow a Western-style legal system.

    However, can this legal principle continue to exist in a future where thought reading tech becomes commonplace? Does it even matter that you have the right to remain silent when future police investigators can use technology to read your thoughts?

    Some legal experts believe this principle applies only to testimonial communication that is verbally shared, leaving the thoughts in a person’s head fair game for government investigation. If this interpretation goes unchallenged, we could see a future where the authorities can get a search warrant for your thoughts. 

    Thought reading tech in future courtrooms

    Given the technical challenges involved with thought reading, given how this tech can’t tell the difference between a deliberate lie and a falsehood the speaker sincerely believes to be true, and given its potential infringement on a person’s right against self-incrimination, it’s unlikely that any future thought-reading machine will be allowed to convict a person purely on its own results.

    However, given the research well underway in this field, it's only a matter of time before this tech becomes a reality, one that the scientific community supports. Once this happens, thought reading tech will at the very least become an accepted tool that criminal investigators will use to discover substantive supporting evidence that future lawyers can employ to secure a conviction or to prove someone's innocence.

    In other words, thought reading tech may not be allowed to convict a person on its own, but its use can make finding the smoking gun much easier and faster. 

    The big picture of thought reading tech in law

    At the end of the day, thought reading tech will have wide-ranging applications throughout the legal system. 

    • This tech will significantly improve the success rate of finding key evidence.
    • It will significantly reduce the prevalence of fraudulent lawsuits.
    • Jury selection can be improved by more effectively weeding out bias from those selected to decide the fate of the accused.
    • Similarly, this tech will substantially reduce the incidence of convicting innocent people.
    • It will improve the resolution rate of escalated domestic abuse and conflict situations that feature hard-to-resolve “he said, she said” accusations.
    • The corporate world will employ this technology heavily when resolving conflicts through arbitration.
    • Small claims court cases will be resolved faster.
    • Thought reading tech may even displace DNA evidence as a key conviction asset, given recent findings questioning its reliability. 

    At the societal level, once the wider public becomes aware that this technology exists and is being actively used by the authorities, it will deter a wide range of crimes before they are ever committed. Of course, this also raises the issue of potential Big Brother overreach, as well as the shrinking space for personal privacy, but those are topics for our upcoming Future of Privacy series. Until then, the next chapter of our series on the Future of Law will explore the future automation of law, i.e., robots convicting people of crimes.

    Future of law series

    Trends that will reshape the modern law firm: Future of law P1

    Automated judging of criminals: Future of law P3  

    Reengineering sentencing, incarceration, and rehabilitation: Future of law P4

    List of future legal precedents tomorrow's courts will judge: Future of law P5

    Next scheduled update for this forecast

    2023-12-26
