Transhumanism explained: Is the future friendly?


    By Alex Rollinson (@Alex_Rollinson)


    Imagine waking up in the year 2114.

    The computer processor in your brain has controlled your sleep cycle so that you feel perfectly refreshed as you rise out of bed. Becky, the artificial intelligence that controls your house, lifts the toilet seat and slides open the shower curtain when you open the bathroom door. After you finish your morning hygiene routine, you think about the large dinner you will be having tonight; it’s your two hundred and eleventh birthday. You open the medicine cabinet and take out a yellow pill. It will compensate for your expected excessive calorie intake.

    Though it is science fiction right now, a scenario like this is possible in the eyes of a transhumanist.

    Transhumanism is a cultural movement, often represented as H+ (humanity plus), whose adherents believe that human limitations can be overcome with technology. While there are people who actively consider themselves part of this movement, everyone uses transhuman technologies without even realizing it—even you. How can this be? You don’t have a computer integrated into your brain (right?).

    With a broader understanding of what technology means, it becomes clear that you don’t need Star Trek gadgets to be transhuman. Ben Hurlbut, co-director of The Transhumanist Imagination project at Arizona State University, says that “technology is codified forms of technique.”

    Agriculture is technology. Aviation is technology. Not just because they use machinery like tractors or airplanes, but because they are practices that have become part of society. With this understanding, transhuman technology (transtech) can be any set of learnable techniques that overcomes certain human weaknesses. Clothing that protects us from the elements; glasses and hearing aids that overcome sensory impairments; low-calorie diets that consistently extend healthy lifespan; all of these things are transhuman technologies that we have right now.

    We have already begun to displace certain attributes typically characterized as human onto technology. Our memories have been in decline since the invention of writing, when remembering whole stories became unnecessary. Now, our memory has been almost entirely displaced onto our smartphone calendars and search engines like Google.

    But just because you use the tech, it doesn’t necessarily mean you’re a part of the cultural movement. In fact, some have argued that certain applications of transtech are contrary to transhumanist ideals. For example, an essay in the Journal of Evolution and Technology argues that its use for military purposes opposes the transhumanist ideal of world peace. Overcoming biological limitations and world peace? What else could transhumanists possibly want?

    Well, according to the Transhumanist Declaration by groups such as the World Transhumanist Association, they “envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering and our confinement to planet Earth.”

    Yes, transhumanists want to colonize other planets. Not being able to live anywhere other than the perfectly coddling atmosphere of Earth is a biological limitation, after all! This might sound crazier if 200,000 people hadn’t already volunteered for a mission to colonize Mars by 2024. What would humanity look like if transhumanists reached all their goals?

    This is a problematic question for a number of reasons. The first is that there are varying levels of commitment to the goals of transhumanism. Many tech enthusiasts focus only on the short-term ways in which technology can reduce suffering or enhance ability. True believers look to a time beyond transhumanism referred to as posthumanism.

    “In the posthuman future, according to these visionaries, humanity will not exist at all and will be replaced by super-intelligent machines,” says Hava Tirosh-Samuelson, also a co-director of The Transhumanist Imagination project.

    Regardless, the hypothetical completion of transhumanist goals will mean three things: all forms of life will be free of disease and sickness; human intellectual and physical abilities will no longer be constrained by biological limitations; and most importantly, the quest that has spanned millennia of human existence—the quest for immortality—will be complete.

    Trans What Now?

    The lofty goals of transhumanism have profound implications for our species. So why have most people still not heard of it? “Transhumanism is still in its infancy,” says Samuelson.

    The movement has really only developed in the past few decades. Despite showing some signs of trickling into public awareness, such as the transhumanism subreddit, it has not yet broken into mainstream discourse. Samuelson says that despite this, “transhumanist themes have already informed popular culture in numerous ways.”

    It is just that people don’t realize where the ideas are coming from. This is most readily apparent in our fiction. Deus Ex, a computer game from 2000, features a protagonist with superhuman abilities because he is augmented with nanotechnology. Nanotechnology could revolutionize health care and manufacturing and is thus important to transhumanists. The upcoming computer game Civilization: Beyond Earth focuses on space colonization. It also features a playable faction of people who use technology to improve their capabilities.

    Interestingly, there is also a faction that opposes these transhumans and believes in remaining true to humanity’s original form. This same tension serves as the driving conflict in the 2014 film, Transcendence. In it, a terrorist group, Revolutionary Independence from Technology, attempts to assassinate a scientist who is trying to create a sentient computer. The attack leads to the scientist’s mind being uploaded into the computer to save his life. He continues to make new enemies as he works towards achieving the singularity in his transcendent state.

    What on earth is the singularity, you ask?

    It is a moment when super-intelligence dominates and life takes on a form we cannot comprehend. This super-intelligence could be a result of advanced artificial intelligence or biologically modified human intelligence. In addition to being a popular concept in science fiction, the singularity has also inspired new ways of thinking in reality.

    The Singularity University (SU) is one such example. The mission stated on its website is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.” To achieve this, a small number of students are introduced to promising technologies during short (and expensive) courses. The hope is that alumni will start companies to bring these technologies to fruition.

    Hurlbut says that SU “student groups are sent to undertake projects that are supposed to improve the lives of a billion people within ten years.” He continues on to say, “They’re not worried about what that billion thinks exactly, they’re only worried about what the one thinks and what the one can produce.”

    Are these people qualified to decide how a billion people’s lives will be altered just because they can afford a $25,000 course? It’s not a matter of who is qualified or unqualified, according to Hurlbut. He says, “There is no external adjudicator … because these visions don’t simply come to pass naturally, they’re enacted, and they’re a function of who is in a position of power and authority.”

    But are our current societal structures truly prepared for the future envisioned by transhumanists?

    Transhuman Class Division?

    People who think this is not the case come from as wide a variety of disciplines as transhumanists themselves. The list of reasons to oppose the pursuit of transhumanist goals without deep consideration is long.

    Imagine you are back in 2114 again. Your self-driving car takes you through the downtown core of the autonomous city; as a nanoarchitect, you need to supervise the high rise that is building itself across town. The poor and destitute beg on the streets as you pass by. They cannot get jobs because they refused or were unable to become transhuman.

    Francis Fukuyama, professor of international political economy at Johns Hopkins School of Advanced International Studies, considers transhumanism the world’s most dangerous idea. In an article for Foreign Policy magazine, Fukuyama says, “The first victim of transhumanism might be equality.

    “Underlying this idea of the equality of rights is the belief that we all possess a human essence,” he continues. “This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism.”

    In his view, the core of transhumanism involves modifying this human essence and will have dramatic implications for legal and social rights. Nick Bostrom, a professor of philosophy at the University of Oxford, has devoted a page of his website to countering Fukuyama’s argument. He labels the idea of a distinct human essence as “an anachronism.” Furthermore, he points out that, “Liberal democracies speak to ‘human equality’ not in the literal sense that all humans are equal in their various capacities, but that they are equal under the law.”

    As such, Bostrom says there “is no reason why humans with altered or augmented capacities should not likewise be equal under the law.”

    Both Fukuyama and Bostrom’s arguments represent a key anxiety about a transhuman future. Will transhumans only be the rich and powerful while the rest of humanity is left behind to wallow in suffering? Samuelson argues that this is not the case. “It is more likely,” she says, “that these technologies … will become cheap and readily available, precisely as the smartphones have become in the developing world.”

    Similarly, when presented with a scenario where transhumans and humans are separated by a class divide, Hurlbut says, “I think that’s a ludicrous way of mapping society.” He compares the situation to the Luddites, English craftsmen in the 19th century who destroyed textile machinery that was replacing them. “History showed [the Luddites], right? That’s the sort of thinking,” says Hurlbut, about those who propose the “class divide” narrative. He explains that the Luddites were not necessarily opposed to technology. Rather, they opposed “the notion that technology invites forms of social reorganization and asymmetries of power that are profoundly consequential for people’s lives.”

    Hurlbut uses the example of the Bangladeshi factory that collapsed in 2013. “These are not problems that were contrived [by the Luddites] and they’re not problems that have gone away.”

    Dividing society into the haves and the have-nots clearly places the latter in an inferior position. In reality, they, like the Luddites, have made a choice. People who make different choices can coexist in a liberal democracy and that should continue.

    Brad Allenby, an American environmental scientist and co-author of The Techno-Human Condition, says it is still far too early to tell. “You can come up with both utopian and dystopian scenarios. And at this point, I think you have to regard them as scenarios rather than predictions.” However, he says, “It is not unlikely that the global economy predicated on advanced technologies is going to significantly reward [transhumans] and pass by [non-transhumans].” Fortunately, he also believes this kind of future is avoidable. “Given that we can create a scenario that says that this might happen, we can then go back and watch the trends. Then we can act to change the effects.”

    Speculative Implications

    The dystopian narrative of class division between those who embrace transhumanism and those who do not is far from the only one.

    A fear of societal lag abounds: many worry that technology is accelerating far faster than our laws and institutions can adapt. Steve Mann is a professor at the University of Toronto who wears (and invented) the EyeTap. This device digitally mediates his vision and can also serve as a camera. What does mediate mean in this context? Basically, the EyeTap can add or remove information from one’s vision.

    For example, Mann has demonstrated its ability to remove advertisements (e.g. billboards) for cigarettes from his vision. On July 1, 2012, Mann was eating at a McDonald’s in Paris, France, when three people attempted to forcibly remove his EyeTap in what has been called the first cybernetic hate crime.

    “The eyeglass is permanently attached and does not come off my skull without special tools,” Mann wrote in his blog recounting the incident.

    While this assault is clearly unethical, it does raise questions about transtech such as the EyeTap. When taking a photo or video of someone, you typically must have their permission. Recording everyone you see with a device like an EyeTap removes this possibility. Does this violate the law? People’s privacy? Mann likes to point out that surveillance cameras are constantly recording us without our express consent. In fact, to counter this “oversight,” Mann advocates for sousveillance, or “undersight.”

    He believes that all forms of authority can be held accountable if we all wear cameras. Initial empirical evidence may support this. Police officers in Rialto, California were equipped with wearable video cameras as part of an experiment. In the first 12 months, the department had an 88 per cent decline in complaints against officers, and the officers used force almost 60 per cent less.

    Despite this success, the ethical implications of constant recording have yet to be fully considered or legislated. Some are concerned because, with gadgets like Google Glass, such technology may not take long to become ubiquitous. On top of that, there is still a host of speculative technologies with even more dramatic consequences to ponder.

    Samuelson says, “Policy makers are not prepared to handle the ramifications of accelerating technologies.” In fact, she believes, “Engineers of AI and the promoters of transhumanism have barely begun to address the ethical challenges they have created.”

    Are we really inventing technology faster than we can handle it? Hurlbut thinks this is another flawed narrative: “An enormous amount of social work and political work takes place beforehand, not after the fact.” He says, “We create the conditions of possibility for the kinds of innovation to take place because we created regulatory regimes.”

    Using the Singularity University as an example, Hurlbut goes on to explain, “These guys … are telling us what the future holds and how we ought to orient ourselves as a society toward that future … before there is actually any technological reality to those visions.” As a result, “Those visions are profoundly consequential for the way we undertake innovation on all levels.”

    That seems to be the point that Hurlbut repeats: technology does not just occur, it does not evolve naturally. It requires substantial foundational groundwork that occurs because of our current societal systems, not in spite of them. If this is the case, then we should be able to expect proper regulation and cultural reaction when devices such as Google Glass become prominent. Whether or not such regulation will involve changes to privacy laws or restrictions on the devices themselves is yet to be seen.

    Techno Optimism?

    How should we prepare for the possibility of a transhumanist future? Brad Allenby and Ben Hurlbut weigh in.

    Allenby: The question it seems to me is, how can we develop the institutions, the psychologies, the frameworks that actually allow us to respond ethically and rationally? That would be where I would like to put our intellectual energy. If there is a moral requirement, or a moral call in this, it isn’t a call to stop the technology, which is what some people would say, and it isn't a call to continue the technology because we’ll make ourselves perfect, as some people would say. It is a call to try to engage with the full complexity of what we have already created, because that’s there—it’s out there—it’s not going to go away and it’s going to continue developing. And if all we can do is pull up old quasi-religious ideas or utopian fantasies, then we are not doing anybody any good and, more importantly, I think we’re not treating the world with the respect that it deserves.

    Hurlbut: I think that the real kind of technologies that we need are technologies of reflection and technologies of self-critique and humility. What does that mean exactly? That means developing ways of knowing problems, ways of understanding problems, and ways of thinking about solutions that recognize that they’re necessarily partial, that they’re necessarily being introduced into a world where we don’t and can’t understand their consequences completely. In undertaking those kinds of projects we need to be able to do them with conviction and with humility, recognizing that we are taking responsibility for others, for people outside the community of creators and for future generations. Those are the forms of innovation that we don’t put a lot of emphasis on. Those are in fact the kinds of innovations that are seen as inhibitory rather than engendering desirable technological futures. But I think that’s wrong-headed; they do engender those good technological futures because they give us a sense of what the good is.

    What is clearly emphasized by Allenby, Hurlbut, Samuelson, and even prominent transhumanists like Nick Bostrom, is that a serious public discourse needs to take place. Too few people know what transhumanism is. Even fewer are considering what it may mean for the future of humanity. Samuelson points out that humanity ultimately doesn't have a future after transhumanism if people are replaced with super-intelligent machines. She “regard[s] these future scenarios as unacceptable and [she] speak[s] against it as a humanist and as a Jew.” Furthermore, she says, “Since Jews have already been the target of planned annihilation by means of modern technology (i.e., the Holocaust), Jews have the responsibility to speak up against the planned destruction of the human species.”

    But there is room for hope, says Hurlbut. He speaks about the era his father grew up in: an era where the threat of nuclear holocaust hung from the clouds like death’s cloak. “Yet, here we are: thirty or forty or fifty years later, still existing.” Hurlbut wonders, “Should we be optimistic or pessimistic about a world in which such regimes exist but somehow we manage to make it through?”

    Whatever the answer, all of my interviewees said some variation of the same thing: it’s complicated. When I mentioned this to Hurlbut, he decided I should add to that mantra: “It’s complicated: definitely.”

    If we are to be optimistic about this complicated subject, we must imagine the future and all of its implications as best we can. It seems that if we do this in a public and systematic way, technology can serve human flourishing. But what can someone like you or me do? Well, you can start by imagining you are in the year 2114.
