Ghostless Machines: Technophobia and/as Substance Dualism

A synthesis of various topics I’ve been writing about:

With the growth of science and technology has come growing skepticism. Some of this is warranted. Some solutions to problems — pesticides, cars, psychotropic drugs, etc. — cause problems of their own. This situation evokes the Frankenstein dilemma: Scientists must be cautious because they can’t always predict what their inventions and discoveries will lead to. Still, I wonder whether it has become socially acceptable to criticize technological advancement to an extent not justified by its potential risks — especially when it interferes with what is vaguely defined as “humanity” or the “human”:

“As we move into these larger and larger technological forms, we’re dealing with the complete takeover of nature . . . and in the end, probably the destruction of humanity as well.” – Jerry Mander, author of “In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations”[1]

“The notion that human beings, per se, may eventually become obsolete is not just a paranoid fantasy of science fiction-bashing technophobes.” – Bob Sipchen, Los Angeles Times[1]

“The dark side of virtual reality has to do with its ability to alter human consciousness.” – Howard Rheingold, editor of Whole Earth Review[1]

“We foresee a sustainable future for humanity if and when Western technological societies restructure their mechanistic projections and … respect both human dignity and nature’s wholeness.” – Chellis Glendinning, “Notes Toward a Neo-Luddite Manifesto”

“Changing the genetic identity of man as a human person through the production of an infrahuman being is radically immoral.” – International Theological Commission, “Communion and Stewardship: Human Persons Created in the Image of God”

What is this “human”? An endangered species nearly outcompeted to extinction by machines? A natural resource under threat of overuse and depletion? This could be the case with military technologies. Still, the population is larger than ever. Is humanity a product whose value is dropping because of high supply and other ways to meet what was once its demand? Or does loss of humanity mean more than mere figures and numbers? Is it the loss of an idea? An experience? A sense of connectivity? A way of being?

Or is it a loss of self? If so, what kind of self? A physical self, or a spiritual one? A self that can do things machines never could? A machine-like self, or a ghost in a machine? A self inside a body, or a self that emerges from one? A self with or without a purpose? A religious self or a secular self? Those skeptical of technological advancement often evoke spiritual fulfillment as its alternative or predecessor:

“We perceive the human role … as integrated into the natural world with appreciation for the sacredness of all life.” – Chellis Glendinning, “Notes Toward a Neo-Luddite Manifesto”

“For the first time, God has competition.” – ETC Group, quoted in an editorial on synthetic biology in Nature[2]

Could such rhetoric disparaging technological advances reveal and preserve the privilege of the ghost in the machine over the machine itself?

William Sims Bainbridge conducted a survey in 2002 on people’s spiritual views and their opinions of transhumanism, a movement to enhance humans with current and emerging technologies such as prosthetics and artificial intelligence. He found that religious respondents were more likely to distrust transhumanism, many seeing it as a threat to theism.[3] What does religion have to do with technologically altering humans? What view of the human does it challenge?

Could the belief at stake be a belief in a spiritual human that should not, and could not, be (even partially) man-made? Is fusing the material with the human anxiety-provoking because it opens up the possibility that the human is largely or merely material? The argument that it is impossible to create or add to a person’s mind through physical means, after all, hinges on the premise that the mind is metaphysical. It remains a well-represented belief, among philosophers and especially among those who believe in a soul, that thought and emotion amount to more than an amalgamation of brain activity. Is artificial intelligence — whose supposed ability to create a conscious being relies on a material source of sentience — forcing them to question this belief? Hans Moravec, director of the Mobile Robot Laboratory at Carnegie Mellon University, argues in the book Mind Children that robots will approach humans in mental capacity within decades.

Does skepticism about artificial intelligence stem from a belief in an origin of intelligence that can only be produced naturally, non-artificially (antonyms of “artificial” from Random House Dictionary: real, genuine)? And as computers get closer and closer to imitating human thinking, are people forced to question this belief, even when they don’t want to?

Or, take synthetic biology. The ETC Group, an activist organization focused on “erosion, technology and concentration,” wrote a document entitled “Extreme Genetic Engineering” in which it accuses synthetic biologists of “playing God.” Genomic futurist Juan Enriquez prophesied that synthetic biology would make people “able to program life.” Amidst the anxiety over people interfering with their own supposedly natural state of affairs, I wonder whether such a nature is already absent, or ever existed. Does resistance to creating humans who are not natural mask the possibility that it is too late to preserve a pure, uncontaminated nature? Should we be asking whether technology is too unnatural, or whether people are?

To generalize the question, does the concrete transformation of man into machine and machine into man before our eyes collapse the conceptual man-machine dichotomy inside our heads? To take the inquiry a step further, is the fear of machines replacing humanity not just a failure to acknowledge, but also a mechanism to avoid, the real issue at hand: that humans are machine-like in their materiality?  Could fear of what humanity may become reflect, as well as divert attention from, fear of what humanity already may be? Is technology, assuming increasingly human roles, forcing people to question what they are?

What do people fear more: that technology opposes them, or that it resembles them? What is scarier: A robotic rival or a robotic relative? That machines will replace us, or that they could — a possibility whose premises, to many, diminish human sacredness?

And if technology stirs up the prospect of our strictly material existence, why is that so dreadful as to be unthinkable? Why does being made of matter matter? Could it be the type of causation that drives a material universe: one that is deterministic (except in very small systems) but not teleological, and hence full of purposeless, out-of-control events?

Is the creation of, or addition to, intelligent beings by humans going to diminish the belief that life is meaningful? The role of creating life is usually allotted to God and/or Nature. Is such a nature teleological; that is, inherently directed toward an ultimate goal? Does insistence on preserving humanity and resisting technology emanate from the idea that man-made things oppose God’s plan, or Mother Nature’s? Could part of this idea be a vestige of destiny, under which every life has a purpose? Are life prolongation, reproduction, and information technologies really interfering with how the life cycle was meant to occur, or is it naïve to believe anything was ever meant? Are critics determined to hold onto the belief that people are born with a higher purpose? Could this contribute to an explanation of their anxiety over a troubled definition of birth?

Though physicalism may imply a life devoid of inherent purpose, it also implies a life full of causes and effects that are out of people’s control. Under physicalism, humans, like machines, make decisions based on a cascade of electrochemical reactions. According to another philosophical position, determinism, everything is caused by what came before it. If all physical things are deterministic (which they are, with the possible exception of quantum effects, which are relevant only at very small scales), and human thought and behavior are physical, then people have no say in their actions; such actions could be predicted at any point in time from the laws of physics and the positions of particles. This is different from the idea of fate, the teleological concept that God or some knowing agent predetermined the events in one’s life.
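To make that sense of predictability concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not drawn from any of the authors above): a miniature deterministic system whose next state is a fixed function of its current state, so that identical initial conditions always produce identical futures.

```python
# Toy illustration of determinism (a hypothetical example, not a physics model):
# the "laws" are just a fixed update rule, and the whole future follows from
# the initial state.

def step(state: int) -> int:
    # Stand-in for "the laws of physics": the next state depends only on the
    # current state, with no randomness and no goals.
    return (state * 31 + 7) % 1000

def trajectory(initial_state: int, steps: int) -> list[int]:
    states = [initial_state]
    for _ in range(steps):
        states.append(step(states[-1]))
    return states

# Identical initial conditions yield identical histories, every time:
assert trajectory(42, 10) == trajectory(42, 10)
print(trajectory(42, 5))  # [42, 309, 586, 173, 370, 477]
```

Nothing in such a system chooses or aims at anything; its entire history is fixed once the initial state and the rule are fixed, which is the deterministic (and non-teleological) picture sketched in the paragraph above.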

In “The Forbidden Fruit Intuition,” Thomas Metzinger asks, “Can one really believe in determinism without going insane?” This is open for debate, but people have long created elaborate belief systems to validate their free will or God-given destiny and hence avoid such insanity. Could technophobia function as one such system? Does fretting that physical, deterministic systems are being valued over human ones reaffirm the implication of that dichotomy: that humans, unlike machines, are free and have souls?

A qualification: I agree that it is important to question the ideology of exploitation and unquestioning reception that makes technology worship possible. At the same time, the premises of the opposing arguments require examination. I am not summarily dismissing those arguments, but rather wondering whether they rest too heavily on beliefs that some of their receivers, and even proponents, might not share. This examination raises the question that technology itself presents: whether the fear of losing our humanity is projecting onto the future what is already present or, on the flip side of the coin, dreading the loss of what never was.

 


[1] Sipchen, Bob. “Stop the World: Neo-Luddites Fear Steamroller of Technology Threatens Humanity.” Los Angeles Times, 25 February 1992. http://articles.latimes.com/1992-02-25/news/vw-2639_1_automotive-technology

[2] “Meanings of ‘Life.’” Nature 447, 1031–1032 (28 June 2007). doi:10.1038/4471031b.

[3] Bainbridge, William Sims. “The Transhuman Heresy.” Journal of Evolution and Technology, Vol. 14, Issue 2 (August 2005), pp. 1–10. jetpress.org/volume14/bainbridge.html

 

 
