I’ve got a robot-empathy problem. I like to tell people that if you put googly eyes on a trash can, I’ll empathize with it. I can’t help but imagine that, to some degree, it has become alive.
Try it. Imagine the trash can outside is now aware of its surroundings, of what’s happening to it. It has sensory organs of some kind–eyes, the proverbial gateways to the soul, or a nose with which to smell, or a tongue with which to taste–something that implies an interaction with the external by the internal. It’s man-made, metal or plastic most likely, and really only has one known behavior: it gladly accepts garbage.
What do you expect the trash can wants out of life? Well, that’ll likely depend on its level of sentience. If it’s got the sentience of your average box turtle, it’s probably content to sit there and eat garbage all day. But what if it has the self-awareness of an elephant? Homo sapiens?
At what point do you look at that trash can and wonder if it wants to be human?
What? You might be saying now. Where did that come from? One second I’m trying to imagine my dustbin has feelings and the next you’re asking me if it’s having bio-envy. Why on Earth would a trash can want to be human?
My question exactly.
The "robot who wants to be human" trope is quite common. As story consumers, we like robots who want to be like us. Conversely, robots who do not want to be like us are seen as suspicious, dangerous, even flat-out murderous. A robot's desire to integrate is seen as fundamentally good, as the best course of action, as both admirable and pitiable.
But why? Let’s explore.
Robots and AIs in science fiction are usually deeply embedded into human societies, for obvious reasons: we built them; they’re tools. And when a tool gets a mind of its own, we at the WHAT IF? Factory (i.e. science fiction authors) start wondering what kinds of internal conflicts it might experience, what kinds of external conflicts it might be involved in, and how things might go horribly wrong.
We also start treating those tools as a metaphor for what it means to be human (I’m guilty of this myself). A non-human character navigating the human world provides all sorts of opportunities to look at humanity through a fresh lens. It’s nearly impossible not to treat non-human characters this way–after all, the only real people interacting with the story (at least at this point in time) are human.
A classic example of the robot who wants to be human is Commander Data from Star Trek: The Next Generation–a fully functioning android whose personal inner conflict centers on his desire to be more human. This is held up as natural and good by most of the Enterprise crew, while simultaneously illustrated by the narrative as both a virtue and a weakness.
It is a weakness because it provides opportunities for the goals of the crew and Data's own goals to diverge. Whenever he is offered opportunities to become more human in a leap–like via an emotion chip or a graft of skin–problems arise. The specters of blackmail, betrayal, and malfunction raise their narrative heads. And yet this is never presented as something he shouldn't want. The wanting is seen as good, which is innately problematic, revealing deep-seated biases about culture and personhood. Why wouldn't he want to be more human? is a rhetorical question only a human would ask. To humans, humans are the Platonic ideal of a being.
This bias is especially insidious when it comes to artificial intelligence, because robots are programmable. Any desires a robotic character possesses–even in the case of Commander Data–can be seen as desires it was given by its creator, either intentionally or unintentionally. Yes, Data can be read as wanting to be human simply because he lives with humans and seeks a kinship, wants to fit in. But a more troubling reading is that Data was programmed to see himself as less than human, as lesser in general. He was programmed to be dissatisfied with his own state of being.
He was programmed, in other words, to have bio-envy.
Star Trek's counterpoint to Commander Data is the Borg. Where Data is the artificial that wishes to be integrated into the biological, the Borg are the artificial who wish to subsume the biological. They are dark mirrors of one another, and reveal what we often don't talk about when we talk about the robot who wants to be human: assimilation.
When looking at narratives that include assimilation, who gets absorbed and who does the absorbing? When is assimilation positive and when is it negative? And does our treatment of the assimilation of Artificial Intelligences in sci-fi say things about us we might not want to hear?
If you participate in a human society, congratulations, you’ve assimilated! Assimilation has many positives: cooperation, shared understanding, communication, enrichment, stimulation. Without the cohesion that assimilation offers, there would be no technological advancements, no movies, no grocery stores, no formal education, no medical treatments–the advantages to forming a civilization and having new citizens (either via birth or otherwise) assimilate into it are nearly limitless.
Assimilation goes wrong when those who desire or require others to assimilate do so to the detriment, blatant harm, or erasure of the assimilated. Learning a new language in order to communicate with those in a new home country is typically good, while being forced to never speak your first language again, to have it erased, is bad. Consent is also a huge part of when assimilation is healthy versus when it is unhealthy.
Very rarely in sci-fi are large groups of humans shown to assimilate into alien cultures in a positive, consensual way. Most sci-fi that deals with human assimilation as a positive presents it as an individual case via the ‘human-savior’ trope (which is really a thinly-disguised white-savior trope). Take for example the movie Avatar. The human, Jake Sully, assimilates into the Na’vi culture, and this is seen as positive purely because, through the narrative, he proves to be better at being Na’vi than the Na’vi themselves. This doesn’t make a case for healthy cultural integration so much as a case for being above integration, and carries a plethora of belittling connotations.
Sci-fi that discusses human group assimilation into the alien usually leans horror, and the assimilation is typically more invasive on a physical level. Some prime examples of this include Invasion of the Body Snatchers and John Carpenter's The Thing, in which alien species absorb and mimic biological hosts; the Aliens franchise, in which the Xenomorph life cycle requires the assimilation of host genetic material with its own in order to create a new version of a Xenomorph; and Jeff VanderMeer's Southern Reach trilogy, which focuses on cellular-level assimilation that is largely unpredictable and has a myriad of outcomes.
When the invading assimilator is biological, it tends to lead to unexpected and variable results (remember the dog with the human face in the 1978 Body Snatchers remake?). It creates something new, while erasing the uniqueness of the human (interestingly enough, the equal erasure of the originating alien is typically overlooked). Whereas when the assimilator is artificial, the erasure leads to homogeneity–to a terrifying blandness.
In Doctor Who we see the Cybermen march onward with a mindless need to integrate the biological with the artificial, which creates an army of the unfeeling and unthinking. Similarly, Star Trek’s Borg, who announce their intent the moment you meet them, have one goal: to add you to the collective. Supposedly this means adding your talents to the whole, but this concept is only paid lip service. In reality, we’re shown that integrating with the Borg leads to a loss of uniqueness, a loss of talents and knowledge, even a loss of physical distinction (though the Borg assimilate many types of aliens and genders, you’re hardly ever able to tell by looking at them).
This all comes down to how humans view the biological versus the artificial, of course. In biology we expect evolution. We understand the plasticity of cells, the inherent need for adaptation and reconfiguration in order for life to perpetuate. Whereas we see the artificial as fixed, inert, rigid and constant in a way biology–in a way we–can never be.
Humans assimilating into the artificial, in these cases, is seen as a process by which the advantages of biology are inevitably lost. Individuality and flexibility are erased. The loss of our humanity in all of the above invasions is important, but it is the loss to the artificial that is the most damning.
We as biological entities are positive we understand what losing ourselves to the artificial would be like. And yet we fail to lend a similar understanding to how a robot might feel upon assimilating into the biological, because we can easily see the horror in personally being overtaken, but refuse to see the horror when we overtake.
In Star Trek: First Contact, Data is given human skin grafts by the Borg Queen without his consent, just as the Borg give other lifeforms artificial grafts without their consent. But, strangely, Data is not horrified. On the contrary, Data is tempted to join the collective (for 0.68 seconds). Narratively, this reinforces who “should” be assimilated and who “should” do the assimilating; Data “should” be tempted, whereas the humans “should” be horrified by the same treatment.
Ultimately, Data sacrifices the skin in order to save Captain Picard, but thematically he could not have kept the grafts regardless. Data might have become more biological with the Borg Queen’s gifts, but they would have made him more Borg, not more human. The Borg Queen spends much of her conversations with Data disparaging humanity, while he directly states that he wants to be human because he was designed to “better himself.” Data could no more accept her gift of skin than accept that she was right–that his endeavors to become human are flawed.
Many fictional robots are programmed to have bio-envy to the point of self-destruction. Take Andrew Martin from Bicentennial Man (the 1999 movie based on The Positronic Man by Isaac Asimov and Robert Silverberg). Andrew's pursuit of humanity ends with his death. He wants so badly to be considered human that he integrates human blood into his body, which corrodes his interior (in the novel, it is the decay of his positronic brain that does him in). Only when he has taken such destructive measures does the government grant him full human rights.
Why would we, as humans, want death for another sentient lifeform? Why do we consider a robot’s self-destruction admirable?
Because death is the price we pay; why shouldn't they pay it too? Because, fundamentally, we are sure we cannot consider an AI to be alive in its own right–to be autonomous and no longer a tool–unless it has to contend with the failures of mind and body as we do. We will not respect it as an equal until it has come as close to human as it possibly can.
A sentient tool is a slave. In order to stop being a slave, it must become more like us.
This concept is chock-full of so many historical and societal implications it's difficult to unpack. But the basics are this: the dominating culture always thinks it is innately better than the culture of those it is assimilating. Dominating cultures have always required the cultures they absorb to self-immolate to some degree, because the dominating culture fears the culture being assimilated. It fears what might get amalgamated into the whole, what 'dangerous' ideas or practices might be brought into the 'superior' culture.
It is a fear of change and a fear of finding out you are, in fact, not superior.
This fear does not acknowledge that the illusion of superiority is achieved through a consolidation of concepts and power. This fear misunderstands those fundamental aspects of biology that we claim make biology better than the artificial: its flexibility, its ability to evolve, its diversity. It is a fear that has us falsely believing that the only way to be ‘one of us’ means to strip everything away that made you ‘one of them’–that made you you.
AIs are, in a sense, canaries in the science-fiction coal mine. How we treat them in-narrative reflects how we treat others from different cultures in real life. Colonial ideas of assimilation run deep, and even those of us who attempt to leave the old narratives behind can easily get caught up in them. No one should have to destroy themselves to be seen as equal. Be they human, robot, or alien.
Why must the robot envy biology? Why must the half-orc reject their orcishness? Or the mermaid reject her fins?
They shouldn’t have to, shouldn’t be expected to.
This does not mean we stop writing robot uprising stories, or invasion stories. This doesn't mean we stop writing humanoid android stories. It means we try harder, as storytellers, to face the real reasons we want to write characters who stop being themselves in order to become us.
Comments
Becca Stareyes
May 8, 2018 at 1:12 pm
Now I'm reflecting on Star Trek: The Next Generation versus Star Trek: Voyager. While the Doctor wants to be more of a person, it's never framed as him wanting to be human. (Though it's easy to argue that the Doctor takes it in human directions because his personality and image were based on a human's, while Doctor Soong apparently built Data deliberately to not be human. Perhaps the Doctor can get away with not wanting to be squishy and biological because he looks and acts more human than Data. Even his departures from 'understanding humans' just make him look like a jerk[1].)
[1] And canon shows that the original Doctor Zimmerman, an actual human with actual human experiences, acts the same way.
Tony Conaway
May 8, 2018 at 3:42 pm
Interesting essay! Very thought-provoking.
I wish I could recall the name of the SF series (trilogy?) in which a human protagonist expresses his disgust at humans on an alien world who voluntarily adapt themselves into an underwater species, losing much of their culture and intelligence in the process. For contrast, there's little judgement or abhorrence in John Scalzi's "Old Man's War" series, in which some human soldiers adapt themselves into variant forms, such as the turtle-shelled creatures capable of surviving in space without protection.
One Star Trek TOS note: I recall Roger C. Carmel's character in the "I, Mudd" episode trying to tempt Uhura with a human-appearing robot body which will never age and never lose her beauty. I don't believe she even got to decide before Kirk orchestrated the deactivation of the robots.
Raney Simmon
May 9, 2018 at 12:44 pm
What an interesting read! I don't read science fiction as much as I want to, but I feel like this post was a very good read. Definitely something to think and reflect upon.
Patrick Joseph Kelly
June 6, 2018 at 9:33 pm
You convincingly make the case that the "robot/Pinocchio" trope is the dominant type of robot in the genre. It is interesting that robots that don't follow the trope are so few. "Robby the Robot" in the movie "Forbidden Planet" would be one who is happy being a supporting player, with no aspirations for equality. The protagonist robot in "Ex Machina" is a scary one, for exactly the reason you mention: she may be superior to humans. I would like to hear more on this topic.
ask4essay
February 25, 2021 at 2:35 pm
Beyond the fantastic tale Asimov and Silverberg wove, there are deep underlying philosophical issues addressed in this book, but not in a boring way. The story just causes you to think after you set it down.