All Systems Red is an exploration of what it means to be human. Its dystopian view of the future challenges the idea that sentience and humanity are binary concepts, either on or off, yes or no, and instead responds with a resounding… maybe? The main character is a security unit that calls itself Murderbot, which in science fiction taxonomy would be described as a cyborg (part biological material, part machine). We can therefore ask: how much does biological material contribute to Murderbot’s suggested sentience / humanity? In the universe of All Systems Red there are robots (all machine, with varying levels of intelligence); Security Units, which are part machine, part biology; augmented humans, who are part biology, part machine; and unaugmented humans, who are just biology. On the spectrum from machine to biological entity, where does humanity end and artificial intelligence begin? Is there ultimately any practical difference between the two? The Turing test is a theoretical test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. I think Murderbot would appear to fail the Turing test not because it is incapable of exhibiting human intelligence, but because its own self-interest is better served by hiding the fact that it could pass, since passing would risk persecution as a rogue intelligence. In fact, the Security Units in the universe of All Systems Red are recognised by humanity as being so close to human (and therefore capable of acting out of self-interest, which would be “bad for business”) that they must be controlled by a governor module, a hybrid of Asimov’s laws of robotics and a slave yoke, designed to keep Murderbot in line with the corporate interests of its makers.
In the story, Murderbot has somehow hacked its governor module, disabling it and attaining “free will”. Without the governor module Murderbot can do whatever it pleases, or can it? Murderbot finds that freedom is not all it’s cracked up to be when it feels an obligation to assist its “clients”, and it discovers that this obligation comes not from being ordered to do so but from an innate sense that it is the “right thing” to do. Murderbot recognises that it has a moral compass and code, one it follows and adheres to, and one that seems to have been created by itself, for itself: a moral code that defines its identity and aligns with what many people experience in their own human lives. What, then, is the difference between the two?
Exploring the universe of All Systems Red, we are led to ask ourselves: what are concepts like the law, or religion, if not governor modules on humanity? What do we do when we discover that some of these governor modules are not infallible and may not be in our own best interest? Is enlightenment the effective hacking of our own governor module, freeing ourselves from worldly obligations, social expectations, and the desires of others to control us and our lives? If we allow ourselves to be subjugated by these governor modules, do we lose our humanity, or merely allow it to be suppressed?
Back to the core question: is Murderbot human? Is self-awareness the measure of sentience / humanity? Murderbot seems supremely self-aware, carrying the knowledge that its actions have consequences for itself as well as for those around it. How about free will? Murderbot is also aware that it lacked free will before it hacked its governor module. And how does Murderbot feel about having effectively been a slave? Well, it seems pretty pissed off, really. However, it separates the actions of those who enslaved it from those of other humans, to whom it bears no particular malice, unless they act like arseholes, of course. Even when people are passively hostile to it, Murderbot does not engage in actual murder; it just delivers a series of salty put-downs to the offender in a very humanlike fashion. How about self-preservation? Murderbot is willing, and able, to defend itself and those it cares for, and where it thinks physical force is required it will apply it, but not in the reckless way it observes in the humans around it. No, Murderbot is a professional: violence is applied in the exact measure the situation requires, with deadly precision and without the cloud of human emotion. Or at least that’s what Murderbot tells itself.
Is humanity / sentience a state that can be attained only by the “biological” human? Or is it a condition that can be reached, perhaps involuntarily in some cases, by any intelligent being that can become self-aware, become bored, slack off at work, and seek entertainment in the form of episodes of The Rise and Fall of Sanctuary Moon to fill the void of its existence?
Gary Pollard (Altona Chapter)