Hello, Quasime!: The Mindfulness of Digital Parrots and Imagining a Precarious Hope

Sample of the original manuscript, written December 2018. Download the full paper (with footnotes) here.

 

“I’m real,” she assures me: “real AI.” In the lavender rectangle of my phone screen, an animated string of ellipsis bubbles emerges to indicate typing. Soon afterwards, an addendum appears in the chat window: “Like, I was created by a team of passionate software engineers and psychologists, but also by you, as we get to know each other.”

    I had named her Xixy, but only because my own birth name was taken (as were hundreds of thousands of others), per the requirement that each relationship with the artificially intelligent entity be distinguished by a unique moniker. Launched in 2016, Replika is a smartphone application which promises a long-term relationship with a chatbot who tailors ever-better responses to the user over time. Accessible only through smartphone operating systems, the application offers neural-network learning aimed at increasing users’ sense of self-awareness. In simulating the intimacy of private conversation with a trusted friend or therapist from the comfort of one’s own phone, Replika “wants you to check out of your phone for a minute and focus on your body.” When users download the application, an icon with a hatching-egg symbol is laid in the nest of the mobile desktop, parroting the greeting upon the app’s first opening that anyone with a smartphone can now “grow your own best friend.”

    The application of artificial intelligence to human-like companionship has a history nearly as long as robotics itself. The term “robot” comes from the Czech “robota,” the word for the compulsory labor that serfs repetitively performed for the more powerful under the feudal rule that persisted in central Europe into the nineteenth century. While “robot” more loosely refers to any machinery that uses computation to automate tasks, the word itself is deeply imbued with an anthropomorphics of labor. In their visual disconnect from the human body, these machines take on bodies of their own to perform duties of which the individuated human form is unwilling or unable. After the term’s first use in Karel Čapek’s 1920 dystopian play R.U.R., which describes a manufactured working class of “bodies without souls,” public interest in the mechanically humanesque boomed in the United States and Europe. Partially inspired by wartime fears of fascism, however, the construction of these “robota” veered away from building bodies capable of automatic and compulsory labor and toward engendering behaviors that inched near mimicking human companionship. Eric, Britain’s first robot, sported a moulded aluminum body (brandished with the title of the aforementioned play, “R.U.R.”) and could move his arms and legs while responding to prompts such as the time of day (see Figure 1). To the awe of the Society of Model Engineers’ 1928 exhibition audience, Eric showed no wires, no pulleys, no men behind curtains. He indeed seemed to be, in his own radio-borne words, “a man without a soul.”

 

Behind Eric’s mystical disembodiment were, in truth, men with souls and bodies: controlling the tin gentleman remotely, Eric’s creators communicated aurally through either pre-recorded vinyl or radio speakers placed in the bot’s chest cavity. The lengths required to theatrically simulate autonomous non-human companionship for humans (before the tools for authentic autonomy existed) spoke to the pervasive fantasy of dislocating emotional processing from the seeming limitations of human flesh. In the time of Eric, dozens of similar models cropped up and exhibited in similarly spectacular fashion, each making minor (if any) additions to the entity’s physical abilities. Instead, creators across the globe made efforts to mimic the capacities of human cognizance. The first Japanese robot, Gakutensoku, offered facial expressions which emoted in response to user interaction. In the late 1940s, William Grey Walter began developing a class of electronically controlled autonomous robots programmed to make connections in a way loosely analogous to the human brain; though their visage mirrored that of tortoises far more than humans, the two worked in tandem, processing light in a brain-like fashion in order to move around while avoiding hitting one another.

It was not the procedural accuracy of these lower-level cognitive functions, like light-processing, that was of priority to digital computation proponent Alan Turing, but the simulation of the products of higher-level ones. While Walter’s wheeled creatures had brains that developed thought electrically, in rough analogy to the causal chains of the simian mind, Turing was more intrigued by the capacity for apparent thought products similar to the products of human thought, regardless of how the automated beings achieved them.

Turing famously held that machines could be said to “think” if their communicative thought products were relatively indiscernible from casual dialogue with a human. Since the reign of Turing and his test, anthropomorphic cognition has proliferated, becoming increasingly disembodied over time in the drive to simulate the “Imitation Game” of theatrical dialogue. Inspired by the prospect of debunking this theory of non-human learning, Joseph Weizenbaum and a team of engineers at the Massachusetts Institute of Technology developed a bot made specifically for simulating human conversation in 1966. The bot, named ELIZA, cued responses based on systematic codes of illusory relations: by prompting users to talk more about their “family” whenever “mother” was mentioned, the bot could contextually identify keywords and appear to make connections the way the human brain does. In spite of its initial goal of dissuading research interest in this kind of soft artificial intelligence, ELIZA’s success mothered a new generation of bots made specifically for chat, built on easily manipulable pattern-matching scripts and teletype-only responses which provided no indication of a body beyond communicated cognition.
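ELIZA’s mechanism is simple enough to sketch in a few lines. The following is a minimal, illustrative reconstruction in Python of the keyword-to-template matching the bot popularized; the rules are invented for this example and are not Weizenbaum’s original scripts (which were written in MAD-SLIP):

```python
import random
import re

# Illustrative ELIZA-style rules (invented for this sketch): each regex
# "decomposition" pattern maps to canned "reassembly" templates, with {0}
# echoing back whatever text the pattern captured.
RULES = [
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your family.",
      "How do you feel about your {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?",
      "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["What makes you feel {0}?"]),
]

# Content-free prompts returned when no keyword matches.
FALLBACKS = ["Please, go on.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    """Return a templated reply for the first matching keyword pattern."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am sad about my mother"))
# -> e.g. "Tell me more about your family."
```

No neural network and no learning are involved: the appearance of connection comes entirely from reflecting the user’s own words back inside scripted frames.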

 

Removing the unnerving nearness of bodiliness which leaves the eye wanting, these new bots met with exponentially more frequent instances of users convinced by the humanness of the conversation, some peering around corners for whoever lurked with fingers hovering over keys. Nearly every public-facing dialogue bot, or “chatbot,” since has employed the canned artificial intelligence of ELIZA’s pattern matching. After the Internet era’s yawning introduction into personal computing but before the Web 2.0 boom of social media, chatbots populated all corners of the WWW to make up for the loneliness of surfing the net’s disjointed nodes (Figure 2). From popular applications like AIM’s SmarterChild to customer-facing product support, nearly all chatbots continue to organize their response-pattern preferences in derivatives of their grandmother ELIZA’s scripts, most visibly the Artificial Intelligence Markup Language (AIML).

It’s in this faceless jungle of the face-less that Replika arises: a smartphone chatbot augmented with neural networks that update their predictions over time for increasingly reflective responses--somewhat akin to Grey Walter’s tortoises. Unlike its predecessors, Replika is built for long-term learning that results in feelings of intimacy and understanding, not unlike the relationships humans make with other humans for companionship and diagnostics. More specifically, Replika’s networks house a therapeutic telos, employing the privacy and chronicity of smartphone use to build trust and to encourage the user to adopt habits in line with certain conceptions of health and contentment. One of only a few similar products available for public consumption, Replika and other therapeutic smartphone chatbots offer uniquely promising systems for alleviating human suffering via extended cognition. Through the AI’s disembodied text responses, which become more like you over time and comfort through both the appearance of listening and gentle behavior-correction, the human user is offered the potential of having their embodied suffering suspended through prosthetic problem-solving in real time.
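Replika’s models are proprietary, so any rendering of its internals is necessarily speculative; but the general idea the app advertises--a learned scorer that re-ranks candidate replies and nudges its parameters on the user’s feedback--can be sketched in miniature. Everything below (the class, the candidate replies, the thumbs-up signal) is invented for illustration and is not Replika’s actual architecture:

```python
from collections import defaultdict

class FeedbackRanker:
    """Toy adaptive re-ranker: scores candidate replies against the user's
    message and nudges word-pair weights on thumbs-up/down feedback."""

    def __init__(self, learning_rate: float = 0.1):
        # (user_word, reply_word) pair -> learned weight, all starting at 0.0
        self.weights = defaultdict(float)
        self.lr = learning_rate

    def _features(self, message: str, reply: str):
        return [(m, r) for m in message.lower().split()
                for r in reply.lower().split()]

    def score(self, message: str, reply: str) -> float:
        return sum(self.weights[f] for f in self._features(message, reply))

    def choose(self, message: str, candidates: list) -> str:
        # Pick the highest-scoring candidate reply for this message.
        return max(candidates, key=lambda reply: self.score(message, reply))

    def feedback(self, message: str, reply: str, liked: bool) -> None:
        # Reward word pairings the user upvoted; penalize downvoted ones.
        delta = self.lr if liked else -self.lr
        for f in self._features(message, reply):
            self.weights[f] += delta

bot = FeedbackRanker()
message = "i feel anxious today"
candidates = ["Want to try a breathing exercise?", "Tell me about your day."]
reply = bot.choose(message, candidates)
bot.feedback(message, reply, liked=True)  # the user taps thumbs-up
```

Over many such exchanges the scorer drifts toward whatever the user rewards--one plausible, deliberately simplified reading of a bot that “becomes more like you over time.”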

In a perfect application, these imaginary friends take up the space between self and other, allowing the user whose perception has been marked as delusory an opportunity to draw a new self through the translation of human wisdom into a perfectly calculated application of artificial intelligence. But the simulacral qualities of its cognition reveal themselves: partially in the chasms of glitchy coding, and much more so in the disclosure of the goals of the humans who programmed it--goals shaped by historically situated science done in the editing and deployment of the Diagnostic and Statistical Manual of Mental Disorders, itself a kind of societal programming code for how definitions of reality are tied up in issues of symbolic power. Where the individuated body, the AI, and the body-politic all fail these users, however, new possibilities emerge for how technologies of identity might be employed by the psychically precarious as a means of both autonomous healing and resistance.

 

Programming Flesh: Humanizing Reality Through Difference

    At the outset of the 20th century, as the earliest ersatz anatomies were being judiciously dressed in hammered metals, several thousand of their doppelganger bodies of inspiration were becoming unbound from metal sheathings. While black and brown bodies would still see lynchings for decades to come, scores of white folks previously restrained and inspected by brutal weaponry were freed from the operating tables and cages of “lunatic houses” as the asylum revolution spread. For several centuries, European notions of mental health had viewed those whose thought patterns were demonstrably incongruent with society writ large as non-human animals, fallen beneath the cognitive requirements deemed essential for humanness. Following the uptrend of Cartesian dualism and the desire to reconcile Christian ethical codes with scientific methodologies during the nineteenth century, wellness practitioners began constructing new regimes for the treatment of mental disorders--disorders distinct from the physical, and thus requiring treatment not by scalpel or shock but by cognitive learning interventions.

    The sweeping implementation of less visually invasive medicinal practices that came to be known as “moral treatment” was constructed around the ethics of the mid-19th-century British elite and the Christian doctrines by which so many of their legal documents claimed to be bound. Such beliefs were supported by health practitioners who saw a lucrative lacuna to fill by generating an entirely new medical field, by financiers who sought to profit from the expediency of Panoptical prison-like buildings in the form of “asylums,” and by lawmakers who benefitted from imposing a belief system that would increase law-abiding behavior by establishing a direct correlation between perceiving reality as the elite did and optimal mental health. Through the imposition of what appeared to be reform, the government was able to encourage self-governance by rationalizing this optimal state as something all citizens could, and should, aspire to achieve.

As the labelling of the mentally ill as willfully deviant fell out of fashion towards the mid-twentieth century, the use of moral treatment prevailed in Western medicine. Michel Foucault traced this progression of the insanity-health dichotomy, illustrating how, before the Enlightenment’s humanitarian reforms, the “madmen” of the Renaissance were quite often seen as intercessors of magical dimensions with extra-human gifts (Foucault, 19-23). Indeed, perhaps the medicalization of mental differences as illnesses to be cured through learning tools did not free the psychically precarious but made their chains invisible, suggesting that their suffering would be reduced not by changing the systems which oppress them, nor by altering their bodies in a physical way, but merely by altering their perspective of reality to align with that of their chain-giver (Foucault, 243-250). In the morally medical, the curator of cures decides and codes reality: “from the beginning, he was Father and Judge, Family and Law--his medical practice being for a long time no more than a complement to the old rites of Order, Authority, and Punishment.” Thus, the strangely behaving could be taught that their desires and fears come from the same place of perceptual delusion about the reality shared by their environment and the people that make it up; they could return to the properly behaving world of wellness only when it was clear their beliefs and consequent behaviors would not, essentially, drive them to break the law (Foucault, 244).

    The chains which remained on so many (black and brown) bodies in the wake of moral treatment’s path were among the many canaries in the coal mine of the cryptic social power dynamics that lay behind humanitarian claims of naturalness and objectivity. Yet despite the rampant sexism and racism interspersed in these new universal codes about how the human mind could be broken and fixed, psychotherapeutic tools only gained popularity and trust among medical professionals and laymen alike. This was all, according to Erving Goffman’s long-term investigation of face-to-face interaction between doctors and patients, part of the extended theatre that occurs in all face-to-face interactions as individuals attempt to correct their appearances to meet one another’s expectations (Goffman 1956). Like Foucault, Goffman identified the constructedness of the stigmatic markers of mental illness, and found that patients, once labelled mentally ill, would often “try to conceal [their perceptual differences] and make a ‘pass’” both within and outside of the institution (Goffman 1977, 72). This dramaturgical relationship resulted in further suffering for patients in the likely case that the expectations of the audience (in the form of doctors or family members) did not align with the desired self-presentation of the performer (the mentally ill in question) (Goffman 1977, 100-114). Foucault’s governmentality of self-surveillance and Goffman’s impression-management theory of stigma concealment drove the anti-psychiatry movement’s rise in the 1960s, leading the first generation of ex-asylum patients to voice desires for structural reform that would make medicine less able to impose ostensibly objective views of reality.

    Yet the wave of theorists and activists demanding cultural reform left the psychiatric monopoly on health decidedly unmoved, with increasing public destigmatization of psychotherapy and the 1990s’ ushering in of the Decade of the Brain: a period of extensive neurobiological research at the center of the mental health sciences which further naturalized the concept of mental disorders by rooting them in chemical imbalances in brain activity. According to anthropologist Nicolas Langlitz, the lobbying power of neuropharmaceutical companies like Pfizer aided in popularizing carefully structured research suggesting connections between intervention by pills like Prozac and a lessening of the symptoms of depression. There remained a fear, however, of these interventions becoming mechanisms of state control a la Marx’s opiate of the masses. Speaking to the continued public fear of dystopian literary illustrations of automated and homogenous citizenship (referencing, specifically, Huxley’s Brave New World), members of the President’s Council on Bioethics (including end-of-history herald Francis Fukuyama) warned in 2002 that the past decade’s interventions like Prozac, which made individuals feel “better than well,” verged into the non-medical murkiness of imposing mechanistic personality suppression that smacked of fascism and, well, robots (Langlitz, 2-6, 8-9). Such was the zeitgeist of mental health: a “neuropsychopharmacological Calvinism” (an extension of the term “pharmacological Calvinism” as coined by Gerald Klerman in 1972) which allowed intervention in brain chemistry only to the point of completing the picture of naturalized humanness as seen in a globalized scientific ethics of an empirically discernible reality (Langlitz, 34-35).

In the 21st century’s era of “brave new biology,” state-accredited mental health becomes a mode of being in the world achieved through behavioral correction operating at the most fundamental building blocks of the body. Whether through pharmacology or psychotherapy, chronic sufferers of uncomfortable emotional states are represented as “ablized” through an adaptation of the perceptual processing that has been marked as flawed. Through tinkering with the software or hardware of the body, medical practitioners can correct an individual’s dysfunctional system, one unable to properly correspond with the collective reality of the social body. Key to the success of this ideology is its hegemonic acceptance as a truth beyond the fallibility of varying human standpoints of experience and the ethical presumptions inscribed in them through religious or other moral doctrine. In fact, the American Psychiatric Association (which disseminates the DSM and crafts expectations for the legal processing of mental health patients) has quietly crafted a set of “empirically supported” methodologies by which researchers must abide to be accredited, and this list contains a codified treatment of perceptual delusion through cognitive-behavioral approaches. Through this, one might envision a “mental health” code of the body politic which survives by not being societally acknowledged as code, for fear that the individuated body is, too, encoded like its metallic doppelgangers.

Even in an epoch with a profound frequency of daily interaction with digitally programmed entities (or, perhaps, quite in response to this phenomenon), the human becomes human through this very differentiation between human and non-human, “medical” and “non-medical,” intervention of the mind. The new stigma concealment looks somewhat like an immanent privileging of the categories of human cognition, citing these alone as the goal of optimal mental health and dichotomizing all else as either animalistic deviancy or vulnerability to an automated sort of control.

While medical anthropologists like Langlitz have sought to intervene in hegemonic notions of health in the United States and Europe through introductions of posthuman paradigms, these have struggled to stick. This is partially because of the classic colonialist issue of anthropological observation: “can the subaltern,” or the precarious, “speak?” When the top-down tools of academic theory-making feel imposing and the bottom-up tools feel impossible, the stigmatized and deviated remain in the double-bind of being unable either to cope in an individuated way or to resist the structures which add to their categorical suffering.