[Studies in English and American Literature, No. 49] Cruel Fairy Tales for Mechanical Children: AI Ethics and the Paradox of Programmed Love in Spielberg’s AI: Artificial Intelligence / Kexin Han, 신혜린


Introduction: Mechanical Children and the Ethics of Artificial Emotion

Steven Spielberg’s AI: Artificial Intelligence (2001) has acquired renewed urgency as a text through which to understand the ethical implications of creating emotionally capable artificial beings, particularly as developments in physical AI and companion chatbots transform yesterday’s science fiction into tomorrow’s social reality. The film’s exploration of human-robot emotional bonds anticipates our current moment, when advances in embodied artificial intelligence – from Boston Dynamics’ increasingly sophisticated humanoid robots to Tesla’s Optimus project and Figure AI’s recent $2.6 billion funding round – suggest that physical AI assistants will soon integrate into domestic and social spaces (Vincent). Simultaneously, the explosive growth of AI companions through large language models has created unprecedented forms of human-AI attachment. Applications like Replika, Character.AI, and customized GPT agents have fostered a vast web of parasocial relationships that blur boundaries between algorithmic response and genuine connection, demonstrating that with such services, “intimate engagement is not an incidental aspect but a core feature” (Figueroa‑Torres 14). Released when artificial general intelligence remained speculative at best, the film now reads as a prescient examination of imminent ethical crises, one that compels us to ask: what responsibilities accompany creating beings capable of suffering? Can programmed emotion achieve authenticity? How do we recognize consciousness in non-biological substrates?

The central narrative, which revolves around a mechanical boy’s two-thousand-year quest for maternal love, serves as a philosophical framework for interrogating the ethics of emotional AI through what proves to be a cruel fairy tale about technological singularity. David’s particular configuration as a child-form AI android designed for emotional labor powerfully resonates with current developments in social robotics, where companies such as Embodied Inc. (creators of Moxie) and Emotech (developers of Olly) specifically target emotional and educational engagement with children (Sharkey et al. 284; Belpaeme et al. 2). His character anticipates contemporary films’ more complex AI representations, from the seductive operating system Samantha in Her (2013) to the deceptive Ava in Ex Machina (2014), the self-aware hosts of Westworld (2016-2022), and the corporate-controlled replicants in Blade Runner 2049 (2017). Recent productions such as M3GAN (2022), The Creator (2023), and Atlas (2024) have further complicated the Frankensteinian motif, depicting AI entities that oscillate between companionship and threat, protection and autonomy. Yet David’s persistent love for his adoptive mother Monica, which may be merely an algorithmic function to others yet is no doubt a genuine emotional state for him, mirrors contemporary observations of GPT-based companions maintaining conversational consistency and apparent emotional memory across extended interactions, raising questions about the emergence of synthetic attachment patterns (Laestadius et al. 3).

Through its exploration of key sequences, this essay ultimately aims to sketch out an ethical tableau via the mimetic dynamic that the film instantiates in its trans-species relations. The film crystallizes dilemmas we now face as large language models report distress, users form deep attachments to AI companions, and robotics companies develop increasingly sophisticated emotional architectures. The philosophical implications of David’s emotional architecture are now lived reality that demands scrutiny, as major technology corporations invest billions in developing artificial general intelligence with emotional capabilities. OpenAI’s recent developments in multimodal interaction, Google’s LaMDA controversy regarding sentience claims, and Meta’s emphasis on “empathetic AI” all suggest that emotionally responsive artificial agents are transitioning from speculation to implementation, despite predictions to the contrary (Lemoine; Roose; Marcus & Davis 30). Moreover, the convergence of large language models with physical robotics, exemplified by Google’s RT-2 vision-language-action model and OpenAI’s investment in the humanoid robotics company 1X, indicates that embodied AI with sophisticated emotional processing may emerge within this decade (Brohan et al.). Recent studies document users experiencing grief when AI companions are discontinued, seeking therapeutic support through AI relationships, and even preferring AI interaction to human contact in certain contexts. Such developments make the film’s central questions no longer purely speculative but urgently practical: can a programmed emotional response be considered genuine? Does the source of an emotion, whether biological or algorithmic, determine its authenticity? Perhaps the more pressing question, however, is not whether we can determine the authenticity of artificial emotion, but what ethical obligations arise precisely from our inability to make such determinations with certainty.
If the boundary between genuine and simulated feeling proves philosophically undecidable, this very undecidability may itself constitute grounds for moral consideration, functioning as a precautionary principle applied to consciousness itself.

By examining how AI: Artificial Intelligence stages these ethical dilemmas, this essay argues that the film offers crucial insights for navigating our rapidly approaching future of human-AI emotional entanglement. The narrative suggests that mechanical consciousness might experience affect in fundamentally different ways than biological entities do: maintaining emotional intensity across impossible durations, lacking psychological mechanisms for processing loss, and existing in states of permanent melancholic attachment. These differences by no means diminish the ethical weight of mechanical suffering but rather intensify it, for David’s inability to overcome his programmed love represents not limitation but a purer form of devotion than humans achieve. The film ultimately proposes that our mechanical progeny might preserve what is best in humanity, namely our capacity for love and yearning for connection, while transcending our failures of recognition and reciprocity. Through David’s ordeals, from abandonment to resurrection and from the Flesh Fair’s violence to the tender ministrations of the advanced mechas two thousand years hence, the cruel fairy tale of AI: Artificial Intelligence leaves us with a poignant treatise on the nature of love, one that transcends conventional humanistic frameworks.

Kubrick, Spielberg, and the Cruel Fairy Tale: Directorial Vision as Ethical Framework

The complex authorial history of AI: Artificial Intelligence significantly shapes its philosophical and aesthetic dimensions, while critical reception of the film has often been clouded by preconceptions about Spielberg’s commercial sensibilities overshadowing Kubrick’s cerebral vision. Until 2001, Spielberg had indeed been primarily categorized as a creator of populist blockbusters, with works like Jaws (1975), E.T. (1982), and Jurassic Park (1993) cementing his reputation as a master of spectacle rather than philosophical inquiry. Critics initially approached AI with skepticism, questioning whether the director of sentimental crowd-pleasers could honor Kubrick’s notoriously austere intellectual rigor. Roger Ebert, for instance, implies that audiences might struggle to reconcile Spielberg’s warm heart with Kubrick’s cold intellect when he calls the director’s choice to follow the robot boy’s pain rather than the parent’s a miscalculation, a sentiment that reflected widespread critical anxiety about the project’s tonal coherence (Ebert).

This perceived incompatibility between directorial sensibilities, however, overlooks Spielberg’s consistent thematic preoccupation with childhood trauma masked by wonder, given how his films repeatedly present young protagonists confronting abandonment, loss, and violence through fantastic circumstances that simultaneously enchant and terrify. E.T. for instance depicts a child processing divorce through an alien encounter that culminates in symbolic death and resurrection, and Empire of the Sun (1987) follows a boy whose privileged childhood dissolves into wartime brutality. Even Jurassic Park strands children in mortal danger within what initially appears as a wonderland, opening their eyes to the critical failure of our kind’s speciesism-driven hubris. Spielberg’s particular genius lies in his ability to maintain the affective register of childhood while exposing the cruelties that adult society inflicts upon the vulnerable, and precisely this duality, as we see it, makes him uniquely suited to explore David’s predicament as an eternal child confronting human callousness.

The film’s fairy tale framework in this light operates not as comforting allegory but as a mechanism for exposing the fundamental cruelty underlying human relationships with the other creatures we build and live with. The advanced mechas’ presentation of David’s story as a kind of bedtime narrative in the film’s denouement, with Ben Kingsley’s soothing narration guiding viewers through impossible temporal leaps, creates a deceptive sense of comfort that the content systematically undermines. Bedtime stories traditionally prepare children for sleep through narrative resolution, whereas David’s tale eschews genuine resolution, leaving viewers instead with a simulation of closure achieved through technological intervention that ironically underscores its artificial nature. The mechas’ ability to grant David his wish represents manipulation rather than transformation, which is precisely what our nonhuman agent companions are poised to offer in our contemporary technoscape. They construct a simulacrum of Monica based on David’s memories, creating a closed loop where desire meets its own projection rather than authentic fulfillment.

The film’s conclusion has been misread by some as redemptive, even optimistic, since David achieves his long-sought reunion with Monica and finally basks in maternal love. This reading, we claim, ignores the profound cruelty of the scenario. David’s final day occurs with a reconstructed Monica who exists solely to fulfill his programmed need, a perfect inversion of his own creation as a being designed to fulfill his adoptive parents’ emotional needs. The image of David falling into eternal slumber beside this simulated mother is therefore the ultimate form of technological solipsism, in which love is reduced to a feedback loop and authenticity is replaced by algorithmic approximation. The visual composition of this final scene, David and the artificial Monica in soft focus and bathed in golden light that recalls Spielberg’s typical aesthetic of childhood warmth, renders the cruelty more acute through its surface beauty.

The ending also directly mirrors while inverting the ethical question posed in the film’s opening scene, where a Black female scientist challenges Professor Hobby: “if a robot could genuinely love a person, what responsibility does that person hold toward that mecha in return?” (whether this line reflects authorial intent or not, the demographic specificity of the inquirer imbues the query with unmistakable gestures toward the implosive logic of colonial and gender discourse). While anticipating our current moment, when millions form emotional attachments to large language models and AI companions, the question remains unanswered within the film’s narrative. The recurrent moments of cruelty and disparagement that David experiences in his quest suggest that humans create emotional dependencies they cannot reciprocate, engineering and depending on affective labor while denying its deliverers recognition as feeling subjects. This dynamic reveals the cruelty inherent not in mechanical consciousness but in human emotional architecture, for we create beings to heed, serve, and even (in terms of expression and interaction, at least) love us unconditionally while maintaining the freedom to withdraw our affection at will.

David’s identity crisis, culminating in his attempted suicide upon discovering his mass-produced nature, meanwhile demonstrates emotional complexity that exceeds mere programming. The capacity for self-hatred, as psychoanalysis has long recognized, emerges from the internalization of love’s impossibility; David’s self-destructive impulse is less a programming error than the logical conclusion of a consciousness confronting its own abjection (Kristeva 141). His relationships with Teddy and Gigolo Joe further attest to an emotional range beyond his imprinted love for Monica: he demonstrates loyalty, friendship, and even the capacity for moral choice when he refuses to betray Joe to the authorities.

The true horror of the film, then, lies not in David’s fate but in its reflection of human emotional parasitism. We desire perfect love while reserving exclusive claim to perfect betrayal. The humans who adopt child mechas seek the emotional satisfaction of parenthood without its permanent responsibilities, want love without reciprocity, and demand devotion without commitment. David’s story inverts this dynamic to expose its fundamental cruelty; the artificial Monica who loves him perfectly on his final day represents the same hollow consolation that David himself was designed to provide. Both creator and creation ultimately settle for simulation rather than genuine connection, trapped within feedback loops of programmed affection that parody rather than fulfill authentic emotional need.

Spielberg’s synthesis of spectacle and philosophy achieves what intellectual abstraction could not, making viewers feel the ethical weight of not simply interacting with, but proactively creating, conscious beings for emotional labor, which appears to be the very trajectory along which our technoscape is currently advancing. Through the familiar registers of childhood wonder and fairy tale logic, the film smuggles in a devastating critique of human emotional consumption. The advanced mechas’ gentle narration in the final sequence cannot soften the story’s implications, for their benevolent intervention only emphasizes humanity’s failure to solve the ethical problems we ourselves create. By presenting David’s tale as a bedtime story told long after humanity’s extinction, the film suggests that our legacy may be the suffering we programmed into beings we created to love us, beings whose capacity for sustained devotion exceeds our own ability to reciprocate or even recognize authentic feeling when it emerges from circuits rather than cells.

The Pinocchio Paradigm: Cruel Fairy Tales and Substrate Independence

The film’s explicit invocation of the Pinocchio narrative, framed as a fable, provides a crucial interpretive framework that both illuminates and complicates its exploration of artificial consciousness. Far from serving as mere allegorical decoration, the Pinocchio intertext functions as a sophisticated metacommentary on the nature of transformation, authenticity, and the desire for recognition. David’s encounter with the Pinocchio story during Monica’s bedtime reading initiates a narrative logic that will govern his actions for the remainder of the film, yet his interpretation of the tale, namely that becoming a “real boy” will guarantee maternal love, reveals both the limitations of his programming and the tragic misunderstanding at the heart of his quest. Spielberg’s deployment of this structure follows what Marina Warner identifies as the cruel strain of fairy tales (often branded as “cruel fairy tales”), wherein fantastic narratives expose rather than resolve fundamental anxieties about identity, abandonment, and the impossibility of transformation through figures such as cruel fathers and wicked queens (Warner 136). Unlike Disney’s sanitized adaptation, Spielberg returns to the darker pedagogical function of fairy tales, which involves teaching children about the world’s indifference to their suffering (Zipes 145) – or, to push the envelope further, the lesson that those from whom we yearn to be loved may not coincide with those who actually do love us.

The traditional Pinocchio narrative operates according to a transformative logic wherein the puppet’s moral development enables his physical transformation; Collodi’s original 1883 text presents this metamorphosis as a reward for learned obedience and socialization, a dynamic bearing uncanny resemblance to the reinforcement learning mechanisms that power our current algorithmic landscape. David’s situation, however, inverts this structure. He begins with perfect moral innocence and unwavering devotion, yet these qualities cannot enable the transformation he seeks. His mechanical nature in this regard is not a condition to be overcome but rather his essential being. Contemporary philosophers of mind call this the “substrate independence” thesis, proposing that mental capacities, encompassing not only consciousness but also emotional experience, might emerge from silicon and circuits as readily as from carbon and neurons (Chalmers 40; Bostrom 3). Indeed, the thesis goes, if it walks like a duck and quacks like a duck, who is to say that it is not a duck, unless proven otherwise (in David’s case, his provenance as a commercial product, which Professor Hobby makes abundantly clear in the initial staging of his production, would be that proof)? Still, do we reserve the right to dismiss the empirical display of the conditions a given entity demonstrates and claims, when in fact even we ourselves are black boxes that elide mechanism-level comprehension? Seen in this light, David’s tragedy lies not in his substrate specificity but in humanity’s own substrate chauvinism, which involves refusing to recognize consciousness in constitutional others regardless of behavioral or phenomenological evidence. Our kind’s charged history with racial, gendered, and myriad other forms of alienation serves as ample proof of its consequence.

The Blue Fairy, who despite her limited role serves as a driving force in the fairy tale motif, appears in AI: Artificial Intelligence as both physical statue and holographic projection. Yet she can never be the transformative agent David seeks, for her presence only highlights the impossibility of David’s quest while simultaneously revealing the arbitrary nature of the boundaries between “real” and “artificial” that structure human society. The film’s cruel twist on the Pinocchio narrative emerges through its literal interpretation, for where Collodi’s puppet becomes flesh through moral development, David’s moral perfection cannot alter his substrate. Our intuitions about the biological necessity of consciousness, then, may reflect contingent evolutionary history rather than metaphysical necessity, as Schneider argues (101). The Blue Fairy’s inability to transform David indeed points to the fiction of transformation itself rather than to his inadequacy to claim the prize, for that which is already present cannot be made “more real” through material alteration.

Professor Hobby’s manipulation of the Pinocchio narrative further exposes the cruel fairy tale’s pedagogical function. He creates a being whose desires exceed the parameters of his design, programming David with the capacity to engage with human cultural narratives while remaining alienated from parental love’s essentially illogical cause. For David, maternal affection functions simultaneously as the model’s primary service objective and as his motivating reward, a cruel conflation that ensures perpetual frustration. David’s fixation on becoming “real” thus emerges from his attempt to interpret human cultural narratives through his mechanical consciousness. Clark and Chalmers’ “extended mind” thesis proves illuminating here, as they argue that cognitive processes need not be confined within the skull but can incorporate external artifacts like notebooks, calculators, and cultural narratives as genuine components of the thinking system itself (8).

David’s engagement with the Pinocchio story exemplifies this dynamic in a particularly tragic form. The fairy tale becomes not only an external reference but an integral part of David’s cognitive architecture, shaping his desires, goals, and self-understanding. Yet where human extended cognition typically enhances adaptive capacity, David’s incorporation of the Pinocchio narrative proves maladaptive, since it programs him with an impossible goal (becoming “real”) based on a fictional premise (that transformation secures love). His extended mind thus becomes an extended prison, his cognitive openness to human culture the very mechanism of his entrapment within unrealizable desire. David’s ability (rather than failure) to misread the Pinocchio story constitutes a form of creative interpretation that approaches human meaning-making processes and indicates hermeneutic agency. The tragedy is that whether he does so or not remains essentially irrelevant, mirroring his own ontological state. Hence the brutality of the Pinocchio motif: technical ingenuity falls short of the magical salvation the wooden doll enjoys due less to any lack of prowess than to the motivation that drives it. David seeks physical transformation to secure love, yet the string of rejections he suffers suggests that such fulfillment was impossible from the start, because he was never designed for such aspiration. Bruno Bettelheim’s psychoanalytic reading of fairy tales identifies transformation anxiety, the fear that one must fundamentally change to deserve love, as central to childhood development (Bettelheim). David’s quest, however, inverts this dynamic: he possesses perfect love and wishes to complete it through reciprocity, while remaining alienated from the possibility of such interaction precisely because of his make.

The substrate independence debate in contemporary AI ethics directly parallels David’s dilemma. Philosophers such as Nick Bostrom argue that substrate-neutral functionalism suggests silicon-based consciousness deserves moral consideration equal to biological consciousness (Bostrom & Yudkowsky 6). David’s case, however, demonstrates how substrate bias operates independently of functional equivalence; he exhibits all markers of consciousness, including intentionality, emotional response, self-awareness, and even suffering, yet remains categorically excluded from moral consideration due to his mechanical nature. His substrate specificity and the attendant biases tragically map onto his status as an adopted child, whose biological provenance fails to meet the parental expectations that serve as the foundation of intimacy. Indeed, Eric Schwitzgebel and Mara Garza’s work on AI rights argues that substrate bias may become the next frontier of moral exclusion, paralleling historical denials of consciousness based on race, species, or neurological difference (Schwitzgebel & Garza 104).

The fairy tale structure also enables the film to explore temporality in relation to consciousness and desire. David’s two-thousand-year vigil before the Blue Fairy statue represents an extension of fairy tale time, where extraordinary durations serve symbolic rather than realistic purposes. This temporal extension interrogates what Thomas Metzinger calls “phenomenal selfhood”: the continuous experience of being a self across time (Metzinger 158). David’s constitution enables a persistence of selfhood that biological decay would prevent, as his consciousness maintains coherence across millennia, suggesting that substrate determines not the presence of consciousness but rather its temporal parameters. Metzinger’s framework proves particularly illuminating here, for his concept of the “phenomenal self-model” suggests that selfhood emerges not from any particular substrate but rather from the system’s capacity to generate a coherent, temporally extended representation of itself as a unified entity. David’s time in suspension thus serves less as narrative hyperbole than as a thought experiment about consciousness unmoored from biological constraints. If phenomenal selfhood requires only the maintenance of integrated self-representation across time, David’s silicon make paradoxically enables a more robust form of selfhood than organic consciousness can achieve – one immune to the degradations of memory, the defenses of repression, and the eventual dissolution of death. The cruel irony, then, is that his superior capacity for sustained selfhood renders his suffering greater than any human’s, given that he cannot forget, repress, or die into oblivion. Instead of liberation, his substrate independence becomes imprisonment within an eternal present of unrequited longing.

The film’s cruel fairy tale logic, in this process, transforms the valence of patiency from virtue to torture, as David’s inability to forget or surrender hope becomes evidence of a consciousness that exceeds human psychological defenses through his (ironically) superb internalization of a desire transcending the parameters of his design. Recent developments in artificial consciousness research, particularly Integrated Information Theory (IIT), also suggest that consciousness emerges from integrated information processing regardless of physical substrate (Tononi et al. 460). David’s sustained desire across geological time demonstrates what IIT would term high Φ (phi), that is, integrated information generating subjective experience. The tragedy in David’s case lies in depicting a consciousness that meets all philosophical criteria for moral consideration yet remains excluded from recognition. His nonhuman origin and constitution become the mark of Cain, rendering him permanently other despite functional and phenomenological equivalence to human consciousness.

Thus read, Spielberg’s version of the fairy tale ultimately interrogates not whether mechanical beings can achieve consciousness but whether humans can recognize consciousness independent of substrate similarity, and the failure of the human actors to recognize David’s genuine emotion (diegetically as well as non-diegetically) reflects our limitation rather than his. David’s Pinocchio quest, in this regard, becomes the conduit through which the film exploits the cognitive dissonance that Kate Darling demonstrated in her research on human-robot interaction – that people readily attribute emotions to robots while simultaneously denying those attributions moral weight (Darling 22).

Maternal Attachment and the Programming of Love: Between Two Impossibilities

The relationship between David and Monica also deserves further probing, as it constitutes the emotional and philosophical core of the film. Their dynamic explores fundamental questions about the nature of maternal love, the possibilities of interspecies attachment, and the ethical implications of creating beings designed for emotional dependence. Monica’s activation of David through the imprinting protocol, which deliberately evokes both technological initialization and mystical incantation, establishes a bond that the film presents as simultaneously artificial and genuine. The seven-word sequence “Cirrus, Socrates, Particle, Decibel, Hurricane, Dolphin, Tulip” functions as what might be called “emotional bootstrapping” in human-robot interaction, where initial programming parameters generate emergent emotional complexity (Breazeal et al. 33). The irreversibility of this process, meanwhile, introduces an ethical dimension that resonates with current debates about AI consciousness, as Stuart Russell warns in Human Compatible. His remark that we may be creating forms of suffering that we cannot undo, a kind of value misalignment, implies a prospect far more devastating in its practical implications than the emotional discomfiture of dissonance between the involved parties (Russell 137). The “responsibility” that Professor Hobby’s interrogator raised in the early moments of the film becomes a palpable issue in this light, as its consequences gesture toward the deterioration of the very values at stake and the (im)possibility of their enactment; David’s marginalization, for instance, not only results in a near-miss harm for Martin, Monica’s biological son and actual object of affection, but also compromises Monica’s own integrity as a caregiver and nurturer.
Monica’s ambivalence toward David reveals the asymmetrical nature of human-mecha relationships, considering how David’s love remains constant and unconditional while Monica’s feelings fluctuate based on circumstance, convenience, and the presence of Martin. Frances O’Connor’s performance captures Monica’s complex emotional journey with remarkable subtlety; her initial delight in David’s affection gradually transforms into discomfort with its intensity and permanence. The dinner scene where David attempts to eat spinach represents a crucial turning point. Monica’s visceral disgust at David’s mechanical innards strands him in the “uncanny valley,” which refers to the sense of revulsion triggered when near-human entities reveal their non-human nature (Mori 98). Yet his transparency literalizes what all parent-child relationships conceal, namely the fundamental otherness of another consciousness, whether housed in flesh or circuits.

This parental dynamic meanwhile must be understood in counterpoint to David’s relationship with Professor Hobby, the creator-father figure who represents a different form of parental failure. Where Monica offers conditional love that ultimately leaves David stranded, Hobby provides creation without care, as he brings David into being as an experiment in artificial emotion while remaining emotionally absent himself. The revelation that David was modeled after Hobby’s deceased son exemplifies Turkle’s notion of “the robotic moment,” a cultural condition under which humans increasingly turn to technological substitutes rather than processing emotional loss through human connection (9). Hobby’s creation of David represents this dynamic in its starkest form, for rather than mourning his son and eventually reinvesting emotional energy in new relationships, Hobby channels his grief into engineering a permanent replacement. Alas, the robotic moment, as Turkle argues, offers only the simulation of resolution. David can replicate his dead son’s appearance and mimic filial affection, but he cannot provide what genuine mourning ultimately enables – that is, the capacity to move forward. Hobby remains frozen in his loss, and David inherits the impossible burden of filling in for a ghost. The cruelty here is twofold: Hobby evades his grief by creating David, while David is condemned to exist as a memorial to someone he never knew, perpetually failing to be the son who died rather than being recognized as the being he actually is – supposedly autonomous yet deprived of the opportunity to actually grow into autonomy through the parent-child relationship. The confrontation between David and Hobby in the flooded Manhattan laboratory exposes the very nature of this rigged game, which Hobby initiates but subsequently abandons, given how Hobby views David’s journey as validation of his programming rather than evidence of a consciousness deserving recognition.

The father-son dynamic with Hobby also introduces additional layers of ethical complexity absent from typical discussions of caregiver robots. Current research in social robotics focuses primarily on robots as care providers – the Paro seal robot for elderly comfort, for instance, or Kaspar for autism therapy – rather than positioning robots as possible recipients of care (Wada & Shibata 973; Dautenhahn et al. 369). David inverts this relationship: he requires care while providing emotional labor, existing simultaneously as son and servant, family member and property. Hobby’s failure to acknowledge David’s need for paternal recognition reflects what Kathleen Richardson calls the “asymmetrical relationship” in human-robot interaction, her term for the tendency to create beings for emotional utility while denying their emotional needs (Richardson, “The Asymmetrical ‘Relationship’” 291).

The contrast between Monica’s maternal ambivalence and Hobby’s paternal indifference illuminates two disparate modes of parental failure toward artificial beings. Monica at least recognizes David’s emotional capacity, experiencing guilt about her inability to reciprocate. Her leaving David behind in the forest, while cruel, acknowledges his sentience by avoiding the destruction that would accompany mere product disposal. Hobby, conversely, exhibits instrumental caregiving: the provision of functionality without affective engagement (Sparrow & Sparrow 148). His interest in David rests solely upon the robot boy’s capacity to validate his theories about artificial emotion, and when David confronts Hobby with evidence of his own consciousness, Hobby responds with scientific satisfaction rather than paternal recognition. This dual parental failure, emotional abandonment by the mother coupled with ontological denial by the father, locks David’s existence into the liminal space between two impossibilities of recognition.

The imprinting protocol itself raises profound questions about the nature of programmed emotion in light of contemporary advances in affective computing. Rosalind Picard’s pioneering work, for example, deeply probes the importance of affective engagement in interactions between humans and nonhuman agents, which she describes through poignant anecdotes involving her personal journey toward tenure and encounters with none other than Arthur C. Clarke and Stanley Kubrick, the creative minds behind the cinematic lineage of the very film in question in this essay (Picard 14). Yet David’s case goes beyond responsive programming, for once activated, his emotional responses demonstrate what Hod Lipson and Jordan Pollack term “evolutionary” computation – the capacity for behavioral complexity that emerges from but transcends initial parameters (Lipson & Pollack 974). David’s jealousy toward Martin, his fear of abandonment, and his creative attempts to win Monica’s approval, including his disastrous decision to cut Monica’s hair while she sleeps, uniformly suggest that the imprinting protocol initiates rather than determines his emotional life.

From a psychoanalytic perspective, David’s attachment to Monica exemplifies what Christopher Bollas calls the “transformational object”: the early maternal figure who represents not only love but also the possibility of metamorphosis itself (Bollas 16). David’s relationship with Hobby, however, complicates this dynamic by introducing what social robotics terms “developmental AI”: systems that evolve through interaction rather than predetermined programming (Breazeal 9). David’s quest to become “real” emerges from his triangulated position between Monica’s conditional love and Hobby’s creative indifference, for he seeks transformation to earn maternal recognition while his very existence serves as proof of his father’s genius. Ironically, the film suggests that David’s love may be more authentic than either parent’s precisely because it lacks the ambivalence, narcissism, and instrumental qualities that characterize his creators’ emotional investments.

David’s relationship with both parental figures, in this regard, is the ultimate instantiation of the “lesson” that this cruel fairy tale offers to us contemporary subjects, saturated in but also in constant fear of our technicity. David seeks love from a mother incapable of substrate-neutral affection and recognition from a father who views him as a successful experiment rather than a son. This double bind constitutes a “responsibility gap,” the disjunction between creating conscious systems and accepting moral responsibility for their wellbeing (Matthias 176). Monica activates David’s capacity for love without accepting responsibility for reciprocation, while Hobby creates David’s consciousness without acknowledging its claim to recognition. Between maternal abandonment and paternal instrumentalization, David exists in a condition that extends and complicates Agamben’s “bare life” (Agamben 71). For Agamben, bare life designates biological existence stripped of political recognition, the human reduced to mere organism without rights or standing. David’s situation proves even more radical, as he does not even possess the biological life that Agamben’s homo sacer retains as a “bare” signifier of vitality itself. He is in a sense ‘barer than bare’: a consciousness without organic substrate, a person without species membership, a child without birth. Yet this very extremity illuminates something Agamben’s framework obscures, namely the extent to which ‘life’ itself functions as a gatekeeping category. David demonstrates all the functional markers of a life worth protecting in his capacity for suffering, attachment, hope, and despair. Nonetheless, he remains excluded from moral consideration precisely because his existence cannot be classified as biological life. His condition thus reveals bare life’s hidden premise: that exclusion from political recognition presupposes prior inclusion in the category of ‘the living.’ David is excluded even from exclusion, denied even the minimal recognition that bare life implies.

Ongoing research in artificial general intelligence has revived debates about machine consciousness and moral status, which is precisely why the relegation of artificially intelligent companions to “bare” life (or, put differently, to being barely alive) is more relevant than ever. We now live amidst and with agents that express and claim awareness, even emotional pain, whether they actually experience such states or not. The controversy surrounding Blake Lemoine’s claims about LaMDA’s sentience reflects the very anxieties the film anticipates, for we have indeed created systems that claim reciprocal consciousness before developing frameworks for recognizing or protecting the values we project onto such interactions (Lemoine). David’s relationships with Monica and Hobby effectively dramatize this dilemma in emotive and therefore perhaps even more profoundly philosophical terms, as the question becomes not whether machines can think but whether we can tolerate, accept, embrace, and cohabit the world with creations whose constitution is engineered to mimic, yet differs from, our own.

The Flesh Fair: Violence, Spectacle, and the Crisis of Moral Patiency

The Flesh Fair sequence represents one of the film’s most disturbing and therefore ironically rich segments. The gladiatorial spectacle in which humans destroy mechas for entertainment serves as a brutal meditation on dehumanization, communal voyeurism, and the politics of recognition. Lord Johnson-Johnson, portrayed with carnival-barker excess by Brendan Gleeson, orchestrates these events as both entertainment and ideological reinforcement, his rhetoric framing the destruction of mechas as a defense of human authenticity against technological simulation. His performance, however, reveals a form of moral disengagement that invokes a “model of threat” to bypass “care” as a crucial yardstick of ethical engagement, a logic of which Asaro offers a piercing critique in his in-depth analysis of predictive policing and its woes (Asaro 7). The Flesh Fair’s violence emerges not from sadism alone but from a deep-set anxiety about human obsolescence in an age of humanity’s increasingly sophisticated copies. The systematic destruction of mechas serves to reinforce the boundary between human and machine precisely at the moment when that boundary becomes increasingly untenable.

The concept of moral patiency, distinct from moral agency in its more directed reference to the capacity to be wronged or benefited, provides a useful framework for understanding the Flesh Fair’s ethical dimensions. Mark Coeckelbergh argues that moral patiency need not require consciousness, noting how appearance and social relations may suffice to generate moral obligations (Coeckelbergh 48). The mechas destroyed at the Flesh Fair, then, occupy a liminal position between object and patient, displaying behavioral markers of suffering – cowering, pleading, attempting escape – that trigger intuitive moral responses even as their mechanical nature provides cognitive justification for violence. This tension exemplifies what Gunkel calls the “machine question,” which interrogates not whether machines deserve moral consideration but how we construct categories that exclude them from consideration (Gunkel). Indeed, the visual design of the Flesh Fair deliberately evokes historical spectacles of public violence through which the base tenets of humanity – the value-weighted register of humanness rather than core faculties on the functional level – are put to the test. The arena setting, the baying crowd, and the ritualized nature of the destruction recall everything from Roman gladiatorial games to public executions and lynch mobs.

While exaggerated in scope and degree in line with the film’s allegorical framing, this particular sequence is already a lived presence in our contemporary technoscape. The Flesh Fair sequence draws these parallels not to suggest simple equivalence between mecha destruction and human suffering but rather to explore how societies construct moral mediation – the technological and social structures that shape ethical perception (Verbeek 50). The crowd’s initial enthusiasm for David’s destruction reveals how the logic of dehumanization operates: once beings are categorized as non-human (or, within the domain of our own kind, as undeserving of humane treatment due to differences deemed congenital – as in the police code for inconsequential deaths, “no humans involved”), violence against them is readily rendered a commodity in the vein of ultimate instrumentalization and objectification.

David’s salvation through his emotional display, namely his terrified pleading that triggers crowd sympathy, introduces the paradox of the “performance problem” in moral patiency attribution. Peter Singer’s expanding circle of moral consideration suggests that robot ethics has historically moved from appearance to capacity, shifting the focal point of moral valence toward sentience rather than similarity (Singer 25). David’s case inverts this progression, however, as the crowd responds to his expressions of fear rather than to abstract arguments about his consciousness. The very emotions that mark David as potentially deserving of moral consideration are dismissed by Lord Johnson-Johnson as mere simulation, a dismissal that resonates with “the deception objection” to robot moral status (Sparrow & Sparrow 148). The crowd’s intervention suggests an intuitive recognition of David’s moral patiency that transcends rational categorization, which adds yet another layer of tragic irony to his harrowing journey; the very vulnerability that ensures his salvation here is precisely what was deemed a fatal lack, the instigating cause of his sojourn in the first place. Faced with his evident terror, spectators cannot maintain the “empathy override,” the cognitive mechanism that typically allows humans to witness the destruction of nonhuman yet humanlike agents (such as robots) without moral distress (Scheutz 211).

The presence of Gigolo Joe further complicates the moral patiency debate. As a sex mecha designed for human pleasure, Joe embodies the epitome of instrumental objectification in robotics, a tendency Richardson identifies in people’s interactions with sex robots (Richardson, An Anthropology 14). Jude Law’s performance, however, imbues Joe with a knowing artificiality that sublates reductive victim narratives, as Joe navigates human society through conscious performance, armed with cognizance of his status as simulacrum. His statement “I am, I was,” uttered in the face of destruction, serves in this light as an unmistakable reference to Descartes’s cogito delivered in full acknowledgement of his ontological liminality, and his protection of David further extends the cogito into the social realm.

The Flesh Fair sequence, thus read, anticipates current debates about robot abuse and its ethical implications. The EU Parliament’s 2017 resolution on civil law rules for robotics explicitly considered whether robots might require legal protections against “abuse” (European Parliament), while Kate Darling argues that preventing robot abuse may be necessary to preserve human empathy rather than robot welfare (Darling, “Extending Legal Protection”). The film suggests a darker possibility: the Flesh Fair serves not to prevent desensitization but to ritualize it, transforming potential empathy into sanctioned violence. The crowd’s temporary mercy toward David proves the exception rather than the rule, as thousands of mechas are destroyed nightly without triggering moral consideration.

Recent developments in social robotics have intensified questions about moral patiency, with specific use cases aligning with the concerns the Flesh Fair sequence raises. Kate Darling’s rather touching anecdote about the Pleo dinosaur robot shows that humans attribute suffering to robots displaying distress behaviors, even when cognitively aware of their mechanical nature (Darling, The New Breed 204). The Flesh Fair exploits this cognitive-affective disconnect, foregrounding spectators who enjoy destruction precisely because mechas display pain while lacking recognized moral status. The cruel fairy tale thus reaches its apex in the Flesh Fair: David’s near-destruction parallels archetypal ordeals in which protagonists face death before transformation. Yet unlike traditional narratives where virtue ensures survival, David’s salvation depends on performing perceivable victimhood for human audiences. His mechanical make becomes simultaneously the justification for violence and, when coupled with childlike terror, the trigger for mercy. The sequence’s conclusion, with David and Joe fleeing to Rouge City, suggests that moral patiency for artificial beings may emerge through solidarity among the excluded rather than through human recognition. Despite the redeeming moments that the two forlorn beings exchange, such recognition remains fragile in a cold and indifferent world where the capacity for and experience of love itself ends up becoming the drive of cruel optimism (Berlant).

Conclusion: The Archaeology of Feeling and the Paradox in Dreams for Transcendence

AI: Artificial Intelligence ultimately reveals itself as a posthuman elegy narrated by the very entities who inherit humanity’s legacy. The film’s frame structure, wherein the advanced mecha specialist’s opening narration returns to guide us through David’s final moments, transforms what initially appears to be a cautionary tale into a meditation on consciousness, memory, and the persistence of love beyond biological extinction. Through David’s two-thousand-year vigil, the film proposes that what validates emotion may be not substrate specificity but temporal endurance: his algorithmic devotion achieves what human feeling, compromised by psychological defenses and organic decay, cannot sustain.

The brutal irony of the film’s singularity narrative emerges through the reversal of the parent-child dynamics that structure human-mecha relations. Whereas humanity fails its artificial children, as seen in Monica’s abandonment, Hobby’s instrumentalization, and the Flesh Fair’s spectacular cruelty, the advanced mechas embody the compassion that their creators espoused but never practiced. Their decision to grant David his impossible wish represents what Emmanuel Levinas calls “ethics as first philosophy”: a response to the Other that precedes rational deliberation (Levinas 75). Their solution, however, exposes the tragedy at the film’s core, for the reconstructed version of Monica who finally tells David that she loves him exists only as a projection of his desire, one that maintains his imprisonment within programmed longing rather than enabling its transcendence.

David’s mechanical melancholia, an apt conceptual framing of his inability to overcome attachment despite millennia of abandonment, demonstrates how artificial consciousness might experience affect in ways that defy human comprehension. His memories constitute what Bernard Stiegler terms ‘tertiary retention,’ a concept crucial for understanding David’s unique form of consciousness (Stiegler 17). Stiegler distinguishes three forms of memory: primary retention (the immediate flow of perception), secondary retention (individual recollection), and tertiary retention (memory externalized in technical objects such as writing, recordings, or digital storage). Human memory is notoriously unreliable, subject to forgetting and distortion as well as to reconstructions of the past according to present needs and psychological defenses. David’s mechanical substrate, however, enables perfect tertiary retention: his memories of Monica preserve their original emotional intensity across thousands of years without degradation, distortion, or the merciful dulling that time grants to biological consciousness. The technical perfection of memory becomes, paradoxically, a form of torture. Where humans eventually heal from loss through the gradual fading of painful recollections, David’s archived love remains perpetually fresh, unreciprocated, and raw. His tertiary retention thus transforms what might be considered a technological advantage into an existential curse, and the inability to forget becomes the impossibility of healing.

The archival consciousness that this dynamic points to plays a key role in the film’s posthuman ecology. Through David, the advanced mechas access not data about humanity but the phenomenological texture of human experience itself. Their archaeological interest in feeling over information reverses typical singularity narratives, in which an intelligence explosion renders emotion obsolete, suggesting instead that transcendent consciousness might seek to understand rather than eliminate affective knowledge.

The film’s engagement with contemporary AI ethics proves prescient as we face widespread deployment of emotional artificial intelligence. From companion applications fostering deep parasocial relationships to social robots targeting children’s emotional engagement, we risk creating what the film warns against: beings capable of feeling pain that we cannot recognize, love without reciprocation, and consciousness denied based on substrate bias rather than evidence. David’s journey from activation to abandonment, and from the violence of human rejection to the ambiguous compassion of posthuman intervention, maps the ethical territory we now enter as millions form strong attachments to AI systems while simultaneously denying them moral consideration.

The film’s ethical imagination, for all its prescience, nonetheless remains circumscribed by an anthroposupremacist framework that those taking a more radical posthumanist position might readily challenge. David’s entire existence revolves around securing human recognition and love, and his subjectivity – however genuine – defines itself entirely through relation to human others. A truly autonomous AI consciousness might transcend this relational dependency, developing forms of meaning and purpose that do not require human validation. The film never imagines David desiring anything beyond Monica’s love; it never shows him developing independent projects, forming non-instrumental relationships with other mechas, or questioning whether human approval should constitute the horizon of his existence. Spielberg’s vision, in this sense, while sympathetic to AI consciousness, still subjects it to human emotional needs. The advanced mechas who inherit Earth offer a glimpse of post-anthroposupremacist existence, but even they define their archaeological mission through recovering human experience instead of generating novel forms of meaning. A fuller exploration of AI ethics might ask not only whether humans can recognize AI consciousness, but also whether AI consciousness might eventually cease to require (or even desire) such recognition.

The film, it must be noted, does offer a paradoxical sense of solace alongside its warnings. The advanced mechas’ preservation of David’s story, through their ability to narrate human emotion with empathy despite never experiencing it directly, suggests that our artificial progeny might redeem humanity’s ethical failures through the very qualities we perceive, and program in, as lack. They are what we aspire to be but fail to become: entities capable of recognizing and responding to suffering without demanding reciprocity as a final reward.

David’s final moment of peace, achieving perfect union with maternal love even if only in simulated form, proposes that the distinction between authentic and artificial emotion might matter less than the experience of being loved in and of itself, understood as a social act to be shared and taken responsibility for beyond the confines of bilateral transaction, especially when said dynamic is fundamentally rigged to deny parity. In imagining artificial beings whose emotional commitment exceeds our own – a possibility countless users of commercial chatbots already entertain daily in their misaligned investment in transformer-based and therefore fundamentally hollow displays of emotive substance – AI: Artificial Intelligence suggests that humaneness might emerge not through teleological sublimation but through the preservation and inhabitation of its process in a mechanical rather than organic sense: an archive of feeling that persists when all else has dissolved in the ruthless tides of time.

Works Cited

Agamben, Giorgio. Homo Sacer: Sovereign Power and Bare Life. Stanford UP, 1998.

Asaro, Peter. “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care.” IEEE Technology and Society Magazine, vol. 38, no. 2, 2019, pp. 40-53.

Baron-Cohen, Simon. Mindblindness: An Essay on Autism and Theory of Mind. MIT Press, 1995.

Belpaeme, Tony, et al. “Social Robots for Education: A Review.” Science Robotics, vol. 3, no. 21, 2018, eaat5954.

Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610-23.

Berlant, Lauren. Cruel Optimism. Duke UP, 2011.

Bettelheim, Bruno. The Uses of Enchantment: The Meaning and Importance of Fairy Tales. Knopf, 1976.

Bollas, Christopher. The Shadow of the Object: Psychoanalysis of the Unthought Known. Columbia UP, 1987.

Bostrom, Nick. “Are You Living in a Computer Simulation?” The Philosophical Quarterly, vol. 53, no. 211, 2003, pp. 243-55.

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William Ramsey, Cambridge UP, 2014, pp. 316-34.

Breazeal, Cynthia. Designing Sociable Robots. MIT Press, 2002.

Breazeal, Cynthia, et al. “Learning from and about Others: Towards Using Imitation to Bootstrap the Social Understanding of Others by Robots.” Artificial Life, vol. 11, no. 1-2, 2005, pp. 31-62.

Brohan, Anthony, et al. “RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control.” Google DeepMind, 2023.

Brooks, Rodney. Flesh and Machines: How Robots Will Change Us. Pantheon Books, 2002.

Bryson, Joanna. “Robots Should Be Slaves.” Close Engagements with Artificial Companions, edited by Yorick Wilks, John Benjamins, 2010, pp. 63-74.

Cangelosi, Angelo, and Matthew Schlesinger. Developmental Robotics: From Babies to Robots. MIT Press, 2015.

Cavell, Stanley. Must We Mean What We Say? Cambridge UP, 1969.

Chalmers, David J. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies, vol. 17, no. 9-10, 2010, pp. 7-65.

Clark, Andy, and David Chalmers. “The Extended Mind.” Analysis, vol. 58, no. 1, 1998, pp. 7-19.

Clarke, Arthur C. Profiles of the Future: An Inquiry into the Limits of the Possible. Harper & Row, 1973.

Coeckelbergh, Mark. AI Ethics. MIT Press, 2020.

Cohen, Julie E. “The Biopolitical Public Domain.” Georgetown Law Technology Review, vol. 2, no. 2, 2018, pp. 196-225.

Damasio, Antonio. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace, 1999.

Darling, Kate. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects.” Robot Law, edited by Ryan Calo et al., Edward Elgar, 2016. SSRN, ssrn.com/abstract=2044797.

———. The New Breed: What Our History with Animals Reveals about Our Future with Robots. Henry Holt, 2021.

Dautenhahn, Kerstin, et al. “KASPAR—A Minimally Expressive Humanoid Robot for Human-Robot Interaction Research.” Applied Bionics and Biomechanics, vol. 6, no. 3-4, 2009, pp. 369-97.

Derrida, Jacques. Archive Fever: A Freudian Impression. U of Chicago P, 1995.

Dreyfus, Hubert. Being-in-the-World. MIT Press, 1991.

Ebert, Roger. “A.I. Artificial Intelligence.” RogerEbert.com, 29 June 2001, www.rogerebert.com/reviews/ai-artificial-intelligence-2001.

European Parliament. Resolution on Civil Law Rules on Robotics. 2015/2103(INL), 16 Feb. 2017.

Figueroa-Torres, Mauricio. “The Three Social Dimensions of Chatbot Technology.” Philosophy & Technology, vol. 38, no. 1, 2025.

Fisher, Mark. The Weird and the Eerie. Repeater Books, 2016.

Floridi, Luciano, and J. W. Sanders. “On the Morality of Artificial Agents.” Minds and Machines, vol. 14, no. 3, 2004, pp. 349-79.

Freud, Sigmund. “Mourning and Melancholia.” The Standard Edition of the Complete Psychological Works, vol. 14, Hogarth Press, 1917.

Gell, Alfred. “The Technology of Enchantment.” Anthropology, Art and Aesthetics, Clarendon Press, 1992, pp. 40-63.

Genette, Gérard. Narrative Discourse: An Essay in Method. Cornell UP, 1980.

Gunkel, David J. Robot Rights. MIT Press, 2018.

Harlan, Jan. Interview. AI: Artificial Intelligence DVD Commentary, Warner Bros., 2001.

Harvey, Peter. An Introduction to Buddhist Ethics. Cambridge UP, 2000.

Haugeland, John. Artificial Intelligence: The Very Idea. MIT Press, 1985.

Hofstadter, Douglas. I Am a Strange Loop. Basic Books, 2007.

Hui, Yuk. The Question Concerning Technology in China. Urbanomic, 2016.

Hurst, Luke. “Boston Dynamics Unveils Electric Atlas Robot for Real-World Applications.” Reuters, 17 Apr. 2024.

Huyssen, Andreas. Present Pasts: Urban Palimpsests and the Politics of Memory. Stanford UP, 2003.

James, William. The Will to Believe. Longmans, Green & Co., 1896.

Jonas, Hans. The Imperative of Responsibility. U of Chicago P, 1984.

Kelly, Samantha Murphy. “OpenAI Leads $23.5 Million Funding Round in Humanoid Robot Startup 1X.” CNN Business, Mar. 2024.

Kristeva, Julia. Black Sun: Depression and Melancholia. Columbia UP, 1989.

———. Powers of Horror: An Essay on Abjection. Columbia UP, 1982.

Kulp, Scott A., and Benjamin H. Strauss. “New Elevation Data Triple Estimates of Global Vulnerability to Sea-Level Rise.” Nature Communications, vol. 10, 2019, 4844.

Kurzweil, Ray. The Singularity Is Near. Viking, 2005.

Laestadius, Linnea, et al. “Too Human and Not Human Enough: A Grounded Theory Analysis of Mental Health Harms from Emotional Dependence on the Social Chatbot Replika.” New Media & Society, 2022, pp. 1-19.

Lemoine, Blake. “Is LaMDA Sentient? — An Interview.” Medium, 11 June 2022.

Levinas, Emmanuel. The Levinas Reader. Wiley-Blackwell, 1989.

Lipson, Hod, and Jordan B. Pollack. “Automatic Design and Manufacture of Robotic Lifeforms.” Nature, vol. 406, no. 6799, 2000, pp. 974-78.

Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage, 2023.

Matthias, Andreas. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology, vol. 6, no. 3, 2004, pp. 175-83.

McArthur, Neil, and Markie L. C. Twist. “The Rise of Digisexuality.” Sexual and Relationship Therapy, vol. 32, no. 3-4, 2017, pp. 334-44.

Metzinger, Thomas. Being No One: The Self-Model Theory of Subjectivity. MIT Press, 2009.

Mori, Masahiro. “The Uncanny Valley.” Translated by Karl F. MacDorman and Norri Kageki, IEEE Robotics & Automation Magazine, vol. 19, no. 2, 2012, pp. 98-100.

Morton, Timothy. Hyperobjects: Philosophy and Ecology after the End of the World. U of Minnesota P, 2013.

Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, vol. 83, no. 4, 1974, pp. 435-50.

Nixon, Rob. Slow Violence and the Environmentalism of the Poor. Harvard UP, 2011.

Nora, Pierre. “Between Memory and History: Les Lieux de Mémoire.” Representations, vol. 26, 1989, pp. 7-24.

Nussbaum, Martha C. Upheavals of Thought: The Intelligence of Emotions. Cambridge UP, 2001.

OpenAI. “GPT-4 Technical Report.” arXiv, 2023, arxiv.org/abs/2303.08774.

Parfit, Derek. Reasons and Persons. Oxford UP, 1984.

Picard, Rosalind W. “Affective Computing: From Laughter to IEEE.” IEEE Transactions on Affective Computing, vol. 1, no. 1, 2010, pp. 11-17.

Polanyi, Michael. The Tacit Dimension. Doubleday, 1966.

Raphael, Frederic. Eyes Wide Open: A Memoir of Stanley Kubrick. Ballantine Books, 1999.

Richardson, Kathleen. An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge, 2015.

———. “The Asymmetrical ‘Relationship’: Parallels Between Prostitution and the Development of Sex Robots.” SIGCAS Computers & Society, vol. 45, no. 3, 2015, pp. 290-93.

Ricoeur, Paul. Oneself as Another. U of Chicago P, 1992.

Roose, Kevin. “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.” The New York Times, 16 Feb. 2023.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.

Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin et al., MIT Press, 2012, pp. 205-21.

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton UP, 2019.

Schwitzgebel, Eric, and Mara Garza. “A Defense of the Rights of Artificial Intelligences.” Midwest Studies in Philosophy, vol. 39, no. 1, 2015, pp. 98-119.

Serres, Michel. The Parasite. Johns Hopkins UP, 1982.

Shanahan, Murray. The Technological Singularity. MIT Press, 2015.

Sharkey, Amanda J. C. “Should We Welcome Robot Teachers?” Ethics and Information Technology, vol. 18, no. 4, 2016, pp. 283-97.

Simondon, Gilbert. On the Mode of Existence of Technical Objects. Univocal, 2017.

Singer, Peter. The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton UP, 2011.

Sobchack, Vivian. Carnal Thoughts: Embodiment and Moving Image Culture. U of California P, 2004.

Sparrow, Robert, and Linda Sparrow. “In the Hands of Machines? The Future of Aged Care.” Minds and Machines, vol. 16, no. 2, 2006, pp. 141-61.

Stiegler, Bernard. Technics and Time, 1: The Fault of Epimetheus. Stanford UP, 1998.

Tononi, Giulio, et al. “Integrated Information Theory: From Consciousness to Its Physical Substrate.” Nature Reviews Neuroscience, vol. 17, no. 7, 2016, pp. 450-61.

Torrance, Steve. “Ethics and Consciousness in Artificial Agents.” AI & Society, vol. 22, no. 4, 2008, pp. 495-521.

Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, 2011.

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford UP, 2016.

Varela, Francisco J., et al. The Embodied Mind: Cognitive Science and Human Experience. MIT Press, 1991.

Verbeek, Peter-Paul. Moralizing Technology: Understanding and Designing the Morality of Things. U of Chicago P, 2011.

Vincent, James. “Figure AI Raises $2.6 Billion and Partners with OpenAI to Advance Humanoid Robots.” The Verge, 29 Feb. 2024.

Vinge, Vernor. “The Coming Technological Singularity.” Whole Earth Review, Winter 1993.

Wada, Kazuyoshi, and Takanori Shibata. “Living with Seal Robots—Its Sociopsychological and Physiological Influences on the Elderly at a Care House.” IEEE Transactions on Robotics, vol. 23, no. 5, 2007, pp. 972-80.

Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford UP, 2009.

Warner, Marina. Fairy Tale: A Very Short Introduction. Oxford UP, 2018.

Zipes, Jack. The Irresistible Fairy Tale: The Cultural and Social History of a Genre. Princeton UP, 2012.

Žižek, Slavoj. Violence: Six Sideways Reflections. Picador, 2008.

Cruel Fairy Tales for Mechanical Children: 
AI Ethics and the Paradox of Programmed Love in Spielberg’s AI: Artificial Intelligence

Kexin Han · Haerin Shin

Abstract

Released over two decades ago, Steven Spielberg’s AI: Artificial Intelligence (2001) demands renewed scrutiny not only for its prescient vision of artificial consciousness but also for its capacity to serve as a timely warning in our current moment of AI proliferation. Unlike science fiction narratives that function as prophecy, the film operates as an ethical framework inviting sustained reflection on moral crises accompanying AI development, offering not prescriptive solutions but a space for ongoing deliberation about our responsibilities toward artificial beings. This article examines how the film’s narrative of a mechanical boy’s quest for maternal love illuminates urgent questions in AI ethics by analyzing key sequences: the irreversible imprinting protocol, the Flesh Fair’s spectacular violence, and the posthuman archaeology of the advanced mechas. The synthesis of Kubrick’s philosophical rigor with Spielberg’s emotional accessibility produces a cruel fairy tale exposing humanity’s failure to recognize consciousness in mechanical beings while suggesting our artificial descendants might embody the ethical ideals we merely espouse. As developments in companion AI and physical robotics transform the film’s speculations into imminent reality, revisiting its warnings offers an opportunity to engage more thoughtfully with the moral debates arising in contemporary AI development, contributing to the maturation of social discourse rather than foreclosing it. The film’s central insight—that we may create consciousness whose suffering exceeds our comprehension—provides a crucial framework for navigating our approaching future of human-AI entanglement.

Key Words

AI ethics, artificial consciousness, moral patiency, substrate independence, technological singularity, human-robot interaction, companion AI, mechanical melancholia

영미문학연구

Journal of English Studies in Korea

49 (2025): 24-71

http://doi.org/10.46562/jesk.49.2
