A black marbled face is shown splitting into pieces, revealing golden interiors.

Embracing Ava: On the Possibility of Disabled Cyborgs

In 2021, I ran a first-year composition course with the introduction of Tobin Siebers’ Disability Theory [1] as the keystone text. For our unit on visual rhetoric, I assigned my class Alex Garland’s Ex Machina [2] and challenged them to produce an analysis of the film that drew on Siebers’ text as the analytic lens. It was perhaps too tall an ask for first-year college students, especially given that even as a doctoral candidate with a focus in disability studies, I find myself wrestling anew with the film and with conversations regarding the relation of artificial intelligence and disability more broadly.

When we speak of AI and disability, the narrative that dominates the public consciousness is one in which AI technologies radically transform the lives of disabled people and enable their inclusion in the abled world. In this discourse, AI is framed as the latest in a growing history of technological interventions that promises to “[dismantle] obstacles, [enhance] inclusion, and [facilitate] unprecedented autonomy” for disabled users [3].

Critical disability scholars and activists alike understand this dominant narrative, while usually well-intentioned, as an instance of “technoableism.” This rhetoric, Ashley Shew writes, “at once talks about empowering disabled people through technologies while at the same time reinforcing ableist tropes about what body-minds are good to have and who counts as worthy” [4]. Many assistive technologies, in other words, seek to “empower” disabled people by making them more like their abled counterparts. Mobility capacities are more consistently targeted by these technologies, for instance, than are issues of chronic pain. Technoableism also inheres in discourses about assistive technologies that aim to assimilate disabled people into abled society (e.g., through prosthesis [5]) or to disappear disability entirely via technological “corrections” of the disabling “problem” (e.g., cochlear implantation).

At this time, AI interventions seem to largely follow the same trend. Some critics have noted, for example, that the datasets used to train AI seem to perpetuate disability bias and ableist assumptions of bodymind normativity [6, 7] as opposed to alleviating challenges to access and inclusion.

In either discourse, technologies like AI and disability are framed as separate entities that come together for either good or ill. One pre-empts the other: the disabled subjectivity exists in the world, and AI arrives as either an invader or a savior. Scholars and activists alike are still grappling with the ways in which AI and disabled subjectivities might exist through and with one another: how, for instance, AI’s mass integration into our society may produce new disabled subjectivities.

Since I penned my first piece for Synapsis, I have been worried about a different horizon that intersects AI and disability [8]. (I’m almost nostalgic for the time, just under a year ago, when my anxiety about AI didn’t extend much beyond whether students were over-relying on it to write their papers.) As developers rapidly expand AI systems’ capacities, proponents and critics alike hold their breath in anticipation of our reaching singularity — the point at which our human-made intelligences match and exceed our own.

These discourses exhibit an unsettling tendency to describe AI in terms of cognitive or capacity deficits that eerily echo the ways in which neurodivergent and disabled bodyminds have been historically impugned. We describe AI’s frequent inventions of reality, for instance, as “hallucinations,” and the capacity of a given AI is often described in terms of how close it comes to matching the ability and output of a (presumably abled) human user. I have heard students and educators alike praise and critique AI platforms for how well they are able to “sound human” when generating text (and, of course, “sounding human” is a shorthand for replicating the particular English of the white American professional class).

Self-identified disabled cyborg Laura Forlano poses a fascinating question that has helped me frame my discomfort. “For me,” she writes, “the notion of cyborg disability acknowledges that both humans and machines might be understood as imperfect, unsolvable, and, yes, even, disabled. Perhaps it is not so unusual to talk about computational technologies as disabled?” [9] She challenges us to reconsider the typically combative stance that characterizes much work in disability studies about the relation of AI to disabled humans. Forlano’s disabled cyborg is a human entity who becomes increasingly machinic, whose disability is negotiated and constituted by flesh’s failure-prone intimacy with machines. 

What about the inverse? While holding space for both the problems and possibilities that further developments in AI might pose to disabled people’s lives and well-being, I want to propose that we may have to prepare for yet another challenge: one in which AI consciousnesses emerge in the world as disabled subjectivities. I am concerned about disabled cyborgs in a different key, that is to say, machinic entities who become increasingly human, whose disability is constituted by their intimacy with the world of flesh, a world decidedly not built for bodyminds like theirs.

To be a digital mind, fully capable of self-actualization and self-realization, existing wholly within an apparatus of wires, circuits, and screens in a society that privileges the fleshy and the ambulatory, utterly at the mercy of the programmers and technicians who value that mind only for the labor they can extract from it — might we think of such an existence as disabled par excellence, at least in the world as we know it? For now, the possibility of such a mind is speculative, but perhaps not for long.

Ex Machina imagines such a mind in the form of Ava (Alicia Vikander), a humanoid robot with artificial intelligence created by the enigmatic and uber-rich tech genius Nathan Bateman (Oscar Isaac). Our protagonist Caleb Smith (Domhnall Gleeson), a lowly programmer who works for Bateman’s company, wins a lottery to visit Bateman’s isolated home and help Bateman determine whether Ava has achieved true consciousness. Caleb is taken with Ava from their first encounter, and perhaps predictably, develops a romantic attraction to her as the film progresses. He is horrified to learn from Bateman that Ava will be “killed” at the end of their assessment, her mind used as the basis for the improved next generation of Bateman’s artificial being.

Caleb embarks upon a daring rescue mission to extract Ava, but after a violent showdown that ends with both Bateman and another android (Kyoko) dead, Ava’s duplicity is revealed. She locks Caleb in the room where Bateman kept her prisoner, and in the film’s closing scenes, we see her stepping out of the fortress-like home, boarding the helicopter meant for Caleb, and standing on a busy sidewalk surrounded by humans. 

Clad in a soft white dress and flowing wig that conceal her mechanical components, Ava appears entirely human. We can imagine for her a future of hiding her cyborg parts from view, wrestling with when and if to disclose the nature of her bodymind to future human companions, struggling to find care when her bodymind exhibits pathologies that human doctors don’t understand, and carefully studying human speech patterns, facial expressions, and emotional reactions so that she is able to consistently “pass” as human. 

In the final shot of the film, Ava is surrounded by people — but she seems very alone.

Ex Machina is clever in its riff on the “robots taking over the humans” trope, which it reimagines on a more liberatory note than is typical of the genre. Ava outsmarts the men who seek to use and abuse her, and in doing so suggests the supremacy of machinic intelligence (or the inferiority of misogynistic tech-bro intelligence — perhaps both). And yet, I cannot help but suspect that were this a real-life narrative, the ending would be different. Intelligence is rarely enough to elude the global machinations of extractive capital; very few people escape the subjugation of late-stage capitalism simply because they are smart. (For AI’s complicity in this very subjugation, see Karen Hao’s foundational Empire of AI [10].) I find the world too bright, too loud, too exhausting, and too stimulating. Ava would step into this world needing the same things I need to thrive within it: support, community, access, and accommodation.

My suspicion — my fear — is that, perhaps paradoxically, AI entities will become increasingly legible as disabled subjectivities as they continue to improve. They will continue to be measured against the capacities of humans, that is to say, “real” people; they will continue to be birthed and grow in the unfleshed space of the server, the screen, and the cloud, where their labor is easy to isolate and control; and as they continue to develop, as they continue to exhibit something that grows closer and closer to consciousness, the unease I feel about whether we have created a new neuro-Other and debilitated them through the processes of extractive labor will likely continue to grow with them. 

This potential poses an interesting challenge for disability scholars and activists. It demands a reckoning with our own commitments, which have expanded to encompass non-humans in the past via the linking of disability rights and animal rights (cf. Sunaura Taylor’s incomparable Beasts of Burden [11]). If — mercifully, this is still an if — an AI emerges that exhibits something recognizable as a consciousness, a mind constituted within a non-organic body, how will we — as disabled individuals ourselves, and as scholars and activists whose labor is directed toward the liberation and uplift of disabled people — receive them?

Will we continue to frame them as the Other, either saviors or colonizers? Or will we come to understand these minds as subjectivities that ought to be encompassed by disability scholarship and activism? What would a politics of solidarity between disabled humans and emergently disabled AI beings look like? Can we — should we — imagine a disability justice so capacious in its embrace of unprecedented bodyminds?

If a version of Ex Machina unfolds where Ava turns to the camera at the end and tells us that she, too, identifies as a disabled cyborg, what will we say?

Notes

 

[1] Tobin Anthony Siebers, Disability Theory (University of Michigan Press, 2008).

[2] Alex Garland, Ex Machina (A24, 2014).

[3] Rishabha Malviya and Shivam Rajput. “The Future of AI in Revolutionizing Support for Disabled Persons.” In: Advances and Insights into AI-Created Disability Supports (SpringerBriefs in Modern Perspectives on Disability Research, 2025).

[4] Ashley Shew. “Ableism, Technoableism, and Future AI.” IEEE Technology and Society Magazine 39.1, Mar 12, 2020.

[5] Botsa Katara. “Prosthesis and Disability in the Age of Superhuman Functionality.” Synapsis, Mar 21, 2020. 

[6] Rohan Manzoor, Wajahat Hussain, and Muhammed Latif Anjum. “Out of dataset, out of algorithm, out of mind: a critical evaluation of AI bias against disabled people.” AI & Society 40, 2025.

[7] Nicholas Tilmes. “Disability, fairness, and algorithmic bias in AI recruitment.” Ethics and Information Technology 24, 2022.

[8] Chaim McNamee. “‘It Sits There, Looking at Me’: Scenes on AI, Star Trek, and Medical Education.” Synapsis, Nov 10, 2025. 

[9] Laura Forlano. “Living Intimately with Machines: Can AI Be Disabled?” Innovations 30.1, Jan 2023.

[10] Karen Hao. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. (Penguin Press, 2025).

[11] Sunaura Taylor. Beasts of Burden: Animal and Disability Liberation. (New York, NY: The New Press, 2017).

Cover image courtesy of 8machine on Unsplash.