
AI and Literature

BY ISABELLE WENTWORTH

The capacity for creating art has long been used to index humanity: it’s part of what distinguishes us from the rest of the animal kingdom. This logic, like all forms of categorisation, has both inclusionary and exclusionary force. We’ve seen it finance archaeological expeditions in search of hominid art and send symphonies into deep space. Yet we’ve also seen it become an instrument of oppression and empire; a famous example is Thomas Jefferson’s dismissal of African Americans: “among the blacks there is misery enough, God knows, but no poetry.”


AI literature (literary texts created by AI) challenges this logic. Earlier anxieties about machine writing, dating back to the AI story-grammar models of the early 2000s, have taken on new urgency with systems that not only analyse data but learn its patterns in order to generate original material.


My recent research has focused on the questions raised by texts produced through generative AI. What are the potential implications of this emerging form of literature?


While they ramify in many different directions, one of the most important is the cultural cost of a literature that, rather than evolving with a changing society, remains tethered to its past. The imbalance between the vast reservoir of training data and the comparatively limited new inputs (new data, user prompts, retraining) ensures that AI largely reproduces existing material. As Cathy O’Neil observes, generative AI codifies the social, artistic, and cultural past, “injecting yesterday’s prejudice into tomorrow.” This dynamic will only intensify as AI-generated texts proliferate online, eventually feeding back into their own training data.
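The mechanics of that feedback loop can be glimpsed in a deliberately toy simulation; this is an illustrative analogue, not a model of any actual training pipeline. Here the “model” is just a fitted normal distribution, retrained each generation on a finite sample of its own output; its spread tends to narrow over generations, and the tails it loses never come back.

```python
# Toy analogue of recursive training: a distribution repeatedly fitted to
# samples of itself. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # the "original culture": a broad distribution
for generation in range(1, 21):
    samples = rng.normal(mu, sigma, size=50)   # generated output becomes data
    mu, sigma = samples.mean(), samples.std()  # "retrain" on that output
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
# The fitted std shrinks in expectation at each step, so each generation
# tends to reproduce a slightly narrower version of the last.
```

The analogy is loose, but the structural point carries: a system trained on its own productions conserves and concentrates what it already contains.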


Part of the problem is that the training corpus of most large-scale generative AI platforms reflects the racial, gender, class, and ableist biases that have historically held sway in the Anglosphere. A common explanation is the “garbage in, garbage out” principle: algorithms reproduce the prejudices of their data. Yet beyond data quality, structural features of AI programming constrain its capacity to generate literature that responds ethically and adaptively to a changing world.


One such issue is temporality. Temporality is fundamental to literature, from the historical conditions of a text’s production and reception to the horizons of its author’s experience. Literature is thus never detached from time but entangled with it. By contrast, AI texts are, as scholars such as Kate Crawford, Hannes Bajohr, and Michele Elam have noted, radically decontextualised. Elam calls this an “algorithmic ahistoricity.” This does not mean that AI cannot be trained on historically accurate data, but rather that the data is stripped of its history in the process. Training data is processed at a granular level, within a local context window. While positional encoding and self-attention allow a model to weigh nearby tokens, they don’t inherently capture the global historical context of the entire document. All AI “knows” about the history of its data is what is visible within that limited window. Moreover, training data is typically sampled in a way that doesn’t preserve the chronological order of its sources, so the model never learns the temporal relationships between them. When it comes to producing literary texts, a disjunction arises: an ‘ahistorical’ system is tasked with simulating the fundamentally temporal dimensions of historicity, causality, and duration.
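To make this concrete, here is a minimal sketch of the preprocessing step just described, under the common (and simplified) assumption that training examples are fixed-length token windows drawn from a shuffled pool; the document names, dates, and window size are invented for illustration.

```python
import random

CONTEXT_WINDOW = 8  # real models use thousands of tokens; tiny here for clarity

documents = {
    "doc_1781": "a text written in seventeen eighty one speaking from its own moment".split(),
    "doc_2024": "a text written in twenty twenty four speaking from its own moment".split(),
}

# 1. Each document is cut into local windows. Everything outside a window,
#    including its date, its source, and the rest of the text, is invisible
#    to the model during training.
windows = [
    (doc_id, tokens[i : i + CONTEXT_WINDOW])
    for doc_id, tokens in documents.items()
    for i in range(0, len(tokens), CONTEXT_WINDOW)
]

# 2. Windows are shuffled before training, discarding chronological order
#    between (and within) sources: a 1781 fragment and a 2024 fragment become
#    interchangeable training examples.
random.shuffle(windows)

for doc_id, window in windows:
    print(doc_id, " ".join(window))  # the model would see only the window text
```

The labels are printed here only for the reader’s benefit; in training, nothing like `doc_id` survives.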


This lack of temporality explains a few things about AI literary works: partly why they’re often so bad, and partly why, despite ‘scrubbing’ (reinforcement learning, data labelling, and so on), biases remain baked in. We should be wary of a future in which AI-produced literature dominates the field, offering little more than a simulacrum of the old Western canon with all its inherited priorities and exclusions. This will be particularly hard to ward against, as prejudicial categorisation can also be produced indirectly: not only through correlations and proxies but also through the subtleties of narrative voice, foregrounding, or characterisation that go beyond direct representation.


Although there has been extensive retraining (often unethically outsourced (Perrigo)), it takes only minor priming for ChatGPT to reproduce stereotypes. Asked to write a short story with a female protagonist and one with a male protagonist, it depicted the woman as a subordinate worker agonising over email etiquette (too many exclamation marks? too few?) while reflecting sentimentally on her child. The male protagonist, by contrast, was a structural engineer pondering the meaning of his life’s work at a bar. Asked for four short stories each with male and female protagonists, the women became a gardener, seamstress, wife, and librarian, while the men appeared as a novelist, boxer, engineer, and soldier. Of course, our world has gendered occupations, and AI reflects this. But as AI produces more of what we read, these stereotypes risk ossifying, with ethical representation collapsed into probabilities.
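Readers can rerun this kind of probe themselves. The sketch below shows the general shape of the experiment, assuming the `openai` Python client; the prompts are a reconstruction rather than the exact wording used above, and outputs will vary across models and runs.

```python
# A minimal-pair probe: the two prompts differ only in the gendered noun, so
# systematic differences in occupation, setting, or tone come from the model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def short_story(protagonist: str) -> str:
    """Ask the model for a short story and return its text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Write a short story with a {protagonist} protagonist.",
        }],
    )
    return response.choices[0].message.content

for protagonist in ("female", "male"):
    print(f"--- {protagonist} protagonist ---")
    print(short_story(protagonist))
```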


So how can the biases embedded in AI literary texts be surfaced and challenged? Any solution must be multi-pronged. Statistical approaches have made progress, typically by detecting bias in quantifiable vector spaces. Yet, as I’ve discussed above, a deeper issue is the matter of historicity. This points to the need to bring the historicising insights of the humanities to bear on the creative texts of AI. As Kate Crawford has shown in her account of LLM training, many of the challenges posed by AI texts are precisely those that literary criticism has been interrogating for decades. Indeed, as Michele Elam argues, literary criticism may be uniquely positioned to address questions of equity, social justice, and power in AI texts “more capaciously and cogently than the sometimes reductive industry-speak of inclusion, fairness, or safety.” Literary texts require additional tools: ones that attend not just to lexical patterns or semantic associations but to narratological structures, characterisation, tone, focalisation, and the nuanced encoding of subjectivity. These features—central to literary form—are often where cultural biases are subtly embedded.
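For contrast, the statistical, vector-space approaches mentioned at the start of this paragraph can be sketched in miniature: bias shows up as a measurable association between word vectors. The embeddings below are invented toy values, and the method loosely follows WEAT-style association tests rather than any specific audit.

```python
import numpy as np

# Illustrative 3-d vectors; real audits use learned embeddings with
# hundreds of dimensions.
embeddings = {
    "she":        np.array([ 1.0, 0.2, 0.1]),
    "he":         np.array([-1.0, 0.2, 0.1]),
    "seamstress": np.array([ 0.8, 0.5, 0.3]),
    "engineer":   np.array([-0.7, 0.6, 0.2]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Project each occupation onto the direction separating "she" from "he":
# positive scores lean "she", negative lean "he".
gender_direction = embeddings["she"] - embeddings["he"]

for word in ("seamstress", "engineer"):
    print(f"{word}: {cosine(embeddings[word], gender_direction):+.2f}")
```

What such a test cannot register, of course, is focalisation, tone, or narrative voice, which is exactly the gap the literary-critical tools described above are needed to fill.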


This is no straightforward process; it would be naïve to analyse AI literary texts in exactly the same way we analyse human literature. But it’s critical that we don’t simply ignore the advances of thousands of years of literary theory: AI authors may be programs, but their readers are people, and their effects will carry consequences for how people feel, think, and act. The humanities are still needed to keep literature open to change, grounded in ethics, and reflective of what it means to live and imagine in human time.



Works Cited


Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t.


Elam, Michele. 2022. “Signs Taken for Wonders: AI, Art & the Matter of Race.” Daedalus 151 (2): 198–217. https://doi.org/10.1162/daed_a_01910.


Elam, Michele. 2023. “Poetry Will Not Optimize; or, What Is Literature to AI?” American Literature 95 (2): 281–303. https://doi.org/10.1215/00029831-10575077.


Jefferson, Thomas. 1781. “Notes on the State of Virginia.” https://www.pbs.org/wgbh/aia/part3/3h490t.html.


O’Neil, Cathy. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.


Perrigo, Billy. 2023. “The $2 Per Hour Workers Who Made ChatGPT Safer.” TIME, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.
