This is the first article to come out of “The Future of Knowledge Mobilization and Public History Online” workshop held at Huron College by ActiveHistory.ca in August 2024. Jessica DeWitt and Andrew Watson represented NiCHE at this workshop. This article was originally published on ActiveHistory.ca.
The opening session of Active History’s late-August workshop on knowledge mobilization and public history confronted the changing digital environment and its consequences. Among the digital topics discussed, artificial intelligence (AI) stood out not just for the quantity of discussion it produced, but for the nature of that conversation. Historians are thinking about AI, that much is clear, but they are not necessarily of one mind.
A range of historians’ opinions about AI is also on display in the Active History archives. Since the release of ChatGPT made AI technology readily available and easily accessible in November 2022, Active History has published a number of pieces on the topic from a variety of perspectives. In short essays published last year, Sara Wilmshurst reminded us that “there are questions machines can’t answer” and Carly Ciufo was impressed, but not too impressed, by the utility of ChatGPT for prompting research.
Among the Active History essays on AI, certainly the most bullish is one from Mark Humphries (who also writes a regular blog on AI and history) and Eric Story. They argued in March 2023 that Large Language Models (LLMs), like ChatGPT, are useful tools for writing, editing, teaching, and research (in other words, for the vast majority of the work that historians do). Like it or not, Humphries and Story insist, LLMs are here to stay. They conclude their article by agreeing with the Bing (Microsoft) chatbot, Sydney, that “historians and AI can work effectively together.”
In the most bearish essay of the bunch, Edward Dunsworth argued in September 2023 that AI is boring, most of all because it is, by definition (or by operation), unoriginal. For Dunsworth, writing mainly about the unauthorized use of AI in the classroom, ChatGPT is good for a “ho-hum encyclopaedia entry with a twist of textbook authority and a dash of generic, corporate blog-style prose,” but not too much else.[1]
Anecdotally, on the basis of the conversation at the Active History workshop and informal conversations I’ve had with other historians over the past couple of years, bullishness is more popular than bearishness when it comes to AI and history. Or, if it’s not more popular, it’s more vocal. Something like the view of Humphries and Story now seems common: AI is an emerging fact of scholarly life, not to mention life in general, and while it presents challenges it also presents opportunities, and we’d better learn to manage the former and take advantage of the latter. Learning how to do that is surely tricky, and most of the discussion at the Active History workshop revolved around that trickiness. About the particulars of how to both cope and work with AI, there is little consensus. About the first-order question of whether historians should embrace or accept AI at all, there seems to be increasing agreement.
To the extent that we as historians accept as settled the first-order questions about AI and instead opt to talk about nuanced details of implementation, I think we risk a very serious mistake. Here, then, I want to state my view of AI and its use in history publicly, and to do so without any qualification. I hate AI in general, and especially when it comes to its use in the humanities, particularly in history. Like Dunsworth, I find it impossibly boring. And not just boring, but malign. I have no interest in working with AI myself (although I gather outright avoidance is all but impossible; I believe AI is operative in the Microsoft Word document I am writing in), and I have no interest in reading or engaging with anything that was “co-created” by a human being and an AI machine. If AI cures cancer, fine. Otherwise, no thanks. Based on my understanding of the work that I do as a historian, and of the work that we collectively do as historians, I find AI utterly anathematic.
Some of the reasons why I hate AI are not exactly germane to the study, teaching, and writing of history. Still, I cannot go without noting that, until and unless AI halts climate change, which some think it will do, it is rather an environmental disaster (if at a relatively modest scale). The data centres that power AI technology already consume vast amounts of energy, and the energy demand from such centres is expected to double by 2026. In the United States, to note one very important example, climate goals that had little hope of being reached to begin with are now even more unlikely thanks to all the energy being used to power AI. Consuming all that energy creates a lot of heat, too, and so data centres aren’t just churning through energy but also through water; billions of cubic metres of water are used to cool the data centres down. Unsurprisingly, all of this is prompting the giant tech companies to fashion themselves as energy companies, too, with the likes of Google and Amazon rushing into the nuclear energy business in hopes of meeting the AI industry’s energy demands with cleaner power. If that fails, at least Amazon has a say in what standards will be used to judge the effort in the first instance!
So, AI is not great for the planet on which humanity lives, and for that reason alone it is worthy of humanists’ skepticism. What, then, of its prospects for humanistic, and especially historical-humanistic, endeavour on that planet? Here, I think it is useful to respond directly to some of the points raised in AI’s favour by Humphries and Story in their optimistic Active History essay, not least because, again, they were early to articulate a view that now has fairly wide purchase, and because their piece implies that AI can be a useful tool for historians across nearly all of the work that they do.
While ChatGPT is a competent technical writer, Humphries and Story suggest in opening their account of its utility, it is even more useful for editing. “The program can take original but poorly written text and make it cogent while still preserving the author’s original ideas,” they write. I am sure this is true, but at what cost? Without claiming that such an “editorial process” would offer little or no learning benefit to the person whose writing was being edited (though I do have that concern), I must say that to me this view implies a devaluation of what historical writing is and what it does. At bottom, we are often reminded, historians are storytellers. The idea of AI turning bad historical writing into good historical writing puts me in mind of a point made by the writer Ted Chiang in a persuasive essay about why AI cannot make art:
“Many novelists have had the experience of being approached by someone convinced that they have a great idea for a novel, which they are willing to share in exchange for a fifty-fifty split of the proceeds. Such a person inadvertently reveals that they think formulating sentences is a nuisance rather than a fundamental part of storytelling in prose. Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium.”
The point applies equally to historical writing. Prose expression is not some barrier to the communication of historical knowledge, to be cleared by any means, but rather an integral aspect of that communication.
Humphries and Story are also optimistic about the benefits of AI for history students, noting that many teachers are now using AI, in some fashion, in the classroom. I am not one of those teachers, but I have of course encountered AI in my classroom. Once or twice, before I adequately altered assignments to be as AI-proof as I could make them, those encounters were of the ordinary, AI-as-cheating kind. But more interestingly, this spring I had an engaged and enterprising student approach me with a beta version of JSTOR’s “interactive research tool,” which “employs generative AI and other technologies to empower people to deepen and expand their research with JSTOR’s trusted corpus.” The student showed me an article with an AI-generated summary on the right-hand side of the page and asked me what I thought. I hemmed and hawed: “I guess these kinds of tools are inevitable,” “it seems possibly useful,” “I’m not sure.” I wish I had been honest: I’m afraid that AI is going to teach us to forget how to read.
The utility of AI for taking care of mundane historiographical tasks is another area of apparent promise. Pondering how AI might relieve the tedium of archival work, Humphries and Story ask us to “Imagine a future in which thousands of pages of handwritten documents are quickly transcribed, proof-read, summarized, and analyzed by AI.” The transcription part of that future seems bright, but otherwise the vision I conjure is dystopian. I don’t think that AI should be doing our summarizing, and of course it should not be doing our analyzing. Humphries and Story might respond on this point that AI summary and analysis would be just a tool, a way to speed up the historian’s own work by providing, perhaps, a starting point. But with critical AI scholars increasingly reminding us of the ways in which AI has already been “amplifying inequalities, perpetuating gender and racial biases, and consolidating new forms of knowledge extractivism,” we ought to question the value of such starting points.
Finally, the “real promise” that Humphries and Story see in AI for history has to do with the management of “big data.” On this, I tend to agree: digital humanists (who do not count me among their number, and need not care about my view of their methodologies) will surely find ways to use AI to process giant bodies of information in new ways. But in their optimism on this point, Humphries and Story make what I think is the most objectionable claim in their essay. They write: “Historians have a real opportunity here to help solve some of the technological problems at the heart of the AI dilemma. AI is very good at finding information, synthesizing it, and communicating it, but less so at being accurate and discerning fact from fiction. This is, of course, what historians do best.” What? Historians are not glorified fact-checkers. Neither Humphries nor Story, I’m quite sure, would actually say so in as many words, but that is the implication of the claim. On the contrary, finding information, synthesizing it, and communicating it is the very core of the historian’s craft; accuracy is but a (very important) box to check along the way. Outsourcing the finding, the synthesizing, and the communicating to AI is to cede just about the whole craft to the machines.
We should not, of course, cede the craft to the machines. On this, surely all historians would agree. My concern is that, to the extent that we embrace AI as an inevitable and ultimately benign force in our field, we might inadvertently do just that. It’s not that we will participate in the replacement of historians by AI, but simply that we will become poorer historians, less connected to the whole picture of the work that we do. So instead of embracing AI with open arms, we might take a cue from the pioneering AI researcher Joseph Weizenbaum, a computer scientist who developed a chatbot called Eliza in the mid-1960s before coming to believe that AI posed serious dangers. As the writer Ben Tarnoff has put it, Weizenbaum ultimately thought that, among other negative consequences, turning over human endeavours to computers meant that “the richness of human reason had been flattened into the senseless routines of code.”
Opposing the flattening of historical reasoning in this way is the normative order of the day. I’m not sure exactly how this should be done, and maybe this is not the place to work it out. But in general terms, to paraphrase Gavin Mueller, we might benefit from approaching AI not as a neutral technology but as a site of labour struggle. After all, I have repeatedly referred in this essay to the “work” that historians do, and the values that attend that work. Can those values, already under threat in many ways, be sustained alongside the AI-ification of history? Personally, I doubt it.
Feature Image: “MIT Museum: CADR – The LISP Machine (late 1970s) (detail)” by Chris Devers is licensed under CC BY-NC-ND 2.0.
Notes
[1] In the winter semester of 2024, I used Dunsworth’s article as the basis for mini-speeches to students during introductory, syllabus-reviewing lectures. I found students amenable to the view that AI is, indeed, very boring.