Todd Presner presents on new book about digital humanities and Holocaust memory
On October 8th, 2024, Todd Presner (Michael and Irene Ross Professor and Chair, Department of European Languages and Transcultural Studies, UCLA) delivered a lecture on his recently published book Ethics of the Algorithm: Digital Humanities and Holocaust Memory. The lecture was cosponsored by the USC Mellon Humanities in a Digital World program.
In the book, Presner analyzes how computational methods can expand the ways researchers interact with Holocaust testimony, and what impact this new understanding can have on the field at large. He argues that these new computational methods can help attune the listener to elements of the testimony that have been overlooked. The book has an accompanying website with testimonies, interactive visualizations, and other supplemental components.
Presner situated his research by arguing that researchers cannot use computational tools such as algorithms and mass data collection without acknowledging the historical abuses of such data gathering, including its role in the persecution of the Jews. He then introduced the audience to David Boder, the first person to systematically record survivors’ testimonies after the Holocaust, whose pioneering integrated methodologies ground the book and act as its throughline. Throughout the book, Presner switches between ‘distant’ and ‘close’ listening, which he defines, respectively, as examining the corpus as a whole and attending to single words, phrases, or intonations.
Discussing some of the central themes of the book, Presner described how he has utilized machine learning tools to examine what questions survivors are asked and how those questions have changed over time. Presner has explored and analyzed the theme of agency, specifically how moments of action have not been thoroughly indexed and yet can be illuminated by the tools he uses, widening our understanding of Jewish agency. He also explained how AI can both positively and negatively impact the future of testimonies and the research surrounding them. Presner argued that rather than being computational in a narrow sense, the book is a humanities argument for how computational methods can be used to preserve, interpret, and perpetuate cultural memory through a variety of approaches.
Presner introduced the term ‘distant witness’ to describe where the field finds itself as more and more Holocaust survivors pass away. In light of this, he argues that digital tools, such as the USC Shoah Foundation Visual History Archive and the interactive Dimensions in Testimony program, as well as the Illinois Holocaust Museum’s virtual- and augmented-reality immersive testimonies, are ways of extending the archive and creating a digital memory culture. He went on to highlight forms of remediating the archive through mediums such as Instagram and the questions these projects raise. While discussing generative AI and the limits of representation, Presner highlighted the lack of transparency on the part of AI models regarding the source material used. He firmly believes that when these networks do not offer provenance for how they collect source material and data, their use is deeply problematic.
Presner shared that he came to this project while considering the big-data challenge posed by the scale of various testimony archives, not only in the number of testimonies and transcripts but also in the metadata attached to those testimonies. He argues that at this scale, one has to employ tools both to find new elements to research and to comprehend the data presented, which in turn presents ethical and epistemological choices. Presner posed the possibility of a ‘generative’ abstraction that does not reduce testimonies to ‘bare data,’ the deliberate and nonconsensual reduction of human beings to data points and calculations for singular ends. He argues that this is one way algorithms and computational tools can be complicit in persecution, something we need to remain constantly aware of. One way to address this challenge is to create reparative data frameworks, which he discusses in the book, all of which work toward deepening understanding without reifying victims or resubjecting them to the violence of numeracy.
In terms of proposing methods and imagining practices, Presner asserted that we don’t have all the answers, and the answers are not black and white. He listed several methods and ways to discover and study differences in large testimony archives using digital tools, all of which require human interpretation of the data to discover the meaning of the differences at hand. Presner gave some illustrative examples of examining an archive at a macro-level and the new questions that analysis raises about narrative patterns and the standardization of testimony.
Returning to his discussion of agency, Presner argued that tools like natural language processing can help produce counter-indexes, deepen our understanding of agency in testimony, and locate lesser-discussed topics. He highlighted one example that uses semantic triplets (subject-verb-object extractions) to find specific themes of violence and agency at a granular level across a single testimony or a group of testimonies, culminating in a new reading experience. Presner emphasized the ethical responsibility of always linking back to the source material. He then discussed the questions asked by interviewers in several archives, including the USC Shoah Foundation’s, and what can be gleaned by examining these questions as a larger corpus. A clustering algorithm sorts the questions by semantic similarity and topic, allowing the researcher to notice similarities and differences among questions and how they change over time and across settings. The model lets the user sort by topic and see each question in context, which Presner argues prevents problematic decontextualization of the data.
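To make the clustering idea concrete, here is a deliberately minimal sketch, not Presner’s actual pipeline. His project presumably relies on learned semantic representations; this toy version stands in for them with simple word-overlap (Jaccard) similarity and a greedy grouping rule, so that thematically similar interviewer questions land in the same cluster. The function names and the threshold value are illustrative assumptions.

```python
import re

def jaccard(a, b):
    """Word-set overlap of two strings: |A ∩ B| / |A ∪ B|."""
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return len(sa & sb) / len(sa | sb)

def cluster_questions(questions, threshold=0.2):
    """Greedily assign each question to the first cluster whose
    representative (its first member) is similar enough; otherwise
    start a new cluster. A stand-in for real semantic clustering."""
    clusters = []
    for q in questions:
        for cluster in clusters:
            if jaccard(q, cluster[0]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

# Hypothetical interviewer questions, not drawn from any archive.
questions = [
    "Where were you born?",
    "Where were you living before the war?",
    "What was your family like?",
    "Can you tell me about your family?",
]
for group in cluster_questions(questions):
    print(group)  # birth/wartime-location questions group separately from family questions
```

Even this crude version shows the workflow Presner describes: the algorithm proposes groupings, but a human researcher must still inspect each cluster in context to decide what the similarities and differences mean.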
Finally, Presner turned to AI and the challenge of authenticity. Returning to the USC Shoah Foundation’s Dimensions in Testimony project, he highlighted its use of non-generative AI to link questions with survivors’ answers as an example of authenticity, since the answers given remain authentic to the survivor’s story. He contrasted this with the lack of transparency and authenticity found in generative AI chat models in which users can interact with figures like Anne Frank. He believes strongly that an ethical algorithm has to provide as much data and documentation as possible and be as transparent as possible about what choices were made and where. Presner does think that AI has a place in research, particularly in the promise of Large Language Models, but a model needs to be able to answer how it came to its conclusions. Ultimately, human judgment is key.
In a lively Q&A, Presner fielded many questions, on topics ranging from how to make these sources broadly available without reducing them to bare data, to his motivation for pursuing the project, to how the project defines active versus passive when studying semantic triplets, to diversity of language, and to the percentages of possible, rarely asked, and failed questions for Dimensions in Testimony.
Read more about the book Ethics of the Algorithm here.