I am Assistant Professor for the History and Theory of the Digital Humanities at the University of California, Santa Barbara. My research and teaching focus on the epistemology, aesthetics, and politics of artificial intelligence: I study how machine learning models represent culture and what is at stake when they do. You can find a selection of my publications below. Email me, or get in touch on Twitter or Bluesky.

My work is situated at the intersection of critical artificial intelligence studies, the digital humanities, media studies, and the history of technology. It is supported by the Volkswagen Foundation through the AI Forensics project and by the UC Humanities Research Institute through the Critical Machine Learning initiative. At UCSB, I am affiliated with the Department of German, the Media Arts and Technology program, the Comparative Literature program, the Mellichamp Initiative in Mind & Machine Intelligence, and the Center for Responsible Machine Learning. My technical work includes the imgs.ai project, hosted at the Deutsches Dokumentationszentrum für Kunstgeschichte. Before joining the faculty at UC Santa Barbara, I worked for a number of German cultural institutions, including ZKM Karlsruhe and the Goethe-Institut New York. I have published and lectured widely on questions of the digital, including 25+ first-author papers and 50+ lectures worldwide. My current book project focuses on "Machine Visual Culture" in the age of foundation models. Other research interests include the history, theory, exhibition, and preservation of digital art.


There Is a Digital Art History.

Visual Resources, 2024

We revisit Johanna Drucker's question, “Is there a digital art history?” in light of the emergence of large-scale, transformer-based vision models. Such models have “seen” huge swathes of online visual culture, biased towards an exclusionary visual canon, and they continuously solidify and concretize this canon through their already widespread application in all aspects of digital life. We use a large-scale vision model to propose a new critical methodology that acknowledges the epistemic entanglement of neural network and dataset. We propose that digital art history is here, but not in the way we expected: rather, it has emerged as a crucial route to understanding, exposing, and critiquing the visual ideology of contemporary AI models. Work with Leonardo Impett.
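
The kind of probing such a methodology builds on can be illustrated in a few lines. A minimal sketch, assuming the publicly available CLIP model via Hugging Face transformers; the image file and labels are hypothetical placeholders, not the paper's own experimental setup:

```python
# Probe a large-scale vision model for its learned visual canon: the label
# probabilities reflect what the model has "seen" in online visual culture.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.jpg")  # hypothetical input image
labels = ["a Renaissance painting", "a Baroque painting", "an ukiyo-e print"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```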

Can We Read Neural Networks? Epistemic Implications of Two Historical Computer Science Papers.

American Literature, 2023

Can we, as humans, rely on our capability to decode systems of representation, such as artistic descriptions of the world in text or image, to understand neural networks? The two technical papers at the center of this essay shed some light on this fundamentally humanist question. Concretely, they suggest that we are currently witnessing a turn toward postsymbolic computation, a paradigm under which nothing is language and everything is language at the same time.

Perceptual Bias and Technical Metapictures: Critical Machine Vision as a Humanities Challenge.

AI & Society, 2021

We propose that machine vision systems are inherently biased not only because they rely on biased datasets but also because their perceptual topology, their specific way of representing the world, gives rise to a new class of bias that we call perceptual bias. We show how perceptual bias affects the interpretability of machine vision systems in particular, by means of a close reading of a visualization technique called feature visualization. We conclude that dataset bias and perceptual bias both need to be considered in the critical analysis of machine vision systems and propose to understand critical machine vision as an important transdisciplinary challenge, situated at the interface of computer science and visual studies. Work with Peter Bell.
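
Feature visualization, the technique under close reading here, can be sketched as gradient ascent on an input image. A minimal sketch, assuming a pretrained torchvision model; the layer and channel choices are illustrative, not the paper's own setup:

```python
import torch
import torchvision.models as models

# Load a pretrained CNN; GoogLeNet is a classic target of feature visualization.
model = models.googlenet(weights="IMAGENET1K_V1").eval()

# Capture the activations of an intermediate layer with a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["feat"] = output
model.inception4c.register_forward_hook(hook)

# Start from noise and optimize the image itself, so that it maximizes the
# mean activation of a single channel.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(256):
    optimizer.zero_grad()
    model(image)
    loss = -activations["feat"][0, 0].mean()  # channel 0, chosen arbitrarily
    loss.backward()
    optimizer.step()

# `image` now shows what this channel "wants to see": a technical metapicture.
```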

A Sign That Spells: Machinic Concepts and the Racial Politics of Generative AI.

Journal of Digital Social Research, 2024 [preprint]

We examine how generative artificial intelligence produces a new politics of visual culture through the technique of semantic compression. Semantic compression, we argue, is an inhuman and invisual technique, yet it is still caught in a paradox that is ironically all too human: the consistent reproduction of whiteness as a latent feature of dominant visual culture. We develop this argument using OpenAI's failed efforts to "debias" its DALL·E system as a critical opening to interrogate how the system dissolves and reconstitutes politically and economically salient human concepts like race. Work with Thao Phan.

On the Concept of History (in Foundation Models).

Thinking with AI, 2024 [preprint]

Any sufficiently complex technical object that exists in time has, in a sense, a concept of history: a particular way that the past continues to exist for it, with contingencies and omissions specific to its place and role in the world. Computation is no exception to this, and indeed takes its very efficacy from a particular technical relation to the passing of time. Meanwhile, the emergence of so-called “foundation models” promises to significantly change what it means to “compute” in the first place, and especially what it means to compute the past. This essay thus asks: what is the concept of history that emerges from foundation models, and particularly from large visual models? Do such models conceptualize the past? What is the past for them? An earlier version of this essay appeared in the journal IMAGE.

Latent Deep Space. GANs in the Sciences.

Media+Environment, 2021

The recent spectacular success of machine learning in the sciences points to the emergence of a new artificial intelligence trading zone. Within this trading zone, one machine learning technique warrants particular attention from the perspective of media studies and visual studies: the generative adversarial network (GAN), an architecture that trains two deep convolutional neural networks against each other and operates primarily on image data. In this paper, I argue that GANs are not only technically but also epistemically opaque systems: where GANs seem to enhance our view of an object under investigation, they actually present us with a technically and historically predetermined space of visual possibilities. I discuss this hypothesis in relation to established theories of images in the sciences and recent applications of GANs to problems in astronomy and medicine. I conclude by proposing that contemporary artistic uses of GANs point to their true potential as engines of scientific speculation.
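
The adversarial setup at issue can be stated compactly. A minimal sketch, with toy fully connected networks standing in for the deep convolutional ones used in practice; all dimensions and the data batch are illustrative:

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784

# Generator maps latent vectors to (flattened) images; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, image_dim) * 2 - 1  # hypothetical batch of real images

for step in range(100):
    # Discriminator step: distinguish real images from generated ones.
    fake = G(torch.randn(32, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: produce images the discriminator scores as real.
    loss_g = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Every image the trained generator can produce is some G(z): a technically
# (and, via the dataset, historically) predetermined space of possibilities.
```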

Manufacturing Visual Continuity. Generative Methods in the Digital Humanities.

Computational Humanities, 2024

In this paper, we propose that the statistical distinction between generative and discriminative approaches can not only inform the methodological discourse in digital art history and digital visual studies but also provide a starting point for the exploration of previously disregarded generative machine learning techniques. While computational literary studies and related sub-disciplines of the digital humanities have already implicitly embraced generative methods, the visual digital humanities lack equivalent tools. Here, we propose to investigate generative adversarial networks as a machine learning architecture of interest, and suggest that the manufactured continuity that a GAN provides, through advanced techniques like latent space projection, can guide our interpretation of an image corpus. Work with Peter Bell.
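
Latent space projection, the technique named above, can be sketched as an optimization problem. A minimal sketch, assuming a hypothetical pretrained generator `G` that maps latent vectors to images; the function name, dimensions, and loss are illustrative stand-ins, not the paper's implementation:

```python
import torch

def project(G, target, latent_dim=512, steps=500, lr=0.01):
    """Find a latent vector whose generated image approximates `target`."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Pixel-space reconstruction loss; practical projections usually add
        # a perceptual term (e.g. LPIPS) for better results.
        loss = torch.nn.functional.mse_loss(G(z), target)
        loss.backward()
        optimizer.step()
    return z.detach()

# Interpolating between two projected latents then manufactures a continuous
# sequence of plausible images between two corpus items:
# frames = [G(z1 + t * (z2 - z1)) for t in torch.linspace(0, 1, 16)]
```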