The recent “Christie’s scandal” and the subsequent discussion of attribution has all but overwritten the main discursive contribution of AI art: the question of machine creativity. In this post, I trace the roots of the Christie’s scandal back to the general problem of finding appropriate ways to exhibit AI art, and argue for an approach that takes cues from object-oriented ontology to establish the exhibition of entire latent spaces as a dedicated curatorial practice.
For the Working Group on the Aesthetics and Politics of Artificial Intelligence that I am teaching this quarter I recently had to closely re-read Turing's 1950 paper on "Computing Machinery and Intelligence". Scott Aaronson famously maintains that "one can divide everything that’s been said about artificial intelligence into two categories: the 70% that’s somewhere in Turing’s paper from 1950, and the 30% that’s emerged from a half-century of research since then", and I very much agree with this sentiment. Among the many fascinating and prescient arguments of the paper is a refutation of what Turing calls the "argument from informality of behavior".
Vector space models are mathematical models that make it possible to represent multiple complex objects as commensurable entities. They became widely used in Information Retrieval in the mid-1970s, and subsequently found their way into the digital humanities field, a development that is not surprising, given that the above definition, applied to literary texts, is very much a description of distant reading in its most pragmatic interpretation. There is no doubt that vector space models work well, not only as a tool for distant reading, but also as a tool for more general natural language processing and machine learning tasks. Precisely because of this success, however, the justification for their use is often suspiciously circular.
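To make the notion of "commensurable entities" concrete, here is a minimal sketch of a classic vector space model in the Information Retrieval tradition: texts are mapped to term-frequency vectors over a shared vocabulary, and similarity becomes the cosine of the angle between those vectors. The example documents and function names are my own illustrations, not drawn from the post.

```python
import math
from collections import Counter

def vectorize(text, vocabulary):
    """Map a text to a term-frequency vector over a fixed, shared vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[term] for term in vocabulary]

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy corpus: once vectorized, these very different texts become comparable.
docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "latent spaces of generative models"]
vocab = sorted({word for doc in docs for word in doc.lower().split()})
vectors = [vectorize(doc, vocab) for doc in docs]

# The first two sentences share most of their words; the third shares none.
print(cosine_similarity(vectors[0], vectors[1]))  # high similarity
print(cosine_similarity(vectors[0], vectors[2]))  # no shared terms, so 0.0
```

The point of the sketch is the reduction itself: whatever the objects are, once they are embedded in the same space, a single geometric measure stands in for every comparison between them, which is exactly what makes the approach both so useful and so easy to justify circularly.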