In the past few days, a disturbing image of a “white face” Obama has been making the rounds on Twitter. The image was created with the help of a new method for upscaling images called PULSE. The researchers behind the method describe PULSE as a new way to achieve super-resolution on images of human faces, i.e. to produce plausible high-resolution face images from low-resolution inputs.
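To make the underlying mechanism concrete, here is a minimal sketch of PULSE's core idea, using toy stand-ins of my own (a random linear map as the "generator", tiny 8x8 "images") rather than the authors' actual StyleGAN-based implementation: instead of mapping low resolution to high resolution directly, the method searches the latent space of a generative model for a latent vector whose generated high-resolution image, once downscaled, matches the low-resolution input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not PULSE's real components): a linear
# "generator" mapping a 16-d latent vector to a flattened 8x8 image,
# and a 2x2 average-pooling downscaler from 8x8 to 4x4.
W = rng.normal(size=(64, 16))

def generate(z):
    return W @ z  # flattened 8x8 "image"

def downscale(img):
    return img.reshape(4, 2, 4, 2).mean(axis=(1, 3)).ravel()

# A low-resolution target produced by some unknown latent vector.
z_true = rng.normal(size=16)
lr_target = downscale(generate(z_true))

# PULSE's core move: gradient descent through latent space so that the
# downscaled generator output matches the low-resolution input.
A = np.column_stack([downscale(W[:, j]) for j in range(16)])  # downscale ∘ generate
z = rng.normal(size=16)
initial_loss = np.sum((A @ z - lr_target) ** 2)
for _ in range(2000):
    residual = A @ z - lr_target
    z -= 0.01 * 2 * A.T @ residual  # gradient step on the squared error

final_loss = np.sum((A @ z - lr_target) ** 2)
```

Because the optimization only asks that the downscaled output match the input, any face the generator happens to prefer at that point in latent space is a "valid" solution, which is exactly how the whitened Obama image can arise.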
While my current research is concerned with interpretable machine learning, my background is in experimental music theater. Surprisingly, this kind of trajectory, from theater to computation, or at least from performance art to computer art, is somewhat common, and a number of my academic mentors and peers have lived versions of it. In this brief post, I explore some of the intersections of theater and computation in general, and of theater and machine learning in particular, that, I suggest, enable this trajectory. Based on this exploration, I present some speculative thoughts on potential future developments at the interface of theater and machine learning.
The recent “Christie’s scandal” and the subsequent discussion of attribution has all but overwritten the main discursive contribution of AI art: the question of machine creativity. In this post, I trace the roots of the Christie’s scandal back to the general problem of finding appropriate ways to exhibit AI art, and argue for an approach that takes cues from object-oriented ontology to establish the exhibition of entire latent spaces as a dedicated curatorial practice.
For the Working Group on the Aesthetics and Politics of Artificial Intelligence that I am teaching this quarter I recently had to closely re-read Turing's 1950 paper on "Computing Machinery and Intelligence". Scott Aaronson famously maintains that "one can divide everything that’s been said about artificial intelligence into two categories: the 70% that’s somewhere in Turing’s paper from 1950, and the 30% that’s emerged from a half-century of research since then", and I very much agree with this sentiment. Among the many fascinating and clairvoyant arguments of the paper is a refutation of what Turing calls the "argument from informality of behavior".
Vector space models are mathematical models that make it possible to represent multiple complex objects as commensurable entities. They became widely used in Information Retrieval in the mid-1970s, and subsequently found their way into the digital humanities field, a development that is not surprising, given that the above definition, applied to literary texts, is very much a description of distant reading in its most pragmatic interpretation. There is no doubt that vector space models work well, not only as a tool for distant reading, but also as a tool for more general natural language processing and machine learning tasks. As a consequence, however, the justification of their use is often suspiciously circular.
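The notion of commensurability above can be illustrated with the simplest possible vector space model, a bag-of-words representation over a shared vocabulary (my own toy example, not drawn from the post): two sentences that are not directly comparable as strings become comparable as vectors, e.g. via cosine similarity.

```python
import math
from collections import Counter

def bow_vector(text, vocabulary):
    """Represent a text as word counts over a fixed, shared vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocabulary]

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ["cat", "dog", "sat", "mat"]
v1 = bow_vector("the cat sat on the mat", vocab)  # [1, 0, 1, 1]
v2 = bow_vector("the dog sat on the mat", vocab)  # [0, 1, 1, 1]
similarity = cosine_similarity(v1, v2)
```

Scaled up from toy vocabularies to tens of thousands of dimensions, this is essentially the representation that both classic Information Retrieval and pragmatic distant reading rely on.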