Aesthetics and Politics of Artificial Intelligence

MAT AIWG | Spring 2018, W/F 2-4pm | Elings Hall 2003

Description

This iteration of the MAT Artificial Intelligence Working Group starts from a basic hypothesis, put forward by Philip Agre in the late 1990s: "AI is philosophy underneath". Given the rapid development of the field since 2012, does this hypothesis hold?

When we talk about artificial intelligence today, we talk about highly specialized machine learning models. Unlike in the 1990s, the primary function of these models is not the mechanization of reason but the mechanization of perception, most prominently the mechanization of vision. As a consequence, the tasks that many machine learning models perform are aesthetic tasks, ranging from the classification of images with regard to their content and form to the generation of entirely new images.
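To make the classification case concrete, here is a minimal sketch of image classification with a pretrained convolutional network, using PyTorch and torchvision as one possible toolchain. The model choice (ResNet-18) and the input file name are illustrative assumptions, not part of the working group's materials.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier; ResNet-18 is an arbitrary example choice.
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder for any input image.
image = Image.open("example.jpg").convert("RGB")
x = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

# Forward pass: the model assigns a probability to each of the 1000
# ImageNet categories, i.e., it "perceives" the image as a distribution
# over learned visual concepts.
with torch.no_grad():
    probs = torch.nn.functional.softmax(model(x), dim=1)

top_probs, top_classes = probs.topk(5)
print(top_classes)  # indices of the five most likely categories
```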

Figure: Imaginary, GAN-generated celebrities. From Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen, "Progressive Growing of GANs for Improved Quality, Stability, and Variation," arXiv preprint arXiv:1710.10196.

At the same time, the technical opacity of many machine learning models makes it inherently difficult to evaluate their results properly. This is complicated even further whenever a model is deployed as a product and opacity becomes a desirable property. In fact, the interpretability of machine learning models, that is, their ability to generate or facilitate explanations of their results, has not only become an independent field of research within computer science but has also grown into an increasingly important legal challenge. Hence, the once speculative phenomenological question "how does the machine perceive the world?" suddenly becomes a real-world problem.
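One elementary instance of such an explanation technique is a gradient-based saliency map, in the spirit of Simonyan, Vedaldi, and Zisserman's "Deep Inside Convolutional Networks" (arXiv:1312.6034): the gradient of a class score with respect to the input pixels indicates which pixels most influence the model's decision. The following is a rough sketch only; the model and file name are again illustrative assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True)  # arbitrary example model
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is again a placeholder input.
x = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1)
scores[0, top_class].backward()

# Saliency: per-pixel maximum absolute gradient over the color channels.
# Bright pixels are those the prediction is most sensitive to, a crude
# answer to the question "what does the model look at?"
saliency = x.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)
```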

Contemporary machine learning models thus raise a set of issues that are completely independent of the ones raised by the possibility of a future general artificial intelligence. Most prominently, they are real-life socio-technical systems that have politics. Adapting Agre's hypothesis: AI is aesthetics and politics underneath.

Participants in the working group meet twice weekly to investigate this peculiar nexus of aesthetics and politics in contemporary machine learning through equal parts critical reading and technical review of papers and code examples.

Syllabus

Please note that this syllabus is subject to change and will be updated before and during the spring quarter.

Week 1: Artificial Intelligence as a Philosophical Project

Wednesday, April 4, 2018

Friday, April 6, 2018

Optional

Week 2: The Limits of Deep Learning

Wednesday, April 11, 2018

Friday, April 13, 2018

Optional

Week 3: Deep Dreaming

Week 4: GANs I

Week 5: GANs II

Week 6: Interpretability I

Week 7: Interpretability II

Week 8: FAT (Fairness, Accountability, Transparency) and Bias

Week 9: FAT and Interpretability

Week 10: TBD

  • Possible topics: adversarial examples, RNNs, reinforcement learning, general AI

Further Resources