Research Reflections: Dr. Alexander Monea (In-Person)

Presentation: "I Know It When I See It" - An Overview of Google's SafeSearch & the Politics of Automating Judgment

Abstract: In his 1964 concurrence in Jacobellis v. Ohio, Justice Potter Stewart noted that while he could not define hard-core pornography, he knew it when he saw it. In this presentation, I refer to such I-know-it-when-I-see-it concepts as extra-linguistic concepts because they contain an intuitive, inductive, and/or felt component in the classificatory logic that affords their generalization. This paper argues that contemporary machine learning applications have successfully operationalized this classificatory logic at mass scale, and looks to Google's work to filter Not Safe For Work (NSFW) images as a particularly compelling success story. I argue that this constitutes not only the computational production of extra-linguistic concepts, but also the automatic mediation of the visual world. This presentation traces the history of SafeSearch, with particular attention paid to the introduction of Cloud Vision in 2016, which Google promised would leverage machine learning for the detection of labels, logos, landmarks, optical characters, faces, image attributes, and explicit content in images. The resulting machine learning apparatus was composed not only of material technologies and communications infrastructures, but also of the scientists, engineers, and programmers conducting research and development, the diverse bodies of international laborers sitting in cubicles reviewing flagged and reported content, and the hordes of citizen surveillance agents reporting offensive content as they browse the web. In a sense, this machine learning apparatus automates the production of subjective constructs, though it produces three problematic operations: (1) the big data paradigm is probabilistic, and thus designed to tolerate a certain percentage of misclassification without adequate adjudication mechanisms for redress; (2) the extra-linguistic nature of computational concepts makes them an opaque medium for supporting human judgment, and thus delimits our capacity to adequately assess their accuracy and critique their parameters; and (3) the awe we feel at such technological feats makes it easy to simultaneously fetishize and depoliticize machine learning apparatuses, and thus obscures their increasingly prominent role in shaping our subjectivities and communities.
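To give a concrete sense of the classification the abstract describes, a minimal sketch is shown below using the Cloud Vision SafeSearch annotation (assuming the google-cloud-vision Python client and configured application credentials; the image filename is hypothetical). The API returns graded likelihoods for categories such as adult, racy, and violence rather than any explicit definition, which is the probabilistic judgment at issue in the presentation.

```python
# Minimal sketch of a Cloud Vision SafeSearch request (assumes the
# google-cloud-vision client library v2+ and valid credentials).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("example.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

# SafeSearch returns graded likelihoods, not definitions: each category
# is scored from VERY_UNLIKELY through VERY_LIKELY.
annotation = client.safe_search_detection(image=image).safe_search_annotation
for category in ("adult", "spoof", "medical", "violence", "racy"):
    print(category, vision.Likelihood(getattr(annotation, category)).name)
```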

About the presenter: Dr. Alexander Monea is an Assistant Professor serving jointly in George Mason's English Department and Cultural Studies Department. He received his PhD in Communication, Rhetoric, & Digital Media from North Carolina State University after completing a project that traced the historical entanglement of computation, big data, and governmentality in the United States. His recent publications range from analytical work focused on specific computational apparatuses, like Google's Knowledge Graph, to more theoretical critiques of speculation, to methodological meditations on doing politically meaningful media studies research.

Date:
Thursday, March 29, 2018
Time:
2:00pm - 3:30pm
Time Zone:
Eastern Time - US & Canada
Location:
2001 Fenwick Library, Main Reading Room
Campus:
Fairfax Campus
Categories:
  Author/Speaker/Artist Event  

Event Organizer

Jen Fehsenfeld