Rachel, Lin, and Ryan: Abstract

Project Proposal

Given this class’s goal of exploring the spaces where algorithms meet the production, reception, and repurposing of cultural texts, we would like to examine the intersection between the various valences of legibility and oppositional reading strategies in the consumption of popular media texts, and the limits and possibilities of making these vital human uses of media legible to computational processes. Since one of the largest obstacles to “reading” the affect or emotional valences of social media texts is understanding inflections of irony on the part of the author, we intend to explore this apparent limitation as it functions in relation to another type of reading: the cluster of viewing practices and strategies which might generally be termed “ironic.” What tools might we create to make occurrences of ironic enjoyment of, or engagement with, a given cultural text legible to algorithmic processing, in opposition to what might be termed “innocent,” “straightforward,” or even “uncritical” enjoyment? Exploring such viewer positions in relation to the film Disco Dancer, Neepa Majumdar conceives of the divide in terms of “queer” vs. “straight” reading positions. These general terms can be broken down into practices of camp, irony, and oppositionality, all of which raise critical questions of politics, class, taste, and cultural capital that are deeply ingrained in the act of media reception.

In exploring these multiple layers of language and legibility (aesthetic, political, technological), we hope to generate interesting questions about both human and computational “reading” by thinking critically about the limits of algorithmic potential to make legible human strategies of cultural use (and even “making do,” to cite de Certeau) in which pleasure, desire, and affect are so deeply entwined. We will be using a dataset of tweets collected through R-Shief on a specific cultural text that invites both ironic and straight viewing practices and positions, and that is recent or popular enough to generate a sufficiently large dataset (for example, shows like Ancient Aliens, Vanderpump Rules, or even Alex Jones’ Infowars, or franchises like The Fast and the Furious). By adapting pre-existing open-source tools built to examine text and determine an emotional or affective reading of the author, we hope to propose strategies for making the author’s underlying reading position or posture legible. By thinking critically along these boundaries, we hope to expand our understanding of both the computational limits and possibilities of legibility and of the function of such cultural reading strategies and practices as they intersect with the specificities of social media platforms.
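To make this concrete, here is a minimal sketch of the kind of adaptation we have in mind, assuming the open-source VADER sentiment analyzer (the `vaderSentiment` Python package). The irony markers and the sample tweet are hypothetical placeholders of our own, not a validated method:

```python
# A minimal sketch, not a final method: score tweet sentiment with the
# open-source VADER analyzer, then flag naive surface markers that often
# accompany ironic distance. The marker list below is an assumed heuristic.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

IRONY_MARKERS = ['"', '\u201c', '#sarcasm', 'so-called']  # hypothetical markers

analyzer = SentimentIntensityAnalyzer()

def score_tweet(tweet: str) -> dict:
    scores = analyzer.polarity_scores(tweet)  # keys: 'neg', 'neu', 'pos', 'compound'
    # Naive signal: strong surface positivity co-occurring with a marker of
    # ironic distance. A heuristic placeholder, not a claim about irony itself.
    ironic_candidate = scores["compound"] > 0.5 and any(
        marker in tweet.lower() for marker in IRONY_MARKERS
    )
    return {"scores": scores, "ironic_candidate": ironic_candidate}

print(score_tweet('Wow, Ancient Aliens is truly "rigorous" science #sarcasm'))
```

VADER’s compound score alone cannot distinguish ironic from straight positivity, which is precisely the gap our project targets; the marker heuristic only illustrates where a more sophisticated signal would plug in.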


Week 3 – Undecidable Problems and Big Data Conclusions

In “What Is Computable?,” MacCormick proves that no program can exist that decides whether any other program will crash. He relates this result to the halting problem and explains that, although it matters less in practice than one might expect, it raises important philosophical questions about what computers and people are capable of.
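The shape of the argument can be sketched in code. Suppose, for contradiction, that a correct `can_crash` oracle existed; a self-referential “troublemaker” program then crashes exactly when the oracle says it will not. The names below are our own illustration, not MacCormick’s:

```python
# Sketch of the self-referential contradiction; names are assumed, not MacCormick's.
def can_crash(prog, prog_input):
    """Supposed oracle: returns True iff prog(prog_input) would crash.
    Assumed, for the sake of contradiction, to exist and always answer correctly."""
    raise NotImplementedError("no such oracle can exist")

def trouble_maker(prog):
    # Do the opposite of whatever the oracle predicts about prog run on itself.
    if can_crash(prog, prog):
        return "running smoothly"     # a crash was predicted, so do not crash
    else:
        raise RuntimeError("crash!")  # no crash was predicted, so crash

# trouble_maker(trouble_maker) would crash if and only if can_crash says it
# doesn't, so no correct can_crash can exist.
```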

In “What Happens When Big Data Blunders?,” Logan Kugler explains the reasons David Lazer and Ryan Kennedy identified for Google Flu Trends’ failure to predict the 2013 flu outbreak, as well as the reasons forecasts overpredicted the spread of Ebola. In both cases, the problem stemmed from assumptions drawn solely from big data that left out changing dynamics: the Google algorithm did not account for changes in the Google search algorithm itself, and the CDC and WHO did not account for the initial efforts of people working to contain the disease.

It is an interesting and challenging idea to combine the themes of these two articles. One exercise that comes to mind is formulating our own theoretical questions about what is possible with big data and asking whether those questions can be answered. Some questions might be:

  1. Is it possible to determine whether a big data algorithm is sound, by some definition of soundness? If not, can we derive bounds on acceptable error?
  2. Is it possible to prove that a particular problem cannot be decided by any big data algorithm?

The first question is quite challenging. The goal of a “big data” algorithm is typically to make some prediction from a large quantity of data. To think about the problem, we might imagine solving it without the aid of a computer: suppose you could think fast enough, or live long enough, to process all of the data yourself. What issues might arise? Is the data relevant? Is there enough non-overlapping information in the data to arrive at an answer? Any answer to our question involves the relationship between the question being asked, the data itself, and the operations we can perform over the data.
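For the narrower half of the first question, deriving bounds on acceptable error, classical statistics at least offers a starting point. Here is a minimal sketch, using Hoeffding’s inequality to bound a predictor’s true error rate from its error on n independent holdout examples (our own illustration, not from the readings):

```python
import math

def hoeffding_error_bound(observed_error: float, n: int, delta: float = 0.05):
    """With probability >= 1 - delta, the predictor's true error rate lies
    within epsilon of the error observed on n independent holdout examples
    (by Hoeffding's inequality); epsilon shrinks as n grows."""
    epsilon = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, observed_error - epsilon), min(1.0, observed_error + epsilon)

# e.g., 8% observed error on 10,000 held-out examples:
print(hoeffding_error_bound(0.08, 10_000))  # roughly (0.066, 0.094)
```

Such a bound says nothing about whether the data was relevant or representative in the first place, which is exactly the gap between statistical soundness and the broader notion of soundness the question gestures at.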

For the second question, we must first decide what it means for a problem to be decidable here. Clearly, if we supply no data and the problem requires data, we can prove that no algorithm decides it. On the other hand, if we supply all of the data about everything, could an algorithm then solve the problem? This is, in fact, somewhat philosophical: if we knew everything, could we predict the future?

Week 2: Small Decisions with Big Impacts

In the first article, Hamish Robertson and Joanne Travaglia discuss how the first explosion of data, in the 19th century, was used to categorize people and to motivate “social change.” Whether that change was good became irrelevant once certain assumptions, including negative social categories, were built into the data-collection process; those categories were often used to oppress the groups they described. The authors express concern that the same situation will carry over into the “big data” revolution of the 21st century. Lev Manovich suggests that, like social scientists, computer scientists exploring social media should use probabilistic models to analyze big data. Ted Striphas explores how the meanings of common cultural words have changed with the use of computing to produce, store, and analyze cultural data.

The articles raise an interesting question: now that there is so much data from social media and other online systems where people supply information, the criteria for categorizing people will be far richer in the new data era. Furthermore, as algorithms are applied to this data, errors, such as an algorithm taking a string of information out of context, become extremely likely, and it is concerning what influence such “mistakes” could have on our understanding of people and society. To what extent will small decisions in the way algorithms are designed and used impact society in ways we don’t understand? Since most of the decisions made by algorithms are probabilistic, and our concepts of society are influenced by those decisions, how will we ensure that we are not causing societal damage by relying on them? These issues are especially pressing because the large scale of big data magnifies small decisions made early on.