Notes for INT200: Algorithms and Culture on 2/16/2017

Presentation by Ryan Leach

Article “Examining the Impact of Ranking on Consumer Behavior and Search Engine Revenue” by Anindya Ghose, Panagiotis Ipeirotis and Beibei Li

-Critical approach to the article’s study of how search ranking shapes consumer behavior and search engine revenue
-Looking at hotel search websites
-Study illustrates how the internet can be structured to encourage certain user pathways— particularly consumer pathways
-Parallel with the structure of cities: psychogeography, which maps the relations between late capitalism and the structure of urban areas
-Dérive (“high theory parkour”): a digital dérive? Applying this idea to internet structures and the pathways a user can take through hyperlinks
-Alternative models:
-Neutral Vessel model
-Actor-Network model
-Technodeterminist model (Kittler): the category of the human is produced by technologies
-Technologies aren’t all-powerful
-In the article, humans aren’t really mentioned; the subject is reduced to a series of clicks and purchases
-Fragmentation of human subjectivity
-How might the algorithms themselves, as applied, work to increase consumption and profit?

Advertisements

Rachel, Lin, and Ryan Abstract

Project Proposal

Given this class’s goal of approaching and exploring the spaces where algorithms and the production, reception, and repurposing of cultural texts meet, we would like to explore the intersection between the various valences of legibility and oppositional reading strategies in the consumption of popular media texts, and the limits and possibilities of making these vital human uses of media legible to computational processes. Since one of the largest obstacles to “reading” the affect or emotional valences of social media texts is understanding inflections of irony on the part of the author, we intend to explore this apparent limitation as it functions in relation to another type of reading: the cluster of viewing practices and strategies that might generally be termed “ironic.” What tools might we be able to create to make occurrences of ironic enjoyment of, or engagement with, a given cultural text legible to algorithmic processing, in opposition to what might be termed “innocent,” “straightforward,” or even “uncritical” enjoyment? In her exploration of the divide between such viewer positions in relation to the film Disco Dancer, Neepa Majumdar conceives of it as a split between “queer” and “straight” reading positions. These general terms can be broken down into practices of camp, irony, and oppositionality, all of which raise critical questions of politics, class, taste, and cultural capital that are deeply ingrained in the act of media reception.

In exploring these multiple layers of language and legibility (aesthetic, political, technological), we hope to generate interesting questions about both human and computational “reading” by thinking critically about the limits of algorithmic potential to make legible human strategies of cultural use (and even “making do,” to cite de Certeau) in which pleasure, desire, and affect are so deeply entwined. We will be using a data-set of tweets collected through R-Shief on a specific cultural text that invites both ironic and straight viewing practices and positions, and that is recent or popular enough to generate a sufficiently large data-set (for example, shows like Ancient Aliens, Vanderpump Rules, or even Alex Jones’ Infowars, or franchises like The Fast and the Furious). By adapting pre-existing open-source tools built to examine text and determine an emotional or affective reading of the author, we hope to propose strategies for making the author’s underlying reading position or posture legible. By thinking critically along these boundaries, we hope to expand our understanding of both the computational limits and possibilities of legibility, as well as the function of such cultural reading strategies and practices as they intersect with the specificities of social media platforms.
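
As one concrete starting point (the proposal does not commit to a specific tool, so the analyzer choice, the irony markers, and the sample tweet below are all illustrative assumptions), a first pass could use the open-source VADER sentiment analyzer bundled with NLTK and flag tweets whose strong surface sentiment co-occurs with crude surface markers of irony:

    # Sketch: flagging candidate "ironic" tweets by pairing an off-the-shelf
    # sentiment score with simple surface markers. Assumes tweets have already
    # been exported from R-Shief as a list of strings.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

    # Hypothetical markers; a real study would derive these empirically.
    IRONY_MARKERS = {"#sarcasm", "#irony", "sooo", "yeah right", "totally"}

    def candidate_ironic(tweets):
        """Yield (tweet, compound_score) pairs whose strong surface sentiment
        co-occurs with a marker, suggesting the literal reading may invert."""
        sia = SentimentIntensityAnalyzer()
        for text in tweets:
            score = sia.polarity_scores(text)["compound"]  # -1 (neg) to +1 (pos)
            has_marker = any(m in text.lower() for m in IRONY_MARKERS)
            if abs(score) > 0.6 and has_marker:
                yield text, score

    sample = ["Ancient Aliens is GREAT television, totally rigorous science #sarcasm"]
    for tweet, score in candidate_ironic(sample):
        print(f"{score:+.2f}  {tweet}")

A marker list this blunt would obviously miss most ironic postures; the interesting work lies in which additional signals (reply context, account history, community norms) could stand in for the viewer positions described above.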

Project Twitterbot Abstract: Leavell/Mackey

When Twitter filed for its IPO several years ago, much information about the company became public as well. Most relevant to this discussion was the surprising revelation that much of Twitter is run by automated accounts, aka “twitterbots.” In 2014, Twitter admitted that as many as 23 million of its users, or 8.5%, were fake. For our project, we are curious about the prevalence and influence of automated accounts, those exhibiting behavior outside of human-like patterns of content generation, in political events and narratives with a large presence on Twitter.

We propose using R-Shief to sample Twitter activity during a popular political event, such as the Women’s March last month in Washington, DC and around the country, or possibly an upcoming event, such as Donald Trump’s Supreme Court nomination process. With the help of pre-existing machine learning-based classification software, BotOrNot, we can make reasonably confident guesses as to whether the Twitter handle behind a given tweet is an automated account, one of the so-called “twitterbots.” While the identification of twitterbots can be done almost as efficiently through sampling, we are also interested in more nuanced aspects of their behavior, creating a need to “fish” for a large number of twitterbots to perform further analysis on. With these automated techniques for twitterbot gathering, we hope to characterize twitterbot activity during political events and present our findings visually, using tools provided by the R-Shief software and others.
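
For reference, a minimal sketch of what this classification step could look like using BotOrNot’s Python client (published on PyPI as botometer); the credential placeholders and the exact result fields are assumptions and may differ across versions of the service:

    # Sketch: scoring handles with the BotOrNot service via its Python client.
    import botometer

    # Placeholder credentials; real Twitter API and Mashape keys are required.
    twitter_app_auth = {
        "consumer_key": "...",
        "consumer_secret": "...",
        "access_token": "...",
        "access_token_secret": "...",
    }
    bom = botometer.Botometer(wait_on_ratelimit=True,
                              mashape_key="...",
                              **twitter_app_auth)

    def bot_scores(handles):
        """Return {handle: overall bot score in [0, 1]} for a list of @handles."""
        scores = {}
        for handle in handles:
            result = bom.check_account(handle)
            # "universal" is the language-independent score; the field name is
            # an assumption based on the client's documentation.
            scores[handle] = result["scores"]["universal"]
        return scores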

While the exact list of characterizations is still being developed, we have targeted a few questions which are both theoretically interesting and realistically observable based on the methods of data collection and analysis we have at our disposal.

Demographics: Do communities at different positions on the political spectrum have significantly different proportions of automated accounts? While it has been shown that the number of automated followers differs for certain political candidates, observing the ecosystems of automated accounts that exist for different political camps would yield insight into the social media activity of these ideological groups.
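
One way to put numbers on that comparison is a two-proportion z-test; the counts below are invented placeholders standing in for BotOrNot-labeled samples from two hashtag communities:

    # Sketch: testing whether two communities differ in their share of bots.
    from statsmodels.stats.proportion import proportions_ztest

    bots = [412, 268]        # accounts flagged as bots in each community sample
    sampled = [3000, 2900]   # total accounts sampled per community

    stat, pvalue = proportions_ztest(count=bots, nobs=sampled)
    print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # small p: the bot shares differ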

Thresholds: Is there a threshold of trending activity on a hashtag that triggers twitterbot activity? If so, what is that threshold? R-Shief conveniently segregates collected tweets by the time interval in which they were collected, leading us to wonder whether there exists an identifiable threshold of authentic activity at which twitterbots begin to participate heavily. This threshold would depend on both the structure of the Twitter social network and specific twitterbot attributes. Additionally, can different thresholds be found for different events?
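
A rough version of that threshold search, assuming tweets exported from R-Shief into a pandas DataFrame with timestamp, tweet_id, and a per-author is_bot flag (all hypothetical column names):

    import pandas as pd

    def bot_share_by_interval(df, freq="15min"):
        """Aggregate tweet volume and bot share per time interval."""
        grouped = (df.set_index("timestamp")
                     .resample(freq)
                     .agg({"tweet_id": "count", "is_bot": "mean"})
                     .rename(columns={"tweet_id": "volume", "is_bot": "bot_share"}))
        return grouped.dropna()

    def estimate_threshold(intervals, jump=0.10):
        """Return the smallest interval volume above which the median bot share
        exceeds the overall median by `jump` (a crude changepoint heuristic)."""
        base = intervals["bot_share"].median()
        for vol in sorted(intervals["volume"].unique()):
            high = intervals.loc[intervals["volume"] >= vol, "bot_share"].median()
            if high - base >= jump:
                return vol
        return None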

Impact: Is there any discernible goal of twitterbot participation in the social media conversation around a given topic (i.e., dissemination of fake news, targeting of specific people on Twitter, promoting analyses or politics alternative to those of the majority of hashtag users, etc.)? Although these accounts are widely assumed to be harmful to the user experience, little is known about the intentions of those who develop and deploy them. While some accounts simply bolster the activity of those they follow (magnifying the perception of a public figure or idea), others hijack popular hashtags or intentionally spread misinformation.
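
Those behaviors suggest a crude categorization heuristic; the thresholds and field names below are invented placeholders, not validated cut-offs:

    def categorize_bot(tweets):
        """tweets: an account's tweets as dicts with 'is_retweet' (bool)
        and 'hashtags' (list of str). Returns a coarse behavioral label."""
        retweet_ratio = sum(t["is_retweet"] for t in tweets) / len(tweets)
        distinct_tags = {tag for t in tweets for tag in t["hashtags"]}
        if retweet_ratio > 0.8:
            return "amplifier"           # mostly boosts the accounts it follows
        if len(distinct_tags) > 20:
            return "hashtag hijacker"    # rides many trending tags
        return "content injector"        # mostly original posts on few tags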

Project Abstract: Huynh / Moore

Final Project Slides: https://docs.google.com/presentation/d/1gN0y5BoAiCQjCHKCdz-uIiKDFmoRM2ytTTVuuPeeqlQ/edit?usp=sharing

Final Project Code: https://github.com/animekraxe/fake-news-analyzer

Given the contemporary ubiquity of charged terms such as ‘fake news’ and ‘post-truth politics,’ this project seeks to work toward a usefully quantitative metric for measuring the veracity of allegedly factual news stories spread through social media platforms. We acknowledge the inherent politicization of inaccuracies posing as legitimate reporting in a landscape wherein value judgments too often inform factual exposure rather than vice versa; but rather than treating this as an excuse to dismiss ideological components from our metric, we see an opportunity to fold them into our process of identification and observation. Further, we speculate that in making ‘fake news’ more tangible we will find additional commonalities, both across the social media users who proliferate these stories and within the content of the stories themselves, that could reinforce the accuracy of an identification metric.

Therefore, this project asks which factors of purportedly factual news stories, and of the users who share them, might serve as reliably strong indicators of ‘fake news.’ Through consideration of available data from social media sites like Twitter, and of previous research and scholarship on online credibility detection, we hope to identify both the most and least effective indicators of inaccurate news stories. Additionally, observations regarding the spread of stories through user activity will inform a metric for measuring any given user’s propensity for fake-news proliferation. From this research we hope to produce an algorithm that might begin to evaluate news stories and social media users for our stated concerns in real time.
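
To make the indicator-ranking step concrete, a minimal sketch: fit a logistic regression over standardized story/user features and read off the coefficients. The feature names and synthetic data are invented placeholders; the project code linked above may take a different approach:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    FEATURES = ["all_caps_ratio", "source_domain_age_days",
                "sharer_account_age_days", "sharer_follower_ratio"]

    # Synthetic placeholder data standing in for labeled stories
    # (1 = debunked, 0 = verified); features assumed pre-standardized.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, len(FEATURES)))
    y = rng.integers(0, 2, size=200)

    model = LogisticRegression().fit(X, y)
    for name, coef in sorted(zip(FEATURES, model.coef_[0]),
                             key=lambda pair: -abs(pair[1])):
        print(f"{name:26s} {coef:+.3f}")  # larger |coef| = stronger indicator

Because the features are standardized, coefficient magnitude gives a first-pass ranking of indicator strength, which could then guide the real-time evaluation described above.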