On Tuesday I had the fantastic opportunity of presenting my research on algorithms and language to Landscape Surgery. I also received some incredibly useful, far-reaching and challenging feedback on the draft of my paper ‘Language in the Age of Algorithmic Reproduction’. I learnt so much from the session, and am currently re-working the paper in order to present at Prof. Louise Amoore and Dr Volha Piotukh’s ‘Thinking with Algorithms: Cognition and Computation in the work of N. Katherine Hayles’ workshop at Durham later this month.
I have so much to think about from the LS session, but my main realisation is that, for this paper at least, I need to focus less narrowly on search algorithms and look further into how language is also affected by things like plagiarism software, keywords and firewalls. In terms of my wider thesis, I also need to work on my methodology to try to measure some of the things that concern me, perhaps by looking deeper into Search Engine Optimisation, Google as ‘curator of knowledge’, and the materialities, practices and ethnographies of the processing of words through computers.
Using Walter Benjamin’s seminal essay ‘The Work of Art in the Age of Mechanical Reproduction’ (1936) as inspiration, contrast and critical framework, this paper examines what happens to writing, language and meaning when they are processed by algorithm, and in particular when they are reproduced through search engines such as Google. Reflecting the political and economic frame through which Benjamin examined the work of art, as mechanical reproduction abstracted it further and further from its original ‘essence’, the processing of language through the search engine is similarly based on the distancing and decontextualisation of ‘natural’ language from its source. While all algorithms are necessarily tainted with the residue of their creators, the data on which search algorithms work is also not necessarily geographically or socially representative, and can be ‘disciplined’ (Kitchin & Dodge, 2011) by encoding and categorisation, meaning that what comes out of the search engine is not necessarily an accurate (or entirely innocent) reflection of ‘society’. Added to, and inseparable from, these technological influences is the underlying and pervasive power of commerce and advertising. When a search engine is fundamentally linked to the market, the words on and through which it acts become commodities, stripped of material meaning and moulded by the potentially corrupting and linguistically irreverent laws of ‘semantic capitalism’ (Feuz, Fuller & Stalder, 2011), and “by third parties in the pursuit of gain” (Benjamin, 1936). With the now near-total ubiquity of the search engine (and particularly of monopoly holder Google) as a means of extracting information in linguistic form, the algorithms which return search results and auto-predict our thoughts have a uniquely powerful and exponentially increasing agency in the production of knowledge.
So as “writing yields to flickering signifiers underwritten by binary digits” (Hayles, 1999), this paper asks what is gained and what is lost when we entrust language, knowledge and the interpretation of meaning to search engines. It suggests that the algorithmic processing of data based on contingent input, commercial bias and unregulated, black-boxed technologies is not only reducing and recoding natural language, but that this ‘reconstruction’ of language has far-reaching societal and political consequences, re-introducing underlying binaries of power to both people and places. Just as mechanical reproduction ‘emancipated’ art from its purely ritualistic function, the algorithmic reproduction of language is an overtly political process.
Many thanks to all who came and contributed on Tuesday!