The Paranoid Machine: Five Theses on Digital Cultures

by Clemens Apprich

In the following article I want to propose the idea of a paranoid machine in order to reflect on some of today’s developments in digital cultures. Starting from the assumption that we are experiencing a shift from mass media to social media, I will show how paranoia can provide a diagnostic tool to analyze this transformation. Paranoia as a method helps to shed light on the epistemological, technological, ethical, political, and aesthetic implications of a (post-)digital world. The paranoid machine exposes hidden assumptions in computer code and calls for intervention in the narcissistic and homophilic disposition of digital cultures.

The discussions heard today about “post-truth politics” and “alternative facts” indicate a rupture in the public debate. Even though the modern mass media (press, radio, television) have always been suspected of manipulating the public sphere, their position as a common frame of reference—one that the public could affirm, silently accept, or openly contest—was hardly ever called into question. With the rise of social media, this situation has fundamentally changed: attempts to re-singularize and deconstruct the mass media into “new collective assemblages of enunciation” (Guattari 1996, 263) have given way to new media practices, which are characterized by a desire for participation, immediacy, and connectedness (see van Dijck 2013). As a consequence, the public sphere, previously mediated by mass media, has dissolved into a wide range of primarily internet-based media outlets. While this shift from mass media to social media was long hailed as a necessary step towards more democracy, recent critiques have warned of a resulting “decline of symbolic efficiency” (Dean 2010, 5)—that is, a collapse of the common frame of reference central to democratic negotiation processes. In this sense, a tension emerges between the idea of an open public and the homophilic tendencies fostered by algorithmic technologies, which work by subdividing people into closed sets of personal interests, political views, or sexual orientations (see Chun 2019). The downside of participation-based media seems to be a society splintered into fragmentary, networked publics (see Varnelis 2008).

The increasing significance of affect, emotion, and sentiment in decision-making processes is symptomatic of an era characterized by information overload (see Andrejevic 2013, 15). Paranoia, in this regard, can be seen as a specific mode of knowledge, one that disengages from prevailing opinions and previous experiences. The word combines the Greek παρά (para), meaning “beside, next to,” and νόος (noos), meaning “mind”; paranoia thus literally translates as “being beside the mind,” reflecting its use to describe a mental state of delusional belief. Through collaborative filtering systems (e.g., Facebook, YouTube, Twitter), affect-laden opinions are on the rise, yielding personal belief systems that diverge from so-called expert knowledge. As the example of climate change denial shows, collaboratively based filter systems produce a paranoia that helps users to form their views despite indisputable evidence to the contrary. In fact, the “feeling” of being in possession of a “subversive” or “controversial truth” is algorithmically incited and maintained by these systems (see Chun 2020). Paranoid delusions, moreover, function as a self-healing mechanism, a reparative process that compensates for a loss of symbolic order due to an overproduction of meaning. This is because the idea of being threatened by someone or something (e.g., the climate change lobby) is often easier to endure than the frightening feeling of not knowing what is going on around oneself (i.e., climate change itself). Hence, paranoia can arguably be seen as an appropriate response to radical insecurity: as an “information-processing technique” (Chun 2006, 257) it amounts to a critical reassessment of the world around us. As a tool of diagnosis it offers a possible approach to explore and experience the “cultural disturbances” (Innis 1964, 31) currently induced by digital media technologies.
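
To make this filtering mechanism concrete, here is a minimal sketch of user-based collaborative filtering, the generic technique underlying such recommendation systems. Everything in it (the users, the ratings, the similarity measure) is a hypothetical toy example; none of the platforms named above discloses its actual algorithms:

```python
import numpy as np

# Toy user-item matrix: rows are users, columns are posts;
# 1 = endorsed, 0 = no interaction. All values are invented.
ratings = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two interaction vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(user, ratings):
    """Score unseen items by the endorsements of similar users."""
    sims = np.array([cosine(ratings[user], other) for other in ratings])
    sims[user] = 0.0                     # ignore the user themself
    scores = sims @ ratings              # similarity-weighted votes
    scores[ratings[user] > 0] = -np.inf  # recommend only unseen items
    return int(np.argmax(scores))

print(recommend(0, ratings))  # -> 2: user 0 inherits user 1's preference
```

The homophilic loop sits in the arithmetic itself: what user 0 gets to see next is, by construction, what the most similar user already endorses.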

Paranoia as a “way of thinking” emerged with (post-)modernism. It gets to the heart of a new “suspicion” in Western thought: that there is a meaning underneath the meaning, or a better interpretation of traditional knowledge (see Foucault 1998). According to Jacques Lacan, paranoia is an activity of the unconscious which, in contrast to dreams, provides interpretations out of itself (see Lacan 1975, 293). It operates at the interface of discursive functions, human subjects, and technical media. As a self-revealing knowledge, it ultimately subverts the modern idea of a reason-based, mass-mediated, all-encompassing public, and instead produces a “postmodern audiencehood” (Ang 1996, 67). Consequently, the imaginary realm gains the upper hand over symbolic processes, a shift that has tremendous consequences for how we see and act in the world. “Crucially, the same image can foster both belief and mistrust,” as Wendy Chun explains in relation to pictures that have been used to register the effects of global climate change (see Chun 2018). What we are dealing with today is a paranoid reading of facts previously believed to be certain. The result is a rise of uncertainty in digital cultures: first, on the part of humans, who, in the face of datafication processes, machine learning, and networked media, see themselves confronted with the danger of disappearing “like a face drawn in sand at the edge of the sea” (Foucault 1994, 387); second, on the part of machines, which are ultimately trapped in the same imaginary as we humans. Paranoia as a diagnostic tool could help explore this connection between the human and the machine in the technological unconscious.

Paranoia as Method

Even though the transition from one media system to another is not characterized by a clear-cut break, but rather by a long-term process, during which the “old” media are sublated and transformed by “new” media formats (see Bolter/Grusin 1999), it can have drastic effects on the cultural formation of society, and, consequently, on our understanding of the world around us. Friedrich Kittler, in this regard, was interested in how digital, that is discrete, media emerged at the end of the nineteenth and beginning of the twentieth centuries and pushed against the traditional European understanding of a lettered society: “Perhaps the most striking thing about Kittler’s analysis of the discourse network of 1900 is what it is said to replace. The discourse network of 1800 is centred on the universal principle of what might be called oralized writing” (Connor 2015, 129). Kittler’s attack on our traditional understanding of reading, writing, and literacy is intended as a liberation from conventional hermeneutics, which imposes meaning on all forms of communication. For him, the capacity of digital technologies to automatically store data on a large scale has undermined the “auditory hallucination” (Kittler 1990, 137) of early modern times, thereby completing the (not so) new media system.

With the introduction of electronic media, the last remains of a fictional speech system were transformed into an automated writing system, yielding a paranoid machine that “operates like an integrated system of all the data-storage devices that revolutionized recording circa 1900” (Kittler 1990, 298f.). Within this machine, everything that can be recorded gains meaning and, as a consequence, loses any specific (that is, hermeneutic) form of meaning. The machine performs functions similar to the psychic apparatus, which constantly registers and processes sensory—that is, data—input (see Freud 1960). It is therefore no coincidence that the clinical diagnosis of paranoia emerged at the same time as psychoanalysis discovered the unconscious mind. In psychoanalysis, which aroused public attention with the prospect of being able to capture and interpret formerly unintelligible phenomena, such as dreams, delusions, or human sexuality, the unconscious is understood as a relentless writing system, constantly registering the cultural manifestations of the outside world (see Freud 1961). By taking part in this world, the human subject gets split into a subject of enunciation and a subject of utterance, that is, a subject that is not merely constituted by an inner consciousness, but also by an external other that is speaking and writing within us.

Technically speaking, such an Aufschreibesystem (inscription or writing-down system), a term Kittler borrowed from the world’s most famous paranoiac, Daniel Paul Schreber, comes to fulfillment with electronic media, which make it possible to automatically record, process, and transmit media content. And with computer-mediated communication, the bifurcation of media into different formats (text, audio, images) comes to a logical end, since the computer as universal machine allows for “a total media link on a digital base” (Kittler 1999, 2). It is, therefore, well known that Kittler’s larger narrative is geared towards a potential discourse network of 2000, which proceeds to integrate all of the former media systems and, as a corollary, replaces human agents as mediators of technological communication. In this line of thought, recent debates about (post-)digital cultures are characterized by the fear that human subjects begin to disintegrate in algorithmic circuits: “The displacement of the human as the primary agent of change in the world is thus coincident with the increasing extension of technical environments that manage social and economic life” (Rossiter 2017). In a world of machinic control and governance, information is filtered out of an ever-growing data stream, while the algorithms required for this remain largely unintelligible to human subjects—with the effect that their paranoia increases accordingly.

Following this, the concept of paranoia may serve as a method to analyze the cultural shift caused by digital media technologies. According to Ned Rossiter, paranoia can be deployed “as a diagnostic device that might assist our political and subjective orientation in worlds of algorithmic governance and data economies” (2017, 90). In light of Big Data, Smart Cities, and the Internet of Things, cybernetic control is no longer science fiction, but part of our everyday reality. Hence, new coping strategies have to be found in order to deal with an increasingly complex entanglement between human and non-human entities. Not only does the existing coordinate system get turned upside down, but new realities emerge—a process that corresponds to the creative and productive momentum of paranoia (see Lacan 1933). Paranoia is therefore not limited to its diagnostic function, but can also be seen as a technique to apprehend and grasp the crumbling of the longstanding dichotomy between (human) culture and (non-human) technology. It might help us to accept the technological unconscious, which, by virtue of the aforementioned self-revealing mechanism of paranoia, has already shown itself in a number of psychopathological cases, media reports, court decisions, and literary works over the last 200 years (see Sconce 2019). Such a broad reading opens up a new perspective: for Lacan, all knowledge is eventually injected with paranoia, because knowledge is founded on rivalry and competition. This is also the reason why “all human knowledge stems from the dialectic of jealousy, which is a primordial manifestation of communication” (Lacan 1993, 39). By desiring the object of the other’s desire, we gain knowledge of the world, a process that eventually helps us to overcome the self-referential nature of our individual world views. It is important, however, to keep in mind that such a paranoiac knowledge, which is constitutive of knowledge production, differs from paranoid inquiry as only “one kind of cognitive/affective theoretical practice among other, alternative kinds” (Sedgwick 2003, 126). In such a narrow reading, paranoia, which often remains puzzling and incomprehensible to the individual subject, presents merely one among many ways of producing knowledge.

The downside of a one-dimensional understanding of communication processes can currently be seen in our dealings with social media. The often-quoted “filter bubbles” and “echo chambers” indicate a deeply narcissistic way of thinking about digital cultures—narcissistic because of the regressive identity politics underlying not only media practices, but also our everyday interactions:

In this world the quest for pleasure—not collective happiness—has replaced the aspiration to truth. And because psychoanalysis is committed to the search for self-truth, it has come into contradiction with the dual tendency towards hedonism, on the one hand, and retreat into identity, on the other. (Roudinesco 2014, 3)

Instead of using media technologies to actually get in touch with each other, we seemingly remain trapped in an imaginary of being connected. Unable to experience the collective structure of social networks, we are assumed to be caught in our respective realities. However, the question remains whether technical media have truly been responsible for isolating human subjects, as many culturally pessimistic accounts claim (cf. Morozov 2011; Carr 2011; Turkle 2013), or whether this perceived estrangement is in fact an effect of the very discourse around media technologies. In this sense, the technology itself suffers from the alienating effects of a restrictive understanding of media, unable to realize its full potential. Assuming that “[t]he symbolic world is the world of the machine” (Lacan 1991, 47), a paranoid thinking machine might be a suitable tool to explore some of the issues arising from this alienation and to hold out the prospect of a critical re-evaluation of our socio-technical world. In the following, I therefore want to propose five theses on today’s media paranoia in relation to digital cultures.

Five Theses

(1) The first thesis is epistemological and delineates media theory’s fascination with paranoid modes of knowledge. Paranoia has been an object of study in numerous media-theoretical works and has thus been an influential undercurrent in the field (e.g., Kittler 1990; Chun 2006; Krause/Meteling/Stauff 2011; Sconce 2019). It can, in fact, be seen as “the first topos of media studies” (Angerer/Holl 2014), especially because the discourse around media technologies is bound up with an epistemological problem: the impossibility of conclusively answering whether (inter-)subjectively experienced (i.e., mediated) knowledge is not ultimately contrived and flawed. This introduces an uncertainty and ontological suspicion that is constitutive of media theory (see Groys 2012). In consequence, if, in an increasingly mediated time, all attempts to establish truth are driven by a paranoid mode of knowledge, the question arises of how we can take a critical distance towards it. In other words, how can we define paranoia and, at the same time, accept our own paranoid position? This is a central problem in Lacan’s theoretical work, which distinguishes itself by the attempt to find a definition of delusion that includes the one defining it (see Kittler 2013, 76). The dialectical relation between knowledge and paranoia cannot be resolved with reference to a superior truth, because such an assertion is in itself delusional (see Kupke 2012). Hence, for Lacan, all knowledge is fundamentally paranoiac—and not simply paranoid—because it is always haunted by the unknown, or rather, the not-yet known. This Lacanian paradox also subverts the conventional understanding of science: paranoiac knowledge does not constitute universal, but different, interpretations of reality (see Schmidgen 2013). It is therefore only consistent that a discipline which wants to go beyond Science and Technology Studies by revealing the many hidden layers of communication processes depicts media technologies as unconscious, yet historical, ordering systems of discourse networks and, as a consequence, fuels the flames of paranoid speculation (see Tuschling 2011, 101ff.).

(2) Secondly, an examination of paranoia allows us to analyze current media-technological developments in digital cultures, such as machine learning, network analytics, and Big Data. Even if the latter has attracted a lot of attention in recent years, it does not explain much without the other two. Filtering algorithms are largely built on premises taken from network theory and become automated through self-learning processes, all of which are well studied and documented. The mere reference to black-box technologies does not suffice as an explanation, and paranoia can offer an incentive to look behind the “computational cultural phantasms” of digital cultures, in particular artificial intelligence and machine learning algorithms (see Harrell 2013, 198–201). So, given the psychoanalytic idea that the discourse of the (technological) Other is the discourse of the unconscious, we need to deploy existing analytical tools to look into the “sub-medial space” (Groys 2012, 17ff.). The ability to scrutinize the technological unconscious, which is increasingly inhabited by a whole range of algorithmically generated agents (e.g., neural networks, internet bots, logistical media), is important if we want to better understand today’s “high-tech paranoia” (Jameson 1991, 38). While techno-capitalism tries to bypass the tedious and often annoying process of social negotiations (e.g., by gaining “direct access” to human desires), the revival and adaptation of a “psychoanalysis of things” (Sartre 1992, 765) may be a first step into the discursive order of an increasingly data-driven and networked world. This is all the more important since humans and machines are subjects of the same symbolic realm. Machines are not neutral, because they, too, process the social world with all its shortcomings. What is more, they can themselves become paranoid, as can be seen in early computer programs (see Colby 1975) as well as in current neural networks (see Google’s DeepDream engine).
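
One of the network-theoretical premises mentioned above can be spelled out in a few lines of code. Many link-prediction heuristics assume that nodes sharing many neighbors ought to be connected, which is homophily in formalized form. The following sketch of the common-neighbors heuristic uses an invented toy graph; deployed systems combine many such signals, but the premise is the same:

```python
from itertools import combinations

# Toy friendship graph as adjacency sets; all names are hypothetical.
graph = {
    "ada": {"bea", "cem"},
    "bea": {"ada", "cem", "dia"},
    "cem": {"ada", "bea", "dia"},
    "dia": {"bea", "cem", "eli"},
    "eli": {"dia"},
}

def common_neighbors(g, u, v):
    """Link-prediction score: the number of friends u and v share."""
    return len(g[u] & g[v])

# Rank every unconnected pair by its score.
candidates = sorted(
    ((common_neighbors(graph, u, v), u, v)
     for u, v in combinations(graph, 2)
     if v not in graph[u]),
    reverse=True,
)
print(candidates[0])  # (2, 'ada', 'dia'): the dense cluster closes on itself
```

The peripheral node (“eli”) never tops the ranking: the heuristic rewards whoever is already well embedded, precisely the kind of hidden assumption a paranoid machine would expose.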

(3) If paranoia leads to suspicion and, at the same time, entails a will to knowledge, we are, thirdly, confronted with an ethical challenge. Filtering information out of data has always been an essential part of human (and non-human) cognition. However, the criteria for deciding what to include and exclude are increasingly hidden behind computational, but also internal, rules within the big tech companies developing current systems. Pattern discrimination, in this context, is a technical term describing the imposition of identity on input data in order to filter (i.e., to discriminate) information from them. But far from being a neutral process, the delineation and application of patterns is in itself a highly political issue (see Apprich et al. 2019). Instead of providing a more “objective” basis for decision-making, filter algorithms reinstate old forms of social segregation, such as class, race, and gender, through defaults and paradigmatic assumptions about the homophilic “nature” of connection. Hence, algorithmically enhanced systems of pattern recognition pose the seemingly impossible question of how to discriminate without being discriminatory. How can we filter information out of data without reinserting racist, sexist, and other prejudiced beliefs? Like human subjects, these systems are subordinated to language without always having access to its meaning. This becomes clear in natural language processing, where automatically derived semantics have been shown to typically contain human-like biases (see Caliskan/Bryson/Narayanan 2017). Hence, the question of “Who speaks?”, which is central to the analysis of paranoia (see Lacan 1993, 54), becomes paramount in a time of sexist bots, racist recommendation systems, and lethal drones. Here, well-researched concepts of the humanities, such as identity, meaning, or subjectivity, may shed some light on our digitized and networked world and allow for an ethical positioning within it.
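
The finding by Caliskan, Bryson, and Narayanan can be illustrated with a drastically simplified association test. The two-dimensional “embeddings” below are invented stand-ins; the original study used pretrained GloVe vectors with hundreds of dimensions and proper significance testing:

```python
import numpy as np

# Hypothetical two-dimensional "embeddings"; real tests use pretrained
# vectors (e.g., GloVe) with hundreds of dimensions.
vec = {
    "engineer": np.array([0.9, 0.1]),
    "nurse":    np.array([0.1, 0.9]),
    "he":       np.array([0.8, 0.2]),
    "she":      np.array([0.2, 0.8]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attr_a, attr_b):
    """WEAT-style differential association of a word with two attributes."""
    return cos(vec[word], vec[attr_a]) - cos(vec[word], vec[attr_b])

for word in ("engineer", "nurse"):
    print(word, round(association(word, "he", "she"), 2))
# engineer leans towards "he" (+0.64), nurse towards "she" (-0.64):
# an association absorbed from the corpus, not a fact about professions.
```

The score merely makes explicit a statistical regularity of the training corpus; the embedding has learned the bias along with the semantics.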

(4) Fourthly, the increasing levels of cyberbullying, hate speech, racism, and scapegoating on social media can be seen as indicative of a political problem, namely the decline of the symbolic order as a common frame of reference. With the shift from mass media logic (see Altheide/Snow 1979) to social media logic (see van Dijck/Poell 2013), the media-technological conditions of what we used to call “the public” have dramatically changed. Given the affective nature of social media, we find ourselves thrown back into an imaginary sphere of participation, into a world of images and affects that threatens to abolish political processes as collective processes of decision- and world-making. Due to the overemphasis on the imaginary in digital cultures, the “social” gets reduced to a primarily dyadic structure, with the effect that the other becomes the object of my libidinal desire. Not a collective “we” but a constant invocation of “you(s)” is constitutive of the current state of digital media (see Chun 2016, 27f.). The libido, as the mental energy expressed in the form of attention, interest, and recognition, is constantly being exploited by internet platforms, which, by developing ever more (or rather less) sophisticated templates (e.g., the like button, emojis, or voting systems), play on our urge to impress others. And even though we should guard against simplifying comparisons between psychopathological and social structures, the parallels are striking: similar to the psychotic phenomenon of amorous paranoia (see Lacan 1993, 86ff.), love in social media can easily turn into hate, precisely because the other is only an imaginary double of myself. This profoundly narcissistic behavior, which tries to avoid real connections, and therefore frustrations, by overemphasizing the object of love (or hate), corresponds to a paranoid mode of thinking insofar as it is based on jealousy. Hence, the paranoid logic in digital media is driven by the constant rivalry of human subjects (see Chun 2006, 251ff.), who are predefined as “social atoms” within a given network, thereby thwarting the idea of collectivity understood as “social assemblages” based on media technologies.

(5) It remains unclear, however, whether we are dealing here with an irreversible decline of symbolic efficiency—that is, the collapse of a common frame of reference—or whether new forms of communication and expression under digital conditions are still to evolve. The fifth thesis is thus an aesthetic one: the paranoid moment of our daily experience with media technologies is distinguished by an excess of networking, by a sort of “total network system” within which ubiquitous media create elaborate connections between political, social, and media relations that can no longer be traced back to simple matters of cause and effect (see Jameson 1990). Certainty about reality does not necessarily come from the evidence of facts, but from the sense that everything is somehow connected to everything else. Paranoia, along these lines, is not so much caused by a lack of information as by an overproduction of meaning. We may therefore ask to what degree cultural, in particular artistic, practices might help to reconstitute a signifying structure. The interest here lies in forms of expression which promise to create “new attentional forms” (see Stiegler 2012) that could help to reassemble a symbolic understanding of digital cultures and their networked publics. For instance, the theoretical concept of “cognitive mapping” (see Jameson 1991, 51–54) could be juxtaposed with data retrieval, machine learning, and network-analytical approaches. Looking at Jonathan Albright’s Micro-Propaganda Machine, which revealed the deep interwovenness of social media platforms and political news pages (see Albright 2018), one is immediately reminded of Mark Lombardi’s work. While the latter was regarded, and indeed brushed off, as a paranoid oddity, Albright has gained the attention of all the big media outlets (The New York Times, Washington Post, The Guardian, Wired, etc.), which raises the question: Does paranoia simply lie in the eyes of the beholder, or, to be more precise, in the time the beholder lives in?

Conclusion

As Wendy Chun writes in her book Control and Freedom: Power and Paranoia in the Age of Fiber Optics, paranoia increases when visibility decreases (Chun 2006, 268). With the dissemination of digital and networked media into everyday life, the possibility of relying on discrete information processing becomes crucial, while the infrastructure sustaining these media becomes more and more hidden. In this regard, current machine learning algorithms depend not only on huge material infrastructures (e.g., Google’s Tensor Processing Units), but also on outsourced labor to prepare the data (see Apprich 2018). As a consequence, the feeling of being at the mercy of something else is intensified, not because the technology is overly powerful, but, on the contrary, because it raises suspicions about its alleged omnipotence: “Thus paranoia does not respond to an overwhelming, all-seeing power but rather to a power found to be lacking—rotten and inadequate, always decaying. Paranoid knowledge similarly responds to technologies’ vulnerabilities, even as it denies them” (Chun 2006, 268). Faith in technological superiority always includes doubts about it. The more technologies automate and thus evade our consciousness in their functioning, the more eerie they seem to us. However, this uncanniness only emerges because what we experience in technology is our very own reality. What we obtain through machine-learning processes is what we already expected to find there: patterns of centuries-old social behavior and all-too-common biases—which is the reason why (artificially) intelligent machines are driven by just the same racist and sexist beliefs as human beings (see Alang 2017).

In accordance with the racist interpretative framework described by Judith Butler in relation to the Rodney King case (see Butler 1993), the racializing episteme of “white paranoia” is actively shaping digital cultures. Hence, it is no longer a secret that algorithms filter data along the lines of racial, but also sexual and socio-economic, categories, even if they do not have to ask for them directly. The result of using proxy data (e.g., zip codes, buying behavior, proximity to other customers) is still the same: the data get sifted according to their applicability, with most people being blanked out (see Steyerl 2016). The aforementioned homophilic disposition of digital cultures constitutes a framework that codifies what is being seen and therefore experienced. By dividing users into “neighborhoods,” the social, political, and historical issue of segregation becomes naturalized in computer code. Yet the code isn’t simply a black box, unreadable to human eyes, but merely a crystallization (i.e., abstraction) of human reality. It is therefore important to rearticulate Butler’s call for an “aggressive” re-reading of the hegemonic episteme (see Butler 1993, 17), not least because the segmentation of people poses one of the biggest challenges in digital cultures. Here, the idea of a paranoid machine, precisely because it processes human—that is, biased—data, can help us to map existing power relations. The machine code makes visible what was previously hidden, be it in political institutions, bureaucratic apparatuses, or informal networks. And paranoia, understood as “knowledge in the form of exposure” (Sedgwick 2003, 138), might serve as an analytical tool to uncover the power dynamics enacted in the code.
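
How proxy data reinstates categories that were never asked for can be demonstrated on synthetic data. Everything below is fabricated for illustration, modeling no real dataset or deployed system: a classifier trained without any “protected” column still reproduces the group split, because the zip code carries it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic population: a binary group label that is never shown to
# the model, but that the zip code matches in 90 percent of cases.
group = rng.integers(0, 2, n)
flip = rng.binomial(1, 0.1, n)
zipcode = np.where(flip == 1, 1 - group, group)
income = rng.normal(50 + 20 * group, 10, n)

# A historical outcome that is already skewed along group lines.
approved = (income + rng.normal(0, 10, n) > 60).astype(int)

# Train on zip code and income only; the group column is withheld.
X = np.column_stack([zipcode, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Approval rates per (hidden) group: the disparity survives.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Dropping the sensitive attribute does not drop the pattern; the filter simply rediscovers it in whatever data stands in proximity to it.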

For Sedgwick, the problem with such a paranoid project of exposure lies in its quietistic stance. With regard to the racist prison-industrial complex in the United States, she asks: “Why bother exposing the ruses of power in a country where, at any given moment, 40 percent of young black men are enmeshed in the penal system?” (Sedgwick 2003, 140). Hence, it is not a problem of invisibility, but of accepting what is right in front of us. The same is true for digital cultures. It’s not as if the ethical, political, and social consequences of data mining, machine learning, or platform capitalism weren’t being discussed. On the contrary, the often-quoted black box gets unpacked in manifold ways, as a quick glance at the bestseller lists shows (see, for example, Srnicek 2017). The problem is rather that no one seems to care. Why bother about homophilic tendencies in search engines, recommendation systems, or other forms of information retrieval, if the only thing we as users are really interested in is receiving the “best” results? Because such a narcissistic approach, namely one that leads back to an imaginary alienation, ultimately destroys the very basis of digital cultures, that is, the possibility to connect with each other. The deployment of a paranoid thinking machine therefore has to be twofold: by processing the symbolic order, the machine can help to expose the ugly assumptions behind the smooth and shiny surface of digital cultures. The paranoiac knowledge thus gained must, in a second step, inform what Eve Sedgwick (with reference to Melanie Klein) calls “reparative critical practices” (2003, 128)—practices that do not simply cover the exposed wounds, but actively work on healing them. Reconfiguring the media-technological environment that conditions our individual and collective experiences will be crucial if we want to move beyond the individualistic alignment of social media platforms and their focus on self-representation, towards an “attention ecology” that helps us to become more attentive to one another (see Citton 2017).

Particular thanks to Matthias Koch for his helpful comments, especially on the difference between the paranoid and the paranoiac.

References

Alang, Navneet. August 31, 2017. “Turns Out Algorithms Are Racist.” The New Republic. https://newrepublic.com/article/144644/turns-algorithms-racist.

Albright, Jonathan. November 4, 2018. “The Micro-Propaganda Machine.” Medium. https://medium.com/s/the-micro-propaganda-machine.

Altheide, David L. and Robert P. Snow. 1979. Media Logic. Thousand Oaks: Sage.

Andrejevic, Mark. 2013. Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York/London: Routledge.

Ang, Ien. 1996. Living Room Wars: Rethinking Media Audiences for a Postmodern World. London/New York: Routledge.

Angerer, Marie-Luise and Ute Holl. 2014. “Paranoia.” Video interview by Martina Leeker/Centre for Digital Cultures [German]. Accessed October 19, 2019. https://vimeo.com/94499128.

Apprich, Clemens. 2018. “The Corrupt State of Machine Learning.” Texte zur Kunst 109: 136–141.

Apprich, Clemens, Wendy Chun, Florian Cramer, and Hito Steyerl. 2019. Pattern Discrimination. Lüneburg/Minneapolis: meson/University of Minnesota Press.

Bolter, Jay David and Richard Grusin. 1999. Remediation: Understanding New Media. Cambridge, MA: MIT Press.

Butler, Judith. 1993. “Endangered/Endangering: Schematic Racism and White Paranoia.” In Reading Rodney King: Reading Urban Uprising, edited by Robert Gooding-Williams, 15–22. New York/London: Routledge.

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases.” Science 356 (6334): 183–186.

Carr, Nicholas. 2011. The Shallows: What the Internet is Doing to Our Brains. New York: W.W. Norton.

Chun, Wendy Hui Kyong. 2006. Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Cambridge, MA: MIT Press.

Chun, Wendy Hui Kyong. 2016. Updating to Remain the Same: Habitual New Media. Cambridge, MA: MIT Press.

Chun, Wendy Hui Kyong. 2018. “On Patterns and Proxies, or the Perils of Reconstructing the Unknown.” e-flux. Accessed July 31, 2019. www.e-flux.com/architecture/accumulation/212275/on-patterns-and-proxies.

Chun, Wendy Hui Kyong. 2019. “Queerying Homophily.” In Pattern Discrimination, co-authored by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl, 59–97. Lüneburg/Minneapolis: meson/University of Minnesota Press.

Chun, Wendy Hui Kyong. 2020. “Filter System.” In The Oxford Handbook of Media, Technology, and Organization Studies, edited by Timon Beyes, Robin Holt, and Claus Pias, 238–245. Oxford: Oxford University Press.

Citton, Yves. 2017. The Ecology of Attention. Cambridge/Malden: Polity Press.

Colby, Kenneth M. 1975. Artificial Paranoia: Computer Simulation of Paranoid Processes. New York: Pergamon Press.

Connor, Steven. 2015. “Scilicet: Kittler, Media and Madness.” In Kittler Now: Current Perspectives in Kittler Studies, edited by Stephen Sale and Laura Salisbury, 115–130. Cambridge: Polity Press.

Dean, Jodi. 2010. Blog Theory: Feedback and Capture in the Circuits of Drive. Cambridge: Polity Press.

Foucault, Michel. 1994. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books.

Foucault, Michel. 1998. “Nietzsche, Freud, Marx.” In Aesthetics, Method, and Epistemology: Essential Works of Foucault, 1954–1984, edited by James D. Faubion, 269–278. New York: The New Press.

Freud, Sigmund. 1960. “Project for a Scientific Psychology (unfinished manuscript).” In The Origins of Psychoanalysis: Letters to Wilhelm Fliess, Drafts, and Notes, 1887–1902, edited by Anna Freud and Ernst Kris, 345–445. New York: Basic Books.

Freud, Sigmund. 1961. “A Note Upon the ‘Mystic Writing-Pad’.” In The Ego and the Id and Other Works, translated by James Strachey, 227–234. London: Vintage.

Groys, Boris. 2012. Under Suspicion: A Phenomenology of Media, translated by Carsten Strathausen. New York: Columbia University Press.

Guattari, Félix. 1996. “Remaking Social Practices.” In The Guattari Reader, edited by Gary Genosko, 262–272. Oxford/Cambridge: Blackwell.

Harrell, D. Fox. 2013. Phantasmal Media: An Approach to Imagination, Computation, and Expression. Cambridge, MA: MIT Press.

Innis, Harold Adams. 1964. The Bias of Communication. Toronto: University of Toronto Press.

Jameson, Fredric. 1990. “Cognitive Mapping.” In Marxism and the Interpretation of Culture, edited by Cary Nelson and Lawrence Grossberg, 347–360. Champaign, IL: University of Illinois Press.

Jameson, Fredric. 1991. Postmodernism, or the Cultural Logic of Late Capitalism. London: Verso.

Kittler, Friedrich. 1990. Discourse Networks 1800/1900, translated by Michael Metteer. Stanford: Stanford University Press.

Kittler, Friedrich. 1999. Gramophone, Film, Typewriter, translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.

Kittler, Friedrich. 2013. “Flechsig/Schreber/Freud. Ein Nachrichtennetzwerk der Jahrhundertwende.” In Die Wahrheit der technischen Welt: Essays zur Genealogie der Gegenwart, edited by Hans Ulrich Gumbrecht, 76–90. Berlin: Suhrkamp.

Krause, Marcus, Arno Meteling, and Markus Stauff, eds. 2011. The Parallax View: Zur Mediologie der Verschwörung. München: Wilhelm Fink Verlag.

Kupke, Christian. 2012. “Von der symbolischen Ordnung des Wahns zum Wahn der symbolischen Ordnung.” In Wahn: Philosophische, psychoanalytische und kulturwissenschaftliche Perspektiven, edited by Gerhard Unterthurner and Ulrike Kadi, 107–135. Wien: Turia + Kant.

Lacan, Jacques. 1933. “Le Problème du Style et la Conception Psychiatrique des Formes Paranoïaques de l’Expérience.” Minotaure 1. http://espace.freud.pagesperso-orange.fr/topos/psycha/psysem/style.htm.

Lacan, Jacques. 1975. De la Psychose Paranoïaque dans ses Rapports avec la Personnalité. Paris: Seuil.

Lacan, Jacques. 1991. The Ego in Freud’s Theory and in the Technique of Psychoanalysis, Seminar II, translated by Sylvana Tomaselli. New York/London: W. W. Norton & Company.

Lacan, Jacques. 1993. The Psychoses, Seminar III, translated by Russell Grigg. New York/London: W. W. Norton & Company.

Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: Public Affairs.

Rossiter, Ned. 2017. “Paranoia Is Real: Algorithmic Governance and the Shadow of Control.” Media Theory 1 (1).

Roudinesco, Élisabeth. 2014. Lacan: In Spite of Everything. London/New York: Verso.

Sartre, Jean-Paul. 1992. Being and Nothingness: A Phenomenological Essay on Ontology, translated by Hazel E. Barnes. New York: Washington Square Press.

Schmidgen, Henning. 2013. “Eine originale Syntax: Psychoanalyse, Diskursanalyse und Wissenschaftsgeschichte.” In Mediengeschichte nach Friedrich Kittler, edited by Friedrich Balke, Bernhard Siegert, and Joseph Vogl, 27–43. München: Wilhelm Fink Verlag.

Sconce, Jeffrey. 2019. The Technical Delusion: Electronics, Power, Insanity. Durham, NC: Duke University Press.

Sedgwick, Eve Kosofsky. 2003. “Paranoid Reading and Reparative Reading, or, You’re So Paranoid, You Probably Think This Essay Is About You.” In Touching Feeling: Affect, Pedagogy, Performativity, 123–151. Durham/London: Duke University Press.

Srnicek, Nick. 2017. Platform Capitalism. Cambridge/Malden: Polity Press.

Steyerl, Hito. April 2016. “A Sea of Data: Apophenia and Pattern (Mis-)Recognition.” e-flux 72. www.e-flux.com/journal/72/60480/a-sea-of-data-apophenia-and-pattern-mis-recognition.

Stiegler, Bernard. 2012. “Relational Ecology and the Digital Pharmakon.” Culture Machine 13. Accessed December 21, 2017. www.culturemachine.net/index.php/cm/article/viewArticle/464 [archived version: https://web.archive.org/web/20180506012941/www.culturemachine.net/index.php/cm/article/viewArticle/464].

Tuschling, Anna. 2011. “Deutungswahn und Wahnanalyse: Die Paranoia ein Medienapriori?” In The Parallax View: Zur Mediologie der Verschwörung, edited by Marcus Krause, Arno Meteling, and Markus Stauff, 89–104. München: Wilhelm Fink Verlag.

Turkle, Sherry. 2013. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

van Dijck, José. 2013. The Culture of Connectivity: A Critical History of Social Media. Oxford/New York: Oxford University Press.

van Dijck, José and Thomas Poell. 2013. “Understanding Social Media Logic.” Media and Communication 1 (1): 2–14.

Varnelis, Kazys, ed. 2008. Networked Publics. Cambridge, MA/London: MIT Press.