Chapter 6: Proximity and Conflict


Networked Space

Harish lives in Chennai, India. He works for a U.S. company that has outsourced most of its operations. The company’s clients are located in North America, while those who provide them with services, like Harish, are in India. His daily routine is not atypical for someone in similar circumstances. After spending the day training new recruits, the other part of his job begins: “At seven-­thirty in the evening, when it’s 9 a.m. in New York, he confers with the American banking clients for whom he tailors his training, to insure that he is emphasizing the right skills. And then he turns to a slew of computer-­programming challenges that may show management his greater gifts. He often goes home after midnight.”[1]

Harish’s rhythm of life has to accommodate two environments: Chennai and New York. Both environments can be said to be equally relevant to Harish, and thanks to digital networks, both can be said to feel equally immediate or real to him. However, the coexistence of these two geographic spaces does not come without tensions. Harish worries that what feels near to him is becoming increasingly disembodied, detached from his immediate surroundings: “Already, we are half of the time in New York, just our bodies are left behind . . . I worry that nowadays anything near us seems unimportant, while anything we can’t see becomes larger than life.”[2] Harish’s participation in these intersecting networks shapes his perception of social belonging, making it more conceptual and less determined by geographic location: “Lately, he considered community less a function of roads and roofs and tea shops than of imagination. Even the solid presence of his grandmother could dematerialize at the late-­night ring of his cell phone, the urgent summons of American clients. And while his parents rolled their eyes at the constant needs of the world beyond Chennai, Harish saw the calls as tidings of cultural integration.”[3]

Detachment from one kind of nearness (the immediate environment) is accompanied by attachment to another kind (the mediated environment), and Harish attempts to integrate the benefits of one while not letting go of what is important for him to retain of the other.

Eliot (a blogger's pen name) lives in Charlottesville, a city in the United States. She is a professional website designer, and one of her leisure activities is coauthoring a blog that used to be called Red Inked, now defunct. According to her personal blog posts, digital networks have also fundamentally redefined Eliot's relationship to the near, but in a different way than they have for Harish. Commenting on the fear that the Internet replaces face-to-face with mediated interaction, that it makes distant people and places accessed via the Internet more important than one's immediate surroundings, and that it foments antisocial habits, she writes, "I'm not chatting with people in New Delhi; nor am I stuck at the computer, turning pale and cutting my wrist to Emo music. Because of the following lists, all on Yahoo Groups, I've gotten connected to and made friends with people in my local geographical area I would not have otherwise met."[4]

She then lists online discussion groups related to recycling, church activities, and networking with working moms. Instead of severing her connections to the near, digital networks have augmented Eliot’s links to what is socially proximal: “So my very busy social life, my identity with the town in which I live, and my sense of community—­all have been enhanced if not completely created through the weaving of various strands of the web. I have made more linkages and ties to the people in my immediate vicinity than I ever have done in my whole life.”[5]

Of course, Harish and Eliot are—­literally and figuratively—­thousands of miles apart. It would take a lengthy study to discuss the differences between these two cases and their significance. One could start by considering the history and present position in the world’s economy of India and the United States and the particular effect that globalization has had in each location. One could then go on to discuss Eliot’s and Harish’s social class, cultural background, gender, family structures, professional and personal goals, and so on. All this information would perhaps eventually help us understand what accounts for the distinct impact digital networks are having in each case. We might look at Harish’s case and conclude that the spatially near is becoming irrelevant, and digital networks are to blame. But then Eliot’s case would prevent us from making such broad accusations. We would realize that we also need to take into account the way our use of these technologies engenders new types of nearness or social relevancy within our immediate surroundings, and how this can contribute to new understandings of the world.

Digital networks have fundamentally transformed our sense of what is near and far. As Silverstone argues, “This dialectic of distance and closeness, of familiarity and strangeness, is the crucial articulation of the late-­modern world, and is a dialectic in which the media are crucially implicated.”[6] And yet there is anything but certainty about the values that are emerging from this process. We are familiar by now with arguments from both sides: those that praise the new social relevancy that digital networks give to the spatially far (the “death of distance” arguments) and those that critique the loss of social relevancy that digital networks impose on the spatially near (the “devaluation of the local” arguments). Through technological mediation, digital networks make it possible to increase our social inclusivity beyond the normal reach of what our bodies and senses allow. But as the cases of Eliot and Harish suggest, different circumstances can yield qualitatively different results when the kind of mediation that digital networks apply to the spatially far is applied to the spatially near. In some cases, that mediation might engender a decreased relevancy of what is spatially near (a form of social exclusivity), and in others it might engender an increased alignment to it (a form of social inclusivity).

Thus digital networks are reshaping social realities by redefining what counts as proximal or relevant (as Heidegger would say, “The frank abolition of all distances brings no nearness. . . . Everything gets lumped together into uniform distancelessness”[7]). But it would be premature to conclude that people are less socially inclined or have fewer social needs than before. People continue to fulfill their social desires, but they do so through new communicative practices, through new mediations of their social realities. The notion of the near as what is spatially proximal is being remodeled into a notion of the near as what is socially proximal—­what we feel is relevant to us socially, regardless of whether it is spatially near or far. For people on the privileged side of the digital divide, the near is no longer bound by space, but instead is something that is constructed through our participation in digital networks. These networks are not antisocial, but highly social. They do not necessarily attempt to do away with the spatially near (the local) but in fact promise us a renewed relationship with it (in addition to new relationships with the spatially far or the global).

Networked proximity reconfigures distance rather than eliminating it. As Borgmann points out, “Information technology in particular does not so much bring near what is far as it cancels the metric of time and space.”[8] Within nodocentric logic, nearness is defined in terms of almost-­zero distance within the network and farness in terms of almost-­infinite distance outside it. What we have then is a shift from physical proximity to informational availability as the principal measure of social relevance.

What kind of social significance does the local acquire under this redefinition of the near? Surely the body and its surroundings cannot simply vanish, even in the spacelessness of the network. Latour[9] observes that a network remains local at every node. The body is thus the node where the network becomes locally situated; it is what remains after the digital network has been shut off. Even the most immersive virtual reality simulation requires the physicality of the body as interface, a body that remains attached to a material environment from which it derives its sustenance. But although it is not possible to completely disentangle the body from the social forces exerted on it by the local, it is true that “physical closeness does not mean social closeness.”[10] In other words, we are capable of denying the local a particular significance, acting as if something nearby is not relevant to us. This is what happens when nearness comes to be defined in terms of informational availability and network inclusion, not physical proximity: the local acquires social significance only to the extent that it can be situated within the network, and only aspects of the local that can be rendered by the algorithm of the network acquire social relevancy.

Fears that a mediated or networked proximity might completely replace the local have been blown out of proportion, and in most cases digital networks have augmented or enhanced (or, at least, become entirely integrated with) the local. In the best-case scenarios, what was once far can now be near, and what is near can be reapproached through the digital network; nearness, in other words, encompasses not only new forms of global awareness but also rediscovered local solidarities. However, questions regarding who gets access to the network or who gets to control its protocols will force us to continue to ask whether networked proximity can or should be disrupted.

This is important because network logic is currently being used to rationalize a model of progress and development in which elements outside the network acquire meaning only by becoming part of it, to the point where bridging the digital divide is normalized as a goal across society. The problem is that this form of network assimilation, as a strategy for creating nearness, has commodification as its principal motive: the function of the digital network in a capitalist society is to collect knowledge from its local sources, transform it into more portable information, and generate value by exchanging it beyond those local sites. This was already clearly evident in the knowledge management movement, which relied on technology to extract knowledge from individuals and make it applicable across diverse communities of practice by eliminating the information related to the local context and retaining only what was deemed "functional" (i.e., what could be applied regardless of location).[11]

Since the network derives its meaning from the number and diversity of its nodes, the economy of the network is oriented toward converting more things into nodes (commodification) so that they can exchange information. As a way to counter this assimilation, unnetworked space can function as a paralogy, a site where the network encounters resistance and friction. The outside thus acts as a barrier to the exchange of information, reminding us that not everything can or should be converted into a node. Thus the answer to the problem of network inefficiencies or digital divides is not to add more nodes to the network, or even to lower the cost of access, but to find ways of unmapping it.

We must therefore guard against uncritical impulses to make the network universal and all-inclusive, which is what disciplines such as pervasive and ubiquitous computing are attempting to do in order to "empower" humans. Anne Galloway summarizes the ethos of ubiquitous computing: "[U]biquitous computing was meant to go beyond the machine—render it invisible—and privilege the social and material worlds. In this sense, ubiquitous computing was positioned to bring computers to 'our world' (domesticating them), rather than us having to adapt to the 'computer world' (domesticating us)."[12]

The digital network, however, cannot and should not be rendered invisible. If anything, it should be made more noticeable, because it is precisely when we pretend it is not there that we are most prone to surrendering our agency, domesticating ourselves to conform to the network's epistemological exclusivity. Conditioning ourselves to ignore the unnetworked (by believing that anything in the local can be turned into a node) makes the network as invisible as the water in which the fish lives. It is the ultimate surrender to technological determinism and the commodification of knowledge: the ultimate narrative of exchange value as the most meaningful measure of things.

The premise behind the discourse of the digital divide also needs to be challenged. Unnetworked space functions as the border in the digital divide, as the limit to how far nearness can be technologized (to ask whether something should be networked or not is to encounter the digital divide). Under the logic of the network, however, the digital divide is seen merely as something to be overcome. Most of the arguments surrounding the digital divide[13] center on the "problem" of those who have no access to technology and are therefore not on the network, and on what the role of those who do have access should be in addressing this problem. The digital divide has become a metanarrative in its own right, establishing that the inevitable goal is more network technology, applied to more aspects of our social lives and available to more people. Only then, we are told, will the playing field be leveled and true progress achieved. I do not mean to suggest that some of the problems of our age could not be alleviated with more technology or, more accurately perhaps, with a more even distribution of technology. But we should take a closer look at the meaning invoked by the word divide.

The discourse of modernity relies heavily on a divide between modern societies and premodern societies to establish a primacy of the former over the latter, a primacy defined to a large extent in terms of technological progress that premodern societies must strive to achieve. Doreen Massey has argued that this dynamic enacts in space what is assumed to be a lag in time: “When we use terms such as ‘advanced,’ ‘backward,’ ‘developing,’ ‘modern’ in reference to different regions of the planet what is happening is that spatial differences are being imagined as temporal . . . The implication is that places are not genuinely different; rather they are just ahead or behind in the same story: their ‘difference’ consists only in their place in the historical queue.”[14]

Thus unnetworked space is construed as a place behind the times (lagging in terms of progress). Unless the digital network manages to incorporate it into its fold, it shall remain infinitely distant in time and space.

The imperative of network logic demands that the digital divide be overcome by converting nonnodes into nodes. The result is what Lyotard calls a "hegemonic teleculture," always working to bring what is outside the network into the network, to convert unmediated experience into mediated experience. To be clear, this is not a hegemonic teleculture because—as Lyotard argues—only distant things are experienced in the digital network. The network is not antilocal, and it does not "abolish local and singular experience."[15] Rather, the digital network is a hegemonic teleculture because things that take place in proximity are treated the same way as things that take place at a distance, ensuring that uniform distancelessness reigns.

While networks can no doubt facilitate new forms of engaging the local, the local approached or mediated through the network is not the same local as before, since only elements in the local that are available through the network are rendered as near. While networks are extremely efficient at establishing links between nodes, they embody a bias against anything that is not a node in the network. This is not the same as saying that the network is antisocial or antilocal; in fact, as was established earlier, the network thrives on connecting nodes, and it does not discriminate on the basis of where those nodes are located (in our proximal or nonproximal environment). But when it comes to mediating our relationship with the local, nodocentrism introduces a form of epistemological exclusivity that discriminates against that which is not part of the network.

Nodocentrism can be applied to space to produce a form of hyperlocality that filters out the unnetworked elements in our environment, making them irrelevant. But it can also be applied in a similar manner to a political conflict. The filtering process whereby those elements that are not in the network acquire relevance only by becoming part of the network can both empower and threaten networked actors engaged in organizing action against authority.


Networked Activism versus Networked Surveillance

As Castells[16] suggests, notions of class struggle are being replaced to some extent by notions of a struggle over self-determination between the individual and the network. In most instances, the most effective response in the struggle against networks has been other networks. Because of the scalability and adaptability that are required in a globalized, fast-paced world, the network model has been recognized as the most viable and effective option for confronting disproportionately powerful opponents (as when, for instance, grassroots networks confront corporate or state networks). Framing political struggle in terms of networks fighting networks—pitting one kind of node against another—makes sense from an "evolutionary" perspective, since networks emerged in response to the power of bigger players:[17] speaking in very broad terms and allowing for some historical generalizations, during the last century political struggles evolved from power blocs fighting other power blocs (as in the case of the Allies fighting the Axis in World War II, or the USA confronting the USSR during the Cold War) to an intermediate stage in which distributed networks organized themselves to fight power blocs. Sovereign states found themselves confronting network actors such as guerrilla groups, terrorists, or organized criminals employing new distributed tactics that a traditional army or police force (even if stronger in manpower or possessing more advanced technologies) was not prepared to confront. This, in turn, developed into a state of affairs in which traditional power blocs had to reorganize themselves into networks in order to engage their opponents effectively, resulting in a new era of netwars. This form of warfare is accompanied by increased opportunities to conduct aggression not only through the application of the network as an organizing model but also through the use of digital networks as weapons or means of conducting warfare.
Examples include actions performed by both state and nonstate actors, ranging from the blocking of access to digital networks (in the short term, as with the Internet shutdowns during protests in Burma, Iran, the Middle East, and North Africa; or in the long term, as with Israel's refusal to allocate wireless licenses to Palestinian companies[18]), to other acts of cyberwarfare such as espionage, propaganda, vandalism, and the targeting of public services (such as hacking into power plants).[19]

At the same time, authors such as Hardt and Negri[20] have observed that netwar (networks fighting against monoliths or other networks) has evolved to encompass not only military struggles but also struggles for social justice. To give but one example, consider the Zyprexa Kills[21] campaign, in which citizens, journalists, and activists used new collaborative communication technologies such as wikis to organize themselves into a network opposing a more powerful network of corporate lawyers, researchers, and executives from the pharmaceutical company Eli Lilly, who were attempting to cover up the hazardous side effects of its popular neuroleptic product. In cases such as these, it is hard to argue against using digital networks as an effective (and in some cases the only viable) tool for activism. But while it is politically necessary at times to oppose networks with networks, the application of this tactic is problematic because it can engender new instances of network logic that make it possible for monopsonies to control the subversive networks.

This is obviously evident in the application of digital networks to surveillance. It should not come as a surprise that we are living in an era in which our online movements are recorded in logs that specify what websites we visit, what we search for, what we buy, whom we interact with, and so on. Most of the time, these data are used for commercial and advertising purposes only. But they can also be collected and analyzed for security purposes by governments and authorities. Every online utterance thus becomes searchable data that artificial intelligence agents can parse for signs of potential threats. Computational approaches such as the Online Behavioral Analysis and Modeling Methodology (OBAMM)[22] can be employed to track a user's behavior, establish normative patterns, and detect deviations that could signify malicious intent, such as when an account has been compromised or a user has gone rogue. Even when we are not on the web, our bodies can continue to be tracked through digital networks. In the United Kingdom alone, for instance, there are now more than four million surveillance cameras in use.[23] Governments might not have the money to staff enough people to monitor all these cameras, so artificial intelligence systems are being perfected that can identify individuals who look threatening or recognize individuals by their facial features, their manner of walking, and so on (all of which might involve some kind of racial profiling). U.S. defense contractors are helping to develop a video surveillance system in China that can identify and track any individual at any given time within an entire city.[24] In the Netherlands, intelligent systems can listen in on ambient sound in public spaces, such as trains, for signs of angry or alarmed speech.[25] And whereas before the police had to worry about placing a wiretap near potential threats to hear what they were saying, now authorities can turn your cell phone into a live microphone and listen to your conversations without your awareness, even if the phone is off.[26] In democratic societies, all this happens with our consent because—we tell ourselves—we have done nothing bad and have nothing to hide. But what happens when the criteria for what constitutes "bad" behavior change in the future and the technology is already in place?
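The general pattern such systems follow, establishing a normative baseline of behavior and then flagging deviations from it, can be illustrated with a deliberately simple sketch. This is a hypothetical toy, not OBAMM's actual method; real systems model far richer behavioral features, but the logic is the same:

```python
# Toy illustration (hypothetical, not OBAMM itself): measure how far an
# observed behavior deviates from a user's normative baseline, using
# daily login counts and a simple z-score as a stand-in for the much
# richer behavioral features a real surveillance system would model.
from statistics import mean, stdev

def deviation_score(baseline: list, observed: float) -> float:
    """Number of standard deviations `observed` lies from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0  # no variation in the baseline; nothing stands out
    return abs(observed - mu) / sigma

# Normative pattern: roughly four to six logins per day over two weeks.
history = [5, 4, 6, 5, 5, 4, 6, 5, 4, 5, 6, 5, 4, 5]

# An ordinary day falls within the baseline; a sudden burst of activity
# exceeds the (arbitrary) threshold and would be flagged for review.
assert deviation_score(history, 5) < 2
assert deviation_score(history, 40) > 2
```

Whatever falls outside the statistically "normal" becomes, by definition, suspect, which is precisely why the criteria for what counts as suspicious can be changed long after the infrastructure is in place.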

The point is that for every new form of dissent that digital networks make possible, more forms of surveillance also become available. And while digital networks allow activists to quickly recruit thousands of adherents to a cause, it has also become easier to dismiss their collective impact and significance. It is not surprising that governments have become (or were always) immune to online petitions, e-­mail letters to representatives, or other forms of online activism. The more responsive governments have merely automated the reply to the automated or form letters their citizens send them, resulting in a perpetual cycle of automated democracy.

But to be fair, as tools for activism, digital networks can be used in ways much more powerful than simply sending an e-mail to government representatives. Digital networks extend the opportunities for dissent that are available to the wired citizen, and the organization and expression of voice and action against authority acquires an unprecedented scale: civic groups can not only recruit online supporters in a short time but actually place them on the street, focusing their attention on an issue as it develops. Taking advantage of mobile technology, mobs become smart participants in protests and can react in real time to developments on the street. Furthermore, the distributed power of collaborative research transforms regular citizens into journalists as they investigate, correct, expose, publish, and republish information before traditional media know what is going on. The use of portable multimedia devices that can upload data to the network instantaneously also makes it harder for authorities to act with impunity on the assumption that no one is watching.[27] It would appear as if, in an effort to make a quick buck, monopsonies are providing us with the very tools that could potentially undermine them.


The Activist as Information Aggregator

In most instances, however, activism is reduced to information sharing. This sharing via digital networks can indeed become an act of civil disobedience, especially if the information negatively impacts the interests of corporations or the state (in some cases, the line between information sharing and copyright infringement or plain criminal action is becoming increasingly contested). But the question is how effective the sharing of information is as a form of dissent, particularly when it sees itself as an end, not a means. In other words, by reducing activism to information sharing through proprietary network technologies, do we further freedom of speech or simply strengthen the authorities' control over the channels of communication and means of action?

A pertinent case to analyze revolves around the distribution of "the number." In early 2007, somebody cracked and published an encryption key that could unlock high-definition DVDs, allowing for the unrestricted copying of the discs. The key or code started appearing on various websites. The Motion Picture Association of America (MPAA) and the Advanced Access Content System Licensing Administrator (AACS LA) began issuing Digital Millennium Copyright Act (DMCA) violation notices against these websites, demanding that they remove any mention of the number. Some for-profit social media websites, like the social bookmarking service Digg, were served with these notices because their users were publishing the encryption key in their posts or comments. The companies attempted to curtail the publication of the number, but there was a massive reaction from users against this apparent act of censorship: in typical viral fashion, the more the code was being "suppressed," the more it appeared on social media sites, blogs, T-shirts, videos, and so on.

Companies operating under the Web 2.0 business paradigm (capitalizing on their users’ social sharing of information) suddenly realized they were in a vulnerable position: they could not afford to alienate their source of free labor, the members of their network. Digg, for instance, reversed its initial decision to block the publication of the encryption key and in a public relations move said that it would rather “go down fighting than bow down to a bigger company.” Given its business model, the company (worth at that time around $200 million) might not have had a choice, as it would be nothing without the free labor of its users. As Andrew Lih writes, “This is quite unprecedented—­you basically have a multi-­million dollar enterprise intimidated by its mob community into taking a stance that is rather clearly against the law.”[28]

There are two interesting observations to draw from this example. First, there is the idea of disseminating information using digital networks, in this particular case social media, as a form of activism or protest. This controversy might have had at its core something rather trivial—­a code to hack DVDs. But this did not stop some people from asking whether we could extrapolate some of the lessons and techniques learned to a larger social justice context. For instance, Ethan Zuckerman asked, “What would it take to harness this sort of viral spread to harness the net in spreading human rights information? Can activists learn from the story of the number and find ways to spread information that otherwise is suppressed or ignored in mainstream media?”[29]

This is basically what Julian Assange would be doing a few years later with WikiLeaks. But at the time, Zuckerman’s comment seemed to suggest that, since the network infrastructure was already in place, what was missing to turn the dissemination of information into a mobilizing force of dissent in society were both the right kind of information and the right kind of audience.

In the encryption key case, it is clear that the “activists” (described by Bloomberg Businessweek as predominantly male, in the IT sector, between their twenties and thirties, and earning around $75,000 a year[30]) were more concerned with issues having to do with technology and freedom of speech than with other social issues. As one blogger remarked, “While most of the blogosphere was atwitter over the tantrums being thrown at Digg, real injustice in Los Angeles was being ignored. After watching this video [of police oppression during the May 1st immigration reform march] I was ashamed to be part of a community (the designers and evangelists of Web 2.0?) which sanctimoniously promotes ‘people power’ among the spoiled and entitled while disregarding the tightening grip of authority on the poor and disenfranchised.”[31]

The question is whether the problem is with the type of activist involved in the number controversy, or with the broader framing of an activist as someone who simply manages information, engaged in what Dreyfus[32] would call a nihilism of endless reflection, which never materializes into action. When activism is defined solely in terms of the exchange of information, we are reducing—­not increasing—­the options available for shaping the world. The activist goes from being a social actor to a mere intersection of data flows. She possesses more information than ever before (about encryption keys as well as about all sorts of social injustices), but all she can do is replicate and pass on the information.

This brings me to the second observation related to the number. In the end, I think the establishment realized that it would be impractical to try to go after Digg, and that doing so might publicize the controversy even further. This case thus signaled a shift in focus from legally prosecuting social media companies for what their members produce and publish to using the social data generated by these sites to monitor for genuine security threats. No one is naïve enough to conclude that social media corporations are really at the mercy of subversive revolutionaries (despite taunting users who posted the number along with comments such as “Hahahaha! I am breaking federal law! Hahahaha!”). Instead, I believe the lesson learned from this case is that authorities will ultimately recognize the sanctity of capitalism: they will go after individuals rather than companies, and instead of trying to censor speech in online social networks, they will promote it because this gives them more opportunities to monitor dissent. We are back to Deleuze’s observation about control societies: “Repressive forces don’t stop people expressing themselves but rather force them to express themselves.”[33]

The Activist as Street Protester

We have recently seen how activism via participatory media has taken up more consequential causes than making it easier to copy DVDs. In her New York Times article “Revolution, Facebook-­Style” (published in 2009, before the uprisings of the Arab Spring), Samantha M. Shapiro helps the public visualize what it means “to have a vibrant civil society on your computer screen and a police state in the street.”[34] Specifically, she reports on the use of Facebook in Egypt as a means to organize acts of political dissent.

Since 1981, Egypt had been ruled under a state-of-emergency law, which severely limited freedom of speech and movement. In 2009, an estimated eighteen thousand citizens were in prison because of this law.[35] In a country where the expression of dissent has such severe social consequences, it is not surprising that citizens gravitated toward a virtual forum where expression was perceived to be freer, and where they did not have to deal with the rigid hierarchies of political groups. This is probably why, as the article points out, Facebook attracted a new generation of Internet-savvy young people; it was the first foray into political protest for an otherwise disenfranchised segment of the population. By 2008, one particular Facebook group, the April 6 Youth Movement, had about seventy thousand (mostly young and educated) members. The article recounts how the members of this group used Facebook to organize plans to join a march in solidarity with workers protesting high rates of inflation and unemployment. But since the group's online activities were open and visible to all, members of the Egyptian security forces joined the group and tried to dissuade its civilian members from participating in the protest. In spite of this, organizers decided that the march would go ahead.

During the preparations, a thirty-­year-­old woman named Esraa Abdel Fattah Ahmed Rashid not only disclosed on Facebook the specifics of where she intended to meet some of her peers before joining the protest but also posted full details about the time, what she intended to wear, and even her cell phone number. With all this information, it was very easy for the security forces to arrest Rashid and others during the events. There were at least three casualties that day during the protest. Afterward, people used the same Facebook group to mount a campaign demanding her release, which fortunately happened quickly. However, to the disappointment of many who felt she did not reflect the conviction of her fellow Facebook activists, she appeared on television in tears to apologize for her involvement in the protest (she later withdrew that apology).

Figuring out what to disclose or not to disclose in their digital networks was (and is) a dangerous lesson for young activists to learn. It is undeniable that the use of social media platforms in 2008 (and even before) contributed to the momentum leading to the Arab Spring in 2011, and to a turning point in the involvement of the public (before the Arab Spring, 67 percent of young people in Egypt were not registered to vote, and 84 percent had never participated in a public demonstration[36]). The question is how much of a contribution the technologies made and what the aftereffects of their application were. After some initial fascination with the concept, there now appears to be more skepticism than support for the idea that tools like Twitter and Facebook were single-handedly responsible for igniting the Arab Spring movements. As we witness the immense effort and cost in human lives that has gone into uprisings in Algeria, Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Saudi Arabia, Sudan, Syria, Tunisia, Western Sahara, and Yemen, we recognize that it takes much more than a social media platform to organize and sustain a grassroots protest movement. Yet the liberal discourse behind the trope of a "Twitter Revolution" (a revolution enabled by digital technologies, which empower oppressed groups) continues to function, especially in Western media and academia, as a utopian discourse that conceals the role of communicative capitalism in undermining democracy. In other words, the meme of the Twitter Revolution may have come and gone, but the ideology that gave rise to it continues to color our ideas about participation and democracy.

Digital networks can aid in the defense of human rights, improve governance, and empower the disenfranchised. But that is not the point. The point is that presenting these technologies as the agents of revolution obscures a critique of the capitalist institutions and superstructures in which they operate, and of the manner in which they generate inequality. Indeed, the use of social media by activists not only increases opportunities for participation and action but also makes it easier for authorities, with help from corporations, to operate a repressive panopticon. According to a report by the OpenNet Initiative, during the Arab Spring around twenty million users in the Middle East and North Africa experienced the blocking of online political content, carried out with the help of Western technologies.[37] To the extent that grassroots movements all over the world continue to rely on corporate technologies to organize and mobilize, we can expect inequality (through participation) to take some of the following forms:

Surveillance and loss of privacy. States can monitor activity within digital networks to identify dissenters and learn of (and obstruct) their plans. This is often accomplished through deep-­packet surveillance, filtering, and blocking technologies provided to repressive regimes like Iran, China, Burma, and Egypt by companies like Cisco, Motorola, Boeing, Alcatel-­Lucent, McAfee, Netsweeper, and Websense.[38] Recently, a group of Chinese citizens even filed a lawsuit against Cisco, claiming that the technology that allowed the government to set up the Great Firewall of China led to their arrest and torture.[39] That the U.S. government pays lip service to the importance of a “free Internet”[40] around the world, and finances circumvention technologies for activists abroad,[41] all while supporting these companies at home through tax breaks and lax regulation is a serious contradiction.

PSYOPs and propaganda. The U.S. Army is developing artificial intelligence agents that would populate social networking platforms and dispense pro-­American propaganda.[42] Dozens of these “sock puppets” could be supervised by a single person, and their profiles and conduct would be indistinguishable from those of a real human being (apparently, because of legal issues, these sock puppets could only be targeted to non-­U.S. citizens). A low-­budget version of this strategy has already been put into action by the Syrian government, which released an army of Twitter spambots to spread proregime opinions.[43]

Loss of freedom of speech. Companies, unlike states, are not obliged to guarantee any human rights, and their terms of use give them carte blanche to curtail the speech of any user they choose. For instance, Facebook (one assumes under the direction of the British authorities) recently removed pages and accounts of various protesters belonging to the group UK Uncut just before the wedding of Prince William and Kate Middleton.[44] UK Uncut is not a violent terrorist organization but a group that opposes cuts to public services and demands that companies like Vodafone pay their share of taxes.

Suspension of service. For more drastic measures, states (in collaboration with corporations) can simply “switch off” Internet and mobile phone services for whole regions in order to terminate access to the resources activists have been relying on. Vodafone, for instance, complied with the Egyptian government’s directive to end cell phone service during the January 25 revolution.[45]

Remote control of devices. Modern cell phones have, for some time, provided the authorities with the ability to use them as wiretapping devices without their owners' knowledge, even when the power is off.[46] They can also be used to track individuals and report their locations. An indication of what else we can expect in the future is a patent, filed by Apple, that would allow authorities to remotely disable a phone's camera.[47] While this is intended to prevent illegal recording at concerts, museums, and so on, we can imagine how effective it would be at protests.

Crowdsourced identification. One reason authorities may want to leave the cameras on is that user-generated media can greatly aid in the identification of subversive agents. At the recent Vancouver riots (which had nothing to do with correcting social injustices and everything to do with sports hooliganism), Facebook, Twitter, and Tumblr users were enlisted in a crowdsourcing attempt to identify miscreants using digital photos and videos posted by onlookers.[48] Similar practices were employed by the Iranian government during the postelection riots of 2009, when websites were set up to allow regime sympathizers to identify protesters and report them to the authorities.[49]

These kinds of practices confirm Morozov’s observation that social media can be used by both sides, not just the side we agree with, and that the sacrifices in privacy may not be worth the gains.[50] This perhaps explains why, at least in the Gulf countries, Facebook usage seems to be diminishing.[51] But as regimes—­repressive as well as democratic—­learn how to use social media to influence the popularity of certain viewpoints, monitor communication, and detect threats, it seems as if dissent will become possible only in the excluded, nonsurveilled spaces of what is outside the network, away from the participation templates of the monopsony.

Nonetheless, something compels people—­including at-­risk activists—­to continue to participate. As Christian Fuchs’s research with a student population demonstrates, there is a sharp discrepancy between people’s negative opinions of electronic surveillance and their simultaneous willingness to enter into contracts with corporate providers who do not even make a pretense of guaranteeing the privacy of users. In explaining this form of denial, Fuchs writes, “Although students are very well aware of the surveillance threat, they are willing to take this risk because they consider communicative opportunities as very important. That they expose themselves to this risk is caused by a lack of alternative platforms that have a strongly reduced surveillance risk and operate on a non-­profit and non-­commercial basis.”[52]

From this perspective, governments benefit greatly from the process of media conglomeration that their own deregulation policies promote: the more monopsonies become the only game in town, enticing users with the promise of increased freedom of expression and organization, the fewer options for secure or private communication citizens have, and the more they will be exposed to surveillance.

And yet some believe that monopsonies actually provide a degree of protection to small dissenting groups. The reasoning is that if these groups were to create and use their own digital networks (e.g., by running open-­source software on their own Internet servers), they could be easily targeted and shut down by the authorities. In contrast (the argument goes), targeting an activist group that uses corporate digital networks is a very visible act that would presumably attract a lot of scrutiny and would require the corporation to do a lot of explaining to the public. Zuckerman calls this the “cute-­cat theory of digital activism,” because according to him “[a]uthoritarian regimes can’t block political Facebook groups without blocking all the ‘American Idol’ fans and cat lovers as well.”[53] Unfortunately, this defense of social networking services is faulty because authorities do not need to shut down the whole network but can target—­more easily than ever before—­only specific groups and members, as described in the examples discussed earlier.


Digital Networks as Consensus Democracies

Another way the digital network handles dissent is through various mechanisms for processing difference of opinion. The algorithms of the digital network can give form to a consensus democracy that manages dissent, instead of engaging it as a complex form of disagreement. The network as a model for organizing sociality engenders a kind of homogenizing consensus that, while embracing and thriving on diversity and innovation, obstructs a true measure of otherness, of alternatives. It processes difference algorithmically instead of allowing for the airing of grievances that the agonism of difference produces.

To illustrate this, we can look at normative models for handling conflict in some collaborative spaces of the digital network, such as the pages of Wikipedia. These discursive spaces are often portrayed as ones that embody and promote diversity of opinion and consensus. Wikipedia pages are “social” texts representing a variety of opinions, all the while achieving consensus through mechanisms such as open editing and collective monitoring and correction. According to its how-­to pages, Wikipedia enjoins contributors to adopt a neutral point of view (NPOV), “representing fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources.”[54] This is intended to promote an environment where a bias expressed by one user motivates another user to challenge it or try to reframe it by substantiating it with facts. The outcome of these kinds of mechanisms is a text where all difference of opinion can be managed through equal representation. But as Rancière suggests, sometimes it is the opportunity for differences and grievances to be openly expressed and not managed through consensus that creates a democratic environment, one where an authentic (if not equal) encounter with the otherness of the opponent can take place: “Democracy is neither compromise between interests nor the formation of a common will. Its kind of dialogue is that of a divided community.”[55] Democracy is the many represented as the many in all their inequality, not the many represented as one consensual whole.[56]

What is detrimental to democracy, therefore, is not the absence of difference but the subordination of difference to consensus. Rancière identifies consensus as a state where the rejection of diversity and authentic otherness is more likely to occur because grievances are repressed instead of aired out in the open: “Grievance is the true measure of otherness, the thing that unites interlocutors while simultaneously keeping them at a distance from each other . . . When the apparatus of grievance disappears, what takes over in its stead is simply the platitude of consensus . . . the pure and simple rejection of the other.”[57]

Thus for Rancière a rejection of the other is not the result of a lack of consensus, but of its very presence. Consensus makes the meaningful expression of grievances impossible. Without the opportunity to claim that a wrong has been committed, there is no opportunity to negotiate an attempt to correct it. Consensus, then, is the loss of meaningful otherness in the sense that it leads to a total rejection of the other in the political arena, for “otherness can only be political, that is, founded on a wrong at once irreconcilable and addressable.”[58] Digital networks have a bias toward creating consensus and eliminating grievances through the management of dissent because this creates information and environments that are more efficient and easier to use. But in doing so, networks also have a bias toward a rejection of authentic otherness, epitomized in the incapacity of nodes to recognize anything but a node. Networks can manage difference only as long as that difference is subordinated to the template of the node, but this leads to a total rejection of the only site—­the outside of networks—­from which authentic grievances against nodocentrism can be expressed. And thus, in order to secure the network, the outside must be declared a threat.

Networked Security

If participation within the digital network creates inequality, this inequality does not give rise to much protest or violence. Rather, inequality is produced and accepted peacefully and consensually by network participants. One reason, as we have seen, is that participation—­even when accompanied by inequality—­is experienced as pleasurable. The other reason is that inequality in the digital network is rationalized and justified through the fear that the real threat to the node comes from outside the network, not from within. Insecurity lurks beyond the borders or limits of nodes. The threat of this insecurity is so great that it makes participation in and of itself enough of a privilege, and enough of a reward to put up with inequality.

What is it about the outside that motivates such fear and makes inequality so readily acceptable? For one thing, the outside represents the unknowable, that which cannot be rendered in terms of network logic and that which has not been (or cannot be) assimilated by the network. To paraphrase Donald Rumsfeld, the outside represents not just the known unknown but the unknown unknown—­the things the network does not even know it does not know. Another reason for this fear is that the network is threatened by difference. It thrives on diversity and inclusion as long as they can be managed internally, but difference outside the established paradigm leads to a loss of control. The difference embodied by the outside is not simply an affirmation of diversity but an affirmation of grievances, which point to authentic otherness. Finally, the network fears contamination—­in particular, contamination by paralogical modes of thinking different from nodocentrism. Minor contamination by the outside is allowed because it lets the system build some defenses against it. Contamination is also allowed because the unnetworked contributes resources that benefit the network, even if this is not openly acknowledged. But apart from these instances, a system of security is put in place because of the threat that, if unchecked, the foreign agent that is the outside can infiltrate, run amok, and subvert the system. The outside represents an idea that is dangerous because it can propagate, contaminate, and challenge the status quo.

But perhaps what nodes fear the most, and what keeps insecurity in such sharp relief for them, is the precariousness of their status within the network. Here, interestingly, we find that the lower the threshold for joining the network, the more pronounced the fear of what remains outside. If total inclusion allows for total exclusion,[59] and "the goal of [network] protocol is totality, to accept everything,"[60] what could possibly be the nature of that which chooses to remain outside? The outside must thus be eyed with suspicion, even (or especially) if our identities were formed there. The fact that the outside opts out of totality does not reflect well on the decision to join the inside. If the barriers to entry are relatively low, the reasons why the outside refuses to become a node are nothing short of infuriating (e.g., I personally have been accused, in all seriousness, of being irresponsible for not joining Facebook). Freud's narcissism of minor differences could be at play here: ontologically the node and the nonnode are perhaps not so different, but each considers itself unique, and since structurally they are worlds apart, their rejection of each other becomes a fundamental divide, not least of all because they call into question each other's existence. Otherness is reduced to a few superficial features. But ascribing such fundamentalist views onto the other similarly pushes one's identity to an extreme. We end up reducing our identities to a few superficial and nodocentric features as well.

Such extremism or reductionism impels networks to attempt to secure themselves against radical otherness by strengthening their borders—whatever or wherever they might be. Except that in a network, borders no longer exist only at the edges. Rather, they have been distributed and disseminated. The border is everywhere. The barbarian is not at the gate, but standing next to us. Thus a fear of the outside is transformed into a fear of the inside: generalized insecurity. The most dangerous threats to network security are always internal, not external. They come from citizens, not foreigners. We must recall that the unnetworked is not just outside the network but within it. The terror of this "outside" is the fear that immigrant multitudes will undermine the network from inside. Thus as Sützl writes, "[S]ecurity can only be secured by insecurity, that is, its self-affirmation is identical with its self-negation."[61] This means that for security to be validated as a goal, insecurity needs to remain a real and constant threat, which means security is an unattainable objective that necessitates the never-ending production of insecurity. As far as the network is concerned, since there is no longer an outside (because the outside is everywhere), insecurity is an ever-present or ubiquitous threat. The way to "secure" the network is therefore to create a perpetual state of surveillance. To the extent that digital networks have become templates of sociality, they have also modeled the management of security and insecurity. Innovation in methods to exploit network vulnerabilities goes hand in hand with innovation in methods for protecting the network, which is why security experts and hackers do more to secure than to jeopardize each other's line of work (meanwhile, the outside of networks escapes firewalls and refuses authentication, tracking, or encryption because it is masked by the node; it eludes the network by creating something the host cannot rid itself of because it might not even be aware of its presence).

One kind of threat that the nonnodal poses to the network involves things like identity theft, service disruption, or denial of service attacks. But these represent instances of networks fighting networks. Another kind of threat is instead epitomized in the confrontation between the surveillance camera and the veiled face of a Muslim woman, which makes identification impossible and is justified on the grounds of human rights, like freedom of religion (although, strictly speaking, veiling is not a practice ordained by the Qur’an[62]). The confrontation between the high-­tech surveillance camera and the low-­tech veil exposes the tensions in Western discourses between individual freedom and the need to detect “threats,” and between voluntary and compulsive participation (in monitored spaces, in the practice of veiling, etc.). In places like France and Barcelona,[63] this tension has been resolved by attempts to ban the veil in public spaces. The message is perfectly clear: in this age of perennial insecurity, the need to monitor presumed threats trumps individual liberties such as religious freedom.

The question then is whether the outside of the network constitutes an authentic threat to the sovereignty of the network, or whether it exists in a symbiotic or parasitic relationship with it. Is the outside merely the network’s standing reserve of otherness, ready to be assimilated at a moment’s notice, or does it represent an alternative model of identity that could undermine its essence? Perhaps by looking at the use of the network in modern warfare we can discern some answers to these questions.



As discussed earlier, the character of warfare has generally shifted from centralized blocks fighting more or less similar opponents, to blocks fighting decentralized networks, to—­more recently—­networks fighting networks. This is a model of asymmetrical warfare because it allows smaller, weaker groups (such as terrorist or insurgent groups) to fight stronger opponents. Needless to say, this has not made war any more palatable or “fair.” On the contrary, netwar has become increasingly inhumane. When asymmetrical opponents confront each other not only on the battleground but also everywhere the network is, the result is disastrous for civilians. A decentralized form of warfare between unequal opponents is one of the factors that could explain why the casualty rate for civilians has gone up from approximately 10 percent in World War I to about 90 percent in the U.S.–­Iraq wars.[64] But the hope is that since network technologies have facilitated the practice of war, unthinking network logic might also represent a strategy to evade or resist netwar. Then again, it might just represent a new stage in decentralized warfare. As a lieutenant general in the U.S. Army observed, “Many of our enemies have learned that the way to fight us is not to use technology.”[65]

Participatory War 2.0

A distributed or networked war means that individual computer terminals can be recruited into the war effort. While digital networks are providing many ways for organizing resistance to war, they are also providing plenty of ways—­from passive to active—­to join the war. Social networking services can be used to conduct sophisticated propaganda campaigns, as in the case of the Facebook app that asked users to donate their status bar to alert others about how many Qassam rockets Hamas was firing from Gaza into Israel during the 2009 conflict[66] (in response, a pro-­Palestinian group created a similar Facebook app). Likewise, viral video games can be distributed to help one side in a conflict promote their viewpoint (to give but two examples: the game Muslim Massacre involves an American fighter killing Osama bin Laden, the prophet Muhammad, and Allah, while Raid Gaza! shows the disproportionate effect of the war against the Palestinians).

But if propaganda and video games are not enough, more active forms of involvement are also available. Thanks to software that is easy to download and install, any civilian with a computer and access to the Internet can participate in attacks on the web infrastructure of an enemy country. In an article subtitled "How I Became a Soldier in the Georgia-Russia Cyberwar,"[67] Morozov describes how in less than a day he was able to follow simple instructions and use freely available software on his computer to participate in "distributed denial of service" (DDOS) attacks and other acts of vandalism in the 2008 South Ossetia War. A DDOS attack involves overwhelming a web server (hosting, for instance, a government's website) with individual requests or "hits" in order to make it crash and stop working. Just like the volunteer computing projects that use donated computer power to help scan outer space for signs of intelligent life (SETI@home), solve complex mathematical problems (ABC@home), or render sophisticated 3D computer animations (RenderFarm@home), new distributed computing software is allowing people to lend their computers to an effort to bring down enemy networks. There are instances of this kind of software to fit all positions across the political spectrum: a group of Muslim hackers designed a DDOS program called al-Durra (named after Mohammed al-Durra, a Palestinian child shot and killed by Israeli soldiers in 2000), while the Israeli group Help Israel Win developed a voluntary botnet called Patriot.
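Part of what makes such voluntary botnets dangerous is how unremarkable the underlying arithmetic is. The sketch below uses entirely hypothetical figures for request rates and server capacity, chosen only to illustrate how modest per-volunteer traffic aggregates into an overwhelming load; it is not drawn from any of the incidents described above.

```python
# Back-of-the-envelope sketch of how distributed request traffic aggregates.
# All numbers are hypothetical and serve only to illustrate the arithmetic.

def overload_factor(volunteers: int, requests_per_volunteer: float,
                    server_capacity: float) -> float:
    """Ratio of incoming request rate to what the server can process.

    volunteers: number of machines participating
    requests_per_volunteer: requests per second each machine sends
    server_capacity: requests per second the server can handle
    """
    incoming = volunteers * requests_per_volunteer
    return incoming / server_capacity

# A server handling 2,000 requests/second faces 10,000 volunteers,
# each sending a trivial 5 requests/second:
factor = overload_factor(10_000, 5.0, 2_000.0)
print(f"incoming load is {factor:.0f}x server capacity")  # prints "incoming load is 25x server capacity"
```

Each individual volunteer's contribution is negligible, which is precisely why recruitment from "the comfort of their homes" works: no single participant's traffic looks like an attack.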

John Robb suggests[68] that this method of cyberwarfare has two main advantages: (a) there is an immense pool of talent willing to participate from the comfort of their homes; and (b) the military, while benefiting from the efforts, can officially distance itself from the actions of civilian militants. According to Robb, while the United States is lagging behind in adopting such trends, Russia and China are embracing them fully and have developed strong relationships with organized crime that allow them to deploy such attacks while at the same time disavowing their participation.[69] To bring the severity of this form of warfare into context, it should be pointed out that causing a web server to collapse is not as innocuous as the inconvenience of getting a “Server busy. Try again later” error. As blogger Jonah Boswitch points out, the disruption of information infrastructures can result in cascading failures affecting systems that support hospitals, air traffic, financial institutions, and so on.

Telesthetic War and Networks

The Internet had its humble beginnings as a military experiment, and information technologies, networks, and war share a long common history. One of the primary goals of warfare has been to maximize harm to the enemy while minimizing risk to the self, an effort that requires the capacity to inflict damage at an ever-increasing distance from the enemy. Today, we have perfected the technologies to do this, and in the process redefined what it means to go to war (does dropping a bomb from half the world away constitute going to war?). The ability to conduct telesthetic warfare (i.e., inflicting damage at a distance) requires a speed in the coordination of resources that digital networks and network logic have been developed to provide.

On the heels of the dot-com boom, and using corporations like Walmart and Cisco as models, Vice Admiral Arthur Cebrowski[70] provided the intellectual argument for the idea of network-centric warfare. According to him, information technology networks would revolutionize warfare by bringing digital networks to the military: GPS devices would be ubiquitous, and every soldier would be linked to the network while commands and reports were wirelessly transmitted across the globe. The "fog of war" (an expression that describes the uncertainty that surrounds the battlefield) would finally be lifted. Although initially met with skepticism, this doctrine was vigorously (some say unquestioningly) embraced after 9/11 by the Bush administration. The relative speed and success with which initial missions in Afghanistan and Iraq were accomplished seemed to corroborate this model: thanks in part to superior networks of communication and information, fewer troops and resources were needed to accomplish preliminary goals (overthrowing Saddam Hussein, for instance). But as initial occupation devolved into participation in lengthy civil wars, the efficacy of the network-centric model began to be contested. As was demonstrated time and again, netwar was not immune to malfunctions: computer systems tended to crash in the heat and the dust, and sometimes there were not even enough battery packs around to power the network. Furthermore, while one can account for all the nodes in one's own network, accurately accounting for the forces of the enemy is harder to accomplish. Because of this, a return to telesthetic warfare seems to have displaced the idea of an on-the-ground, network-centric warfare.

As the cases of Afghanistan and Pakistan currently demonstrate, the latest shift in the United States’ approach to networked warfare revolves around the application of robotic technologies, in particular unmanned aerial vehicles or drones. These aircraft cost a fraction of what jet fighters cost and can be operated by shifts of pilots thousands of miles away who do not get tired or sleepy. These weapons also depend on very sophisticated digital networks for their operation and guidance. Although a detailed account of the technology is beyond the scope of this text, I do want to at least briefly establish a connection between this technology, network logic, and the ethical repercussions of targeting at a distance. During a drone mission in Afghanistan, a general—­in what seems to be a routine episode—­ordered a group of civilian houses to be destroyed after video images from a Predator drone showed armed insurgents coming in and out. In the reasoning of the general, not only “was the compound a legitimate target, but any civilians in the houses had to know that it was being used for war, what with all the armed men moving about.”[71] The decision to target children, women, and the elderly because they “had to know” that members of their own family are deemed terrorists is a matter that is apparently more expeditiously decided thousands of miles away by looking through a monitor. From a nodocentric perspective, only nodes deserve to be accounted for.

Networks, Social Computing, and Counterinsurgency

Another trend in netwar is to approach the problem of insurgency as a behavior that can be modeled and predicted with social computing algorithms. The intelligence community is asking, “[H]ow can insurgency information best be researched, defined, modeled and presented for more informed decision making?”[72] Social computing, as reviewed earlier, seems perfectly suited for this task since it is concerned with analyzing a social context using algorithms in order to identify patterns and help predict outcomes (in other words, to generate a model of the behavior). To date, various approaches are in development, and one of them is STOP, an acronym for SOMA Terror Organization Portal (SOMA stands for Stochastic Opponent Modeling Agents[73]). This online portal allows analysts to access data about terrorist groups worldwide. By hypothesizing a certain set of conditions, analysts can use the system to predict the behavior of insurgent groups. STOP is composed of the SOMA Extraction Engine (SEE), the SOMA Adversarial Forecast Engine (SAFE), and the SOMA Analyst NEtwork (SANE). A brief description of each component illustrates how network science and social computing can be used for counterinsurgency efforts.

SEE, the extraction engine, uses real-­time sources to derive SOMA rules about a particular group. These rules are basically calculations that a computer can perform regarding the actions of a group. SOMA rules take the form of

<Action>: [L,U] if <Env-Condition>,

where <Action> is an act (such as kidnapping, arms trafficking, armed attacks, etc.) that the group can undertake, [L,U] is the probability range that this action will take place, and <Env-Condition> is a conjunction of environmental attributes under which the action is likely to take place. In essence, the rule states that “when the <Env-Condition> is true, there is a probability between L and U that the group took the action stated in the rule.”[74] For instance, the following rule was derived for the group Hezbollah:

KIDNAP: [0.51,0.55] if solicits-­external-­support & does not advocate democracy.[75]

The rule states that when Hezbollah both solicited external support and did not promote democratic institutions, the probability that they would engage in kidnapping as a strategy was between 51 percent and 55 percent. Similar rules describing behavioral patterns have been extracted from data entered into SEE for twenty-­three insurgent groups, including “8 Kurdish groups spanning Iran, Turkey and Iraq, (including groups like the PKK and KDPI), 8 Lebanese groups (including Hezbollah), several groups in Afghanistan, as well as several other Middle Eastern groups.”[76] The data for these rules were derived from the larger Minorities at Risk (MAR) dataset developed by the University of Maryland, which tracks the political behavior of 284 ethnic groups worldwide.[77]
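The logic of a SOMA rule can be made concrete with a brief sketch. The following Python fragment is purely illustrative—­the class, field names, and evaluation logic are my own simplification, not the actual SEE implementation—­but it captures the essential structure: a rule fires only when its entire conjunction of environmental conditions holds, and what it yields is not a prediction but a probability range.

```python
# Illustrative sketch of a SOMA-style rule (not the actual SEE code).
from dataclasses import dataclass

@dataclass(frozen=True)
class SomaRule:
    action: str            # e.g., "KIDNAP"
    low: float             # lower bound L of the probability range
    high: float            # upper bound U of the probability range
    conditions: frozenset  # conjunction of environmental attributes

    def applies(self, environment: set) -> bool:
        # The rule fires only if every condition holds in the environment.
        return self.conditions <= environment

# The Hezbollah rule quoted in the text:
rule = SomaRule(
    action="KIDNAP",
    low=0.51,
    high=0.55,
    conditions=frozenset({"solicits-external-support",
                          "does-not-advocate-democracy"}),
)

environment = {"solicits-external-support", "does-not-advocate-democracy"}
if rule.applies(environment):
    print(f"{rule.action}: probability between {rule.low} and {rule.high}")
```

Note that if either condition is absent from the hypothesized environment, the rule simply does not apply; it makes no claim about the action’s likelihood outside its stated conjunction.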

Using the data entered into the extraction engine, SAFE (the forecast engine) acts as an online environment where, through the use of drop-down menus, analysts can select a particular group, choose one of the actions available for that group, and select a set of conditions that apply to the hypothetical scenario. For instance, an analyst could ask, “What is the probability at a given time that ‘PKK’ (group) will engage in ‘theft of commercial property’ (action) if it ‘does not advocate wealth distribution’ and it ‘solicits external support’ (conditions)?” The system then generates the respective probabilities. The last component of STOP, SANE, acts as an online social network where analysts can share and discuss various scenarios generated with the system, along with the latest news and corresponding background information about the insurgent group from Wikipedia.
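A SAFE-style query amounts to looking up, in the table of extracted rules, those whose group, action, and conditions match a hypothesized scenario. The sketch below is again only illustrative—­the rule table, the probability range for the PKK example, and the function name are hypothetical stand-ins, not data from the actual system (only the Hezbollah entry comes from the text).

```python
# Illustrative sketch of a SAFE-style forecast lookup (hypothetical data).
# Each rule: (group, action, conditions, (low, high) probability range).
RULES = [
    ("PKK", "THEFT-OF-COMMERCIAL-PROPERTY",
     frozenset({"does-not-advocate-wealth-distribution",
                "solicits-external-support"}),
     (0.30, 0.40)),  # hypothetical range, for illustration only
    ("Hezbollah", "KIDNAP",
     frozenset({"solicits-external-support",
                "does-not-advocate-democracy"}),
     (0.51, 0.55)),  # range quoted in the text
]

def forecast(group: str, action: str, scenario: set) -> list:
    """Return probability ranges of all rules matching the scenario."""
    return [bounds
            for g, a, conditions, bounds in RULES
            if g == group and a == action and conditions <= scenario]

scenario = {"does-not-advocate-wealth-distribution",
            "solicits-external-support"}
print(forecast("PKK", "THEFT-OF-COMMERCIAL-PROPERTY", scenario))
```

The point of the sketch is how little interpretive work the analyst performs: the drop-down menus constrain the question to whatever conjunctions of attributes the extraction engine has already encoded.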

The main concern, of course, is how this information will be used. If a probability crosses a certain threshold, will certain preemptive actions with “unavoidable” civilian casualties be justified? Incidentally, these concerns are not just relevant to conflicts in the Middle East or Southeast Asia. The U.S. Army has pointed out that SOMA systems can be used domestically to model the behavior of gangs instead of terrorist groups, if one simply substitutes insurgency actions with gang activities.[78] In the aftermath of 9/11, which saw the definition of “terrorist” expand to include certain kinds of environmentalists, academics, and other social activists, and at a time when war will increasingly move into urban areas, this does not paint a reassuring picture for voices of dissent even in democracies.

As all these examples show, the real asymmetry in the coming wars will not be between state armies and insurgents but between networks and civilians—­or more precisely, between the use of network models to conduct war by states as well as insurgents on the one hand and the civilians that get trapped in this war of network against network on the other. Under these circumstances, the ability to flee or unthink the network will be crucial, and it will become necessary to extend the efforts to disrupt the network to emerging models of collaboration and liberation.

  1. Boo, “The Best Job in Town,” 40.
  2. Ibid., 65.
  3. Ibid., 59.
  4. Eliot, “Why the Web Won’t Ruin the World,” para. 9.
  5. Ibid., para. 16.
  6. Silverstone, Why Study the Media?, 151.
  7. Heidegger, Poetry, Language, Thought, 164.
  8. Borgmann, “Information, Nearness, and Farness,” 98.
  9. Latour, We Have Never Been Modern.
  10. Yus, “The Linguistic-­Cognitive Essence of Virtual Community,” 87.
  11. This emphasis on the functionality of knowledge “is traceable in its lineage to the popular belief . . . that tacit knowledge can be converted into explicit knowledge through IT systems. By capturing knowledge, it can be more widely replicated and shared. . . . Henceforth, knowledge is transformed into a more tangible commodity.” Chan and Garrick, “The Moral Technologies of Knowledge Management,” 291.
  12. Galloway, “Intimations of Everyday Life,” 397.
  13. Sassi, “Cultural Differentiation or Social Segregation?”
  14. Massey, Power-­Geometries and the Politics of Space-­Time, cited in Rodgers, “Doreen Massey,” 287.
  15. Lyotard, The Inhuman, 64.
  16. Castells, The Rise of the Network Society.
  17. See Arquilla and Ronfeldt, Networks and Netwars; Galloway and Thacker, The Exploit.
  18. Hass, “Holding on Tight to the Frequencies.”
  19. See Carr, Inside Cyber Warfare; Andress and Winterfeld, Cyber Warfare.
  20. Hardt and Negri, Empire; Hardt and Negri, Multitude.
  21. One website related to the campaign is
  22. Sliva et al., “The SOMA Terror Organization Portal (STOP).”
  23. McCahill and Norris, On the Threshold to Urban Panopticon?
  24. Klein, “China’s All-­Seeing Eye.”
  25. Wilkinson, “Non-­Lethal Force.”
  26. Abelson, Ledeen, and Lewis, Blown to Bits.
  27. For a detailed discussion of some of these strategies, see Rheingold, Smart Mobs; Gillmor, We the Media; Burgess and Green, YouTube; Shirky, Here Comes Everybody.
  28. Lih, “What Does Cyber-­Revolt Look Like?,” para. 39.
  29. Zuckerman, “Does the Number Have a Lesson for Human Rights Activists?,” para. 12.
  30. Bloomberg Businessweek, “Valley Boys.”
  31. The author was Ryan Shaw. The blog post is no longer available, but I decided to keep the quote because it is quite powerful, in my opinion.
  32. Dreyfus, “Nihilism on the Information Highway.”
  33. Deleuze, Negotiations 1972–­1990, 129.
  34. Shapiro, “Revolution, Facebook-­Style,” para. 44.
  35. Shapiro, “Revolution, Facebook-­Style.”
  36. Ibid.
  37. Norman and York, “West Censoring East.”
  38. York, “This Week in Internet Censorship”; Mayton, “U.S. Company May Have Helped Egypt Spy on Citizens.”
  39. Abbott, “Torture Victims Say Cisco Systems Helped China Hound and Surveil.”
  40. MacKinnon, “Internet Freedom.”
  41. Glanz and Markoff, “U.S. Underwrites Internet Detour around Censors Abroad.”
  42. Fielding and Cobain, “Revealed.”
  43. York, “Syria’s Twitter Spambots.”
  44. Malik, “Facebook Accused of Removing Activists’ Pages.”
  45. Shenker, “Fury over Advert Claiming Egypt Revolution as Vodafone’s.”
  46. McCullagh and Broache, “FBI Taps Cell Phone Mic as Eavesdropping Tool.”
  47. Mack, “Patent Application Suggests Infrared Sensors for iPhone.”
  48. Wong, “Social Media Play Big Role in Riot Probe.”
  49. Tehrani, “Iranian Officials ‘Crowd-­Source’ Protester Identities.”
  50. Morozov, “Testimony to the U.S. Commission on Security and Cooperation in Europe”; Morozov, The Net Delusion.
  51. Khatri, “Facebook Usage Falls in GCC, Including in Qatar, Saudi Arabia.”
  52. Fuchs, Social Networking Sites and the Surveillance Society, 115.
  53. Shapiro, “Revolution, Facebook-­Style,” para. 29.
  54. Wikipedia, “Neutral Point of View.”
  55. Rancière, On the Shores of Politics, 103.
  56. Virno, A Grammar of the Multitude.
  57. Rancière, On the Shores of Politics, 104.
  58. Ibid.
  59. Kothari and Mehta, “Cancer.”
  60. Galloway, Protocol.
  61. Sützl, “Tragic Extremes,” para. 2.
  62. Barlas, “Believing Women” in Islam.
  63. Heneghan, “French Muslim Council Warns Government on Veil Ban”; Heneghan, “Barcelona to Ban Veil in Public Buildings.”
  64. Solomon, War Made Easy.
  65. Singer, Wired for War, 268.
  66. Serrie, “Propaganda War Rages Online.”
  67. Morozov, “An Army of Ones and Zeroes.”
  68. Robb, “Open Source Warfare.”
  69. Organizations like the Russian Business Network are experts in mounting Denial of Service attacks for extortion. See “RBN—­Extortion and Denial of Service (DDOS) Attacks.”
  70. Singer, Wired for War, 179, 184.
  71. Ibid., 348.
  72. Bronner and Richards, “Integrating Multi-­Agent Technology,” 28.
  73. Sliva et al., “The SOMA Terror Organization Portal (STOP).”
  74. Ibid., 39.
  75. Ibid.
  76. Ibid., 12.
  77. Mannes et al., “Stochastic Opponent Modeling Agents.”
  78. Bronner and Richards, “Integrating Multi-­Agent Technology,” 29.