Chapter 3: Computers as Socializing Tools

IN HIS BOOK Images of Organization, Gareth Morgan proposes a way of understanding organizations through the metaphors employed to describe them. One can imagine organizations, he argues, as machines that process inputs and outputs, organisms that interact with their environments, brains that learn from their experiences, cultures that enact the shared reality of their constituents, political systems that manage conflicting interests, or psychic prisons that impose restrictions on our actions and thoughts. Each one of these metaphors provides us with a different vision of what social organizations are, what our role within them is, and how they should be managed. If the book had been published a few years later, after the Internet and other digital technologies had become part of our lives to the extent that they have, Morgan would no doubt have had to consider what is now perhaps the dominant metaphor for describing social systems: the network.[1]

A network is a system of linked elements or nodes. It is a concept that can be used to describe and study all sorts of natural and social phenomena. In fact, the concept of the network has become such an abstract trope that it can be used to describe almost everything that consists of two or more associated entities. For the present purposes, however, we are concerned primarily with digital technosocial networks. Digital networks, to reiterate, are social systems linked by digital technologies. Borrowing from a standard definition of a social network that is mediated by some form of computer technology,[2] we can broadly define a digital network as an assemblage of human and technological actors (the nodes) linked together by social and physical ties (the links) that allow for the transfer of information among some or all of these actors. Digital networks are complex structures reflecting in some measure each of the metaphors described previously by Morgan: they are part machine (their backbone is digital information and communication technologies), part organism (they are powered by the actions of living beings), and part brain (the combination of people and machines produces a form of collective intelligence that is, supposedly, greater than the sum of its individual parts). Obviously, these networks also easily illustrate the metaphors of cultural and political systems. And depending on who you ask, they either exemplify the metaphor of the technocratic psychic prison or represent the only sustainable and scalable alternative to revitalizing what some describe as the failed institutions of our times.

But unlike Morgan’s images, networks should not be treated just like any metaphor. As Galloway[3] argues, using the network as a cultural metaphor to signify notions of interconnectedness is limiting and misleading, given that the network in our age is not just a metaphor but a material technology that is a site for concrete practices, actions, and movements. And this is key to understanding the impact of digital networks: as social networks are enabled by digital technologies, they become templates or architectures for organizing the social. If the network was a useful metaphor to describe society before, now it has become a (for-­profit) model or architecture for structuring it. Eventually, this architecture becomes an episteme—­a way of organizing our theories about how the world works. As pointed out earlier, the shift from metaphor to model to episteme signals a transition from using networks for describing society to using networks for managing society, facilitating or obstructing certain kinds of knowledge systems about the world.

Given that digital technologies are cybernetic technologies, to talk about networks as templates is to talk about networks as computer models. The technological part of digital networks is made up of computer code or algorithms. To be sure, these algorithms do not appear out of thin air. They operationalize behaviors according to their author’s understanding of how society works, frequently informed by a relatively new branch of science known as network science. In conjunction, computer science and network science help transform social signs and meanings into technological templates that organize reality. In this chapter, we look at how these sciences give shape to a network that structures sociality for its users, solidifying patterns, making some things possible and other things impossible, some things knowable or near, and other things unknowable or far.

Social Computing: Making People Usable

Phil Agre suggests that while information technology is said to be revolutionary, in reality it is quite frequently the opposite, as “the purpose of computing in actual organizational practices is often to conserve and even to rigidify existing institutional patterns.”[4] If this is the case, we should ask which social patterns computing has solidified at the expense of which other ones.[5] In other words, in celebrating the hypersociability that digital networks open up to us, we would do well to keep in mind, as Paul Dourish reminds us, that “[o]ur experience using computers reflects a trade-­off that was made fifty years ago or more.”[6] The trade-­off, to put it simply, is that the computer’s needs are valorized over our own. This trade-­off was not the result of secret agendas or ulterior motives, but one that emerged out of simple necessity: access to computers at the beginning of the cybernetic revolution was expensive; computer time was scarce and therefore more valuable than people time. This meant that interaction with computers had to be done in a way that made it easy for the computer, even if it made it difficult for us. We had to speak its language.

Since then, of course, things have changed to a certain degree. Computers are faster, cheaper, and smaller, which means they have moved out of the research lab first and into the home and the office; and now, they practically travel with us everywhere we go. Concurrently, there have also been significant improvements in trying to make it easier for us to interact with the computer in a more natural (or “human”) way—­to make the computer speak our language. Disciplines like human–­computer interaction (HCI) are addressing this problem by trying to move away from designing procedures (a series of algorithms that perform assigned tasks) and toward analyzing interactions, the fluid interplay between machines and humans.

Nevertheless, we can basically look at the five major innovations in computing—­high-­level programming languages, real-­time computing, time sharing, graphical user interfaces, and networking[7]—­and see them not just as a history of the computer becoming better suited to humanity but as a history of the changes imposed on average individuals to make them better suited to the computer. Perhaps this sounds too much like the technological determinist joke about humans being merely technology’s way of replicating itself. But under this alternate reading of the history of computing, we can see how these innovations (programming languages with more “natural” language commands and structures, graphical interfaces that lowered the barriers of entry so that masses of nonexperts could operate the machines, etc.) also make sense from the point of view of conserving those behavioral patterns that make the technologizing of the social—­the conforming of humans to computers—­much more effortless.

As forward-­looking and revolutionary as concepts like social computing and social media may seem, the paradigm that established that we must accommodate the computer, and not the other way around, continues to influence the way we structure the integration of computers and humans. Social computing, after all, is approached by its practitioners as a way to use computer science to model, replicate, and predict social behaviors: “Social computing is an area of computer science at the intersection of computational systems and social behavior. Social computing facilitates behavioral modeling, which provides mechanisms for the reproduction of social behaviors and subsequent experimentation in various activities and environments.”[8]

A critique of the premises behind social computing (the idea that computers can model complex social behaviors, describing a nodocentric world where we are able to model and predict the behavior of nodes) is not difficult to find, both outside and even inside the field. Such critiques are often reminiscent of Jaron Lanier’s remarks about artificial intelligence (AI) in which he suggests that AI does not make computers smarter but people more stupid: “[P]eople are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart.”[9] Likewise, social computing seems to invert the priorities, proposing that a reductive model of social behavior is more realistic than the real thing, and that humans do behave like computer programs. Giving voice to a sentiment that many Web 2.0 gurus would probably embrace, not realizing it is a critique, Trebor Scholz argues that the purpose of the sociable web (where many ideas from social computing have come to bear fruit) is basically to make people “easier to use.”[10]

By virtue of the limits of computational models, the digital network does not facilitate all kinds of social behaviors equally, it merely conserves or solidifies those behaviors that can be observed, measured, and quantified. The implications that follow from the application of social computing and network science to social behavior need to be explored more closely.

Network Science and Network Scientism

If networks are indeed material structures (and not mere metaphors), it is still difficult to perceive or grasp them. This is because they are complex and distributed assemblages of things and people, spanning across multiple scales of time and space, which we are often not able to perceive with our direct senses. Therefore, it seems logical that we rely on the abstract reasoning of science to detect and measure networks. But network science does not merely describe networks. As Peter Monge and Noshir Contractor[11] state, it also provides the instructions for their design.

Network science can be defined as the organized study of networks based on the application of the scientific method. The scientific study of networks is not new. It began, arguably, with the branch of mathematics known as graph theory founded by Leonhard Euler in 1736. Since then, the principles of network science have been used to discover and describe relationships among everything from proteins to terrorists. Of course, the principles of network science have not remained static since the eighteenth century. More sophisticated tools for data collection and processing have translated into more complex network models. According to Albert-László Barabási,[12] science has recently contributed two important concepts to our understanding of networks. The first one is that the distribution of links in most networks found in the natural and social domains is not random but determined by power laws, meaning that a few nodes (the ones acting as hubs, or central connectors) have many links, and conversely that many or most nodes have only a few links. This gives form to what is known as a “scale-free network,” a network that can grow or expand easily. The second concept is that as these scale-free networks grow, they exhibit a form of “preferential attachment” whereby new nodes tend to link to older or bigger nodes, meaning that rich nodes get even richer in terms of the number of links connecting them to other nodes.
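
The rich-get-richer dynamic of preferential attachment is easy to simulate: each new node links to an existing node with probability proportional to that node's current degree. The following toy sketch (the parameters and threshold are invented, not taken from the text) reproduces the signature of a scale-free network: a few heavily linked hubs and many sparsely linked nodes.

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network one node at a time; each newcomer links to an
    existing node with probability proportional to that node's degree."""
    rng = random.Random(seed)
    # Listing every link's two endpoints means that drawing uniformly
    # from `endpoints` samples nodes in proportion to their degree.
    endpoints = [0, 1]          # start with two linked nodes
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = rng.choice(endpoints)   # rich-get-richer step
        endpoints += [new, target]
        degree[new] = 1
        degree[target] += 1
    return degree

degree = preferential_attachment(2000)
hubs = sum(1 for d in degree.values() if d >= 10)
leaves = sum(1 for d in degree.values() if d == 1)
print(f"{hubs} hubs with 10+ links; {leaves} nodes with a single link")
```

Running this with any seed shows far more single-link nodes than hubs, the skewed distribution Barabási describes.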

There are two other important concepts that science has contributed to our understanding of networks: “node fitness,” an indicator of a node’s ability to attract more links than others even if it has not been around for as long as older nodes in the network; and “network robustness,” an index of how many nodes within the network would need to fail before the whole network would stop functioning altogether.[13]

There are also a variety of metrics that have been developed to quantify the behavior of networks. These metrics can describe the properties of the network as a whole, the behavior of individual nodes themselves, or the properties of the ties or links that connect the nodes.[14] For instance, properties of the network as a whole may include

  • size and density, which indicate respectively the number of nodes in the network, and the ratio of actual links to possible links that could exist in the network;
  • centralization, which measures the difference between the centrality score (a combination of closeness and betweenness—­see further in the text) of the hubs and the rest of the nodes in the network;
  • transitivity, which describes the degree to which triads of actors in the network are connected in a closed loop (whenever A is connected to B and B is connected to C, A and C are also connected); and
  • inclusiveness, which attempts to account for the excluded by comparing the number of actors in the network to those not included in it.
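
Some of these network-level metrics can be computed directly from a list of links. The following sketch uses a five-node toy network (the data are invented for illustration) to calculate size, density, and transitivity.

```python
from itertools import combinations

# A toy five-node network, stored as a set of undirected links.
links = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
nodes = {n for link in links for n in link}

def connected(a, b):
    return (a, b) in links or (b, a) in links

size = len(nodes)
density = len(links) / (size * (size - 1) / 2)  # actual / possible links

# Transitivity: the share of connected triples that close into a triangle
# (each triangle closes three triples, hence the factor of 3).
closed = open_ = 0
for a, b, c in combinations(sorted(nodes), 3):
    ties = connected(a, b) + connected(a, c) + connected(b, c)
    if ties == 3:
        closed += 1
    elif ties == 2:
        open_ += 1
transitivity = 3 * closed / (3 * closed + open_)
print(size, density, transitivity)
```

Here half of all possible links exist (density 0.5), and the one triangle (A, B, C) closes half of the network's connected triples.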

As far as metrics that describe the properties of nodes themselves, these may include

  • in- and out-degree, which indicate the number of incoming or outgoing links to and from a node;
  • diversity, or the number of links to nodes that have been classified as belonging to separate categories;
  • closeness, or the average distance (degrees of separation) between a particular node and all other nodes;
  • betweenness, which measures the degree to which a node lies on the paths connecting other nodes; and
  • prestige, or the degree to which a node receives links instead of being the source of outgoing links.
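
Two of these node metrics, in- and out-degree and closeness, can be computed with a short breadth-first search over a toy directed network (the nodes and links are invented for illustration).

```python
from collections import deque

# A toy directed network: who links to whom (invented data).
out_links = {
    "A": {"B", "C"},
    "B": {"C"},
    "C": {"D"},
    "D": set(),
}

def degrees(node):
    """In-degree counts incoming links; out-degree counts outgoing ones."""
    in_deg = sum(node in targets for targets in out_links.values())
    out_deg = len(out_links[node])
    return in_deg, out_deg

def closeness(node):
    """Average distance (degrees of separation) from `node` to every
    node it can reach, found by breadth-first search; lower is closer."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        for nxt in out_links[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    reached = [d for n, d in dist.items() if n != node]
    return sum(reached) / len(reached) if reached else float("inf")

print(degrees("C"), closeness("A"))  # C has 2 in, 1 out; A averages 4/3
```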

Properties that describe the links that connect nodes include

  • direction, which indicates whether the link flows to and/or from the node;
  • indirect links, which describe a connection that involves more than one degree of separation;
  • frequency, which indicates how many times a link occurs;
  • stability, or the endurance of a link over time;
  • multiplexity, which describes more than one kind of link between two nodes;
  • symmetry or reciprocity, which indicates whether a link is bidirectional; and
  • strength, which describes the intensity of the link.
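
Symmetry, for instance, becomes a one-line check once links are stored as directed pairs (a sketch with invented data):

```python
# Directed links as (source, target) pairs (invented data).
links = {("A", "B"), ("B", "A"), ("A", "C"), ("C", "D")}

def symmetric(a, b):
    """A link is symmetric (reciprocated) if it runs in both directions."""
    return (a, b) in links and (b, a) in links

mutual = {tuple(sorted(link)) for link in links if symmetric(*link)}
print(mutual)  # only the A-B tie is reciprocated
```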

These metrics can be applied equally to the study of natural networks as well as social networks; one can speak, for instance, of the betweenness of a protein or a person to describe how it acts as an intermediary. Needless to say, this has changed the way we study and define social phenomena, insofar as people become nodes and social ties become links. Network science operates under the assumption that every social formation can be mapped and studied using the metrics described previously.

The branch of network science that uses networks as frameworks for understanding the structure of social systems is known as social network analysis. Throughout its seventy-­year history, social network analysis has been used to study systems as small as families and as large as the world. Its goal has been to explain how the nodes in these networks make use of the links connecting them to exchange resources, ideas, and messages. In essence, social network analysis attempts to shed light on the mystery of how community is formed and maintained—­a task made increasingly more complex by modern communication technologies, since they make it possible to establish communities no longer confined to one location in space.

According to Barry Wellman, technology has allowed communities to evolve from homogenous, “densely knit, geographically bounded groups” to “far-flung, loosely-bounded, sparsely-knit and fragmentary” groups.[15] Thanks to technology, individuals can move in a single moment between multiple, amorphous communities that occupy both local and global dimensions, and engage in interactions of varying intensity (from full engagement to a passing ambient awareness) with diverse peers. The study of these exchanges within networks is framed by Wellman in what he calls the two aspects of the “community question”: “How does the structure of large-scale social systems affect the composition, structure, and contents of interpersonal ties within them?” and “How does the nature of community networks affect the nature of large-scale social systems in which they are embedded?”[16] In other words, how does the network influence the node, and how do the nodes influence the network? This resonates with Van Dijk’s[17] observation about the double affordances of networks, which can facilitate two processes at once: the algorithms of the network can influence the social behavior of the users at a macro level, while at the same time the aggregate of interpersonal exchanges becomes the social content of the networks. Social network analysis can help us map this dynamic as we try to answer the community question.

However, we must remain critical of the way these scientific concepts are applied when it comes to the modeling of networks as templates for certain kinds of sociality.

There are two main concerns in the application of network science to the study of digital networks: the assumption of scarcity as the determining factor in interaction, and the constraining of research questions by the available metrics for the study of networks. I will outline each one briefly.

Social network analysis, whose history obviously predates digital networks, has always assumed a scarcity of resources in society. Thus social network analysis focuses on the “structural integration of a social system and the interpersonal means by which members of this social system have access to scarce resources.”[18] One of the concepts in social network analysis that attempts to explain the importance of ties or links in overcoming scarcity is social capital.[19] Nodes with more links (more social capital) are believed to have a greater chance of overcoming scarcity. Not surprisingly, a lot of effort has been expended in figuring out how to design social networks where nodes can conduct the transaction of social capital favorably. For instance, Monge and Contractor[20] have identified eight simple rules of communication that govern the exchange of social capital in all social networks. Each rule is based on scientific theories that are beyond the present goal to describe, so they are simply listed here for reference:

  • nodes try to keep the cost of communication at a minimum (theory of self-interest);
  • nodes try to maximize the collective value of their communication (theory of collective action);
  • nodes try to maintain balanced interactions among those they communicate with (balance theory);
  • nodes are more likely to communicate with someone who has what they need or needs what they have (resource dependency theory);
  • nodes are more likely to communicate in order to reciprocate for past exchanges (exchange theory);
  • nodes are more likely to communicate with others who are similar to them than with others who are different (theories of homophily);
  • nodes are more likely to communicate with others who are physically near or electronically accessible (theories of proximity); and
  • nodes are more likely to communicate with others in order to improve their individual fitness or the fitness of the network (coevolutionary theories).
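
To see how descriptive rules of this kind can be operationalized, consider a toy scoring function that combines a few of them, homophily, proximity, and resource dependency, into a probability of communication. The weights and attributes are invented purely for illustration; Monge and Contractor propose no such formula.

```python
def link_probability(a, b):
    """Score how likely node `a` is to communicate with node `b` under
    a few of the rules above; weights are invented for illustration."""
    score = 0.1                  # baseline chance of communication
    if a["interests"] & b["interests"]:
        score += 0.4             # homophily: shared interests
    if a["city"] == b["city"]:
        score += 0.3             # proximity: physically near
    if b["resources"] > a["resources"]:
        score += 0.2             # resource dependency: b has what a needs
    return min(score, 1.0)

alice = {"interests": {"music"}, "city": "Lima", "resources": 1}
bob = {"interests": {"music"}, "city": "Lima", "resources": 3}
carol = {"interests": {"chess"}, "city": "Oslo", "resources": 0}
print(link_probability(alice, bob), link_probability(alice, carol))
```

Trivial as it is, code like this illustrates the shift discussed below: a statistical observation about behavior becomes a rule that actively steers which links get formed.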

Directly or indirectly, these rules to overcome scarcity have been incorporated into the design of digital networks. Through the algorithms of social computing, an image is presented of a homo economicus determined to manage this scarcity. But these rules also restrict our understanding of actors in a network: in its quest to overcome the perceived scarcity through the management and control of flows, social network analysis reduces sociality to a set of prescribed network relations. In other words, the design of digital networks has taken these scientifically derived descriptive observations of behavior in networks and, by programming them into the code that regulates interaction among nodes, transformed them into normative rules of behavior. That this translation from descriptions to rules has taken place is not surprising, since—­to a certain extent—­the application of scientific knowledge in the creation of systems is what technology, as a practice, is all about. What should be open to critique, however, is the deployment of these rules in such a way that they become tools of domination, presenting obstacles to the creation of alternative forms of social organization.

While this kind of critique is the object of the second part of the book, what should be made clear at this point is that a critique of networks needs to transcend the boundaries of a network epistemology. In short, we must ensure that the questions we ask about networks are not subordinated to the solutions network science can provide. Part of the problem is that the practice of science as an exercise through which the measurable properties of nature or society are revealed has many advantages but at least one frequent disadvantage: focus tends to be placed on formulating research questions that can be answered with the models, laws, and theories already at our disposal instead of developing new questions whose solutions might not be as readily available.[21]

In describing nodes, links, and networks in terms of specific metrics (betweenness, transitivity, etc.), or analyzing behavior in terms of particular communication theories (self-­interest, balance, etc.), we might neglect to consider other important dimensions to the study of the digital network that might not be as easy to quantify or measure (such as questions about the degree to which participation increases inequality, or questions about the degree to which the outside of nodes represents an ethical resistance to network logic). This contributes to what Manuel DeLanda describes as the illusion promoted by scientism: “[T]hat the actual [measurable] world is all that must be explained.”[22] Nodocentrism is thus a form of scientism, a belief that only nodes are real and only nodes deserve to be explained, and that we need only quantifiable measurements to describe and predict their behavior. Questions about what alternative ways of looking at digital networks might look like are silenced before they can be asked, because we are only interested in solutions that can be measured with the metrics and rules network science has identified. The process through which alternatives are generated is irrevocably arrested.

The reasons why there is such an investment in network science as the study of unchanging principles used to build templates for organizing society are not difficult to discern. A report on network science commissioned by the U.S. military states, “Network science consists of the study of network representations of physical, biological, and social phenomena leading to predictive [my emphasis] models of these phenomena.”[23] This predictability is necessary because networks are seen as having weaknesses that can be easily exploited by disruptive forces. The same report goes on to explain that these disruptions can only be addressed through superior network design: “Large infrastructure networks evolve over time; society becomes more dependent on their proper functioning; disruptive elements learn to exploit them; and society is faced with challenges, never envisaged initially, to the control and robustness of these networks. Society responds by adapting the network to the disruptive elements, but the adaptations generally are not totally satisfactory. This produces a demand for better knowledge of the design and operation of both the infrastructure networks themselves and the social networks that exploit them.”[24]

In short, we are in a race to build better and more resistant networks before they become overrun by disruptive elements such as terrorists or hacker collectives. As might be expected, the race to design and control these improved digital networks starts with the algorithms.

Social Allegories and Algorithms

Network science, social network analysis, and social computing provide the frameworks not only for understanding but also for building the complex assemblages that are digital networks, the assemblages that in turn act as determinants of social behavior. To better understand how these frameworks are actually codified into the architecture of digital networks, I will attempt to establish a link between the computer algorithms of digital networks and the social allegories contained in them.

First, a word about allegories: traditionally, we think of allegories as literary or artistic devices that are used to convey a meaning that is other than the literal meaning. Thus we can think of works like Fritz Lang’s Metropolis or William Golding’s Lord of the Flies as allegories (their surface narratives hint at deeper insights about technology, human nature, etc.). Here, however, I will be using the concept of the allegory more loosely, simply to imply a device that is used to transfer meaning through symbolism from one context to another. In essence, I will be arguing that computer algorithms can communicate meaning through allegories between the realm of social behavior and the realm of network architecture. My exploration of algorithms and allegories in digital networks is motivated by two questions: If algorithms are formulas or processes for solving problems, what are the social problems that the algorithms of digital networks intend to solve? And if allegories communicate meaning through symbolism, do the algorithms of digital networks function as allegories that convey a message about the social in the act of “solving” these problems?

Digital networks are computer programs, so they contain algorithms. These algorithms transform user actions (like clicking a “Like” button in Facebook) into a series of predefined operations in the network (increasing the number of likes of the digital object in question by one, adding the object to the list of things the user likes, and so on). But in doing so, they assign a slightly different meaning to what it means to “like” something (or to “friend,” “chat,” “recommend,” “join,” and so on) in the digital network. This meaning, however, is not entirely new. It is based on common understandings of what it means to like, friend, chat, recommend, or join something outside the network, in the so-­called real world. This way, the algorithm serves as an allegory of sorts by establishing a correspondence between two operations—­albeit with the same name—­in two different realms of social meaning (e.g., to like something in the digital network, and to like something in “real life”). To “friend” someone in a social networking site therefore implicates an algorithm that references the social act of forming a friendship in an allegorical way and codifies that act as a set of computer processes (establishing a correspondence between two records in a dataset, for instance). In this manner, network metrics (such as the centrality of the friend, the frequency with which the friend links to us, etc.) not only are used to construct algorithms that serve as allegories of social acts but also redefine or give new meaning to those acts in the process.
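
A minimal sketch of the predefined operations such a click might trigger looks something like the following. The data structures and function here are hypothetical, not any platform's actual code; real systems are vastly more elaborate.

```python
# Hypothetical server-side bookkeeping for a "Like" button.
like_counts = {}   # object id -> number of likes
user_likes = {}    # user id -> set of liked object ids

def like(user_id, object_id):
    """Translate the social act of 'liking' into two predefined
    operations: increment a counter and record the object as liked."""
    liked = user_likes.setdefault(user_id, set())
    if object_id in liked:
        return like_counts[object_id]    # ignore repeat clicks
    liked.add(object_id)
    like_counts[object_id] = like_counts.get(object_id, 0) + 1
    return like_counts[object_id]

like("ana", "photo42")
like("ben", "photo42")
like("ana", "photo42")                   # repeat click changes nothing
print(like_counts["photo42"])            # 2
```

Note how the algorithm already redefines the act: "liking," in this realm, is a non-repeatable, countable, publicly tallied event.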

Much like a video game player discovering the meaning of certain actions in the game, and discovering which sequence of actions has what set of consequences, the digital network user learns to play the algorithms of the digital network. A digital network, like a video game, is a complex system of meaning that assigns a quantifiable value to the elements within it, which therefore establishes an economy that can be discovered through participation. By playing the algorithms of the network, the user discerns the mechanics of the economy (e.g., how acquiring more friends, gaining more incoming links, or contributing more content might result in more visibility within the system).

To repeat a point made earlier, the design of digital networks takes scientifically derived observations of social behaviors and, by converting them into computer code that regulates interaction, transforms these observations into algorithms that facilitate certain forms of action (and obstruct others). In the process, social acts are given new meaning, although they continue to allegorically reference the original act outside the network. Thus to talk about a “recommendation” in the realm of digital networks is to reference the act of one person suggesting an object (like a book or movie) they think another person might enjoy. But within the network system, a recommendation is nothing more than the application of algorithms like collaborative filtering, naïve Bayes classifiers, decision-­tree classifiers, or k-­nearest neighbors[25] to calculate the probability that a user will be interested in a particular object. The algorithmically derived recommendation is only an allegory of an interpersonal recommendation because it is made by a machine, not a human (even if the machine is merely aggregating lots of human opinions); it is derived from preferences the user has disclosed to the network, not from personal knowledge. Insofar as they allegorically stand for human interactions, these computer operations are only possible to the extent that we allow our behaviors to become legible to the algorithm.
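
A bare-bones version of one of the families named above, user-based collaborative filtering, shows how a "recommendation" reduces to arithmetic over disclosed preferences. The ratings and the similarity measure are invented for illustration.

```python
# Invented ratings on a 1-5 scale; a real system would hold millions.
ratings = {
    "ana": {"book1": 5, "book2": 4, "book3": 1},
    "ben": {"book1": 5, "book2": 5, "book4": 4},
    "carla": {"book3": 5, "book4": 2},
}

def similarity(u, v):
    """Closeness of two users' scores on the items both have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    gap = sum(abs(ratings[u][i] - ratings[v][i]) for i in common) / len(common)
    return 1.0 / (1.0 + gap)

def recommend(user):
    """Score each unseen item by the ratings of similar users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return max(scores, key=scores.get) if scores else None

print(recommend("ana"))  # ana's ratings track ben's, so she gets book4
```

No personal knowledge enters the calculation; the output stands in for an interpersonal recommendation only allegorically, exactly as argued above.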

The algorithms of digital networks enforce certain modes of social conditioning. Nodocentrism means that things not rendered as nodes are practically unintelligible to the network, which suggests that being in the network requires nodes to continuously participate in ways that makes their behavior legible to other nodes, in alignment with network logic. Although much has been said on how decentralized networks spell the end of censorship, we are only just beginning to understand how participation in networks fosters certain kinds of self-­censorship: we have to learn which behaviors we want to highlight so that they can be seen by the network and which ones we want to avoid so that they cannot be misinterpreted by the algorithm.[26] To behave in a way that does not conform to the logic of the network means to render oneself invisible, to cease to exist. The economics of the network are such that a node’s existence depends on its ability to obtain attention from others, to allow its movements to be monitored and its history to be known.

The Agency of Code

While algorithms will probably continue to afford increasingly sophisticated social operations, it is important to realize that a lot of what we now consider the social revolution that is the Internet has occurred as a result of connecting computers and humans in relatively simple ways and letting complexity emerge out of the aggregation of lots of simple social operations. As a way of providing a brief illustration of how the code of digital networks can assume social agency and control in this simple manner, I will discuss the example of a type of Web 2.0 application called a social tagging system.

A social tagging system allows a network of users to classify resources by assigning descriptive tags or keywords to them (e.g., if I upload or encounter a picture of a cat, I might want to tag it with the keyword “cat” or any other keywords that I choose). Some of the most popular social tagging systems have been social bookmarking applications and photograph annotation sites. The most important feature of social tagging systems is that they do not impose a rigid classification scheme. Instead, they allow users to assign whatever classifiers they choose. Although this might sound counterproductive to the ultimate goal of a classification scheme, in practice it seems to work rather well. There is no authority—­human or algorithmic—­passing judgment on the appropriateness or validity of tags, because tags have to make sense first and foremost to the individual who assigns and uses them. While tags serve primarily a personal purpose, facilitating the retrieval of resources by the individual at a later time, the use of the same tag by more than one person engenders a collective classification scheme known as a folksonomy (a portmanteau of folk and taxonomy). The whole point of a social tagging system is that the aggregation of inherently private goods (tags and what they describe) has public value: when people use the same tag to point to different resources, they are organizing knowledge in a folksonomy that makes sense to them and others like them. In other words, the tag is the object that brings a resource and a social group together via the shared meaning of a word.

We can say, then, that the social tagging system functions at the intersection of individual choices and the shared linguistic and semantic norms of a social group (the folks in folksonomy). The code of social tagging systems may not directly force users to employ certain kinds of tags, but by indirectly raising the expectation that tags might be useful to others, it shapes social activity in the process of aggregating individual tagging choices into collective information.
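This aggregation logic is simple enough to sketch in code. The following toy Python model is purely illustrative (the class and method names are my own invention, not those of any real system); it shows how unvalidated individual tagging choices accumulate into a shared index:

```python
from collections import defaultdict

class TaggingSystem:
    """Toy social tagging system: users attach free-form tags to
    resources; the code aggregates those private choices into a
    collective index (the folksonomy)."""

    def __init__(self):
        # tag -> set of resources any user has marked with that tag
        self.folksonomy = defaultdict(set)

    def tag(self, user, resource, *tags):
        # No authority validates the tags; any string is accepted.
        for t in tags:
            self.folksonomy[t].add(resource)

    def resources_for(self, tag):
        """Everything the community, in aggregate, has filed under `tag`."""
        return self.folksonomy.get(tag, set())

system = TaggingSystem()
system.tag("alice", "photo1.jpg", "cat")
system.tag("bob", "photo2.jpg", "cat", "kitten")

# Two private tagging choices have produced a shared category:
print(sorted(system.resources_for("cat")))  # ['photo1.jpg', 'photo2.jpg']
```

Note that the code never asks whether the two users mean the same thing by cat; the shared category exists purely because the same string was used twice.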

The Delegation of Meaning

The greatest strength of social tagging systems is also perhaps their greatest weakness: the way in which the negotiation of meaning during the process of classification is delegated from humans to code. Decisions regarding how to classify things, which used to be undertaken collectively by humans, are now carried out by humans individually, while the code aggregates and represents those decisions. If we see this as a replacement for oppressive systems of classification, in which one group of people used to impose their classification scheme on the rest, it might be seen as an improvement. If we see it as a replacement for democratic systems, in which equals used to negotiate and collaborate on the definition of a classification scheme (and in the process gave shape to what defined them as a group), then the outcome might not be as positive. This is because the process is now conducted by the code, without some of the opportunities for negotiation and collaboration that other paradigms afford. As is always the case with technology, where the line is drawn between what social tagging systems facilitate and what they constrain depends on how the technology is applied.

In order to understand how code assumes social agency in social tagging systems, we must first contextualize the manner of classification that these systems embody. There are two ways in which a classification system allows for meaning construction. One is in the use of the system to search for resources already in the system. The other is in the contribution of new resources to the system. A traditional classification system, based on a structured taxonomy, guides users in search of resources by moving from the general to the specific, at each branch presenting clearly defined options. Imagine you wish to find a resource using the Yahoo! Directory. Does the resource have to do with arts and humanities, business and economy, or one of the other categories? If it is related to arts and humanities, does it have to do with photography, history, literature, or one of the other categories? Yahoo! decides what those categories are, and individuals use their familiarity with the classification structure to find things. Now imagine you wish to add a resource to the system. In that case, you would use the same categories to find the appropriate place for the resource. If such a category does not exist, then the administrators of the system must decide whether it needs to be created, and where in the overall scheme it needs to be added.
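The general-to-specific navigation of a structured taxonomy can be sketched as a nested dictionary. The categories and resources below are illustrative stand-ins for a centrally administered directory, not Yahoo!'s actual scheme:

```python
# A directory-style taxonomy: each level offers a fixed, centrally
# defined set of options (all names here are made up for illustration).
taxonomy = {
    "Arts & Humanities": {
        "Photography": ["Pinhole Camera Guide"],
        "History": ["Ancient Rome Portal"],
    },
    "Business & Economy": {
        "Finance": ["Stock Basics"],
    },
}

def find(taxonomy, path):
    """Walk from the general to the specific along a predefined path."""
    node = taxonomy
    for category in path:
        node = node[category]  # raises KeyError if no such category exists
    return node

print(find(taxonomy, ["Arts & Humanities", "Photography"]))

# Adding a resource means first locating the right branch. A user cannot
# simply invent a new category: only the administrators of the scheme
# may extend the dictionary itself.
find(taxonomy, ["Arts & Humanities", "History"]).append("Medieval Europe Hub")
```

The design choice to mirror in the code: users of such a system move through categories defined in advance by someone else, and requesting a category that does not exist simply fails.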

Folksonomies differ from this structured taxonomy approach in significant ways. The most obvious one is that any user of the system can create tags or categories without permission from any kind of authority. Another important difference is that tags need not be arranged in any particular way. If the tag or category cat appears next to the tag or category car, it is for alphabetical reasons, not because the proximity of cat and car says something about either of the two signified elements. Because categories do not occupy a specific location in a structure, folksonomies allow for the association of an unlimited number of tags with a resource. In other words, a picture of a cat driving a car can be marked with both tags and included in both sets, as well as any others the user chooses.

Another difference between folksonomies and structured taxonomies that might not be so obvious is the role of human collaboration in their definition. Structured taxonomies require consensus in the form of at least two collaborating human subjects (whether this consensus is achieved democratically or hegemonically is another topic). If a category is defined but no one adheres to it, can it be said to exist? Folksonomies, on the other hand, do not require consensus so much as they measure the consensus already established around the use of certain words. In other words, folksonomies assume consensus without involving humans in the process. Social tagging system users have no discussion whatsoever about how categories should be defined, what they mean, or how they relate to each other. All the code cares about is that two people used the tag cat: it will aggregate and display the resources associated with that tag, regardless of whether one user meant the furry feline and the other the Center for Alternative Technology. Of course, if the latter user had employed the tag CAT instead of cat, the code would react differently (which perhaps means, as Clay Shirky suggests, that there are no such things as synonyms in a folksonomy[27]).
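The code's indifference to meaning is easy to demonstrate. In this toy sketch (again illustrative, not any real system's implementation), the string cat aggregates two semantically unrelated resources, while CAT lands in a separate bucket:

```python
from collections import defaultdict

index = defaultdict(set)

def tag(resource, keyword):
    # The code compares raw strings; it knows nothing about meaning,
    # synonyms, or the intent behind a tag.
    index[keyword].add(resource)

tag("photo-of-feline.jpg", "cat")      # the furry feline
tag("alt-tech-centre.html", "cat")     # the Center for Alternative Technology
tag("machynlleth-campus.html", "CAT")  # different string, so a different bucket

print(sorted(index["cat"]))  # both 'cat' resources, whatever their meaning
print(sorted(index["CAT"]))  # the capitalized tag is a separate category
```

The "consensus" the system reports is thus nothing more than string equality: a byte-for-byte match counts as agreement, and anything else does not.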

In essence, the code of social tagging systems removes the need for humans to negotiate meaning around classification. This can be liberating as well as alienating: it is liberating because, as I suggested earlier, there is no governing body dictating what the classification scheme should be; and alienating because, without the mechanisms for deliberation, meaning becomes atomistic, a reflection of what the software has parsed and aggregated from detached individuals, not what has emerged through consensus and deliberation.

By this I do not mean to imply that social tagging systems do not open up all kinds of new social operations heretofore impossible (they are, after all, social media). I merely want to call attention to this different way in which we are defining and constructing sociality—a sociality that is the result of code doing things to the resources of detached individuals. There are plenty of social transactions that can be carried out in social tagging systems, such as being able to see different items classified by different people with the same tag, or the same item classified by different people with different tags, or the resources of a particular individual, and so on. But the scope of these affordances is defined by the code, and the community willingly relinquishes a large part of its agency in exchange for individual freedom and the scale of access that only the Internet can provide.

While the benefits of this freedom and scale are obvious, some people rightfully point out the risks of surrendering agency in the process of negotiating how knowledge should be structured. Shirky, representing arguments focusing on freedom and scale, states in reference to one such system that “aggregate self-interest creates shared value. . . . By forcing a less onerous choice between personal and shared vocabularies, [it] shows us a way to get categorization that is low-cost enough to be able to operate at internet scale, while ensuring that the emergent consensus view does not have to be pushed onto any given participant.”[28]

On the other hand, Matt Locke describes the functions relinquished by the community and how the code assumes those functions in some form or another: “There are no politics in folksonomies, as there is no meta-level within the system that allows tagging communities to discuss the appropriateness or not of their emergent taxonomies. There is only the act of tagging, and the cumulative, amplified product of those tags.”[29]

It is in discussing this “appropriateness” that social groups in fact define themselves. Clearly, there are politics in folksonomies, but we need to uncover them by asking not only what kind of social agency the code assumes on behalf of the networked subject but also how this conforms the networked subject itself.

  1. Morgan, Images of Organization. Morgan does talk about networks at various points in his book, but not as one of his archetypal metaphors for describing organizations.
  2. Wellman, Networks in the Global Village.
  3. Galloway, Protocol.
  4. Agre, “The Practical Republic,” 201–23, para. 22.
  5. Michael Mahoney, “The Histories of Computing(s),” 53, in this vein asks, “In seeking to do things in new ways with a computer, it is useful to clarify how we do them now and how we came to do them that way and not otherwise.”
  6. Dourish, Where the Action Is, 1.
  7. Campbell-Kelly and Aspray, Computer, xv.
  8. Tangney and Lytle, “Preface,” v.
  9. Lanier, “Digital Maoism,” para. 4.
  10. Scholz, “What the MySpace Generation Should Know about Working for Free.”
  11. Monge and Contractor, Theories of Communication Networks.
  12. Barabási, Linked.
  13. Ibid.
  14. Monge and Contractor, Theories of Communication Networks.
  15. Wellman, “Little Boxes,” 11–12.
  16. Wellman, “Networks in the Global Village,” 2.
  17. Van Dijk, The Network Society.
  18. Wellman, “Networks in the Global Village,” 3.
  19. See for instance Bourdieu, Distinction; Coleman, Foundations of Social Theory; Putnam, Bowling Alone; Lin, Social Capital.
  20. Monge and Contractor, Theories of Communication Networks, 88.
  21. DeLanda, Intensive Science and Virtual Philosophy, in this vein advocates for a “reversal of the problem-solution relation,” insofar as “subordinating problems to solutions may be seen as a practice that effectively hides the virtual.”
  22. Ibid.
  23. Committee on Network Science for Future Army Applications, Network Science, 28.
  24. Ibid.
  25. A collaborative filtering algorithm mines a large dataset looking for other users who have expressed similar interests or looked at the same objects one has. It then proceeds to identify things they have liked that one has not accessed and puts together a list of suggestions ranked according to which items come from users with closely related interests. A naïve Bayes classifier is an algorithm that is optimal for highlighting the things one is interested in or eliminating the things one finds uninteresting. It works by parsing a file and assigning a probability value to each element (such as a word) within it. This probability value represents the likelihood that one is, or is not, interested in that single element. Then the algorithm calculates an overall likelihood that you would or would not be interested in the document as a whole, and highlights or obscures it accordingly. A decision-tree classifier works by recursively partitioning a dataset until each partition contains examples from one class only, and it is ideal for organizing things into unique categories. To provide a simplified example, if a user is performing a query for pets that are not only cute but also kid friendly, the system could provide results according to the pets users have ranked as most cute and most kid friendly. Thus finding a match is achieved by eliminating those objects that do not fit the parameters established by the aggregated opinion of users. Lastly, a k-nearest neighbor algorithm includes or excludes objects based on the quantity of other objects of a known class found in the proximity. For instance, if there are more blue objects than red objects in the vicinity, the object in question is classified as blue and inferred to have characteristics in common with blue objects. At the same time, the algorithm can be adjusted to consider various degrees of proximity (referred to as k).
For instance, if k is broadened (which basically means extending the range of what is considered to be proximal, or how extensively the algorithm is told to look) and a new calculation results in more red objects found in the vicinity than blue objects, the object is reclassified as red. For more information on any of these algorithms, see Segaran, Programming Collective Intelligence.
  26. According to Dave Boothroyd, “The Ends of Censorship,” para. 22, “[C]ensorship could become, perhaps already is becoming, an internal feature and control mechanism of socio-technological systems of governance.”
  27. Shirky, “Folksonomy.”
  28. Shirky, “Matt Locke on Folksonomies,” para. 10.
  29. Locke, “The Politics of the Playful Web,” para. 2.
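The k-nearest neighbor procedure described in note 25, including the effect of broadening k, can be illustrated with a minimal sketch (toy one-dimensional data and made-up labels, assumed purely for demonstration):

```python
from collections import Counter

def knn_classify(point, labeled_points, k):
    """Classify `point` by majority vote among its k nearest neighbors
    (one-dimensional distance, for simplicity)."""
    neighbors = sorted(labeled_points, key=lambda p: abs(p[0] - point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Positions on a line, labeled by color.
data = [(1.0, "blue"), (1.5, "blue"), (3.0, "red"), (3.5, "red"), (4.0, "red")]

print(knn_classify(2.0, data, k=2))  # nearest two neighbors are blue -> 'blue'
print(knn_classify(2.0, data, k=5))  # broadening k brings in more reds -> 'red'
```

As note 25 describes, the same object can be reclassified simply by extending how far the algorithm is told to look.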