Making sense of content moderation: some foundations
To understand why content moderation and algorithmic choices at Facebook, Twitter, and YouTube are so loathed, it helps to start with a little psychology.
Humans have a bad habit of trying to make sense of the world. We see patterns where none exist and grant intentionality where it doesn't belong. There are names for this: pareidolia, apophenia. But the habit makes sense under error management theory. It is better to err on the side of caution and see a threat that isn't there than to underestimate a real one and get killed.
Beginning in 1944 with the seminal paper by Heider and Simmel, psychologists have built up a body of work showing how easily individuals assign agency to inanimate objects. People attribute psychological and social properties even to objects that don't remotely resemble humans or animals. In that first paper, triangles and discs were given beliefs, desires, emotional states, and genders.
The ascription of agency, it seems, is different from the ascription of subjective experience. People find it more natural to grant a corporation agency than to grant it subjective experiences. Take a second and consider these sentences:
- Apple intends to release a new iPad in August.
- Google wants to change its corporate image.
- Microsoft is now experiencing great joy.
That last sentence doesn’t really make sense. Why? Following the lead of Thomas Nagel, most philosophers now say that experiences are phenomenal. There is a there there when it comes to experience. What it is like to be hungry is different from what it is like to be angry or melancholy. Moreover, there is a difference between hunger, an internal state that must be experienced, and anger, which can be inferred from someone’s actions. As Edouard Machery and Justin Sytsma explained in an article in The Philosopher’s Magazine,
People distinguish the sheer possession of an emotion from the experience of what it is like to have that emotion. The idea is that ordinary people hold that while a corporation can have an emotion, it cannot experience what it is like to have that emotion, and as a consequence it cannot properly be said to feel the emotion.
In other words, corporations are considered agents that have emotions but do not subjectively experience those emotions. Apple could intend to release a new iPad, but for Microsoft to experience joy, it would need some form of interiorized emotion. People know that Microsoft doesn’t have that. This lack of feeling is read as a form of psychopathy in the traditional sense. Psychopaths lack the inner voice that says no; they lack consciences. Not surprisingly, in research settings people tend to believe organizations are more unethical than individuals, even when both engage in identical behaviors.
Content moderation and changes in the algorithm are moments when these platforms become agents, and agents with an unknowable interior process. Again and again, research finds that people notice when tech goes awry and grant it agency. This is the inverse of Mark Weiser’s famous two lines: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Technologies that do appear become distinguishable as other life.
In other contexts, I’ve called this technoanimism. As I see it, animism should be understood as a knowledge system, a way of relating to the world, that stands in contrast to our usual mode of knowing, which divides the world into subjects and objects.
Botanists might collect a specimen to categorize it, sort it, and place it within a larger system of knowledge. Here, the botanists are the subjects and the trees are the objects. But that isn’t the only way of understanding. Animists “talk with trees” to understand the trees’ relationships in their world. Instead of subjects and objects, trees are understood as subjects in their own right. As anthropologist Nurit Bird-David explained it, against the common Western understanding of “I think, therefore I am,” the animist might say “I relate, therefore I am” and “I know as I relate.”
The technoanimist world is one made by lines of code, where both real persons and technological artefacts interact and relate. And in the technoanimist world, code enchants. Code breathes life into the digital.
Users of platform giants like Google and Facebook hold all sorts of theories about the mechanics and politics driving platform decisions. For users, it is difficult to understand how content is moderated, how stories, pictures, videos, and ads are ordered, and just how much platform operators know about their product. As researcher Sarah Myers West explained, the opaque system drives users, regardless of their political opinions, to “make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed.” Because of this opacity, users believe platforms are “powerful, perceptive, and ultimately unknowable.” They are the most powerful demiurges in technoanimism.
All of this is a long lead-up to say that it is incredibly hard to parse out or describe a single coherent ideology of content moderation systems. There isn’t a Facebook algorithm so much as there are Facebook algorithms. Since every experience is personalized, every understanding of Facebook or Google is also personalized. This leads to individualized understandings of these platforms as agents. We see content moderation as an ideological choice and not the output of a bureaucratized institutional process.
First published Jun 19, 2020