Abstract
In our analysis, we examine whether labeling social media posts as misinformation affects users' subsequent sharing of those posts. Conventional understandings of the presentation of self, along with work in cognitive psychology, offer different predictions about whether labeling misinformation in social media posts will reduce sharing behavior. Part of the problem with understanding whether interventions will work hinges on how closely social media interactions mirror other interpersonal interactions with friends and associates in the offline world. Our analysis looks at rates of misinformation labeling during the height of the COVID-19 pandemic on Facebook and Twitter, and then assesses whether sharing behavior is deterred by misinformation labels applied to social media posts. Our results suggest that labeling is relatively successful at lowering sharing behavior. We discuss how our results contribute to a larger understanding of the role of existing inequalities and government responses to crises such as the COVID-19 pandemic.
Did you know that COVID-19 was a conspiracy by Bill Gates to profit from the creation of a vaccine? That the virus has undergone mutation in laboratories in Iceland so that vaccine development will be stopped? That the pandemic was a global conspiracy against the Trump administration? That the virus escaped from a chemical weapons factory in China? These and a variety of other dubious and downright harmful stories have been circulating on social media for months.
The spread of dubious or downright false information (sometimes referred to as fake news, and referred to throughout this document as misinformation) is a growing social, cultural, and scientific dilemma, and the situation is especially troubling when it comes to information about medicine and public health (see Ross 2008; Vogel 2011). The most recent manifestation of the consequences of dubious medical information is the spread of measles and its link to anti-vaccination websites and memes (Glenza 2018). This, however, is only the most recent manifestation—others include the peddling of conspiracy theories and fake cancer cures (Ghenai and Mejova 2018; Ross 2008; Vogel 2011), organized misinformation about stem cell research (see Marcon, Murdoch, and Caulfield 2017), and the spread of dubious claims about alternative medicines (see Barratt 2018). Further evidence indicates that some of this dubious information is deliberately produced for financial gain or to fuel cultural discord (Ross 2008; Broniatowski et al. 2018; Kavanagh and Rich 2018).
Sadly, the situation is no better when it comes to COVID-19 pandemic and its ongoing effects on the world’s population and social order. The pandemic has provided a perfect storm in which misinformation thrives, as seen in the rise of QAnon, which brings together the various threads of conspiracy theories and COVID-19 to produce a constant source of dangerous rumors and accusations. The actual source of the virus is a matter of some contention (Suciu 2020). The lack of an obvious cure or magic bullet to treat the virus is also a major catalyst of misinformation (Brennan et al. 2020). A now long-standing subset of U.S. citizens has little confidence in American mainstream institutions, including governments, the media, and the scientific community (Twenge, Campbell, and Carter 2014). The overall uncertainty of the pandemic situation increases the temptation to blame others and look for outside scapegoats for problems (Schild et al. 2020). Finally, some evidence suggests that active influencers are taking advantage of the general confusion to deliberately sow discord and institutional disintegration (Jurkowitz and Mitchell 2020).
This project seeks to understand how COVID-19 misinformation spreads and, especially, what effect social media labels have on the sharing of misinformation on social media sites.
COVID-19 BACKGROUND
As of April 2022, the total number of deaths in the United States due to COVID-19, the viral infection caused by a coronavirus known as SARS-CoV-2, exceeded 950,000. The total number globally, at the same time, was more than six million. These astonishing numbers have grown exponentially since early 2020. For a full time line of the pandemic, see the introduction of this issue (Redbird et al. 2022, this issue).
All of this upheaval was created by a virus that scientists initially knew little to nothing about. Yet a few things became clear as the pandemic rolled on in those early months:
The virus first crossed over to human populations near Wuhan, China, sometime in the fall of 2019.
The virus could spread from person to person, even from carriers who had no symptoms.
Transmission via touching surfaces was debated: the major route seemed to be person to person via small droplets expelled when an infected person coughed, sneezed, or exhaled; another possibility was touching a contaminated surface and then one's eyes, nose, or mouth. (Doctors Without Borders 2020)
Standard protections from the virus included vigorous handwashing, social distancing (the six-foot rule), proper cough and sneeze etiquette, self-quarantining if one became ill or were exposed to someone who was, and avoiding crowded public gatherings. Masks were recommended when social distancing was not possible, though some worried that this guidance would create a shortage of masks and protective gear for frontline health-care workers (but see Doctors Without Borders 2020).
Approximately 80 percent of those infected developed minor symptoms and recovered at home. Another 15 percent developed severe symptoms and required hospitalization, and approximately 1 to 5 percent became critically ill, needing extensive medical intervention to save their lives (Baud et al. 2020; Rajgor et al. 2020).
COVID-19 seemed especially lethal among the elderly and people with respiratory problems. At first, it seemed that children were affected far less.
COVID-19 data collection, especially in the United States, was hampered by the dispersion of health statistics data collection to individual states and by wide differences in testing regimens in different parts of the country and around the world (see James, Tervo, and Skocpol 2022, this issue).
This information represented something close to a scientific consensus as of mid-May 2020.
JUST WHEN WE NEEDED EXPERTS, WE IGNORED THEM OR SENT THEM PACKING
One would think that the onset of a pandemic would lead national leaders to rely on the information and recommendations of experts. But one would be wrong. A number of writers and reporters have commented on the large numbers of scientists leaving government service since the beginning of the Donald Trump administration (Gowen et al. 2020; Friedman and Plumer 2020). According to Office of Personnel Management data analyzed by the Washington Post, more than 1,600 federal scientists left government employment in the first two years of Trump's tenure (Gowen et al. 2020). Those exits included voluntary departures, firings, and resignations under pressure. The Brookings Institution regularly tracked turnover in the Trump administration, focusing on the so-called A Team, made up of members of the executive office of the president. Among these higher-level employees, the turnover rate was 86 percent as of May 15, 2020, and multiple turnovers occurred in 38 percent of the A-Team positions (Dunn Tenpas 2020). This turnover rate is higher than that of the five most recent presidents (Dunn Tenpas 2018). Unfortunately, it is not possible to separate voluntary exits from resignations under pressure or from firings in the Brookings data.
The almost systematic silencing of experts during the COVID-19 pandemic is tied to larger problems produced by social, cultural, and media fragmentation that undermine professionals whose knowledge depends on sound scientific and rational reasoning (see, for example, Leicht 2016; Leicht and Fennell 2022). Most damaging is the appearance of a “war on expertise” and the implications this has for the future of professional expert knowledge (Nichols 2017). Recent writers suggest a campaign against established knowledge that imperils democracies and their citizens. The traditional role of the expert (in our case synonymous with the professional) is to collect and interpret knowledge for citizens in specific areas. The traditional division of labor as Durkheim describes it requires that people defer to professional judgments in specific areas of expertise. The combination of lots of different experts in lots of different areas (and the commitment of professionals to defer to others outside of their areas of expertise) leads to an active dialogue where debates center around factual knowledge and interpretation with citizen input.
In Tom Nichols’s analysis, this dynamic has fallen victim to a pseudo “democratization of knowledge” where everyone’s opinion is of equal value regardless of what the conveyor actually knows (2017, 5). Any suggestion of factual, scientific, or logical errors in an argument is met with a direct attack suggesting the critic is elitist, out of touch, or worse. This form of aggressive ignorance denies that people who have studied a topic for years know anything of value that cannot be Googled (62). Nichols points out that the forms of pseudo-expertise this flattened hierarchy has created are illusory and dangerous. Google will confirm any random opinion we have, no matter how fanciful. So-called citizen journalists don’t do very good journalism. Pontificators and pundits talk about everything from global warming to heart surgery and know next to nothing about any of it. Worse still, the so-called expert citizen is seldom corrected when wrong and their opinions do not change, unlike professionals, for whom a check-and-balance system is in place that makes corrections (sometimes slowly). In some cases, the almost complete free pass granted by publics and supporters to these bogus claims has led many to conclude that we are entering a “post-truth” world (see Rose 2017; Gibbs 2016).
Tied to the silencing of experts is the creation of the COVID-19 infodemic—the spread of bogus misinformation and conspiracy theories about the virus’s origin and potential treatments and cures. To some extent this dimension of the pandemic simply mirrors more widespread problems in the spread of health misinformation via the web regarding vaccinations, cancer cures, and so on (see table 1).
The Trump administration and Trump’s enablers fueled this misinformation as well:
Trump dismissed the reports on COVID-19 as little more than the flu.
He significantly delayed or did not understand the need for widespread testing and left testing activities to the states.
He then promoted the use of hydroxychloroquine as a potential treatment or prophylactic and began taking it himself despite the lack of evidence that it worked and plenty of evidence that its side effects (including heart palpitations) were dangerous.
The administration claimed the number of cases would “converge toward zero” by May 1, 2020.
When it was clear that social distancing was harming the economy, Trump declared that the “cure was worse than the disease.”
He then asserted that states led by Democrats had mismanaged their responses to the virus and mismanaged their state economies (despite evidence that cases were rapidly spreading to Trump stronghold areas).
Trump was in virtually continuous conflict with his own health experts (most notably Anthony Fauci) and attacked any and all sources of information suggesting the U.S. response was too feeble, too decentralized, and too late. (Beer 2020)
This fragmented national response left state and local governments and health-care providers to their own devices (see James, Tervo, and Skocpol 2022, this issue). Individual states secured their own medical supplies though in some cases the federal government prevented the delivery of personal protective equipment and medical devices the states had attempted to purchase. In practice, this meant fifty individual responses to the pandemic rather than a coordinated national response. Politicians around the country took their cues from the White House and did not enforce social distancing in the belief that the consequences of the pandemic were “greatly exaggerated,” systematically ignored information that their populations were vulnerable and their health-care systems could not cope, or suppressed data on COVID-19 cases, hospitalizations, and deaths leading to protests from local health-care providers and scientists.
The inadvertent or deliberate confusion arising from the systematic sidelining of scientific experts combined with the recession caused by the pandemic shutdown to heighten conflict—both cultural and economic—around the country and often the world. The most visible manifestations of this were demonstrations and protests by citizens seeking to open the economy in spite of widespread evidence that lax social distancing guidelines would increase the number of cases, tax health-care systems, and lead to more deaths. There is considerable debate among journalists and observers about whether these protests were genuine outcries of economic distress, fueled by misinformation about the pandemic, or (worse still) “astroturfed” by specific national organizations looking to sow discord in areas controlled by Democrats (see Graves 2020).
Social Media, Fake News, and Labeling Misinformation—Will it Work?
Misinformation, as it is used in this analysis, refers to “cases in which people’s beliefs about factual matters are not supported by clear evidence or expert opinion” (Nyhan and Reifler 2010, 305). This is an appropriate definition in cases, such as COVID-19, characterized by a rapidly developing scientific consensus (see also Vraga and Bode 2017). Most analysts distinguish between misinformation, defined as false or inaccurate information circulating as a result of honest mistakes, negligence, or unconscious biases; disinformation, referring to false information deliberately designed to deceive others; and fake news, referring to “fabricated information that mimics legitimate news media content without a news organization’s process or intent” (Lazer et al. 2018, 1094; see also Gentzkow 2017; Pennycook and Rand 2019; Fallis 2015; French and Monahan 2020; McCloskey and Heymann 2020).
In this research, the scientific consensus about COVID-19, its likely spread, and mitigation strategies came together fairly rapidly despite, as stated, some disagreements about transmission via hard surfaces, masks, and the like. We are interested in the dissemination of COVID-19 social media posts that social media companies have labeled as misinformation. The labels, as of April 2022, do not distinguish among misinformation, disinformation, and fake news, though our larger project examines differences in the spread of posts with those distinctions (Leicht et al. 2021). We settle on the more benign term of misinformation because the social media labels do not distinguish between types of falsehoods and we are not privy to the motives of those who share the content.
Research identifies several major factors fueling the spread of misinformation and fake news. First is the diversification and globalization of scientific practice, which has led to the questioning of the “loyalty” of scientists as part of a larger phenomenon of questioning the loyalties of a wide range of elite practices. Second is the deliberate fueling of political discord by those seeking to benefit from the anger and disorientation that result from disinformation campaigns. Third is motivated reasoning combined with cultural and media fragmentation, which draws people toward media that confirm their biases and silos them in attitudinal echo chambers that reinforce attitudes (see Kavanagh and Rich 2018; Leicht 2016). The question we address is whether the labeling of Facebook posts limits their spread in ways that would be consistent with social psychological theories about the presentation of self, cognitive processing, and motivated reasoning (see Goffman 1964; Pennycook and Rand 2019; Eagly and Chaiken 1998; Kleinhesselink and Edwards 1975; McPherson 1983; Tabor and Lodge 2006; Lord, Ross, and Lepper 1979; Kunda 1990; Schaffner and Roche 2016; Epley and Gilovich 2016; Spinney 2017).
Fortunately, an ever-growing body of work within the journalism field on fact-checking and seeking the origins of antiscience rumors aids our research (for a list, see Leicht et al. 2020). As part of the wider effort to label misinformation, Facebook and Twitter are engaging in preliminary attempts to flag misinformation and at least label it, if not remove it. In August 2020, Facebook began labeling posts it evaluated in regard to COVID-19 as dubious; such posts now come with a flag. However, despite claims to the contrary, Twitter posts are not flagged as misinformation and Twitter imposes relatively few limits on the spread of dubious information. In this analysis, we take advantage of a natural experiment, comparing the spread of dubious COVID-19 claims before and after Facebook started labeling posts. We explain this in the data and methods section.
To understand why labeling and fact-checking might affect the sharing of social media posts labeled as misinformation, it is useful to start with the work of Erving Goffman (1964; for updated treatments in relation to social media, see Hogan 2010; Bullingham and Vasconcelos 2013). Goffman spent a great deal of his career describing the intricacies of interpersonal interaction through what was eventually termed the dramaturgical perspective. In this perspective, face-to-face social interaction has four critical components (see Hogan 2010):
People engage in interaction rituals and other face-to-face encounters “putting their best foot forward” (that is, appearing intellectually competent, well-mannered, and engaged).
The group of people interacting have a collective interest in supporting actions that confirm or otherwise support similar attempts by others. When interaction disconnects occur, observers often help in various forms of verbal and nonverbal repair to restore the interaction to normalcy and to bolster the transgressor’s sense of competence and engagement.
Our interactional selves contain a front and a back stage. The front stage represents our public self as we attempt to cultivate an image of competence, rationality, and sanity. Our backstage represents the psychological and interactional places where we can express misgivings, anger, and distress without fear of damaging our front-stage image.
We move from one social encounter to another, regulating our front-stage behaviors to present and maintain a consistent sense of a competent self, and we (usually) assist others in maintaining their sense of a competent, front-stage self as well.
The implications of Goffman’s work for the study of online interactions are clear (see also boyd 2007; Marwick and boyd 2010; Mendelson and Papacharissi 2010; Lewis, Kaufman, and Christakis 2008; Quan-Haase and Collins 2008; Schroeder 2002; Tufekci 2008). The internet generally, and social media communications in particular, have been described as a free-for-all of sharing ideas and creating localized chat groups and online communities. If individuals view themselves as accountable to those communities, they will create and share posts designed to win approval (or “likes”) from community members. Hence a “rational and competent” member of a social media group may care for or pay attention to their presentation of self in much the same way people do in face-to-face interactions.
In the standard interpretation of the Goffman model, a person’s desire to appear rational and competent would lead them to share social media posts labeled as misinformation less than other posts. The reason is relatively straightforward—no one wants to appear to believe dubious and ungrounded things, lest the same attributes (“dubious and ungrounded”) be applied to them. Imagine the horror some might experience when an array of the posts they share are labeled as misinformation and that moniker appears repeatedly on their feed.
But some debate centers on whether this relatively straightforward interpretation would follow in a social media environment. Several key differences might yield different results. First, it is not completely clear that a person’s social media friends or followers have the same status as those personally encountered face to face in public or private settings. Second, debate is ongoing about whether access control, the ability to limit views to friends or specific groups of people, on social media sites creates a “back stage” where a public persona is less on display (boyd 2006; Lewis, Kaufman, and Christakis 2008; Robinson 2007). Third, unlike interpersonal interactions, where people speak and utterances are (usually) quickly forgotten and have no history, social media posts exist on curated platforms, much like art and film, where the ability of others to see and react is decontextualized (see Hogan 2010).
In each of these deviations from the classical Goffman conception (and in many of the cognitive and social psychological conceptions discussed), the effects of labeling posts are less than clear. If social media friends are not really interpersonal friends, then connections to them are weak and the same rules that apply to interpersonal interaction may be loosened in online communications. If social media is viewed as a backstage environment where access is restricted, then bizarreness and the embrace of alternative facts may be rewarded rather than punished. Finally, if posts are curated communications, then the interpersonal tie with the reader is severed and the attempt to draw attention to the post, regardless of what that attention might entail, is a driving force rather than the appearance of a competent, rational self. All of these processes might reinforce a user’s willingness to share social media posts that involve misinformation and to not associate this with their overall presentation of self.
In summary, Goffman’s perspective would assume that social media posts and platforms are significant expressions of one’s self-perception and self-concept. The user is at some level communicating something about themselves and their cognitive-emotional and social status via social media posts. The real questions come down to these: one, how close social media posts are to face-to-face interactions and the real or implied rules they follow; two, what the reference groups are for social comparisons and evaluations of self; and, three, how much cognitive energy the user is putting into evaluating what they post. Gordon Pennycook and David Rand (2019), for example, suggest that the inaccurate evaluation of information may be due to cognitive laziness rather than a conscious attempt to defend a consistent position. In the social psychological work cited, motivations for collecting and evaluating types of information vary and some suggest that interventions such as misinformation labels might produce better media-sharing practices by interrupting cognitive biases.
In addition to Goffman’s work in sociology and its related offshoots is long-standing work in social psychology and cognitive reasoning that suggests that people engage in motivated information seeking (Kleinhesselink and Edwards 1975; McPherson 1983), motivated information processing (Tabor and Lodge 2006; Lord, Ross, and Lepper 1979; Kunda 1990; Schaffner and Roche 2016), or motivated information recall (Epley and Gilovich 2016; Spinney 2017). All of these could promote or reduce user incentives to spread misinformation through slightly different mechanisms.
Under motivated information seeking, people are more attracted to messages supportive of their positions and to those that do not support their positions that are easy to refute (Kleinhesselink and Edwards 1975). Others identify existing psychological states, such as overall tolerance for ambiguity, as a trigger for seeking supportive information and discounting nonsupportive information (McPherson 1983). This perspective would suggest that online information is subject to strong existing motivations to seek information consistent with one’s views.
Under motivated information processing, media users may take in information that is not in accordance with their views but evaluate this information differently depending on its accordance with those views. Charles Tabor and Milton Lodge (2006) find that users select information in accordance with their beliefs when they have options about what information to access (a kind of confirmation bias) and that they tend to counterargue contrary pieces of information when confronted with it (disconfirmation bias). Both Charles Lord, Lee Ross, and Mark Lepper (1979) and Brian Schaffner and Cameron Roche (2016) find that belief polarization increases when ambiguous information is introduced, and that nonconcordant information yields longer response times because users are attempting to construct counterarguments to address information that does not line up with their views.
Finally, research focusing on motivated recall suggests that people selectively remember information and construct “collective recall narratives” even for bits of information that run contrary to group views (but see Epley and Gilovich 2016; Spinney 2017). This information becomes harder to dislodge over time, no matter how implausible it really is, because the dubious information becomes taken for granted.
In each of these perspectives the effect of misinformation labeling appears unclear at best. Our analysis takes advantage of the August 2020 shift on Facebook toward labeling COVID-19 misinformation. The critical question is whether and how posts that are labeled as misinformation are spread before and after the label is applied.
DATA, METHODOLOGY, AND RESULTS
We started our misinformation data collection by identifying websites that fact-check information about COVID-19, namely Healthfeedback.org (HF), Poynter.org, Snopes.com, and Politifact.com. Given the political nature of fact-checking, HF stood out for its science-focused approach. We therefore focused on HF for study 1, which was a comparison of misinformation sharing on Facebook versus Twitter. We found that of one hundred COVID-19 related misinformation fact-checks on HF, thirty-eight were shared on Twitter and Facebook.
A sample of HF’s COVID-19 related misinformation is presented in table 1. We pulled social media data using Brandwatch’s (previously Crimson Hexagon) historical Twitter database and CrowdTangle, a public insights tool owned and operated by Facebook (Fen 2019). Each of these databases stores only publicly tagged posts, and both databases have been used as Twitter and Facebook data sources in previous academic research studies (see, for example, Yun, Pamuksuz, and Duff 2019; Jernigan and Rushman 2014). The period we searched was January 1, 2020, to March 31, 2021.
For study 2, which focused on tracking engagement with misinformation on Facebook before and after Facebook labeled posts as misinformation, we used the Snopes COVID-19 misinformation data. We used Snopes data for study 2 because posts containing links that were evaluated as misinformation on Snopes.com were not labeled as misinformation on Facebook. We collected posts from Snopes for all of their fact-checked articles related to COVID-19, and then processed those posts through Amazon Mechanical Turk (Mturk) to get the original misinformation links and the ratings Snopes gave each link. At least two Mturk workers recorded information for each article and the resulting responses were harmonized.
The original misinformation links were screenshots of posts or memes, links to native Facebook, Twitter, or Reddit posts and links to articles/websites containing misinformation. We focused on a subset of the latter. These links were passed through CrowdTangle to verify that they were not labeled. This process gave us a dataset of posts of unlabeled misinformation links.
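For readers interested in the mechanics of this screening step, the following is a minimal sketch in Python with pandas, under assumed file names and column names (article_id, worker_id, misinfo_link, snopes_rating, link, is_labeled); the actual coding interface and CrowdTangle export fields differ from these placeholders.

```python
# A minimal sketch of the harmonization and screening step described above,
# assuming hypothetical CSV exports and column names; the actual pipeline differs.
import pandas as pd

# One row per (Snopes article, MTurk worker) response.
mturk = pd.read_csv("mturk_responses.csv")  # columns: article_id, worker_id, misinfo_link, snopes_rating

# Harmonize: keep articles where the (at least two) workers agree on both the
# extracted link and the Snopes rating.
agree = (
    mturk.groupby("article_id")
    .filter(lambda g: g["misinfo_link"].nunique() == 1 and g["snopes_rating"].nunique() == 1)
    .drop_duplicates("article_id")[["article_id", "misinfo_link", "snopes_rating"]]
)

# Drop any link that a CrowdTangle export shows as already labeled on Facebook,
# leaving a dataset of posts sharing unlabeled misinformation links.
ct = pd.read_csv("crowdtangle_posts.csv")  # columns include: link, is_labeled
labeled_links = set(ct.loc[ct["is_labeled"], "link"])
unlabeled = agree[~agree["misinfo_link"].isin(labeled_links)]
print(f"{len(unlabeled)} unlabeled misinformation links retained")
```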
Study 1: Assessing Misinformation Sharing on Twitter Versus Facebook
We found 12,184 instances of HF’s COVID-19 misinformation links being shared on Twitter versus 6,388 instances of the same links being shared on Facebook (see table 2). Interestingly, Facebook labeled all of these posts as misinformation whereas Twitter flagged fewer than 1 percent. We could not find a specific pattern in Twitter’s flagging, given that the same underlying misinformation link was flagged in a few instances but not in others. This seems to be in direct contrast to how public perception views the two platforms in regard to their efforts against misinformation in general. Facebook is considered to do a poor job of fighting misinformation (Fung 2020), whereas Twitter garners more praise (Morse 2020). Our results suggest that Facebook is doing a much better job of labeling COVID-19 related misinformation than Twitter, a point we return to in the discussion.
Investigating whether accuracy reminders about COVID-19 information affected participants’ ability to discern truth and their sharing of such information, Pennycook and his colleagues (2020) find that misinformation signaling reduced the likelihood that users would share information with others. Both Twitter and Facebook have stated publicly that they are actively engaged in labeling misinformation about COVID-19 on their platforms, so this labeling should provide a real-world setting for assessing the Pennycook results. We compared overall engagement with the HF COVID-19 misinformation posts on Twitter and Facebook, and find that users engaged with COVID-19 misinformation on Facebook approximately ten times as much as on Twitter, M = 73.59 versus M = 7.32 (see table 1 and figure 1). This is in direct contrast to the Pennycook results, given that the Facebook misinformation posts were all labeled and most of the Twitter misinformation posts were not.
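The platform-level comparison itself is straightforward to compute; the sketch below illustrates it with pandas, using a hypothetical combined post-level file and hypothetical column names (platform, engagement), not the actual variable names in our data.

```python
# A minimal sketch of the platform comparison, assuming a combined post-level
# table with hypothetical columns: platform, user_id, link, engagement.
import pandas as pd

posts = pd.read_csv("hf_misinfo_posts.csv")  # one row per share of an HF-fact-checked link

# Share counts per platform (reported above as 12,184 Twitter vs. 6,388 Facebook).
print(posts["platform"].value_counts())

# Mean engagement per post on each platform (reported above as M = 73.59 vs. M = 7.32).
print(posts.groupby("platform")["engagement"].mean())
```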
To further understand what may be confounding our results, we investigated how many times any given user within each platform shared a misinformation link more than once. Our assumption was that a real human would do so only once, but that automated bots or bad actors would do so multiple times. We find that, on average, users on Twitter shared unique links 1.14 times more often than users on Facebook. We plot the distribution of posting behavior per user per platform in figure 2, and it is clear that users on Twitter have a longer right tail of multiple postings of unique misinformation links than users on Facebook. This difference in the distribution of unique post sharing on Twitter versus Facebook (and the likelihood that multiple shares of the same post are not due to human intervention) could be confounding our analyses.
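As a rough illustration of this check, the sketch below counts how many times each user shared each unique link on each platform and compares the resulting distributions; the file and column names (platform, user_id, link) are again hypothetical placeholders rather than the actual fields in our data.

```python
# A sketch of the repeated-sharing check, assuming hypothetical columns:
# platform, user_id, link. The real pipeline and field names may differ.
import pandas as pd

posts = pd.read_csv("hf_misinfo_posts.csv")

# Number of times each user shared each unique misinformation link.
repeats = (
    posts.groupby(["platform", "user_id", "link"])
    .size()
    .rename("shares_of_link")
    .reset_index()
)

# Average repeats per platform; a heavier right tail suggests bot-like behavior.
print(repeats.groupby("platform")["shares_of_link"].mean())
print(repeats.groupby("platform")["shares_of_link"].quantile([0.50, 0.90, 0.99]))
```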
Initially, we were hoping to examine what the effects were of labeling Twitter and Facebook posts on subsequent sharing behavior by users. However, because Twitter labeled so few posts, we were left with assessing what the effect of labeling was on the sharing of COVID-19 misinformation on Facebook. This is the subject of study 2.
Study 2: Assessing Misinformation Engagement Before and After Facebook Labeling
Although both Facebook and Twitter claim to label COVID-19 misinformation on their platforms, only Facebook has published details on how it actually determines whether a post is misinformation. Facebook claims that it is working with more than “60 fact-checking organizations that review and rate content in more than 50 languages around the world” (Facebook 2019). Given this transparency, we found that we could analyze the effects of Facebook COVID-19 misinformation labeling on engagement rates because of the short lag time between the International Fact-Checking Network tagging of misinformation and Facebook’s labeling as it would appear to users on the social media platform. This lag allows us to analyze numerous misinformation links and to track the effects of labeling on engagement.
Because Facebook posts garner different levels of engagement, we had to find a baseline measure that would allow us to understand how a post could have been (or could not have been) affected by labeling. Because its main source of revenue is advertising, Facebook is expert at predicting how much engagement a post should receive. We therefore used its measures of “expected engagement” for each post as the baseline expectation—deviations from that expectation would point to the effects of misinformation labeling. Specifically, we were able to calculate each post’s deviation from expected engagement before and after Facebook’s misinformation labeling.
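One way such a before-and-after comparison could be operationalized is sketched below. The file name, the column names, and the assumption that each post record carries its own expected-engagement value and a label timestamp are ours for illustration, not CrowdTangle's documented schema.

```python
# A minimal sketch of the deviation-from-expected-engagement comparison,
# under assumed column names (post_time, label_time, actual_engagement,
# expected_engagement, link); the real fields and logic may differ.
import pandas as pd

fb = pd.read_csv("facebook_misinfo_posts.csv", parse_dates=["post_time", "label_time"])

# Deviation from expected engagement: positive values mean a post
# over-performed its baseline, negative values mean it under-performed.
fb["deviation"] = fb["actual_engagement"] - fb["expected_engagement"]

# Compare posts sharing the same link before versus after the label appeared.
fb["period"] = (fb["post_time"] >= fb["label_time"]).map({False: "before", True: "after"})
summary = fb.groupby(["link", "period"])["deviation"].mean().unstack("period")

# A drop from the "before" to the "after" column for a link is consistent
# with labeling dampening engagement.
print(summary.reindex(columns=["before", "after"]).head())
```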
Figure 3 presents a visualization of our results.
Each red circle in figure 3 represents numerous posts regarding the same misinformation link. Figure 3 also shows three trend lines that encapsulate three potential effects of Facebook labeling. Circles that fall (or could fall) along trend line 1 suggest that Facebook labeling increases engagement with those posts. Circles along trend line 2 suggest that labeling has little to no effect. Circles along trend line 3 suggest that labeling decreases engagement. Given that most posts cluster near line 3, these results suggest that labeling has a dampening effect on sharing misinformation.
As figure 3 shows, a major batch of posts sits near line 1, indicating that the labeling of these posts substantially increased user engagement with them. When we examined these in detail, we found that our original link, to a New York Times article, carried a rating of “imprecise” and was subsequently labeled misinformation. However, when we returned to the link to investigate why the post was receiving so much attention, we discovered that it had been relabeled and was no longer tagged as misinformation. We do not know when the labeling change occurred, but the removal of the misinformation label seems to have increased people’s engagement with the post.
DISCUSSION
Our results support two conclusions. First, and contrary to popular belief, Facebook is doing a much more rigorous job of labeling misinformation than Twitter is. In fact, we could not detect how Twitter labels misinformation, but our use of a common corpus of COVID-19 misinformation sites suggests that Twitter does not challenge posts that Facebook does label, which is the major reason study 2 focused only on Facebook posts. This in itself is a significant finding and contrary to public perception. A recent Pew Research survey suggests that 59 percent of those surveyed distrust Facebook as a place to find reliable election and political news, in contrast to the 48 percent who distrust Twitter (Jurkowitz and Mitchell 2020). Pew’s findings suggest an overall distrust of both Facebook and Twitter, but a greater distrust of Facebook. We cannot pinpoint why this is the case, but much of the distrust toward Facebook seems to stem from its history of data privacy decisions. Whether as a result of the fallout from the Cambridge Analytica scandal, in which Facebook user data were used to build psychological profiles without users’ consent (Confessore 2018), or because 74 percent of people surveyed did not know that Facebook stores user data for advertising profiling purposes, it seems reasonable that people distrust Facebook (Hitlin, Raine, and Olmstead 2019). As this distrust spills over into perceptions of responsibility for labeling misinformation, Twitter may benefit from being associated with fewer high-profile offenses in the past, even if users do not completely trust the platform.
Second, we also find that Facebook’s labeling of COVID-19 misinformation changes the sharing trajectory of that information substantially, and in the direction of less sharing. This result suggests that, at some level, labeling works as it is supposed to. The real question is why.
We cannot distinguish between the mechanisms from social psychology and cognitive psychology that describe incentives for evaluating and processing information, but we can say that some of the obstacles to changing how people process social media information that are linked to motivated reasoning may be severed, or at least interrupted, by labeling. One of several processes may be operating either individually or in concert: first, per Goffman and his colleagues, people are concerned about their appearance as competent social actors if they share social media posts that are labeled as misinformation; second, the labeling process interrupts normal biases in cognitive functioning that might otherwise lead to the unreflective or lazy sharing of social media misinformation. If motivated reasoning were dominating the social media environment in a time of cultural fragmentation, and if that fragmentation were so total that people were functioning in different realities, social media labeling would not seem to work at all. Either no effect would be detectable (sharing patterns grouping along line 2 in figure 3) or misinformation labeling would actually increase content sharing (along line 1 in figure 3). This, as of now, is clearly not happening.
How do these results contribute to scholarly understanding of information seeking and exchange, inequalities, and government responses to crises such as COVID-19? They, and many of the other results from other articles in this issue, expose fissures in American social life that the COVID-19 experience laid bare. The pandemic crisis exposed and exacerbated long-standing inequalities affecting the aged (Pezzia, Rogg, and Leonard 2022, this issue) and underrepresented and disadvantaged people (Burns and Albrecht 2022, this issue; Cohen et al. 2022, this issue; Evans et al. 2022, this issue; Kamp-Dush et al. 2022, this issue). It also exposed serious fissures, if not declines, in trust and social solidarity (Suhay et al. 2022, this issue; Pears and Sydnor 2022, this issue) and widespread inconsistency in response to the pandemic crisis fueled by partisan fragmentation (James, Tervo, and Skocpol 2022, this issue; Evans et al. 2022, this issue).
The sum of these results presents a troubling landscape in which social cleavages and inequalities are exposed as weaknesses when crises erupt. The crises themselves do not alter the social landscape as much as they bring existing weaknesses to the fore—long-standing structural inequalities and cultural fragmentation become the basis for the spread of misinformation via social media. The spread of misinformation via social media then increases the barriers to the types of concentrated action that crises require. But social capital and trust cannot be ginned up overnight. Nor can a political system that rewards discord rather than consensus, and that enables people to simply construct an alternative set of facts and act on them, be remade overnight.
Our analysis points to one possible way forward: interrupting, even ever so briefly, the sequence of automatically and unreflectively sharing social media posts. The simple nudge of labeling a post as misinformation seems to reduce sharing. This in itself may prove the basis for a more comprehensive set of interventions that might prevent the spread of misinformation even if (especially in the American context) stopping it in the first place is well-nigh impossible. Considerable evidence in other contexts shows that simple, short, and not terribly intrusive interruptions prevent other social ills from perpetuating themselves (for sexual harassment, see Coker et al. 2016; for stemming racial discrimination and hate, see Robi 2018). Like misinformation labeling, these interventions do not address the long-standing cultural and structural inequalities responsible for poor responses in the first place.
In addition, the evaluation of any intervention in the spread of misinformation via social media must deal with the continually shifting media landscape and new developments that seem to defy rational calculation. On March 24, 2022, the Kansas legislature passed a bill to increase access to ivermectin, an antiparasitic drug developed to treat parasitic infections in horses and other animals and widely shown to have no demonstrable effect on COVID-19 (see Carpenter 2022). Any attempt to stop misinformation from spreading must be active and ongoing because those who generate it will change tactics as the process moves forward. In addition, an expanding universe of press organizations (such as Fox News, Breitbart News, and One America News) seems committed to spreading mass-media-based misinformation outside of social media websites. Much of this content ends up on Facebook, Twitter, Instagram, TikTok, and Reddit. That it can be stopped on those and other platforms (such as YouTube) does not reduce citizen exposure to it if media outlets are producing the content themselves. Ultimately, systematic inequalities in the United States have led to systematic inequalities in the ability to evaluate and process information. Although public policy can seek to address the sources of misinformation, reducing these inequalities will reduce the market for misinformation and its damaging effects.
© 2022 Russell Sage Foundation. Leicht, Kevin T., Joseph Yun, Brant Houston, Loretta Auvil, and Eamon Bracht. 2022. “The Presentation of Self in Virtual Life: Disinformation Warnings and the Spread of Misinformation Regarding COVID-19.” RSF: The Russell Sage Foundation Journal of the Social Sciences 8(8): 52–68. DOI: 10.7758/RSF.2022.8.8.03. This research is supported by a grant from the National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, a grant from the National Science Foundation (SES-2031768, “Tracking and Network Analysis of the Spread of Misinformation Regarding COVID-19”), and a grant from the Discovery Partners Institute (“Developing Easy-to-Use Application-Based Software to Combat Social Media Misinformation—A Multi-Disciplinary Collaboration”). We thank the participants in and organizers of the Russell Sage Foundation Conference, “The Social and Political Impact of COVID-19 in the United States” (June 10–11, 2021), and the reviewers for their insightful comments. Direct correspondence to: Kevin T. Leicht, at kleicht@illinois.edu, Department of Sociology, University of Illinois Urbana-Champaign, Urbana, IL 61801, United States.
Open Access Policy: RSF: The Russell Sage Foundation Journal of the Social Sciences is an open access journal. This article is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.