Emerging From the Shadows: Shadow Banning and Misinformation

Amid the black leather and stained wood of the Dirksen Senate Office Building in Washington, D.C., Republican Senator Ted Cruz directed a salvo of questions at Twitter over the controversial practice that he referred to as “shadow banning”. Undoubtedly sweating under his suit and tie, Twitter’s Director of Public Policy and Philanthropy Carlos Monje Jr. met the questioning with a succinct response: “No, sir, we do not shadow ban users.” This 2018 hearing was in response to growing concerns from conservatives that social media platforms were censoring Republican users’ content without notifying them. Later that year, Twitter CEO Jack Dorsey would testify to Congress saying, “…we do not shadow ban anyone based on political ideology. In fact, from a simple business perspective and to serve the public conversation, Twitter is incentivized to keep all voices on the platform.” These moments took the practice of shadow banning from relative online obscurity and thrust it into public political discussion. Yet not only have the practice’s ethics been called into question, but so has its existence at all.

Shadow banning is the practice of blocking (or partially blocking) a user from a social media site without their knowledge, so that it is not readily apparent to the user that they have been banned. Its usage is controversial for two main reasons. The first is that it is viewed as an unethical method of restricting user behavior because it is done surreptitiously, which cedes more power to platforms to make unilateral choices about content restrictions. At the heart of this concern is the idea that social media platforms function as public forums and are key to human communication. This conception of the “public square” carries an expectation of ideological neutrality. So, if shadow banning is used on certain ideological groups and not others, it becomes a fearsome form of censorship and marginalization. The second reason is more fundamental: whether shadow banning is actually implemented on any platform in the first place. As noted above, Twitter has denied using shadow banning as a moderation technique, and most other social media platforms remain similarly silent on the topic.

The definition of shadow banning also quickly becomes unclear once the methods of displaying content on sites are examined. Claims of shadow banning have been recast as definitional misunderstandings, such as when Twitter was accused of shadow banning conservative accounts by removing their auto-population in the search bar (while not removing the accounts or their content at all). Twitter denied that this was shadow banning and explained it as a kind of content filtering. Shadow banning is also intertwined with “deboosting”, a practice in which Facebook allegedly suppresses livestreams on its platform by not sending notifications or by preventing users from accessing the stream. Facebook denies that it implements deboosting, but the definitional confusion the term invites still stands. Does declining to notify users that someone is beginning a livestream mean the streamer has been shadow banned? What about when accounts are removed from auto-populating in the search bar? The term shadow banning suffers from ambiguity but seems to be employed primarily when the expected features of a platform are applied in some cases but not others. Social media companies might argue that deprioritizing and filtering out content for certain users is something more innocuous than shadow banning, but for the purposes of evaluating the practice’s efficacy these cases are important to consider. They are all types of visibility restrictions that intentionally decrease exposure to a user’s content. Of course, the more obvious cases are those in which specific user accounts have their content rendered inaccessible to audiences that the platform would normally allow them to reach. We will consider all these instances under the umbrella of shadow banning because they all inequitably restrict content without explanation, despite the attempts of companies to reason otherwise.

Most shadow banning takes place automatically via algorithms on a social media platform. Dorsey’s testimony did not concede that Twitter participated in “shadow banning,” but he did admit that its algorithm unfairly filtered around 600,000 user accounts from auto-populating in the search bar. This was a type of shadow ban driven by algorithmic decisions designed to filter out spam. These automatic practices drive most of the content moderation for online spam, and companies often claim that alleged shadow bans are the accidental overreach of algorithmic content filtering. The fundamentally curated and filtered nature of platforms gives them a defensible rationale for why they might unfairly shadow ban user content: “It was the algorithm’s fault.” Blame aside, it’s important to note that the logic of shadow banning allows social media platforms to sidestep confrontation with users. This is part of the incentive to use shadow banning as a tool.
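
To make the search-bar example concrete, here is a minimal sketch, in Python, of how an auto-complete pipeline could silently drop flagged accounts without any signal to the account holder. Everything in it, the spam scores, the threshold, the function names, is an invented assumption for illustration, not a description of Twitter’s actual system.

```python
# Illustrative sketch of a search auto-complete filter that silently excludes
# flagged accounts. All names, fields, and thresholds are hypothetical; they
# do not describe any platform's real implementation.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    spam_score: float  # hypothetical score from an upstream spam classifier

SPAM_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def autocomplete(query: str, accounts: list[Account], limit: int = 5) -> list[str]:
    """Return handle suggestions for a query, silently dropping flagged accounts."""
    matches = [a for a in accounts if a.handle.lower().startswith(query.lower())]
    # The "shadow" step: flagged accounts are simply never suggested. Their
    # profiles and posts remain reachable directly, so the account holder
    # sees no sign that anything has changed.
    visible = [a for a in matches if a.spam_score < SPAM_THRESHOLD]
    return [a.handle for a in visible[:limit]]

if __name__ == "__main__":
    accounts = [
        Account("alice_news", 0.1),
        Account("alice_bot_4821", 0.95),  # flagged upstream as likely spam
    ]
    print(autocomplete("alice", accounts))  # ['alice_news']
```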

Shadow banning, when it has been admitted to, is justified as a way of curbing online misinformation by fighting spam accounts, bots, and other types of posting that are considered platform misuse. Bots and spam accounts traffic in a variety of misinformation, from political propaganda to conspiracies. There are two overarching theories behind shadow banning’s intent. One is to “quarantine” a user to a smaller set of social nodes. When Reddit quarantined an anti-vaccine and anti-government subreddit called “r/NoNewNormal”, it prevented the subreddit’s posts from appearing on the front page, where they could be seen by users not already affiliated with the community. This theoretically cuts off the content being posted in the community from gaining additional traction, while still allowing the existing community to interact. Shadow bans also try to prevent cycles of platform misuse in which banned users immediately create a new account to circumvent the ban. By not informing users that their account has been banned or quarantined, the account continues to shout misinformation into a void where it cannot reach other users. For bots and spam accounts that are vectors of misinformation, shadow bans would be fairly effective. Bot accounts vie for influence through the quantity and reach of their posts. Cordoning off bot and spam posts without outright banning the accounts helps ensure that bot creators will not simply create a new username for the bot to be associated with. Lastly, shadow banning became a prominent strategy on sites like Twitter in response to foreign influence operations such as Russian spam accounts. Shadow banning foreign accounts that seek to infiltrate the US’s information ecosystem is one way Twitter can restrict bad actors from influencing audiences.
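
As a rough illustration of the quarantine logic described above, the following hypothetical sketch serves a community’s posts to its existing members while keeping them off the general front page. The data structures and field names are assumptions made for the example; they do not reflect Reddit’s real implementation.

```python
# Hypothetical sketch of quarantine-style visibility: posts from a quarantined
# community still appear to existing members but never surface on the general
# front page. Field and function names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    is_quarantined: bool = False
    subscribers: set[str] = field(default_factory=set)

@dataclass
class Post:
    title: str
    community: Community

def front_page(posts: list[Post], viewer: str) -> list[Post]:
    """Build a viewer's front page, quietly omitting quarantined communities
    the viewer has not already joined."""
    visible = []
    for post in posts:
        c = post.community
        if c.is_quarantined and viewer not in c.subscribers:
            continue  # existing members still see the content; outsiders never do
        visible.append(post)
    return visible

if __name__ == "__main__":
    quarantined = Community("NoNewNormal", is_quarantined=True, subscribers={"existing_member"})
    general = Community("news")
    posts = [Post("vaccine conspiracy", quarantined), Post("daily headlines", general)]
    print([p.title for p in front_page(posts, "new_user")])         # ['daily headlines']
    print([p.title for p in front_page(posts, "existing_member")])  # both posts
```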

Shadow banning will still not address some problems of misinformation. The other theory underlying shadow bans is to disincentivize the user from posting content by cutting them off from the “currency” of social media, such as likes, comments, and shares. If a user’s posts receive no interaction, then it’s an implicit signal that their content is not being received positively or seen by many people, which may curb their use. Whether this would push users to post more extreme content to attract greater attention remains to be seen, but it is a potential backfire effect. This angle of shadow banning would do little to curb bot-driven misinformation, which is not motivated by the feedback of other users and is posted regardless of its audience reception. Beyond bots and spam accounts, shadow banning can do little to address legitimate accounts that tout views that may radicalize people but are not considered spam. This includes posts that legitimize extremists or conspiracy theories while not being conspiratorial or extremist themselves.

Shadow banning targets multiple underlying issues that make misinformation problematic and even dangerous. For example, shadow bans intend to stop the proliferation of falsehoods, which have a distinct informational advantage online. Social scientists found in one study that “…falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” Notably, this study found that bot accounts accelerated the spread of both true and false news at the same rate, which means shadow banning bot accounts could be less relevant than shadow banning human users. Another problem of misinformation that shadow bans look to ameliorate is people’s preference for confirming their own biases, which can make misinformation more compelling. Scholar Judith Mollar defines this as “selective exposure theory”, which asserts that “…individuals prefer to expose themselves to content that confirms their belief systems because dissonant information can cause cognitive stress they would rather avoid.” When shadow bans prevent bot or user accounts from repetitively spamming misinformation by making it invisible to others, they help stop reinforcing cycles of confirmation bias. Yet one cause of problematic misinformation that shadow bans do not directly address is people’s perception of how misinformation is produced and consumed. Scholars Sophie Lecheler and Jana Laura Egelhofer note this distinction between the actual consumption of misinformation and the markedly different perception of information as “fake news,” which stems from citizens worrying about being manipulated in a polarized political environment. They argue that people tend to overestimate the share of misinformation in their media consumption. This distinction is important because the perception of misinformation itself influences public opinion of media and expert opinion. Shadow bans certainly do not function as reality checks for the actual prevalence of misinformation and are entirely focused on removing content from public conversation. They are purely tools to restrict visibility. Given this, how effective are they at targeting these types of misinformation and their causes?

Examining specific cases of shadow banning and measuring its efficacy is of course challenging given the practice’s lack of acknowledgement by online platforms. Allegations of shadow banning are almost always anecdotal, evidenced through end users’ observations as they interact with the platform. Interestingly, one study assessed the plausibility of shadow banning being implemented on Twitter (which Twitter denies, as we know) through a statistical approach rather than through anecdote. Twitter has claimed that any instances of alleged shadow banning amount to “bugs” in the algorithms that filter users and content. Yet this team of scholars found that “…bans appear as a local event, impacting specific users and their close interaction partners, rather than resembling a (uniform) random event such as a bug.” In other words, certain user characteristics appeared so systematically among shadow banned users that the bans are likely a non-random, and perhaps intentional, targeting of accounts. If this holds, shadow banning cannot be explained away. Online moderation scholar Dr. Carolina Are analyzed the implementation of shadow bans on Instagram as a tool for restricting sexually risqué content such as nudity and pole dancing. Though not an effort to quell misinformation specifically, the analysis exposes the likely existence and impact of shadow bans on Instagram. Dr. Are’s narrative of managing a popular pole dancing account catalogs how her growing follower count curiously led to less, not more, engagement with her content. She notes that a common shadow ban approach that Instagram has admitted to is the censoring of specific hashtags, which runs counter to the commonly understood, community-driven function of hashtags. Dr. Are acknowledges that her conjecture about shadow banning from the user perspective is a result of the opaque policies of social media platforms like Instagram on content performance and visibility. Assuming Instagram did implement shadow bans on her account, the bans were effective at dissuading additional posts because of the “…sense of powerlessness arising from content posted into a void.” Shadow banning, though perhaps troubling in execution, seemed to work.
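
The statistical intuition behind the Twitter study’s “local event” finding can be illustrated with a toy permutation test: if bans were uniformly random bugs, the interaction partners of banned users should be banned at roughly the background rate, whereas targeted bans will cluster together. The sketch below is a simplified stand-in under that assumption, not the cited authors’ actual methodology.

```python
# Toy permutation test for the "bug vs. targeted ban" question: if bans were
# uniformly random, banned users' interaction partners should be banned at
# roughly the background rate. Strong clustering suggests non-random targeting.
# This is an illustrative sketch, not the cited study's methodology.

import random

def neighbor_ban_rate(edges: list[tuple[str, str]], banned: set[str]) -> float:
    """Fraction of interaction partners of banned users who are also banned."""
    partners = [v for u, v in edges if u in banned] + [u for u, v in edges if v in banned]
    if not partners:
        return 0.0
    return sum(p in banned for p in partners) / len(partners)

def permutation_test(edges, users, banned, trials=1000, seed=0):
    """Compare observed clustering of bans against random ban sets of equal size."""
    rng = random.Random(seed)
    observed = neighbor_ban_rate(edges, banned)
    exceed = 0
    for _ in range(trials):
        random_bans = set(rng.sample(users, len(banned)))
        if neighbor_ban_rate(edges, random_bans) >= observed:
            exceed += 1
    return observed, exceed / trials  # small p-value -> bans cluster more than chance

if __name__ == "__main__":
    users = [f"u{i}" for i in range(100)]
    # A tight cluster of mutually interacting accounts, plus sparse background edges.
    edges = [(f"u{i}", f"u{j}") for i in range(10) for j in range(10) if i < j]
    edges += [(f"u{i}", f"u{i+1}") for i in range(10, 99)]
    banned = {f"u{i}" for i in range(10)}  # the whole cluster is banned together
    obs, p = permutation_test(edges, users, banned)
    print(f"observed neighbor ban rate: {obs:.2f}, p-value vs. random bans: {p:.3f}")
```

A real analysis would control for network structure and user behavior, but even this toy version shows why heavily clustered bans are hard to square with the “it was a bug” explanation.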

The social media platform TikTok has come under fire for similar shadow banning practices that restrict hashtags and video visibility, and for allegedly removing videos altogether without notification. Black content creators have claimed that TikTok shadow banned certain users and videos associated with the #BlackLivesMatter movement. The platform admitted to what it termed a “glitch” that affected the view count displays of videos with the hashtags #BlackLivesMatter and #GeorgeFloyd. The effect was noticed amid the skyrocketing attention those hashtags received during the summer of 2020, and it ran counter to the company’s statements of support for Black content creators. Black content creators also claim that TikTok removed videos containing this kind of content without the user’s permission or any notification. TikTok has also conceded that it shadow banned videos of disabled, queer, and fat content creators in a bizarre attempt to implement “anti-bullying practices”. It’s notable that this shadow ban was done manually, through a system of moderators flagging specific user accounts. Regarding automated moderation on the platform, TikTok concedes that its algorithms create a risk of “…presenting an increasingly homogenous stream of videos…” but leaves the extent of its filtering practices at that. The platform’s track record is a mélange of both automated and manual shadow banning practices, none of which has yielded insight into how the platform implements content filters and bans.

In terms of efficacy, it’s unclear how well shadow bans work to combat online misinformation. There’s a kind of innate efficacy to shadow bans because they necessarily restrict content from being seen by others. But this is “effective” only insofar as shadow bans target the right accounts, i.e., accounts that purvey misinformation. There are scant windows into the platform perspective on shadow banning practices, but the social media giant Reddit provides some context. Reddit implemented shadow banning to quickly hide spam on its platform and treated it as an official moderation strategy rather than denying its existence. The practice appeared to function on a manual basis, meaning Reddit admins (employees) were responsible for implementing shadow bans on user accounts. After moderating in this fashion, Reddit announced in 2015 that shadow bans had “outgrown their usefulness” and that they were “…great for dealing with bots/spam rings, but woefully inadequate for real human beings,” and it replaced its explicit shadow ban policy with traditional account suspensions. The effectiveness of shadow banning was evaluated based on the kinds of users it targeted rather than by the content it removed. Misinformation was likely a large part of both the bot and the human content sharing that Reddit shadow banned, but the scale of this is still in question.
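
One way to make “targeting the right accounts” concrete is to treat it as a question of precision and recall: of the accounts a platform shadow bans, what share actually purvey misinformation, and what share of the misinformation accounts get caught at all? The back-of-the-envelope sketch below uses invented numbers purely for illustration; no platform publishes these figures.

```python
# Back-of-the-envelope framing of ban "efficacy" as precision and recall.
# All numbers are invented for illustration; no platform publishes these figures.

def precision_recall(banned_misinfo: int, banned_legit: int, total_misinfo: int):
    """Precision: share of bans that hit misinformation accounts.
    Recall: share of all misinformation accounts that were banned."""
    banned_total = banned_misinfo + banned_legit
    precision = banned_misinfo / banned_total if banned_total else 0.0
    recall = banned_misinfo / total_misinfo if total_misinfo else 0.0
    return precision, recall

if __name__ == "__main__":
    # Hypothetical: 9,000 of 10,000 shadow bans hit misinformation accounts,
    # out of 50,000 misinformation accounts active on the platform.
    p, r = precision_recall(banned_misinfo=9_000, banned_legit=1_000, total_misinfo=50_000)
    print(f"precision={p:.0%}, recall={r:.0%}")  # precision=90%, recall=18%
```

In this invented scenario the bans are precise but reach only a small fraction of the problem, one way in which the innate efficacy of shadow bans can still fall short.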

On the question of efficacy, Reddit noted that the practice created a toilsome system of ambiguous bans and user appeals to moderators that did not seem to serve the purpose the platform intended. The benefit of shadow bans over account suspensions was that spammers, unaware they had been banned, would not look for ways to circumvent a suspension. But Reddit decided the juice was not worth the squeeze and began operating under a system of formalized account suspensions. This speaks to critics’ continual calls for more transparency around shadow banning practices by social media platforms. Platforms have little incentive to make these policies explicit, and some would argue they are actively disincentivized from doing so. Reddit co-founder Steve Huffman noted that the company’s content policies function best when they are “specifically vague” because it prevents bad actors from exploiting loopholes in the policy. This argument, that ambiguity is security, speaks to the efficacy of shadow banning content and helps justify the practice.

The future of understanding shadow banning relies on increased acknowledgement and transparency on the part of social media platforms. The intervention’s effectiveness is challenging to measure primarily because measurement is predicated on platforms recognizing that shadow banning is used as a moderation method at all, a fundamental step the major social media sites have yet to take. There are thus massive gaps in the empirical data on shadow banning, including the extent to which it tackles the problems of misinformation specifically, how widespread its usage is, and whether it is applied equitably. Future studies will be bound to the datasets available, such as the one used in the aforementioned Twitter study, which provided convincing parameters for identifying shadow banned users. Future studies could of course take the format of Dr. Are’s Instagram analysis, which is narrative driven and examines shadow banning through end-user experience. But ideally, rather than this kind of user-focused analysis, we would gain more backend insight into how shadow ban decisions are made by algorithms or human moderators within the architecture of platforms. There are very few hooks between shadow bans and the types of content the accounts are banned for. If we hope to measure the extent to which shadow bans address misinformation specifically, studies and data will need to include more context on the rationale for each ban. Theoretically, shadow bans are applied to groups of user accounts based on some rule, criteria, or characteristic that platforms want to target. The link between these characteristics and the shadow ban itself is what’s needed. It’s very possible that shadow banning is an effective intervention for misinformation and one that we ought to employ despite some ethical criticism. These ambitions are tall orders for a social media landscape that has many reasons to resist them, but perhaps continued calls for transparency and accountability will force companies to pull the practice of shadow banning into the spotlight.
