Moving the Needle on Health Disinformation

Preventing the spread of harmful health information requires transparency, not regulation

A man using a smartphone walks through the Canary Wharf financial district in the evening light, in London, United Kingdom, on September 28, 2018. REUTERS/Russell Boyce

In the first quarter of 2021, the most shared piece of content on Facebook in the United States was an article by the Sun Sentinel newspaper, syndicated by the Chicago Tribune. The headline read: "A 'healthy' doctor died two weeks after getting a COVID-19 vaccine; CDC is investigating why."  

While the full article involved good reporting, the headline was deeply irresponsible, and for many people, it was all they saw. I would argue this is a type of misinformation. But if Facebook had removed it, many people would have been very angry about press censorship.

The United States has a problem with health misinformation, from the use of dangerous or untested treatments to the depressing decline in trust in public health institutions. The way people responded to false claims about HIV, Ebola, and measles gave us some early warnings, but the COVID-19 pandemic has underscored the very serious consequences of low-quality information on people's beliefs and behaviors, from mask-wearing to vaccine uptake.  


The U.S. Surgeon General issued his Health Misinformation Advisory in July 2021, which said the United States needs a "whole-of-society" approach to mitigating the harmful effects of misinformation, from new education initiatives and more research to platform action and government oversight. The World Health Organization (WHO) continues to build an infrastructure to respond to the "infodemic," convening discussions on the topic and publishing reports that highlight the need for new skills and competencies for people working in health departments globally. But is any of this actually moving the needle?

Why Regulation Isn't the Answer to Disinformation 

Very often people discuss regulation as the only intervention that will have any significant impact on the proliferation of medical disinformation. In the United States, in particular, activists regularly call for updates to Section 230 of the Communications Decency Act. Passed in 1996, Section 230 states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In practice, this means technology companies are largely shielded from legal responsibility for much of the content they host on their sites.

Yet the world is swimming in content that is causing real harm, and this legal shield is frustrating. Why shouldn't platforms be held responsible for what they publish?

European Commissioner Mariya Gabriel speaks at the launch of the Commission's expert group on tackling disinformation and promoting digital literacy, in Brussels, Belgium, on October 12, 2021. REUTERS/Yves Herman/Pool

Gray Speech 

While I can understand calls for policy action, I have deep concerns that the unintended consequences of regulation that has not been fully thought through could create much more serious issues. This concern comes from a decade of studying the "bad stuff on the internet." Over the past few years, the type of misinformation circulating online has evolved. There isn't actually a significant amount of outright falsehood. Instead, there is a great deal of "gray speech": content based on a kernel of truth, but twisted in a way that makes things confusing and frustrating.

This change is partly a response to platform policies. It is important to acknowledge that the major technology companies have taken a number of concrete steps to limit these types of falsehoods on their sites, and as a result, the tactics and techniques of bad actors have evolved. Most notably, all the major platforms developed COVID-related misinformation policies—some better than others—in March and April 2020, and these have limited the number of outright falsehoods circulating on the internet. This has been accomplished through a mixture of partnerships with fact-checking organizations, which help decide which content to label, demote, or remove, and tougher internal moderation policies and more sophisticated detection systems.

The result has been an increase in the type of speech that goes right up to the line of platform content policies but doesn't cross it. For example, people share videos of first-person accounts of vaccine side effects. It's often impossible to know whether these are real or staged, and therefore difficult to determine what actions platforms should take with these types of videos. Should platforms assume bad intent and remove them, or assume good intent and leave them up?

False information also spreads when people jump on social media in search of more information about unproven treatments such as ivermectin and hydroxychloroquine. Are these genuine questions, or hoaxers trying to drive Google searches and purchases? What about rogue medical doctors—people wearing white coats advocating for alternative cures or supplements? It's not clear who bears responsibility for monitoring their content and making the call to remove it when it spurs risky or even deadly behaviors.

Those who call for increased regulation often have a sense that misinformation is obvious, that, like pornography, you know it when you see it. But medical disinformation isn't always easy to identify, and there isn't one definition that allows for easy detection.

There's also the question of what to do when science is unsettled. At the beginning of this pandemic, suggesting the virus was airborne was considered misinformation. Countries will never be able to have a shared definition of health misinformation that stays relevant during times of changing science and knowledge.  

Facebook CEO Mark Zuckerberg speaks on the challenges of fighting misinformation at Georgetown University in Washington, United States, October 17, 2019. REUTERS/Carlos Jasso

A Call for Information Transparency 

What countries do need, however, is transparency. Researchers need to know what is circulating on social networks and in the media, how many people are seeing this content, and what they are doing with it. Right now, researchers have almost no understanding of the information different people consume. With print media, I can do a quick database search and find all the articles that reference ivermectin. It's very difficult for me to do that same search for cable news, and impossible on social media. And the meager tools that do exist are being dismantled. For example, the content discovery and social monitoring platform CrowdTangle is on track to be discontinued by its owner, Facebook parent company Meta.

The lack of tools means researchers often have little to no idea which posts are being shared most frequently, and whether those posts come from sites known for conspiracy theories and disinformation, mainstream news outlets, or official government agencies. In the European Union, all major platforms publish quarterly transparency reports under a Code of Practice they have agreed to adhere to. Take a look: they read beautifully and suggest that there aren't really any problems with speech online.

This is because the platforms write their own transparency reports, and that's a problem. Instead, the United States needs independent third-party auditors to write those reports. Like a financial audit, independent bodies should investigate and assess how effectively the platforms manage information flows; remove, label, and demote low-quality information; and prioritize quality information. Instead of governments passing regulation based on hunches rather than data, countries need governments to insist on increased transparency paired with independent auditing mechanisms.

Only this type of oversight would allow us to really understand what people are actually seeing as part of their information diets. Researchers could then compare the actions taken by different platforms in order to decide which best practices should be implemented and scaled. With that level of understanding, and, hopefully, a parallel set of discussions at the societal level about the type of speech internet users want to see, regulation has a role to play. But not yet.

A pedestrian walks through an art installation of body bags as demonstrators protest against Facebook and COVID-19 disinformation outside Facebook headquarters in Washington, U.S., on July 28, 2021. REUTERS/Jim Bourg

Claire Wardle, PhD, is a professor at the Brown University School of Public Health, where she studies the impact of misinformation on society.

 
