
Is There Too Much Focus on Disinformation?

Recently, the United Nations (UN) Security Council stated that more needed to be done to counter misinformation and disinformation targeting the UN’s 12 global peacekeeping operations, which have faced an increase in attacks on social media channels. Approved by all council members, the statement emphasizes that strategic communications need to be improved in order to better mitigate potential adverse impacts on the UN’s peacekeeping missions, pointing out that newer technologies are being leveraged to change the character of war.  Indeed, armed groups, militias, terrorists, and other actors are actively using disinformation as a “weapon of war” with varying degrees of success.

Disinformation (as well as misinformation and propaganda) has been well socialized into the public consciousness since the 2016 U.S. presidential election, when Russia, as well as other states, capitalized on the global connectivity of social media and other online platforms to disseminate stories, push false narratives, and otherwise sow discontent in target audiences. Since then, there has been much debate over disinformation and its offshoots, such as the fake news produced and spread throughout the Internet.  In the United States especially, calls to censor disinformation have been rampant, with pressure applied to social media outlets like Facebook and Twitter to proactively identify and remove accounts conducting questionable activities.  At least two bills with the stated purpose of countering disinformation have been drafted in Congress, though neither has progressed far in the legislative process.

Supporters of these efforts argue that disinformation is an influencing agent that poses a grave threat to democratic principles.  However, the missteps that have transpired over what is and is not disinformation call into question which body is the authority on such matters, and how some of the most glaring examples went unnoticed (the Steele dossier and a particular laptop spring to mind).  This raises the question of whether such concerns are more political in nature, with people more interested in how the information impacts their positions than in democracy writ large.  Regardless of political party, a recent survey showed that most Americans believe these platforms censor political viewpoints, with 75% of respondents not trusting social media to make fair content moderation decisions.  It’s clear that current approaches to combating disinformation are not working, at least not to the public’s satisfaction.

But to be fair, the Internet has expanded the information space to such a degree that curbing disinformation is a near impossibility.  Yes, social media and online platforms can help identify and remove rogue accounts in a never-ending game of whack-a-mole, a futile attempt to curtail the volume of questionable content produced.  But as observed over the past few years, the subjectivity involved in determining what counts as misinformation, disinformation, or propaganda facilitates draconian censorship in the name of protecting people, at the expense of burying legitimate news stories and effectively swaying the public’s perception.  That seems more in line with pushing specific narratives than with “protecting” people under the misguided assumption that burying information makes it go away.  Further exacerbating matters, censorship and platform-banning, no matter how noble, risk being purposefully weaponized to target legitimate but oppositional political, social, economic, religious, and ideological voices.

Therefore, it’s time to rethink how we address the disinformation conundrum, determine exactly what threat it poses, and codify what that threat looks like.  After all, despite all the investigations conducted after the 2016 election, there has been no conclusive evidence that any of the influence and disinformation efforts caused voters to change their votes (we know that Russian hackers did not change votes in the voting machines).  Even if Russia tried to influence voters to vote for Trump in 2016, as stated in a U.S. Intelligence assessment, its efforts to do so in 2020 did not succeed, raising the question: why? And if the answer is that people had become more aware of disinformation and influence campaigns, then the public proved it was capable of discerning information for itself without government or the private sector leading censorship or content-policing efforts.

In a highly charged political environment, the criteria by which fake news and disinformation are determined should not be left up to a government body.  This is inherently why the Disinformation Governance Board folded. Regardless of its authorities, such a board could be used as a weapon by whatever party was in power to target and censor news it deemed unfit for public consumption. This is the very weaponization employed by governments like China and Russia, which have long tried to keep certain types of information from reaching their populations.  They saw how the diffusion of information catalyzed the political turmoil and social discontent of the Color Revolutions and the Arab Spring, which instigated political change. For the United States or any Western democracy to adopt practices similar to those of these two authoritarian states would run contrary to freedom of speech principles, regardless of whether it was done with good intentions.  Some may believe that this could not happen in the United States, but there is no reason to run the risk or test those waters.

The public doesn’t need a government body telling it what it should and should not read.  But perhaps it does need help becoming a better consumer of what it does read.  If disinformation is that serious a problem, the government would be best served by empowering its democratic citizenry to better discern disinformation from legitimate content through ongoing education efforts and by raising awareness of current campaigns.  Even the Department of Homeland Security has produced an infographic that should be posted on every traditional and independent news outlet, social media platform, and online source where individuals receive news.  Censoring or banning information is a slippery slope that raises the question: where is the cut-off for such meddling?  Would the same principles be applied to product advertising or political campaign ads whose messaging could fall under the same “questionable” content rubric, or would they fall into a gray area of inconsistent consideration?

If we as voters are entrusted with the responsibility of electing our state and national leaders, it is incumbent on us to be responsible for how we make our decisions.  As discerners of information, we can make up our own minds as to what to question, what to accept, and what to ignore.  This is what holds us accountable for the decisions we make and guides what we do in the future should we fail to scrutinize information properly. In a democratic republic where freedom of speech is a cherished right, it is difficult to argue that the public should be able to access whatever it wants (as long as it’s legal) yet be absolved of any accountability for determining the legitimacy and accuracy of that information before it wittingly or unwittingly promulgates it to other audiences.  That mindset needs to change, because the better equipped the public is to assume this responsibility on its own, the less need there is for government involvement in First Amendment matters, and the fewer financial, human, and legal resources the government must divert from other, more pressing matters.

Emilio Iasiello

About the Author

Emilio Iasiello

Emilio Iasiello has nearly 20 years’ experience as a strategic cyber intelligence analyst, supporting US government civilian and military intelligence organizations, as well as the private sector. He has delivered cyber threat presentations to domestic and international audiences and has published extensively in such peer-reviewed journals as Parameters, Journal of Strategic Security, the Georgetown Journal of International Affairs, and the Cyber Defense Review, among others. All comments and opinions expressed are solely his own.