Disinformation: an existential threat to democratic ideals


According to the Washington Post, Donald Trump made 30,573 “false or misleading claims” as president. During Trump’s four years in office, lying became normalized as an integral part of a larger political dilemma: the creation and spread of disinformation as a political strategy. As access to social media platforms becomes ubiquitous, a preponderance of disinformation is being generated and spread on Facebook, Twitter, and YouTube.

In his book “Information Wars,” Richard Stengel defines disinformation as the “deliberate creation and distribution of information that is false and deceptive in order to mislead an audience.” Disinformation takes a variety of forms: propaganda, oppositional research, fake news, prophecy, deepfakes, and conspiracy theories. It thrives under circumstances of intense societal trauma and complexity. The current COVID-19 pandemic, for example, has upended people’s sense of agency, threatened societal well-being, and deepened political polarization.

A Pew Research Center study conducted in mid-March 2020, early in the pandemic, found that nearly half of Americans (48%) said they had been exposed to “at least some information that seemed completely made up, on subjects ranging from the origins of the virus to its risks and potential cures.” Public health officials bore the brunt of much of this disinformation for imposing what many conservatives considered infringements on personal liberty, such as social distancing requirements, the closing of schools, and mandatory masking in stores. False information about how COVID-19 spread and what precautions were best to take weakened any chance of national unity in fighting the pandemic.

The 2021 Edelman Trust Barometer found growing mistrust in a wide array of institutions in American life. This was especially evident with the news media. According to the report, only 25 percent of those surveyed claimed they practiced good information hygiene: engaging with the news, avoiding information echo chambers, verifying information, and not spreading false information. The report concluded that an “epidemic of misinformation and widespread mistrust of societal institutions and leaders” is currently underway.


Algorithms and hate

A Pew Research Center study in January 2021 found that 53 percent of adults in the United States say they get their news from social media “often” or “sometimes.” The study also found Facebook is the regular source of news for 36 percent of Americans. Social media algorithms have a propensity to promote memes and conspiratorial content that push conventional boundaries. The more extreme the content, the more likely it is to go viral, especially if readers are unwilling to analyze that content carefully. Unlike trained editors, algorithms do not act as gatekeepers and are incapable of evaluating content critically. In the social media ecosystem, clickbait is valued over veracity. The motivating force on most social media platforms is the emotions of the receiver; the more extreme the content, the more likely it is to be read and shared. Numbers of likes and retweets translate into increased advertising revenue. Extreme content sells.


The insurrection of Jan. 6 in Washington, D.C., was a harbinger of political messaging that embraces extremism and lies while rejecting truth. Research by Sander van der Linden of the University of Cambridge has found that conspiracy theories, for example, are prevalent among all extremist groups but are more likely to be accepted by conservatives than by liberals. In fact, the spread of conspiracy theories has become so prevalent that the Department of Homeland Security’s Jan. 27 Bulletin identified “false narratives” as a threat to national security.

Peter Pomerantsev notes in his book “This Is Not Propaganda: Adventures in the War Against Reality” that “there is no middle between truth and lies.” Ignoring this dichotomy allows extremists to claim disinformation as a legitimate form of political discourse. Once disinformation becomes normalized, it can be used as a weapon to scapegoat targeted groups in society. For example, early in the outbreak of COVID-19, Trump and Republican leaders repeatedly claimed, without evidence, that the coronavirus was deliberately created and spread by China. A study undertaken by the University of San Francisco found that Trump’s March 16, 2020, tweet using the term “the Chinese Virus” led to a significant increase in anti-Asian hashtags. NPR reported 3,800 acts of discrimination against Asian Americans and Pacific Islanders since the beginning of the pandemic. Facebook has been implicated in ignoring hate speech aimed specifically at Muslims during communal unrest in Sri Lanka, and in acts of genocide against the Rohingya in Myanmar. An online environment with little public opprobrium toward hate speech increases the likelihood that impulsive acts of violence will occur.

The fight against disinformation

Social media platforms have an ethical responsibility to protect their users from content that promotes falsehood, hate speech, and the incitement to violence. Likewise, users of social media platforms should have the right to know how algorithms are using the information they access. To date, federal courts have generally ruled in support of Section 230 of the Communications Decency Act, passed by Congress in 1996. Section 230 gives social media companies legal immunity for user-generated content.

A pending piece of legislation, the Safe Tech Act, submitted by Democratic Sens. Mark Warner of Virginia, Mazie Hirono of Hawaii, and Minnesota’s Amy Klobuchar, would reform Section 230. If enacted, the act would hold social media companies accountable for disregarding consumer protection safeguards and civil rights online, addressing the growing number of incidents of cyberbullying, racist and homophobic content, discriminatory advertising, fraudulent money schemes, and the recruiting of political extremists by right- and left-wing radical groups.

It is evident that “Big Tech” has not been effective at self-regulating hate speech and extreme content on its platforms. For example, Avaaz, an online nonprofit activist network, released a March 18 report claiming Facebook allowed “267 pages and groups with a combined following of 32 million to spread violence-glorifying content during the 2020 elections.” Of those 267 pages and groups, 118 are still active on the platform, reaching 27 million followers. It is time for Facebook, Twitter, YouTube, and other social media companies to demystify their algorithms so users have full transparency into how those algorithms shape the content they access. Avaaz recommends social media platforms “detox” their algorithms so that hateful, fraudulent, and extremist content is downranked from the top of users’ feeds.

Hearings held by the House of Representatives on March 25 focusing on the role Facebook, Twitter, and Google played in the Jan. 6 insurrection did not yield definitive conclusions. However, they did suggest bipartisan support for holding the companies accountable for the spread of disinformation. The Jan. 6 attack on the U.S. Capitol building and the security fencing currently surrounding the Minnesota statehouse have brought forth an uncomfortable reality: Disinformation in all of its insidious forms is an existential threat to civil society and democratic ideals.

Thomas J. Scott is an adjunct professor in the Social Science Department at Metropolitan State University and professor of education at the School of Education at Saint Mary’s University of Minnesota.

