Media’s misinformation bubble
Platforms’ spread of misleading information has drastic consequences
Obsessively refreshing their Twitter feeds, Americans scanned their screens for more than four days after the presidential election. Following the announcement of former Vice President Joe Biden’s projected win on Nov. 7, President Donald Trump launched tweets to his 88.5 million followers declaring that he had won the presidency and that the Democratic Party was attempting to “steal the vote”.
Twitter responded by flagging Trump’s victory tweets and voter fraud claims as “misleading” and “disputed”. Still, #StopTheSteal event groups on Facebook garnered more than 350,000 users. Members of the groups called for violence and civil unrest in support of President Trump as protests formed in battleground states like Pennsylvania.
Despite Facebook preventing the group from appearing in election searches due to concerns of violence, #StopTheSteal spread across multiple conservative media platforms. Conspiracy theorist groups and so-called “fake news” have become commonplace on social media.
In fact, a study conducted by the Massachusetts Institute of Technology found that fake or misleading news spreads six times faster than true news on Twitter.
The technological architecture of social media is largely responsible for this spread. Media companies develop a model of each individual user, taking into account their internet history, likes and followers, and use this information to suggest similar content.
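A minimal sketch of the kind of content-based ranking described above might look like the following. The tags, function names and scoring here are hypothetical and not drawn from any platform’s actual code; they are meant only to illustrate how a profile built from past engagement steers what a user sees next.

```python
# Toy content recommender: build a profile from a user's past engagement,
# then rank candidate posts by similarity to that profile, so items that
# resemble what the user already engaged with surface first.
from collections import Counter
import math

def profile_from_history(engaged_posts):
    """Combine the topic tags of posts a user engaged with into one weighted profile."""
    profile = Counter()
    for tags in engaged_posts:
        profile.update(tags)
    return profile

def similarity(profile, post_tags):
    """Cosine-style similarity between the user profile and a candidate post's tags."""
    overlap = sum(profile[t] for t in post_tags)
    norm = math.sqrt(sum(v * v for v in profile.values())) * math.sqrt(len(post_tags)) or 1.0
    return overlap / norm

def recommend(profile, candidate_posts, k=3):
    """Return the k candidate posts most similar to the user's profile."""
    ranked = sorted(candidate_posts, key=lambda p: similarity(profile, p["tags"]), reverse=True)
    return ranked[:k]

# Example: a user who mostly engages with election-fraud content keeps getting more of it.
history = [{"fraud", "election"}, {"fraud", "protest"}, {"election", "recount"}]
profile = profile_from_history(history)
candidates = [
    {"id": 1, "tags": {"fraud", "recount"}},
    {"id": 2, "tags": {"gardening"}},
    {"id": 3, "tags": {"election", "protest"}},
]
print([p["id"] for p in recommend(profile, candidates, k=2)])  # [1, 3]
```

Because the ranking rewards similarity to what a user already engaged with, content outside that profile (the “gardening” post above) rarely surfaces, which is the feedback loop critics describe as an information bubble.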
The implications of this proliferation of misleading information are concerning, according to tech industry employees interviewed in the Netflix documentary “The Social Dilemma”.
Roger McNamee, an early investor in Facebook, said the danger lies in this isolated bubble of information.
“Over time you have the false sense that everyone agrees with you because everyone in your news feed sounds exactly like you,” McNamee said. “Once you’re in that state, it turns out you’re easily manipulated.”
A study analyzed by the University of Colorado at Boulder found that most misleading information is spread by users with extreme ideological views: “In all, about one-fifth of users at the far ideological extremes were responsible for sharing nearly half of the fake news on the two platforms.”
Social media’s bias toward proliferating false information relies on this stark difference in opinions: polarizing or untrue information makes companies more money than the truth does. Additionally, online conspiracy theories and the widespread weaponization of social media can lead to harmful offline effects, said Senior Internet Researcher Cynthia M. Wong.
“The most prominent example that’s got a lot of press is what happened in Myanmar,” Wong said. “Facebook gave the military and other bad actors a new way to manipulate public opinion and to help incite violence against the Rohingya Muslims. It included mass killings, burning of entire villages, mass r**e, and other serious crimes against humanity.”
Beyond the Rohingya genocide, which has been ongoing in Myanmar since 2017, hate crimes organized and encouraged via media platforms have also occurred in the United States. This past July, Holocaust survivors took to Facebook, urging founder Mark Zuckerberg to censor offensive posts denying the historical reality of the mass murder and incarceration of Jewish individuals.
“I don’t believe that our platform should take that down because I think there are things that different people get wrong,” Zuckerberg said.
The Anti-Defamation League defines Holocaust denial as a form of hate speech, and organizations like the ADL argue that it fuels offline violence and hate crimes targeting the Jewish community.
ADL Regional Director Gary Nachman said social media has aided the spread of both positive and negative information.
“However, hate speech that threatens others is dangerous and has no place in social media, just as calling someone on the phone and threatening them tests legal boundaries,” Nachman said.
The ADL encourages those struggling with online hate speech to visit their website for resources.
“It is not just the social media platforms that have a responsibility to monitor hate,” Nachman said. “By understanding sources and fact-checking, we can aid in stopping the proliferation of these conspiracy theories and falsehoods. We as individuals have a responsibility to call out discrimination, bigotry, and hate.”