Silenced, for now
The moves by Twitter, Facebook, and Snapchat to remove or suspend Donald Trump’s accounts, and decisions by Google, Apple, and Amazon that led to a shutdown of Parler, raise questions about the unchecked power of social media and the future of the platforms. University of Michigan experts weigh in.
Bans, restrictions have mixed impact following Capitol riot
The amount of potential misinformation fell on at least one social media platform after actions to suspend or shut down thousands of accounts, including President Donald Trump’s, in the wake of the Capitol riot Jan. 6, according to a University of Michigan measure of “iffy content.”
Between Jan. 5 and Jan. 13, the U-M Center for Social Media Responsibility’s Iffy Quotient on Twitter fell from 14.8 percent to 11.5 percent, while on Facebook it rose slightly, from 10.9 percent to 11.6 percent.
This means that fewer URLs from iffy sites made it into the top 5,000 most popular URLs on Twitter in the days immediately after the platform permanently banned the president and suspended some 70,000 user accounts.
The center’s Iffy Quotient, produced in partnership with NewsWhip and NewsGuard, measures the fraction of the most popular URLs on Facebook and Twitter that come from iffy sites that often publish misinformation. NewsWhip determines the most popular URLs each day, while NewsGuard provides website ratings, with Media Bias/Fact Check providing ratings for sites unrated by NewsGuard.
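The metric described above is, at its core, a simple fraction. A minimal sketch of the idea follows; this is purely illustrative, since the actual pipeline relies on NewsWhip’s daily popularity data and site ratings from NewsGuard and Media Bias/Fact Check, and the domain list here is invented.

```python
# Illustrative sketch of an "Iffy Quotient"-style metric: the fraction of a
# day's most popular URLs that come from domains rated as iffy. The real
# Center for Social Media Responsibility pipeline uses NewsWhip popularity
# data and NewsGuard / Media Bias/Fact Check ratings, not this toy logic.
from urllib.parse import urlparse

def iffy_quotient(popular_urls, iffy_domains):
    """Return the fraction of popular_urls whose domain is in iffy_domains.

    popular_urls: list of URL strings (e.g., the day's top 5,000 URLs).
    iffy_domains: set of domains that often publish misinformation.
    """
    if not popular_urls:
        return 0.0
    iffy_count = sum(
        1 for url in popular_urls
        if urlparse(url).netloc.lower() in iffy_domains
    )
    return iffy_count / len(popular_urls)

# Toy example with made-up domains: 1 of 4 popular URLs is from an iffy site.
top_urls = [
    "https://example-news.com/a",
    "https://iffy-site.net/b",
    "https://example-news.com/c",
    "https://another-outlet.org/d",
]
print(iffy_quotient(top_urls, {"iffy-site.net"}))  # 0.25
```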
“We shouldn’t overlook the fact that Facebook’s Iffy Quotient was already lower than Twitter’s on Jan. 5 and, on average, has actually been lower than Twitter’s over almost the last two years,” says Paul Resnick, director of the center and the Michael D. Cohen Collegiate Professor of Information. “Still, it is encouraging to see a marked drop in Twitter’s Iffy Quotient after they very publicly intervened on their platform.”
In addition to seeing fewer iffy sites among the most popular 5,000 URLs on Twitter, relative engagement with iffy content was down, albeit barely so on Facebook. On Jan. 5, the engagement share of iffy content on Twitter was 24.3 percent but by Jan. 13 it was down to 9.5 percent. On Facebook, the engagement share was 16.9 percent on Jan. 5 and 16.8 percent on Jan. 13.
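Engagement share weights each URL by how much interaction it receives rather than counting URLs equally. A hedged sketch, with hypothetical engagement numbers (the center’s actual engagement data comes from NewsWhip):

```python
# Illustrative sketch of an engagement-share metric: the fraction of total
# engagement (shares, likes, comments, etc.) captured by URLs from iffy
# domains. Numbers and domains below are hypothetical.
from urllib.parse import urlparse

def engagement_share(url_engagements, iffy_domains):
    """Return the share of total engagement going to URLs from iffy domains.

    url_engagements: dict mapping URL -> engagement count.
    iffy_domains: set of domains that often publish misinformation.
    """
    total = sum(url_engagements.values())
    if total == 0:
        return 0.0
    iffy_total = sum(
        count for url, count in url_engagements.items()
        if urlparse(url).netloc.lower() in iffy_domains
    )
    return iffy_total / total

# Toy example: the iffy URL draws 300 of 1,000 total engagements.
engagements = {
    "https://iffy-site.net/x": 300,
    "https://example-news.com/y": 700,
}
print(engagement_share(engagements, {"iffy-site.net"}))  # 0.3
```

Two days can have the same Iffy Quotient but very different engagement shares, which is why the center reports both.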
“What this means is that over this eight-day period on Twitter, the URLs that were most engaged with were less and less often from iffy sites. In other words, there was more robust engagement with iffy sites’ URLs on Twitter before they announced that they were taking some specific actions,” says James Park, assistant director of the center. “Naturally these things fluctuate, but it’s noteworthy to clearly see this sort of result on Twitter after they’ve taken some direct action.”
“One particular value we believe the Iffy Quotient has — illustrated by recent events — is to help assess whether there are measurable effects that follow the platforms making announcements or taking actions,” Resnick says.
Since it was launched in 2018, the Iffy Quotient has measured content around elections, the COVID-19 pandemic, and incidents of racism, protests, and riots.
A formative moment
Cliff Lampe, professor of information, studies the social and technical structures of large-scale technology-mediated communication, working with sites like Facebook, Wikipedia, Slashdot, and Everything2. He has also been involved in the creation of multiple social media and online community projects. His current work looks at how the design of social media platforms encourages moderation, misinformation, and social development.
“This is a formative moment for social media companies,” he said. “They have the obligation and right to police their platforms for the type of content they want to host. Still, many people feel a lack of agency, since the power of the platform can feel overwhelming to the individual and group. How social media platforms navigate this over the next few months could define the industry for a decade.”
‘Deplatforming Parler was a good start’
Libby Hemphill, associate professor of information and associate director of the Center for Social Media Responsibility, is an expert on political communication through social media, as well as civic engagement, digital curation, and data stewardship.
“Finally deplatforming Trump was a big move for social media platforms,” she says. “Coupled with other actions like shuttering QAnon groups and propaganda accounts ahead of the elections in Uganda, I hope that we’re seeing platforms step up to meet their public obligations. However, I don’t expect them to continue holding folks accountable unless extremists and disinformation campaigns stay bad for business.
“We should definitely consider whether three companies ought to have this much power over our communication networks, but Apple, Google, and Amazon finally flexed their market muscles. They could do more to root out apps and customers who violate their terms, but deplatforming Parler was a good start.”
Considering the broader social and cultural context
Sarita Schoenebeck, associate professor of information, specializes in research on social computing, social media, and human-computer interaction. Several of her studies have focused on online harassment.
“For years, platforms have evaluated what kinds of content are appropriate or not by evaluating the content in isolation, without considering the broader social and cultural context that it takes place in,” she says. “This means harmful content persists on the site, and content that should be acceptable may be removed. We need to revisit this approach. We should rely on a combination of democratic principles, community governance, and platform rules to shape behavior.
“We also should center commitments to equity and justice in how platforms regulate behavior. Allowing some people to engage in hate speech and violence simply means that others can no longer participate safely or equitably, and that is not the kind of society — whether online or offline — that we should aspire to.”
The potential for regulation
Josh Pasek is an associate professor of communication & media and political science, faculty associate in the Center for Political Studies, and core faculty for the Michigan Institute for Data Science at U-M. His current research explores how both accurate and inaccurate information might influence public opinion and voter decision-making and evaluates whether the use of online social networking sites such as Facebook and Twitter might be changing the political information environment.
“Most critically, tech companies are afraid of the potential for regulation,” he says. “When they were accused of spreading misinformation, they reacted by providing minimal fact-checking on claims that might mitigate that criticism. When they were being criticized for supposedly stifling voices on the political right, they bent over backward to ensure their policies would not have a disproportionate political impact even though the violations of those policies were far from politically neutral. And when they read the tea leaves, it had become clear that the most likely source of future regulation was from a Democratic Congress that would be more worried about dangerous information and incitement than ensuring that even the extremes of the political spectrum had a platform.
“Social media companies really don’t want to play a police role, but they are far more worried about regulation than about playing that role. That said, it may be a good thing, because it is really not clear that there are any other actors that we would trust more to do it.”