Facebook’s Dilemma: Should Social Media Shoulder More Responsibilities?

Photo Courtesy of fortune.com | Facebook founder Mark Zuckerberg grapples with the challenges of verifying news in the digital age.

Sophia Zhu ‘18
Opinions Editor

Despite calling himself a “fake news fighter,” Trump, as many have argued, took advantage of the rising tide of fake news and extreme opinions voiced by alternative media sources during his campaign. In retrospect, some have pointed to people’s increasingly extensive use of social media as a significant contributor to the spread of fake news. The question is whether the time is right for us to establish a regulatory system that can fact-check information circulated on social media platforms.

This is probably the brightest time for media, as technological development has given more people the opportunity to make their views public. People no longer have to rely on a few news authorities to know what’s going on in the world. Rather, the power of messaging, publicizing and free expression has extended to so many people that modern news can resonate at an unprecedented speed and across an unbelievably wide range, both geographically and digitally.

Unfortunately, this could also be the darkest time for media, as the truth keeps getting more expensive while fewer people are willing to pay for it. The challenges facing traditional media organizations are enormous: the credibility of mainstream media has plummeted while the financial pressure on journalism has reached a new high, and as if that weren’t depressing enough, here comes the Trump era. Without well-resourced and professionally trained reporters doing the fact-checking, the democratization of news reporting is leading to a flood of false information in the public sphere. More people are getting informed through their newsfeeds on social media — a hotbed of misinformation that can go viral more easily than ever before.

Did the use of social media alone change the course of the past election? No, I don’t believe so. But research has shown that people’s biased opinions can be reinforced through their news consumption patterns. In this sense, media can still exert a significant amount of influence, especially in the polarization of political opinions. Social media sites, however, often refuse to admit the power they wield in the political world and thus evade their responsibilities as covert news providers. Following the election in November last year, Facebook CEO Mark Zuckerberg called the claims that fake news on Facebook influenced the election absurd, reiterating that Facebook is only a technology platform rather than a media company. That may have been the original idea, but as so often happens with innovations, unintended consequences start to reveal themselves down the road — the proliferation of fake news being one of them.

This month, a public letter by Zuckerberg showed a drastic change in his opinions on the role and influence of social media platforms such as Facebook. He underlined two major problems — filter bubbles and fake news — and pledged to work hard on countering them. Such acknowledgment and commitment are certainly praiseworthy and offer encouraging signs of social media corporations shouldering more social responsibility.

However, how to achieve this is still an open question. The first and most obvious obstacle is this: how can we filter news without institutionalizing censorship? Who should be given the power to decide which news should be taken down? Should it be a handful of private companies? Facebook currently uses a combination of user reports and third-party assessment by sites like factcheck.org and politifact.com to judge whether a story is untrustworthy. Is there bias? Could it result in meaningless sabotage between antagonistic groups?

Then, should the power be transferred to AI? At the very least, algorithms can never lie, but would they be accurate and reliable? As NPR has pointed out, can any system be effective enough to differentiate a philanthropic attempt to direct refugees to safe places from a human trafficking scheme? As Zuckerberg himself has admitted, automated filtering systems that can interpret news will take years to develop.

After all, there is always the danger that censorship will, over time, encroach on our valuable space of free expression. Would opinion and satire be correctly differentiated from fake news? Would our posts of genuine thoughts one day be banned in the name of suppressing extreme and inflammatory speech?

And then, there is also the dilemma that “political correctness” often faces. On one hand, Zuckerberg envisions a “safe community” where extensive use of personal settings will help users avoid encountering unpleasant content, such as hate speech, graphic violence and sexually explicit material that could impair safety. But on the other hand, he also recognizes the importance of exposing oneself to different perspectives, so as to counter filter bubbles. Could these strategies counteract each other? Can one truly escape the echo chamber of one’s personal social media space?

Censorship is a dirty word, one that often triggers alarm bells in people’s heads, along with the notion that certain people have the right to judge what’s good for you. But as an era of unavoidable transformation in the media world unfolds, new counterstrategies need to be developed before the decay of truth spirals out of control and the democratization of media turns into anarchy.
