Washington Post: What Google and Facebook must do about one of their biggest problems
Google could lose as much as $750 million because of a boycott by advertisers, according to Nomura Research. Companies are protesting the placement of their ads next to extremist and hateful content. An even worse offender is Facebook, which has enabled the propagation of fake news that may have influenced the outcome of the U.S. elections. The two companies have reaped massive profits from the spread of misinformation, yet they have claimed both ignorance of how their technology is misused and an inability to control it.
It wasn’t supposed to be this way. Social media was developed with the promise of spreading democracy, community and freedom, not ignorance, bigotry and hatred. By connecting billions of people and allowing them to share knowledge and ideas, it could have enabled them to achieve equality and justice, to expose what is wrong and to crowd-solve global problems. Instead, it has become a tool that technology companies use to mine data for sale to marketers, politicians and special-interest groups, who in turn use it to spread disinformation. It has created echo chambers in which people with similar views reinforce their ignorance and biases. And the loss of control over user data now affects not just the economic lives of Americans but also the political messages they receive on platforms such as Facebook.
Part of the problem is that a handful of large technology companies have formed oligopolies in connectivity and information; they are reaping incredible profits but forsaking the responsibilities that come with the power they have gained. Facebook, for example, has become a media company with more power and influence than The Washington Post and the New York Times. More than 65 percent of its users (44 percent of U.S. adults) get their news through its platform. Yet it claims not to be a publisher and to bear no responsibility for what appears on its platform or for what is done with its marketing data.
In light of the backlash, Facebook and Google have acknowledged the problem and pledged to do something about it. In a blog post, Facebook chief executive Mark Zuckerberg detailed plans to build a safe, informed, civically engaged and inclusive community that fulfills the benevolent promise of social media. But he says that Facebook can’t possibly review the billions of posts made on its platform every day, and that solving the problem will require artificial intelligence, which is “technically difficult” and will take years of research and development.
We can’t wait years. And it isn’t that this industry is powerless to control the misuse of its platforms; the problem is insufficient motivation. Go back a few years, for example, to when our inboxes were being flooded with spam. Tech companies created filters, blacklists and many other defenses, and virtually eliminated it. When marketers learned how to game Google’s page-ranking system by creating multitudes of websites that linked to one another, Google updated its algorithms to penalize the offenders. When it comes to making money, the tech industry always seems to be able to find a way, and it doesn’t take years.
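The actual spam defenses were proprietary and far more sophisticated, but a toy Python sketch conveys the basic blacklist-and-filter shape. The sender addresses, keywords and threshold here are invented for illustration:

```python
# Toy sketch of the blacklist-plus-filter approach that tamed email spam.
# Real systems combine IP reputation, statistical models and user reports;
# everything named below is made up for illustration.

BLACKLISTED_SENDERS = {"offers@bulk-mailer.example", "win@prize-spam.example"}
SPAM_KEYWORDS = {"viagra", "lottery", "wire transfer", "act now"}

def is_spam(sender: str, body: str, threshold: int = 2) -> bool:
    """Flag a message if its sender is blacklisted or it trips enough keywords."""
    if sender.lower() in BLACKLISTED_SENDERS:
        return True
    hits = sum(1 for kw in SPAM_KEYWORDS if kw in body.lower())
    return hits >= threshold

print(is_spam("offers@bulk-mailer.example", "Hello"))                   # True: blacklisted sender
print(is_spam("friend@mail.example", "You won the lottery, act now!"))  # True: two keyword hits
print(is_spam("friend@mail.example", "Lunch tomorrow?"))                # False
```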
Trolling is also a common problem on Twitter, where millions of automated bots are available for hire. You can openly purchase fake accounts and fake followers, and have other accounts spread marketing content as well as misinformation and hate. The company has the technology to disable these accounts, but it doesn’t use it, possibly because doing so would hurt its stock price.
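Twitter has not published how it detects automation, but even crude heuristics go a long way. A hypothetical sketch, with invented signals and thresholds:

```python
# Hypothetical heuristics for flagging automated accounts.
# Twitter's actual signals are not public; these thresholds are invented.

from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int

def looks_automated(a: Account) -> bool:
    """Count crude bot signals; flag the account if enough of them fire."""
    signals = [
        a.tweets_per_day > 100,                                       # inhuman posting rate
        a.account_age_days < 7 and a.following > 1000,                # brand-new, mass-following
        a.following / max(a.followers, 1) > 50,                       # follow-spam ratio
    ]
    return sum(signals) >= 2

print(looks_automated(Account(300, followers=10, following=2000, account_age_days=3)))   # True
print(looks_automated(Account(5, followers=400, following=350, account_age_days=900)))   # False
```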
There is no way of turning back these technologies; what we need is for their owners to steer them in a more positive direction. The problems of fake news and the spread of disinformation can be remedied by opening up social networks and vetting news more effectively; by using technology and imagination to solve the problems that technology has created through a lack of imagination.
In his book, “Whose Global Village?,” Ramesh Srinivasan explains that digital technologies are not neutral but are socially constructed: created by people within organizations, who in turn approach the design process on the basis of a set of values and presumptions. The platforms that have come to dominate our experience of the Internet, Google and Facebook, are for-profit companies, not democratic institutions. As they become the face of journalism and public information, they must be held accountable for their effects.
Srinivasan points out that invisible algorithms determine the content that social-media networks curate and present to us; they decide what is important. These algorithms take input from the people we associate with on social media, which is what produces the echo chambers, but much more is done in secret. What we do know is that they tend to confirm our existing biases, and those of our networks. Yet as users we know almost nothing about the choices that went into these personalization algorithms, and we are not given much of an alternative.
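A toy model makes the feedback loop concrete: if posts are ranked purely by similarity to what a user has already engaged with, the user’s existing leanings determine what surfaces, which in turn reinforces those leanings. The topic vectors below are invented for illustration:

```python
# Toy model of why engagement-driven personalization produces echo chambers.
# Users and posts are reduced to two-topic interest vectors; real feed-ranking
# systems are far richer, but the feedback loop has the same shape.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

user_interests = [0.9, 0.1]          # heavily weighted toward topic A over topic B
posts = {
    "post about topic A": [1.0, 0.0],
    "post about topic B": [0.0, 1.0],
    "balanced post":      [0.5, 0.5],
}

# Ranking purely by similarity to past engagement: what the user already
# favors rises to the top, and the opposing view sinks out of sight.
ranked = sorted(posts, key=lambda p: dot(user_interests, posts[p]), reverse=True)
print(ranked)  # ['post about topic A', 'balanced post', 'post about topic B']
```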
Srinivasan argues that we can make a few important choices:
First, we can ask for social-media companies to make transparent and comprehensible the filters and choices that go into the most important algorithms that shape interactivity. This does not mean having to publish proprietary software code, but rather giving users an explanation of how the content they view is selected. Facebook can explain whether content is chosen because of location, number of common friends, or similarity in posts. Google can tell us what factors lead to the results we see in a search and provide a method to change their order.
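One plausible shape for such an explanation, sketched in Python: a ranking function that returns, alongside each score, the factors that produced it. The factor names and weights are assumptions for illustration, not Facebook’s actual signals:

```python
# Hypothetical shape of the transparency Srinivasan calls for: a scoring
# function that reports which factors (location, mutual friends, post
# similarity) contributed what to a post's rank. Weights are invented.

WEIGHTS = {"same_location": 1.0, "mutual_friends": 0.1, "post_similarity": 2.0}

def score_with_explanation(post: dict) -> tuple[float, list[str]]:
    contributions = {name: WEIGHTS[name] * post[name] for name in WEIGHTS}
    score = sum(contributions.values())
    # The explanation lists each factor with its share of the final score,
    # largest first, so a user can see why this post was surfaced.
    explanation = [f"{name}: {value:.2f}"
                   for name, value in sorted(contributions.items(), key=lambda kv: -kv[1])]
    return score, explanation

score, why = score_with_explanation(
    {"same_location": 1, "mutual_friends": 12, "post_similarity": 0.8})
print(score)  # 3.8
print(why)    # ['post_similarity: 1.60', 'mutual_friends: 1.20', 'same_location: 1.00']
```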
Second, we must provide users with the opportunity to choose among different types of information, whether that means news shared by people beyond their social networks or filters on their feeds. Such filters would allow users to determine which parts of the world they’d like to see information from and the range of political opinions they want to be exposed to.
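A minimal sketch of what user-chosen filters could look like, assuming an invented region field and a hypothetical -1-to-1 political-leaning scale:

```python
# Sketch of user-chosen feed filters: the reader, not the platform, decides
# which regions and what range of political viewpoints to see.
# The field names and leaning scale are assumptions for illustration.

posts = [
    {"text": "Local budget vote", "region": "US",     "leaning": -0.2},
    {"text": "EU privacy ruling", "region": "Europe", "leaning":  0.1},
    {"text": "Partisan op-ed",    "region": "US",     "leaning":  0.9},
]

def filter_feed(posts, regions, leaning_range):
    lo, hi = leaning_range
    return [p for p in posts
            if p["region"] in regions and lo <= p["leaning"] <= hi]

# A user opts in to U.S. and European news across a broad political range,
# excluding only the most extreme voices.
for p in filter_feed(posts, regions={"US", "Europe"}, leaning_range=(-0.7, 0.7)):
    print(p["text"])   # prints the first two posts; the 0.9 op-ed is filtered out
```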
Third, we can return to a practice that long characterized the Web: open-ended browsing and surfing. Social-media companies can develop tools that make news credibility visible, enabling users to browse content within and beyond their immediate social networks. Facebook could surface posts from people outside a user’s friend network, or provide tools to browse the networks of others, with their permission. It could even develop interfaces that let users look across posts from multiple perspectives, places and cultures on a given topic.
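One way such credibility visualization might work, sketched with invented source ratings; a real system might draw on fact-checkers or provenance data rather than a hard-coded table:

```python
# Sketch of surfacing credibility alongside content so users can browse
# beyond their own network with context. Source ratings are invented.

SOURCE_CREDIBILITY = {"established-wire.example": 0.9,
                      "anonymous-blog.example": 0.3}

def annotate(post: dict) -> dict:
    cred = SOURCE_CREDIBILITY.get(post["source"], 0.5)  # unknown sources score neutral
    label = "high" if cred >= 0.7 else "low" if cred <= 0.4 else "unverified"
    return {**post, "credibility": label}

feed = [
    {"source": "established-wire.example", "text": "Report on election results"},
    {"source": "anonymous-blog.example",   "text": "Shocking claim, no sources"},
]
for post in map(annotate, feed):
    print(post["credibility"], "-", post["text"])
# high - Report on election results
# low - Shocking claim, no sources
```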
The bigger issue is that we need to develop political literacy in our educational and social systems. This entails treating no piece of information, whether presented on social media or through a traditional news outlet, as infallible, and instead learning to scrutinize a story’s framing, the agenda it serves, and the integrity and transparency of its sources. In other words, as a society, we need to up our own game.