Should tech giants be liable for content?
Sep 10, 2018

GOOGLE marked its 20th birthday this week. It celebrated in fitting style—being lambasted by politicians in Washington. Its failure to send a senior executive to a congressional hearing on Russian use of tech platforms to meddle in the presidential election in 2016 was tone-deaf. Like Facebook and Twitter, whose top brass did show up, Google wields too much influence to avoid public scrutiny. A vital debate is under way over whether and how tech platforms should be held responsible for the content they carry. Angering legislators increases the chance of a bad outcome.

Back when Google, Facebook, Twitter and others were babies, the answer that politicians gave to the question of content liability was clear. Laws such as America’s Communications Decency Act (CDA), passed in 1996, largely shielded online firms from responsibility for their users’ actions. Lawmakers reasoned that the fledgling online industry needed protection from costly lawsuits. The firms were to be thought of more as telecoms providers: neutral venues through which customers could communicate with one another.

That position is hard to maintain today. Online giants no longer need protection: they are among the world’s most successful and influential firms. Nearly half of American adults get some of their news on Facebook; YouTube, Google’s video-streaming service, has 1.9bn monthly logged-on users, who watch around 1bn hours of video every day. Faced with complaints about trolling, fake news and extremist videos, the old defence of neutrality rings hollow. The platforms’ algorithms curate the flow of content; they help decide what users see.

The pendulum is thus swinging the other way. Lawmakers are eroding the idea that the platforms bear no responsibility for content. Earlier this year America passed SESTA, a law with the worthy aim of cracking down on sex trafficking; the Department of Justice said this week that it would look into the platforms’ impact on free speech. In Germany the platforms face strict deadlines for taking down hate speech.

The tech giants themselves increasingly accept responsibility for what appears on their pages, hiring armies of moderators to remove offending content.

This new interventionism carries two big dangers. One is that it will entrench the dominance of the giants, because startups will not be able to afford the burden of policing their platforms or to shoulder the risk of lawsuits. The other is that the tech titans will become “ministries of truth”, acting as arbiters of what billions of people around the world see—and what they do not. This is no idle worry. Facebook and YouTube have banned Alex Jones, a notorious peddler of conspiracy theories. Loathsome as Mr Jones’s ideas are, defenders of free speech ought to squirm at the notion that a small group of like-minded executives in Silicon Valley is deciding what is seen by an audience of billions.

The weight given to free speech and the responsibilities of the platforms vary between countries. But three principles ought to guide the actions of legislators and the platforms themselves. The first is that free speech comes in many flavours. The debate over the platforms is a mélange of concerns, from online bullying to political misinformation. These worries demand different responses. The case for holding the tech firms directly responsible for what they carry is clear-cut for illegal content. Content that may be deemed political is far harder to deal with—the risk is both that platforms host material that is beyond the pale and that they take down material that should be aired.
