Ever since the election, we have heard a chorus of voices, largely in the media, explaining that a key influence on the election results was the spread of “fake news” online. This term is used to refer to a combination of deliberate propaganda, crankish speculation, and wild assertions, especially spread via social media, but sometimes also ranked highly by search engines. The Guardian recently published a pointed critique of Google, opening with the claim that “Google must urgently review its search ranking system because of ‘compelling’ evidence that it is being ‘manipulated and controlled’ by rightwing propagandists.” The story goes on to quote various “concerned experts” who solemnly intone that “unless Google acknowledged responsibility for the problem, it would be a ‘co-conspirator’ with the propagandists.” Similar pieces have appeared blaming Facebook for making it too easy to share “fake news.” This critique is wide of the mark: “fake news” and misinformation are not new problems, and are also not primarily technology problems.
There may be technical changes that Google and Facebook could make that would be helpful, but any technical change risks unanticipated consequences. Nobody is very good at predicting these consequences, and so we should neither press for drastic immediate action nor expect this problem to go away on its own. Instead, we should be thinking as a society about social remedies.
Crankish conspiratorial thinking has been a theme in America for a long time. There was considerable worry in the 1990s about the angry conspiratorial tone of talk radio. Videos circulated about how the Clintons had killed Vince Foster. But the problem is much older than that. In the 1930s, one of the country’s most popular radio shows was hosted by John Brinkley, a medical huckster and quack. Just to show how far back these things go, in 1834, a Boston mob burned down an Ursuline Convent in Charlestown based on essentially false pretenses. It was the “pizzagate” of the Jackson administration: somebody takes two half-understood anecdotes and builds an accusation around them with only the slenderest connection to reality. An unscrupulous newspaper prints the accusation. Denials by the authorities are taken as a sign that the authorities are untrustworthy. Presently a crowd shows up with incendiaries and burns down somebody’s home.
Facebook didn’t invent rumor-mongering. It doubtless has made the problem more visible, since what used to be merely asserted drunkenly in saloons or spoken on talk radio is now in publicly visible text online. But visibility is not the same as impact, and we should not assume without evidence that technology has made false rumors more dangerous to society. (The election of Donald Trump is not evidence that falsehood has any new potency. Partisans have been repeating lies about their opposition since the birth of democracy.)
Some of our problem is not cranks, but propaganda. In many countries, the large “respectable” corporate media will spread misinformation at the behest of the government. The Chinese media will sometimes assert patent untruths. RT, sponsored by the Russian government, can and does make things up, or present unhinged speculation as though it were plausibly true. We in the West have faith that our government-run media, like Voice of America and the BBC, are reasonably honest and conscientious and that those in Russia and China are not. But this is not universally agreed on, and there are media sources about which legitimate opinion differs. Google and Facebook have a deep ethos of neutrality, and to the extent that they are credible, it is precisely because they do not make blatant editorial decisions that embed their preconceptions and beliefs about which sources to trust. If Google or Facebook were to anoint some limited set of news sources as “authoritative” and some others as “fake,” they would immediately face an ugly controversy about who is who, and this is a controversy they avoid for both business and philosophical reasons.
When asked about “fake news,” a Facebook representative responded by asking “what is truth,” for which they were roundly mocked. But the question is a serious one and the answer is illuminating. Our best theory is that truth is a correspondence between what we say and how the world really is. Our machines do not have unimpeded access to the world; they have to take our word for it. And if there are a lot of lying or confused humans, the machines are not going to be able to know which human authors are truthful and which aren’t.
Technical changes could be helpful, but we should not push for drastic change and should not expect dramatic improvement. Google and Facebook regularly tinker with their algorithms to improve relevance and user experience. However, whenever they do, the web designers, the advertisers, the trolls, and the propagandists all go to work trying to get their content ranked as high as possible. Placement on search engines and social media is an adversarial domain, and as with other adversarial contexts, nobody is very good at predicting the consequences of changing the rules. The search engine optimizers are smart and will come up with tactics that the Google engineers didn’t anticipate.
The Guardian and its “gravely concerned” experts argue that Internet companies should adopt more editorial responsibility. But this is a task that the Internet companies are ill-suited for and that their users do not want. Google and Facebook are in the business of showing results that users find relevant, not in the editorial business. If users are seeking carefully curated news, The New York Times and The Wall Street Journal are both available online, and there is no particular reason why Google ought to compete directly against them. As Google co-founder Larry Page put it, the perfect search engine is something that understands exactly what you mean and gives you back exactly what you want. Alas, for most people, what they want is not pure truth.
In America, we allow both broadsheet newspapers and supermarket tabloids to be sold. The underlying philosophy of the First Amendment (and free speech generally) is that society should leave people to make their own truth judgments, and should not rely on appointed authorities to promulgate truth and suppress falsehood. The technology industry stands on solid philosophical ground with its desire to concentrate on building content-neutral platforms, rather than to be the world’s arbiter of fact.
We should be humble about the capacities of our current technology. If you Google “does New Zealand exist”, the search results include a factbox titled “New Zealand: The Country That doesn’t Exist.” If Google’s technology can be made to doubt the existence of a sizeable English-speaking democracy that shows up in satellite imagery, it might be overly ambitious to expect Google to solve the general problem of separating truth from fiction. (I can also confirm with TechPolicyDaily.com colleague and New Zealand resident Bronwyn Howell that the country does in fact exist.)
The history of science, politics, and philosophy demonstrates that even intelligent, educated, and well-meaning humans often believe foolish things, and technology is not going to change that. Intellectual honesty has always been a difficult virtue to practice, and no algorithmic cleverness is likely to make it easier.

This post was originally published on TechPolicyDaily.