In 1985, Neil Postman, in his now-prescient social commentary Amusing Ourselves to Death, compared the dystopian visions of George Orwell’s 1984 with Aldous Huxley’s Brave New World:
Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture.
Huxley and Postman might not have imagined the technology that enabled social networks like Facebook and Twitter. But both understood how modern technology can obscure fact and shape what people perceive as the truth.
The term “fake news” has now entered the public lexicon, and it means different things to different people. To some, it simply means news that doesn’t agree with their worldview or perception of reality. The term is being used by our current president to engender distrust of mainstream media organizations, such as CNN and The New York Times, among his supporters when he disagrees with their reporting. But the fake news that would have kept Huxley and Postman up at night is the kind that has been widely employed on Facebook and Twitter, the dominant social network platforms. The technology that supports it is the same technology that has propelled Facebook to become the fastest-growing advertising platform of the past several years.
Facebook has a vast and rich array of information that has enabled advertisers to reach specific audiences in far more cost-effective ways than ever before. The platform’s 2 billion monthly active users generate a trove of information that can be used to target audiences: what they share on their timelines, which ads they click, what they like and dislike, which devices they use, specific demographics, locations visited, travel profiles, and more. Facebook tools can use this information to generate custom audiences, for whom particular messages can be crafted.
Moreover, if you have your own list of followers for a website, you can upload that list and match the names with their Facebook profiles. The platform provides the ability to finely slice and dice audiences, and to craft messages that resonate personally and provoke strong reactions. This is what advertisers use to create those ads that sometimes seem to know more about you than you know yourself.
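The list-matching step above can be sketched in a few lines. This is a minimal illustration of the general technique, not Facebook’s implementation: such uploads typically normalize identifiers (lowercase, trimmed) and hash them with SHA-256 so raw addresses are never exchanged, and both sides compare only hashes. The data and function names here are hypothetical.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email (trim, lowercase), then hash it so the raw
    address never leaves the uploader's systems."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def match_rate(advertiser_list, platform_hashes):
    """Fraction of an uploaded list that matches existing platform
    profiles; both sides compare hashes only."""
    uploaded = {normalize_and_hash(e) for e in advertiser_list}
    return len(uploaded & platform_hashes) / len(uploaded)

# Hypothetical subscriber list and platform-side hash set
subscribers = ["Alice@example.com ", "bob@example.com", "carol@example.net"]
known = {normalize_and_hash(e) for e in ["alice@example.com", "bob@example.com"]}
print(f"match rate: {match_rate(subscribers, known):.0%}")  # match rate: 67%
```

The matched subset becomes the custom audience; the unmatched remainder is simply discarded.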
This methodology is also what political campaigns use to target specific voters with messages that will motivate them. While this targeting is not fake news in and of itself, political campaigns, on both sides, often manipulate news to build their narrative. The Trump presidential campaign used Facebook to great effect in 2016, building a loyal base of supporters that it continues to cultivate with specific messaging unfiltered by traditional media outlets. In particular, the campaign used A/B testing techniques on a huge scale, delivering thousands of targeted message variants to voters in its so-called Project Alamo database of 220 million people, culled from voter rolls, purchase databases, and small donors acquired through its websites.
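At its core, A/B testing of this kind is a simple statistical comparison: show two message variants to similar audiences and check whether the difference in response is larger than chance would explain. The sketch below uses a standard two-proportion z-test on hypothetical click counts; it is an illustration of the technique, not the campaign’s actual tooling.

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: does variant B's click-through rate
    differ from variant A's by more than random noise?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical impressions and clicks for two headline variants
z, p = ab_test(clicks_a=120, views_a=10_000, clicks_b=165, views_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Run at scale, with thousands of variants instead of two, this loop lets a campaign keep only the messages that measurably move each audience segment.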
Fake news sources don’t typically use Facebook audience targeting the way political campaigns do. They use the platform to generate traffic to their sites, which they typically monetize through Google ads. In the waning days of the 2016 presidential campaign, Buzzfeed reported that a large percentage of the fake news on Facebook originated with teenagers running 100 fake pro-Trump sites in Veles, Macedonia, part of the former Yugoslavia. The articles were plagiarized from US political sites, given sensational clickbait headlines, and posted to conservative Facebook groups, often through purchased, US-based fake profiles. Many of these teens were pulling in thousands of dollars a month in a country where the average monthly income is under $400.
Why Does Fake News Work?
Obviously, social networks make it easy for a story to go viral. A 2016 Pew Research report indicated that 62 percent of adults obtained their news from social media. The sheer volume of news and news outlets, not just on social media, delivered to multiple devices, makes for an information inundation. Traditional news sources with dedicated staffs and editors now compete with sites whose thousands of freelance contributors cover news from many angles.
The stories that capture most people’s attention tend to be those with headlines that aim for an emotional reaction, entertain, or stoke enough curiosity to earn a click. Depressingly, though, many people, pressed for time and overwhelmed by the amount of content, or simply lacking intellectual curiosity, read just the headlines, then post to Facebook, retweet, like, and share without much thought. A recent study by researchers at Columbia University and the French National Institute found that 59 percent of all shared links are never clicked through and read.
Are some people more prone to believing fake news? The somewhat controversial firm Cambridge Analytica claims to do psychographic targeting for political campaigns. It is reported to have been instrumental in targeting voters for the “leave” campaign in the UK’s Brexit referendum. Cambridge Analytica was known to work with the Trump campaign, which used the company’s 220-million-person US voter database alongside databases compiled by the Republican National Committee and the campaign itself. Analysis of the Brexit vote after the fact showed that many voters did not know the implications of leaving the European Union, but reacted to hot-button issues like immigration.
A study by Richard Fording and Sanford Schram (political science professors at the University of Alabama and Hunter College, respectively) analyzed the psychological profiles of voters and their “need for cognition.” They applied the term “low information voter” to voters who scored low both in knowledge of government and politics and on need-for-cognition measures, agreeing with statements like “thinking is not my idea of fun.” The professors also sought to separate these traits from educational level, as the two did not necessarily correlate. This may be the real challenge in battling the spread of fake news. The web makes it easy to publish a story that has no basis in fact, and just as easy to find a community that believes it.
Amplifying Fake News
Twitter is like a bullhorn, helping to amplify stories, ideas, and messages virally. The 140-character format lends itself to short attention spans and the instant ability to share a tweet with followers without much thought. A recent study at Indiana University analyzed the spread of fake news on Twitter, following 122 sites known to promulgate fake news, 400,000 claims made, and 14 million Twitter mentions of those claims.
One of the key findings is that automated Twitter bots play a central role in amplifying fake news. Twitter bots could often be identified by their volume of tweets (multiple tweets per minute indicate automation rather than a human) as well as by their habit of directing tweets at influencers with large followings. For example, a pro-Trump bot, @amrightnow, generated 1,200 posts during the final Clinton-Trump debate. Since companies and others employ many useful Twitter bots, bots themselves are not necessarily the problem. Better scrutiny of tweets before instant liking or retweeting would help. But perhaps better technology can help detect the dubious content.
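The volume signal the researchers describe is easy to operationalize. The sketch below, assuming only a list of tweet timestamps for an account, finds the busiest 60-second window; sustained multi-tweets-per-minute output suggests automation. This is a toy heuristic, not the Indiana study’s method: humans burst occasionally, while bots sustain these rates.

```python
from datetime import datetime, timedelta

def max_tweets_per_minute(timestamps):
    """Largest number of tweets falling within any 60-second window,
    computed with a sliding window over sorted timestamps."""
    ts = sorted(timestamps)
    best, start = 0, 0
    for end in range(len(ts)):
        # Shrink the window until it spans at most 60 seconds
        while ts[end] - ts[start] > timedelta(seconds=60):
            start += 1
        best = max(best, end - start + 1)
    return best

# Hypothetical account that fires five tweets inside one minute
base = datetime(2016, 10, 19, 21, 0)
burst = [base + timedelta(seconds=10 * i) for i in range(5)]
print(max_tweets_per_minute(burst))  # 5
```

A real detector would combine this rate signal with others the study mentions, such as whether the account disproportionately targets high-follower influencers.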
Can AI Help Curb Fake News?
AI and machine learning are changing everything, and they may be able to limit both fake content and its spread. Facebook has been implementing machine learning techniques to limit fake news, but the results so far are inconclusive, even though Facebook has had some success using image recognition to curb pornographic and violent content on its platform. Better ways of detecting Twitter bots that disseminate false content will help, but the problem will be hard to control, as the same bot may spread real news that happens to fit the message it’s trying to push.
One interesting AI project looking to combat fake news is the Fake News Challenge, an open-source collaboration featuring over 100 volunteers from academia and industry around the world. Its first step is to use machine learning and natural language processing algorithms to identify potential fake news through what is called stance detection, which analyzes what other news sources are saying about the same topic. If it works, the goal is to flag likely fake news and bucket it, speeding up the mostly manual task of fact-checking the content. If a story can be fact-checked and debunked quickly enough, its viral spread can be stopped.
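The first stage of stance detection is deciding whether another article is even about the same topic as a given headline. The toy sketch below does this with bag-of-words cosine similarity; the real Fake News Challenge task goes further, classifying headline/body pairs into agree, disagree, discuss, or unrelated using learned models. The threshold and example text here are illustrative assumptions.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rough_stance(headline: str, body: str, threshold: float = 0.15) -> str:
    """Toy relatedness check: is this body text about the headline's
    topic at all? Related articles then go on to finer-grained
    agree/disagree classification in a full stance-detection pipeline."""
    h = Counter(headline.lower().split())
    b = Counter(body.lower().split())
    return "related" if cosine(h, b) >= threshold else "unrelated"

print(rough_stance("pope endorses candidate",
                   "the pope made no endorsement of any candidate"))  # related
```

Even this crude filter narrows the pile: a human fact-checker only needs to read the articles that actually discuss the claim, and a cluster of reputable sources contradicting a headline is a strong signal it is false.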
Technology will also keep advancing the state of the art in creating fake news. Most of us can recall how hard it was to doctor an image in the era of film photography. Today most good photo-editing software, even on your smartphone, can magically erase someone from a picture. With this kind of digital technology, how can people trust images at all? It’s only getting more difficult: researchers at the University of Washington have developed an AI lip-syncing system that can realistically alter video so that a person appears to say something they never said, literally putting words in their mouth. The result is difficult to identify as fake unless you are looking for it carefully.
Ultimately, the real challenge in limiting fake news is the human one. Encouraging reading, questioning, and curiosity about how things work, along with an understanding of history and a refusal to settle for simplistic, sloganized answers to complex subjects, will all help limit disinformation.