Facebook May Not Want to Beat Fake News | Katherine Maher | Big Think
It is a fascinating thing that today Wikipedia is as trusted as it is, used by as many people as it is. If you think about how an encyclopedia that anyone can edit could possibly grow into a resource that about a billion people visit every single month from all over the globe, there is a story of constant self-improvement in there, a story of really grappling with our flaws and our faults along the way.
Wikipedia wasn't trusted when it first started because it was the encyclopedia anyone could edit. Then we had a series of fairly high-profile mistakes, hoaxes, and screw-ups. The thing that makes Wikipedia work is that the Wikipedia community is so committed to getting it right that when errors happen, their first response is to fix them.
There's a story from 2005, when the scientific journal Nature ran a sample study of how accurate Wikipedia articles are. The study found that, on average, the articles surveyed were about as accurate as a comparable sample from Encyclopedia Britannica. The story goes that when this was published, the Wikipedia community went to Jimmy Wales, the founder, and asked if he could put them in touch with the editors at Nature so they could find out where the errors were and fix them.
I think that is such a classic example of Wikipedians: when they find out that something is wrong, their first response is not defensiveness but, generally, delight, because it means there is something to improve.
The whole conversation around fake news is a bit perplexing to Wikipedians, because bad information has always existed. The very first press freedom law was passed more than 250 years ago in Sweden, and I would bet the very first conversation about misinformation happened within that first year. Yellow journalism, misinformation, propaganda: whatever you want to call it, there were already names for fake news and established ways of dealing with it.
So Wikipedians look at this and say that this has been a problem since time immemorial. For the last 16 years, we've been working on sorting fact from fiction, and doing a pretty good job of it. To have this conversation, I think, is a little bit disingenuous, because it treats fake news as though it were the problem instead of actually looking at some of the commercial and other factors at play in the distribution of information.
The obscuring of the sources of information, the consolidation of the media landscape, the commercial pressures that major distribution platforms place on publishers, and the lack of transparency in the way information is presented through algorithmic feeds: these are the factors that contribute to the issue. It's really not about the quality of the information itself.
I would certainly say the media landscape and media literacy are important, and this is a call to arms for us to be more engaged in education around civics and media literacy. But I also think it's an opportunity to have hard conversations with platforms that present information within algorithmically curated feeds about why they aren't presenting some of the critical information that allows people to understand where that information comes from and make good decisions.
One thing that we would point to within Wikipedia is that all of the information presented can be scrutinized. You can understand where it comes from, you can check the citations, and you can check almost every single edit that has ever been made to the projects in their 16 years of existence. That is more than three billion edits, and all of it is available to the public.
We hold ourselves up to scrutiny because we think that scrutiny and transparency create accountability, and that accountability creates trust. When I'm looking at a Facebook feed, I don't know why information is being presented to me. Is it because it's timely? Is it because it's relevant? Is it because it's trending, popular, important? All of that context is stripped out, so it's hard for me to assess whether it is good information I should act on or bad information I should ignore.
Then you think about the fact that all of the other heuristics people use to interpret information, such as where it comes from, who wrote it, and when it was published, are obscured in the product design as well. So the conversation we're having is, I think, a bit disingenuous, because it doesn't actually address some of the underlying platform questions and commercial-pressure questions. It tends instead to focus on educating the end consumer, which is good.
We believe in an educated user, but we also have a lot of confidence that if you give the user the information they need, they can make those decisions and determinations.