A recent story from Wired helpfully explains the latest batch of changes Facebook has made to its algorithm—the algorithm that sorts through the billions of available articles, photographs, and videos to determine the few we will actually see as we scroll our news feeds. This is just the company’s latest attempt to head off the never-ending stream of content that is illegal, abusive, or otherwise inappropriate, and to deliver content that is safe, inoffensive, and within the bounds of its “community standards.” Experts believe these changes will substantially reshape our Facebook experience by altering the kind of content we see there. The story thus provides an opportunity to consider what it means to have so much information delivered to us algorithmically, and to ask whether we are really comfortable with this fact of online living. I’m going to suggest it’s time we begin to take steps to break free.
Before we go any further, we need to consider the fact that what we see on Facebook—and Twitter and Instagram and Google News and Apple News and … —is determined by algorithms, formulas carefully coded to spread some content and to suppress the rest. We rarely have access to complete collections of information anymore. Rather, algorithms pre-sort it for us. This is necessary because of the sheer quantity of content being produced today, and also because of the ugly qualities of so much of it.
The Algorithm-Driven Life
Here’s how it works. Every day millions of individuals and organizations create tens of millions of pieces of content. From news giants like the New York Times who churn out hundreds of articles every day, to hobby photographers who share occasional photographs, to bloggers who write their listicles, to whoever it is that creates all those memes—all of these content creators feed their material into a very few content distributors. These are the sites or the apps where people go to discover or consume the majority of their content—Facebook, YouTube, Twitter, Apple News, and so on. The task of an algorithm is to filter the many pieces of information it could present to us down to the few it actually will. It makes this determination by considering what it knows about us, then comparing that to the many articles, videos, and photographs people have fed it. What it presents to us when we open it are the relatively few bits of content it believes we are most likely to find appealing.
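A ranking step like the one described above can be sketched in greatly simplified form: score each piece of content against what the platform knows about a user, then surface only the top few. The profile, scoring rule, and data below are all invented for illustration; real systems use vastly more complex signals:

```python
# Greatly simplified sketch of a feed-ranking step: score each item by
# overlap with a user's known interests, then surface only the top few.
# All names and data here are invented for illustration.

def rank_feed(items, user_interests, limit=3):
    """Return the `limit` items whose tags best match the user's interests."""
    def score(item):
        # Count how many of the item's tags the user has shown interest in.
        return sum(1 for tag in item["tags"] if tag in user_interests)
    # Sort by descending score; ties keep their original (e.g. recency) order.
    return sorted(items, key=score, reverse=True)[:limit]

available = [
    {"title": "Playoff highlights", "tags": {"sports", "video"}},
    {"title": "Sourdough starter tips", "tags": {"baking", "howto"}},
    {"title": "Local election recap", "tags": {"news", "politics"}},
    {"title": "Trail camera footage", "tags": {"outdoors", "video"}},
]

# A user interested in video and the outdoors sees only the two best matches.
feed = rank_feed(available, user_interests={"video", "outdoors"}, limit=2)
```

Everything else in the pool simply never appears, which is the point: the user sees a selection, not the collection.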
But before any of that can happen, the algorithms need to determine whether such content should even be seen in the first place. YouTube, after all, doesn’t want to serve pedophilic content to pedophiles, and Twitter doesn’t want to feed extremist content to extremists. Thus all the information submitted to these content distributors is algorithmically scanned to determine whether it is even permitted to exist on their platforms or to be disseminated by them. These algorithms can, in theory, distinguish a male nipple (permitted) from a female nipple (not permitted). They can, in theory, distinguish hate speech (not permitted) from free speech (permitted). What passes through this first set of algorithms is placed into the bucket of available content that can be delivered to us by the second set of algorithms. More on this shortly.
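The first pass described here—deciding whether content may exist on the platform at all—amounts to a filter that runs before any ranking happens. A toy version might look like the following, where a simple blocklist stands in for what are, in reality, large machine-learned classifiers (the terms and submissions are invented):

```python
# Toy sketch of a first-pass moderation filter: content must clear the
# platform's rules before it ever enters the pool the ranking algorithm
# draws from. The blocklist here stands in for far more complex,
# machine-learned classifiers; all terms and submissions are invented.

BLOCKED_TERMS = {"spamlink", "scamoffer"}  # placeholder rules

def passes_moderation(text):
    """Return True if the text contains none of the blocked terms."""
    words = text.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

submissions = [
    "Check out my new recipe blog",
    "Click this spamlink for free money",
    "Photos from our weekend hike",
]

# Only content that passes moderation enters the available pool.
available_pool = [s for s in submissions if passes_moderation(s)]
```

Note the two distinct decisions: this filter decides what *may* be shown to anyone; the ranking algorithm then decides what *will* be shown to each of us.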
The fact is that much, and perhaps even most, of the information and entertainment we encounter online today is filtered in this way. When we visit YouTube, we each see the long, customized list of videos its algorithm has decided are most likely to appeal to us. Tap on the Apple News app and we are presented with lists of articles its algorithm has determined are most likely to cause us to tap and read. These lists differ from the ones it shows our husbands or wives, parents or children, or even our twin siblings. Whether on YouTube or elsewhere, we rarely see complete and unfiltered collections of content anymore. We see only what the many algorithms present to us.
The Benefits and the Dangers
It is true of all technologies that they invariably come with both benefits and drawbacks, and algorithms are no exception. The strengths are obvious: they can sort vast amounts of content down to something manageable, they can distinguish between what’s interesting to you and what’s interesting to me, and they can detect nudity and block it from those who don’t wish to see it. The weaknesses, though, can be a little harder to detect. Let me list just a few of them.
- They are biased. Algorithms are not unbiased. Rather, they are created by human beings who subconsciously (or sometimes very consciously) embed their ideologies into their formulas. If conservative information and perspectives are being algorithmically suppressed today, as some have charged, that’s likely only because non-conservatives form the great majority of the employees within the tech companies, and they’ve embedded their ideologies accordingly. If conservative Christians coded the algorithms, they would be biased as well, though obviously in different ways.
- They are moral. Just as there are biases within algorithms, so there is morality. Those who code the algorithms have to determine what is good and evil, what is safe for public consumption and what is dangerous, what deserves to be spread and what deserves to be suppressed, what constitutes hate speech and what is legitimate free speech. This is why people who advocate modern sexual mores are likely to find their content being algorithmically disseminated while those who advocate traditional sexual mores are likely to find it suppressed. Such morality is coded into the algorithm by the people who create it.
- They cannot determine truth or accuracy. Algorithms are well-suited to presenting content that is appealing, that grabs our attention, that makes us want to watch it, click it, share it. But they aren’t well-suited to determining what is true and helpful, or what is worthy of our time and attention. In other words, they are better at pleasing us than instructing us, and better at delivering what’s popular than what’s true.
I’ve listed just a few concerns out of many, but I trust even these are enough to get us thinking about the place and the prominence of algorithms in modern life. As we put it all together, we can see, for example, that the people behind Facebook’s algorithm have necessarily encoded their own biases and morality into it. They have determined what represents truth and error, what constitutes hate and love, what should be spread virally and what should be suppressed immediately. It is no secret that the great majority of people who work for the big tech companies are neither conservative nor friendly to conservatives in religion, politics, or matters of morality. These people have immense power—power we have given them by so wholeheartedly embracing their product and power we continue to give them as we go on using it. They are now the gatekeepers of so much of the information we encounter day by day.
The Solution: Self-Curation
For all these reasons, I am convinced there is increasing value in self-curation and a growing necessity for it. It’s time to escape from the algorithm, at least in those areas that matter most to the good life and the Christian faith. Sure, we can let the algorithm work its magic while we browse for books on Amazon or look for entertainment on YouTube. But when we want to be equipped, edified, and informed, we need to take responsibility. To this end, I’ll offer two broad suggestions with a few specifics for each.
First, be your own curator. Discover trusted sources of news, articles, and other information and curate them yourself. Don’t rely on Facebook to determine, for example, when you ought to read an article from WORLD or Desiring God or Modern Reformation. Rather, regularly check these sites on your own so you can determine when they have something that will benefit you. Remember, some of their most compelling and important articles may otherwise never reach you because the algorithm will deny or suppress them. Specifically:
- Use Feedly or a similar service. Through the magic of a hidden technology called RSS, Feedly allows you to subscribe to sites and then see all their new content. It involves no algorithm, so you will need to be your own curator. You’ll learn quickly how to skim the headlines to find the material that will benefit you. Skim many so you can deep-read a few.
- Sign up for the email newsletters of trusted sources of news and information.
- Subscribe to channels on YouTube. When you click the “subscribe” button, new videos from that channel will always be placed in your sidebar. This means you will not need to rely on YouTube’s algorithm to find and recommend these videos for you. They may, after all, be the kind of content YouTube will formally allow but algorithmically suppress. (But remember, YouTube may have already algorithmically denied or removed videos it considers unsuitable).
- Turn off the algorithm in Twitter so you can see all updates chronologically rather than some algorithmically. Alternatively, use a third-party app that offers this feature. (But remember, Twitter may have already algorithmically denied or removed tweets it considers unsuitable).
- Mark certain sites “appear first” in Facebook. (But remember, Facebook may have already algorithmically denied or removed posts, images, or videos it considers unsuitable).
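The RSS technology behind services like Feedly is nothing mysterious: it is just an XML format that sites publish listing their newest content, and reading it requires no algorithm at all. As a small illustration, here is how a feed can be parsed with nothing beyond Python’s standard library; the feed document itself is a made-up example:

```python
# RSS, the "hidden technology" behind Feedly, is just an XML format that
# sites publish listing their newest content. Parsing it needs nothing
# beyond the standard library. The feed below is a made-up example.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First Post</title><link>https://example.com/1</link></item>
    <item><title>Second Post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def list_entries(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

entries = list_entries(SAMPLE_FEED)
```

Every entry the site publishes appears in its feed, in order—no ranking, no suppression. That is exactly what makes RSS such a useful tool for self-curation.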
In short, reduce your reliance on algorithmic sites when it comes to important, meaningful content.
Second, find other trusted curators. Find curators you trust—people whose theology or politics or other interests you trust—and let them serve as a filter for you. Then find a way to follow them outside of any algorithms (i.e., outside of Facebook, Twitter, or Apple News).
- Subscribe to their email newsletter.
- Follow them using Feedly or another RSS service (see above).
- Make it a habit to visit their site on a regular basis.
Don’t be afraid to follow creators or curators whose perspectives differ from your own; be on guard against the internet’s “echo chamber effect.”
It is becoming increasingly clear to me that we have not thought deeply enough about all these algorithms. We’ve stood ignorantly, idly by while they’ve invaded so much of our lives and shaped so much of what we see and experience online. It’s time to consider all we know of Mark Zuckerberg (Facebook), Jack Dorsey (Twitter), Tim Cook (Apple), and all the rest, and to ask ourselves: do we really want to give them this kind of authority? To allow them to judge what we’ll find interesting and informative is to cede to them the authority to withhold from us what they determine is inappropriate or offensive. It’s time to face how much we stand to lose by living the algorithm-driven life. It’s time to break free.