When I heard that Wade Roush is planning to leave Facebook, I took note. Wade is a veteran technology journalist and the host of the podcast Soonish. He is not the first person to take a stand against Facebook, but when someone who follows technology and thinks about the future as a profession makes such a decision, it’s a big deal. Wade’s announcement reminded me of my own plan to get off Facebook, a plan that’s been in the works for, oh, five years now. It made me wonder if there’s anything I can contribute to the “Fexit” discussion, so I’ll explore that here.

Let me start by saying that on a personal level, the benefits of being on Facebook have been major. Rather than trying to summarize these benefits, I’ll just mention that several of the most significant relationships and events in my musical life were made possible by Facebook, including meeting the harpsichordist who became my collaborator for my Canons project. The disadvantages of being on Facebook have also been major and, rather than trying to outline them, I’ll just say they include a warped sense of reality, the devaluing of real-world interactions, and countless hours lost to mindless scrolling.

As far as what I might be able to add to the Fexit discussion, I’d like to deconstruct a certain idea that has been part of my excuse for staying on Facebook so long even as I’ve wanted to leave. It is the idea of the guarded and judicious Facebook user. It is the idea that by being careful and deliberate about what you post, you can mitigate the downsides of being on Facebook while still partaking in the advantages. Is this possible?

Here’s the thought process of the guarded Facebook user: You start by being a little bit proud of yourself. You remark that unlike all those impetuous fools out there, you don’t have a problem with drunken 2AM stream-of-consciousness Facebook impulse posting. You think twice, thrice, even four times about what you post. In fact, you only post material you want to publicly broadcast, material that you’d be happy for everyone to see, whether they’re close to you or not.

You figure that if Facebook wants to show you ads related to your publicly declared interests and preferences, that’s not so bad; the worst that could happen is that you see an irrelevant or annoying ad and you scroll quickly past it; the best that could happen is that you learn about some product or service that’s genuinely useful to you. You’re not too worried about your own susceptibility to propaganda spreading through Facebook because you consider yourself to be a critical thinker who doesn’t believe claims without evidence, and who knows how to ignore (or else call out) the outlandish stuff that appears in your feed.

You figure that if your Facebook data were acquired by a third party – even by a hacker with malicious intent – nothing bad could happen, because you’ve only shared the things you want the whole world to know. If an ethically challenged corporation or intelligence agency were to acquire your Facebook posts, they’d see nothing but all those articles about climate change that you shared, and what are they going to do with those? If they really wanted to read through them, well, maybe they’d learn something about a grave issue facing human civilization. As for damaging information – data that could be used to manipulate you or steal your identity – you think you simply haven’t exposed any.

Unfortunately, this idea of the judicious and therefore “safe” Facebook user is a myth. To expose it as such, I would remind you that Facebook collects, or has the potential to collect, more information about you than you can probably imagine, and it can collect this information even when you think you’re being totally passive and guarded. For example, in 2015, Facebook announced an attention-tracking feature that measures how long you spend looking at each item in your news feed as you scroll, even if you never like it, click on it, comment on it, or otherwise actively engage with it. Here’s how Facebook described the feature:

…just because someone didn’t like, comment or share a story in their News Feed doesn’t mean it wasn’t meaningful to them. There are times when, for example, people want to see information about a serious current event, but don’t necessarily want to like or comment on it. Based on this finding, we are updating News Feed’s ranking to factor in a new signal—how much time you spend viewing a story in your News Feed.

As far as I can ascertain, this was a faint whisper of an announcement that has since received very little attention, but it’s quite significant. It means that you can log in and silently scroll through your news feed for a few minutes and Facebook still learns a ton about you. It’s as if there’s an invisible “like” button that you’re clicking all the time, even though you might think you’re being totally passive. (I should add that beyond tracking how you scroll through your news feed, Facebook tracks your every mouse movement.) Importantly, these statistics are not included in the profile data that Facebook lets you download. The downloadable data contains some of what Facebook knows about you, but by no means everything.
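To make that concrete, here’s a minimal sketch, in Python, of how a feed could turn a raw scroll log into a per-story dwell-time signal. The event format and names below are my own invention for illustration – Facebook’s actual instrumentation isn’t public – but the principle is this simple:

```python
from collections import defaultdict

# Hypothetical scroll log: (timestamp in seconds, id of the story currently
# in the viewport). A None id means the user has scrolled past the feed.
scroll_events = [
    (0.0, "story_a"),
    (1.2, "story_b"),
    (9.8, "story_c"),   # the user lingered ~8.6s on story_b without liking it
    (10.4, None),
]

def dwell_times(events):
    """Accumulate how many seconds each story spent in the viewport."""
    totals = defaultdict(float)
    for (t0, story), (t1, _next_story) in zip(events, events[1:]):
        if story is not None:
            totals[story] += t1 - t0
    return {story: round(secs, 1) for story, secs in totals.items()}

print(dwell_times(scroll_events))
# {'story_a': 1.2, 'story_b': 8.6, 'story_c': 0.6}
```

Eight seconds paused on a story is a strong signal, whether or not you ever click.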

Now let’s turn to the idea that you’re “safe” because you’ve only shared material that you are happy to make public. Let’s even imagine that your scrolling habits are disciplined, so you only spend time looking at items in your news feed that you’re happy to let Facebook know you care about. Yes, you understand that Facebook has been used to spread fake news and political propaganda, and you’re not thrilled about it, but you believe that your own use of Facebook in no way contributes to that problem because you’re only cautiously reading and posting content that you consider to be fact-based and in the public interest, like carefully chosen articles about climate science.

Here’s where it’s important to understand that not only is Facebook probably collecting more data about you than you realize, but it can also use that data in ways you probably haven’t considered. Let’s think about how your data could be used to help Facebook decide what advertisements to show to other users.

Of course, Facebook can analyze the things you like and recommend them to other users who are, in some way, similar to you, and vice versa. If you’ve shared articles about climate change and you’ve also liked a product, say a line of energy-efficient LED bulbs, Facebook might then know to show an ad for these energy-efficient bulbs to other users who have posted articles about climate change. And perhaps Facebook will show you an ad for the compostable bags that another environmentally-aware user liked.

But the value of your data doesn’t end there. The things Facebook knows about your preferences might help it make very different recommendations to people who are in some sense the “opposite” of you. To give a simple example, Facebook could use your data to make the inference that if someone has posted an article denying climate change (the “opposite” of what you’ve posted) then it’s probably a waste of time to show them an ad for energy-efficient bulbs (the product you like) and it might be better to show them an ad for traditional incandescent bulbs (the “opposite” of what you like, a product that you’ve publicly complained about).
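To make the mechanics concrete, here’s a toy sketch of this kind of inference in Python. It is emphatically not Facebook’s actual algorithm – every name, number, and threshold is invented – but it shows how one user’s topic signals can drive ad choices for both similar and “opposite” users:

```python
import math

# Hypothetical topic-signal vectors: +1 roughly means "posted/liked content
# supporting this topic," -1 means "posted/liked content opposing it."
users = {
    "you":      {"climate_action": 1.0, "led_bulbs": 1.0},
    "ally":     {"climate_action": 0.8},
    "opposite": {"climate_action": -0.9},
}

def cosine(a, b):
    """Cosine similarity between two sparse topic vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_ad(target, reference="you"):
    sim = cosine(users[target], users[reference])
    if sim > 0.3:       # similar to you: show the product you liked
        return "energy-efficient LED bulbs"
    elif sim < -0.3:    # your "opposite": show the inverse product
        return "incandescent bulbs"
    return "generic ad"

for name in ("ally", "opposite"):
    print(name, "->", pick_ad(name))
# ally -> energy-efficient LED bulbs
# opposite -> incandescent bulbs
```

Notice that your data did the work in both branches: it told the system what to show your ally and what not to waste on your opposite.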

I don’t know the inner workings of Facebook’s recommendation algorithms, and I don’t know whether they currently make inferences of this form; the important point is simply that they could. Your data is useful not only to a party that seeks to influence people similar to you, but also to a party that seeks to influence people who are dissimilar to you.

The more data Facebook has about users of all different viewpoints and interests – the larger its information monopoly – the better it can make predictions about, and target messages towards, users of any specific viewpoint. To see this, it’s important to understand a little bit about how machine learning works, and what makes it successful. When you’re training a machine to classify what it sees, you need to feed it a varied data set, one that includes both positive and negative examples. For example, if you’re trying to get a machine to recognize images of hot dogs, you won’t succeed if you only show it images of hot dogs. You’ve also got to show it images of things that are not hot dogs so that it can learn to make the distinction. The machine needs to have the “experience” of wrongly categorizing a non-hot-dog as a hot dog and then learning from its mistake.
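In code, the need for negative examples is quite literal: a standard classifier can’t even be trained on a single class. Here’s a minimal scikit-learn sketch, with made-up two-number “features” standing in for real image features:

```python
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for image features: [redness, elongation].
hot_dogs     = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.85]]
not_hot_dogs = [[0.2, 0.1], [0.9, 0.2], [0.1, 0.9]]  # e.g. a rock, a tomato, a banana

X = hot_dogs + not_hot_dogs
y = [1] * len(hot_dogs) + [0] * len(not_hot_dogs)  # 1 = hot dog

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.9], [0.3, 0.2]]))  # -> [1 0]

# Training on hot dogs alone fails outright: with only one label,
# there is no boundary to learn.
# LogisticRegression().fit(hot_dogs, [1, 1, 1])  # raises ValueError
```

The negative examples are what give the boundary its shape; without them the model has nothing to push against.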

Now let’s imagine that a political operative has somehow obtained a trove of Facebook data and is training a machine to recognize Facebook users who would be susceptible to a message of climate denialism, because he wants to motivate them to be more vocal about their beliefs and to vote for political candidates who call climate change a hoax. He needs to show the machine lots of Facebook users who deny climate science, true, but he also needs to show it lots of Facebook users who accept climate science. The Facebook user who posts article after article about the grave danger of climate change, thinking that this messaging can only help the cause of climate awareness, may not consider the flipside: that these well-intentioned posts could help a political operative make better predictions by giving him a fuller, more well-rounded data set to feed into a machine-learning algorithm. At the very least, this unsuspecting user is helping the operative avoid wasting resources on someone who isn’t persuadable. But maybe the climate-science believers are a little bit persuadable: the operative could analyze their Facebook behavior to find patterns that reveal what might discourage them and cause them to stop posting.
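Here’s a sketch of what that operative’s pipeline might look like. Everything in it is invented for illustration, but notice where the believers’ posts end up: they are the negative class the model needs in order to learn the boundary at all.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training posts. The believers' posts are not wasted on the
# operative -- they are the negative examples the model requires.
posts = [
    ("climate change is a hoax invented to raise taxes", 1),
    ("so-called scientists keep fudging the temperature data", 1),
    ("another alarming IPCC report on sea level rise", 0),
    ("we must cut emissions now, the evidence is overwhelming", 0),
]
texts, labels = zip(*posts)  # 1 = receptive to denialist messaging

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Rank unlabeled users by predicted susceptibility and target the top ones.
new_posts = [
    "the climate hoax is falling apart",
    "great article on renewable energy policy",
]
for post, p in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"susceptibility {p:.2f}  {post}")
```

Every additional believer post sharpens the model from the other side of the boundary.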

The larger point is that when you’re thinking about the pros and cons of being on Facebook, you should think beyond yourself and beyond the question of what could happen to you right now if your data were stolen or misused. Maybe nothing significant would happen to you right now. You should still think about how the data you contribute to Facebook helps build the information monopoly of a corporation that may now, or someday, act against your interests, or may enable other actors to do the same. Maybe the dangers seem too abstract. Or maybe you feel overwhelmed when you notice that this problem extends beyond Facebook to Google, your mobile service provider, your ISP, your credit card company, the credit agencies, and all the other data-hungry organizations you interact with. But that doesn’t make it not a problem, and it doesn’t absolve you of the responsibility to take it seriously. Taking something seriously means, at some point, taking action.

 

See also: 2019 Resolution: Leave Facebook
