Facebook’s most recent episode of stepping well past the painted lines and wandering into no-man’s land continues to reverberate, and probably won’t quiet down quickly. In case you’ve been living in an ice cave in the Antarctic, here’s a summary: Adam Kramer, a data scientist at Facebook, led a research project that covertly sought to manipulate the emotions of Facebook users by controlling the frequency of happy and sad posts that reached them, with the intent of confirming the hypothesis that such manipulation was possible. 700,000 people were selected for the study, none of whom were informed or asked for consent.
Zuckerberg hasn’t said anything publicly, but his alter ego and COO, Sheryl Sandberg, offered up the corporate doubletalk:
Sheryl Sandberg said Wednesday during a trip to India that the study was “part of ongoing research companies do to test different products” and was “poorly communicated.”
An understatement, to say the least. This follows the pattern of Zuckerbergisms following Facebook’s previous toe stubs in the realm of privacy (hat tip to Mike Isaac for digging up the quotes):
- ‘We really messed up on this one’ — launch of the news feed, 2006
- ‘We simply did a bad job with this release, and I apologize for it’ — Beacon release, 2007
- ‘Sometimes we move too fast. We just missed the mark.’ — Privacy setting ‘fix’, 2009
- ‘I’m the first to admit that we’ve made a bunch of mistakes. We can also always do better.’ — Settlement with the Federal Trade Commission re deceiving users on privacy, 2011
The Microsoft researcher danah boyd made the case that this specific episode and the outrage that it caused is a bit more complicated, and is an upwelling of fear about big data in general:
The more I read people’s reactions to this study, the more that I’ve started to think that the outrage has nothing to do with the study at all. There is a growing amount of negative sentiment towards Facebook and other companies that collect and use data about people. In short, there’s anger at the practice of big data. This paper provided ammunition for people’s anger because it’s so hard to talk about harm in the abstract.
I agree with danah. People are anxious about the side effects of participating in public social networks in the first place, because it makes them vulnerable to social and corporate backlash. As Ben Domenech, the editor of The Transom newsletter, said,
It heightens the level of uncertainty, anxiety and risk aversion, to know that you’re only a bad day and half a dozen tweets from being fired.
We are experiencing a heightening of the background anxiety that arises from living in a world where corporations routinely attempt to extract as much information about us from as many sources as possible, with the express goal of manipulating us to buy their products. The Snowden-era disclosures about the NSA are contributing too, with the most recent leaks coming last week, implicating the NSA in surveilling a large percentage of Americans and retaining information about them that was considered useless to ongoing investigations.
This sort of anxiety will have a growing impact on people’s perceptions of Facebook, as well as of other organizations perceived to be linked to these big data activities.
The work technology angle on this has two parts.
First, companies that are using Facebook as part of their community outreach, marketing analysis, or customer support are going to have to think long and hard about riding on a social networking platform that manipulates its users. Leaving aside the question of whether advertising on Facebook is effective — some recent research suggests it’s not — the question becomes one of backlash. If people start to distrust Facebook, might they start to feel the same way about companies that use the platform for ads, outreach, and support purposes?
Second, Facebook is rumored to be at work on a variant of the platform intended for enterprise use as a work technology platform, called Facebook@Work (see What does Facebook at Work mean? and The consumerization of work, or the enterprization of life?). Facebook employees have said for years that the company uses its own product as a work technology platform, so it makes sense to consider its use in other companies as an additional source of revenue.
However, will that come with administrative controls that allow the companies using it to determine which posts each user sees? Facebook — at least the consumer product — has been optimized as an advertising machine as much as a social network. Users do not see every post their friends make: they see a limited subset, intended to surface the information deemed most ‘relevant’ — or most likely to get users to return, stay online, or buy things.
Perhaps Facebook@Work will relax these controls or at least allow companies to do so. But wouldn’t management — at least some management — be tempted to control things for their benefit? To tweak the settings so that people are more productive? Or soft censor posts that seem critical of company direction or decisions?
It might seem like a short step from monitoring and analyzing employee sentiment to keep abreast of what’s bubbling up in the front office to actively steering the discussion by making some things more popular while others are squelched. That would be a big step beyond inspirational posters in the cafeteria.
Will we be reading about a scandal where some company has undertaken an Orwellian manipulation of employees through a Facebook-like approach?
Again, danah’s insight is the big takeaway, one that is larger than the newest Facebook faux pas: people have growing anxiety about big data — the implied surveillance, and the drive to influence our behavior without our knowing it.
I’m sure there are more scandals to come, and one of them might be an enterprise example.