Facebook’s modus operandi is to push the boundaries of user expectations, roll out new features over user outcry, and make minor adjustments and rollbacks while continuing to pursue its lofty visions. It’s a two-steps-forward, one-step-back approach. A company that makes something so many people care about so deeply should always have a clear messaging strategy and a crisis plan at the ready. But Facebook has been especially weak at explaining its bold changes around user privacy over the last six months.
The relationship between privacy and Facebook is always going to be complicated. This is *the* issue for the company, and it will continue to be: Facebook needs its users’ trust in order to provide them value. But the company has been slipping up on several fronts. It overstepped user comfort levels with the rollout of instant personalization features that are opt-out rather than opt-in. (It’s also setting itself up for another maelstrom over user data retention.) Meanwhile, Facebook’s privacy controls remain far too complicated; the whole settings product needs significant improvement. And lastly, it’s suffering from repeated unintended security holes, both its own and its partners’.
These problems build on each other. The leading narrative in the media is now that Facebook is cavalier about privacy. Last night came the news that Facebook had to shut down Yelp, one of its three carefully chosen instant personalization partners, for repeatedly failing to protect user data. Some prominent users are leaving the site altogether, and they’re perceived as level-headed technologists rather than Chicken Little types. An upstart group of four programmers building a private alternative to Facebook, called Diaspora, has gained steam incredibly quickly. And some widely read tech commentators say they believe Facebook’s leadership is evil.
Facebook can handle all of this. (See my GigaOM Pro piece (sub req’d), “There’s No Stopping Facebook,” for an in-depth discussion.) The company has incredible strength right now, and has laid out a compelling vision for what it can offer the rest of the web. Four college students who raised $10,000 in 12 days to build the anti-Facebook are hardly a serious threat.
But the company’s messaging around its changes is just terrible. Facebook seems pathologically incapable of laying out a compelling rationale for why less privacy would be a good thing for its users, instead insisting that nothing about their privacy has changed. In the absence of explanation, it leaves the media, and alarmist messages spread through users’ Facebook wall posts, to construct conspiracy theories. This ham-fistedness dates back to last December, when Facebook first rolled out an ambitious set of privacy changes.
I remember a reporter asking on the December press call whether the changes would make user information more private or more public. Facebook stonewalled her, saying the changes were intended to encourage more sharing, because users would be more aware of whom they were sharing any one item with. But as soon as we got off the call and looked at the new settings, it became obvious that Facebook was asking users to default much of their information to public visibility. So just say that! Explain why and how it’s a good thing.
Similarly, the new and tricky instant personalization feature was tacked onto the end of Facebook CEO Mark Zuckerberg’s f8 keynote last month, with a quick demo of Pandora. I remember turning in my seat and saying to Om, “I didn’t get that feature.” Only after I sat down with a Facebook platform engineer for half an hour did I understand that this was an entirely separate feature from the core Open Graph and social plug-in launch, available to just three sites and governed by dramatically different privacy settings from other features. Maybe I was a little slow on the uptake, but it shouldn’t be so hard! If this is the most complicated and foreign feature you’re launching at a massive press and developer event, take the time to justify and explain it.
Last night The New York Times posted a reader Q&A with Elliot Schrage, Facebook’s VP of public policy. Schrage’s tone on privacy is apologetic. “Trust me. We’ll do better,” he writes, adding that:
It’s clear that despite our efforts, we are not doing a good enough job communicating the changes that we’re making. Even worse, our extensive efforts to provide users greater control over what and how they share appear to be too confusing for some of our more than 400 million users. That’s not acceptable or sustainable. But it’s certainly fixable. You’re pointing out things we need to fix.
But Schrage sounds too much like a politician for my taste. On advertising, he writes, “I think people still ask because the ads complement, rather than interrupt, the user experience. They think, ‘That can’t be it.’ It is. The privacy implications of our ads, unfortunately, appear to be widely misunderstood.” Schrage promises better messaging, but he also implies that users just don’t get what the benevolent Facebook is trying to do.
Maybe it would help if Facebook offered up a sacrificial lamb; instant personalization, perhaps. Like Beacon, the company’s user-activity-tracking product of three years ago, instant personalization was probably launched before its time, and needs the market to grow around it. Or maybe Facebook can just ride out this whole privacy uproar, an option that would be greatly helped by an end to any privacy breaches and security holes, effective immediately.
I don’t think Facebook is evil. The company’s leaders believe that each perceived privacy erosion is actually an improvement to user experience — and if that’s true, they need to tell us, show us and convince us. They brought this privacy fiasco upon themselves, and they need to deal with it.
Please see the disclosure about Facebook in my bio.