The new year brought a treat for those who like to follow aging media moguls, with the launch of official Twitter accounts belonging to both News Corp. chairman Rupert Murdoch and his wife Wendi Deng, complete with some awkward banter around a tweet that Murdoch later deleted. The only problem with the voyeuristic appeal of this exchange was that Deng wasn't the real thing: although the account carried Twitter's blue "verified" check mark, it was revealed to be a fake on Tuesday. A simple slip-up? Perhaps, but one that underscores how little we know about Twitter's verification process, something that is becoming more and more important as the service grows.
When Murdoch showed up on Twitter on December 31, there was widespread skepticism about whether the account really belonged to the News Corp. billionaire, despite the fact that it was marked as verified. But a tweet from Twitter co-founder and executive chairman Jack Dorsey confirmed that it was the real Murdoch, and the verified check mark, combined with the apparent back-and-forth between the Wendi Deng account and Murdoch's, convinced many that it was also real (although some, including publishing industry veteran Michael Wolff, continued to doubt this).
How was the account verified? We don’t know
On Tuesday, however, it emerged that the Wendi Deng account had been created as a prank by a British man, who said he "set up the account for a laugh" during the holidays, when he saw how much attention the Murdoch account was getting. The account's creator said that he was as surprised as anyone when his account showed up with a blue check mark, and that no one at Twitter had contacted him to ask who he was or whether the account was genuine, telling the Guardian:
I just couldn’t believe they would have verified such a high profile account without checking it out, but I absolutely received no communication from Twitter to the email address I used to register.
Twitter has declined to speak publicly about what happened with the Deng account, or to explain why it was verified and then suddenly un-verified. The company has also repeatedly refused to talk on the record about how the verification process as a whole works, or why some accounts are chosen for verification and others aren't. Even if the Deng verification was a simple screw-up caused by reduced staffing over the holidays, Twitter's radio silence on the issue makes the entire process harder to trust, and that could have ramifications beyond just the Murdoch case.
The verified-account program launched as a beta in 2009, primarily because a number of celebrities had complained about fake accounts impersonating them, and the company said it wanted to help users figure out which accounts were real. For a time, anyone could apply to have their account verified through a form on the Twitter website, but this was later phased out, and verification is now done on what the company calls a "case by case" basis for select groups, including advertisers and partners.
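The badge itself, for what it's worth, is visible programmatically as well: Twitter's public REST API exposes a boolean "verified" field on every user object, the same flag that renders as the blue check mark. As a minimal sketch (not Twitter's official guidance), the snippet below uses only Python's standard library and assumes the v1 users/show endpoint, which at the time allowed unauthenticated reads; later versions of the API require OAuth credentials.

    import json
    import urllib.parse
    import urllib.request

    # Minimal sketch: read the public "verified" flag from a Twitter user
    # object. Assumes the v1 users/show endpoint, which allowed
    # unauthenticated reads at the time; later API versions require OAuth.
    def is_verified(screen_name):
        url = ("https://api.twitter.com/1/users/show.json?screen_name="
               + urllib.parse.quote(screen_name))
        with urllib.request.urlopen(url) as resp:
            user = json.load(resp)
        # The user object carries a boolean "verified" field -- the same
        # flag behind the blue check mark on twitter.com.
        return bool(user.get("verified", False))

    print(is_verified("rupertmurdoch"))  # True while the badge is present

In other words, anyone watching that flag could have seen the Deng account flip from verified to un-verified; what no outsider can see is how the flag gets set in the first place.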
Twitter needs to be more transparent about the process
Given the rapid growth in Twitter's user base, it's not surprising that the company would have problems scaling a widespread verification program, and in some ways doing so runs against the grain for the network, which has made a point of not requiring real names from users the way Facebook and Google+ have. But even worse than an arbitrary verification process is one that doesn't work properly, and one that the company is so opaque about. It's not clear why Twitter won't talk about it, but this vacuum of information is hardly conducive to gaining users' trust.
And trust is something that Twitter needs in spades, especially as it grows and becomes a crucial part of the way that news and other information spreads in a social-media age. The network is already in a delicate situation when it comes to issues like free speech, with the State Department pressuring it to shut down accounts that belong (or appear to belong) to terrorist organizations, and other lobby groups launching legal claims against the company because it allegedly supports entities like Hezbollah by giving them a platform.
The company’s refusal to provide more details about how the verification process functions may stem in part from its desire to protect the users it is verifying, or to prevent the system from being gamed somehow. But if it is going to continue to ask for the trust of its users, it is going to have to be more transparent about how it manages the network, or risk losing the faith that it has spent so much time building up.
Post and thumbnail photos courtesy of Flickr users Hans Gerwitz and See-ming Lee