Facebook Reveals New Verification Process for Large Pages

Facebook has released its most detailed policy prescriptions to date to combat election meddling in the upcoming U.S. midterms.

The new rules will affect businesses running Pages and Instagram accounts. The plan takes effect immediately in the U.S., and in the coming months elsewhere.

On Friday, the company said that it would require large Pages to verify the identities and locations of the people and businesses managing them, a move largely meant to prevent people outside the U.S. from making fraudulent posts on topics like domestic politics. (It has already said it will require anyone who buys political ads on Facebook to be verified). There are tens of millions of businesses and brands with Facebook and Instagram Pages that could be affected.

Facebook said that the new verification process would apply only to Pages with big followings, but did not specify the minimum number of fans a Page would need to be subject to the review process. Verification will require sharing government-issued identification with Facebook, as well as a mailing address, to which the social network will send a postcard containing a code confirming the address is valid.

“People who manage Pages with large numbers of followers will need to be verified,” Facebook said in a blog post attributed to Rob Goldman, VP of ads, and Alex Himel, VP of local and Pages. “Those who manage large Pages that do not clear the process will no longer be able to post. This will make it much harder for people to administer a Page using a fake account, which is strictly against our policies.”

Facebook did not reveal all the details of how it will vet Pages, but a spokesman for the company said there would be “additional signals” beyond follower counts that will determine which advertisers get screened.

The new procedure is being put in place as Facebook tries to rid its platform of the type of activity that marred the 2016 presidential election in the U.S. After the race, U.S. authorities determined that Russian actors used social media to spread disinformation and sow discord among voters.

Facebook’s internal investigation found similar behaviors and uncovered Russia-linked troll farms that created accounts under phony covers and bought ads.

The social media site said it found more accounts linked to the same Russian troll farm, the Internet Research Agency, that it had previously uncovered, and that it has removed more than 200 accounts across Facebook and Instagram. Last year, Facebook identified close to 500 such accounts, which could have produced posts seen by more than 100 million people.

The Russian group was blamed for helping stir discontent among users on a number of issues that divide Americans, such as guns and religion, and even for instigating real-life demonstrations.

Facebook announced last year that it would do more to vet political groups and ads on the platform. It said it is rolling out a new way to label political and issue-based ads, and will offer a way to search for who paid for any such ads.

The site has been responding to multiple vulnerabilities uncovered on the platform, and executives Mark Zuckerberg and Sheryl Sandberg have been making public appearances and talking with media to acknowledge failures.

At the end of March, a whistleblower came forward with information that data analytics company Cambridge Analytica had stored personal information on more than 50 million Facebook users (that number is now up to 87 million, Facebook says). It was discovered that Facebook had known about the illicit use of its data but continued to work with the firm, and may have failed to properly oversee the deletion of the data.

Since the security breakdown, Facebook has been making sweeping changes to how data can be collected by third-party developers and used by advertisers.
