An anti-Facebook right-wing 'stunt' has shed light on the company's tactics for tackling targeted harassment (FB)

Project Veritas, a right-wing activist group known for sometimes-misleading stings of organisations and individuals, has found its latest target: Facebook.

With the help of a now-fired Facebook worker, the group got its hands on a bevy of documents and internal files that it alleges show evidence of anti-conservative bias on the social network. Facebook has pushed back hard, saying Project Veritas has misinterpreted the documents and that they show nothing of the sort, calling the whole thing a "stunt."

But the files do provide a fascinating window into how Facebook has been thinking about the problem of coordinated harassment and trolling campaigns on its platform, and how it has considered novel approaches for trying to stop them — from public shaming to a "Twilight Zone."

Facebook says Project Veritas has it wrong

First, some background on Project Veritas: The organisation bills itself as "investigating and exposing corruption," and often goes undercover in sting operations, but has been accused of taking its "gotcha" material out of context. It has a long history of its projects backfiring, notably when it failed to snare a Washington Post reporter by posing as a sexual harassment victim of former GOP Senate candidate Roy Moore, and in 2010 its leader, James O'Keefe, was convicted of a misdemeanor after pretending to be a telephone repairman to try to break into the phones of a former Senator.

(Amusingly, one of the documents Project Veritas has released is a screenshot from Facebook's internal workplace chat platform, in which an employee describes the activist group's modus operandi like so: "There's a recipe here: Find junior person at brand-name company. Record them. Misrepresent their role. Promote wildly.")

The crux of its allegations against Facebook involves a tag that its source alleged was attached to some of the pages of right-wing figures, including the Pizzagate conspiracy theory pusher Mike Cernovich: "ActionDeboostLiveDistribution." The tag limits the reach of tagged pages' live video feeds, which Project Veritas alleges is a sign of anti-conservative bias.

Facebook, however, says that's just not how this works. Instead, the tag is applied when users broadcast non-live video using the company's livestreaming tool, in violation of its policies, a spokesperson said — suggesting the pages affected were only being dinged because they were misleading users by using the live tool for non-live footage.

"We fired this person a year ago for breaking multiple employment policies and using her contractor role at Facebook to perform a stunt for Project Veritas," the company said in a statement. "Unsurprisingly, the claims she is making validate her agenda and ignore the processes we have in place to ensure Facebook remains a platform to give people a voice, regardless of their political ideology."

Facebook wants to put trolls in 'Twilight Zone'

That said, one of the documents Project Veritas has obtained still provides interesting insight into how Facebook has thought about trying to police abusive behaviour on its platform without actually banning users.

In a presentation from 2017, the company suggests approaches for tackling trolls who coordinate harassment and trolling campaigns in private Facebook groups, using Kekistan — a far-right group born out of internet meme culture — as an example.

One potential tactic it outlines is putting suspected trolls in a so-called "Twilight Zone" and subtly messing with their ability to use Facebook. This includes things like logging them out every few minutes, randomly redirecting them to the homepage, "magically" making photos and comments fail to upload, and throttling their bandwidth. These methods add friction to users' experience, making trolling less easy and enjoyable (for the troll), without outright banning them.
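
The presentation reportedly lists these ideas rather than describing an implementation, but the concept amounts to probabilistic friction applied only to flagged accounts. Here is a minimal sketch of that idea in Python; the flag name, probabilities, and request hook are entirely hypothetical and are not drawn from Facebook's documents.

```python
import random

# Hypothetical friction settings for accounts flagged for the "Twilight Zone".
# The names and numbers are illustrative only.
FRICTION = {
    "force_logout": 0.05,    # chance per request of dropping the session
    "redirect_home": 0.10,   # chance of bouncing the user to the homepage
    "fail_upload": 0.30,     # chance an upload "magically" fails
    "bandwidth_kbps": 256,   # throttled transfer rate for media
}

def apply_friction(request, account):
    """Randomly degrade a flagged account's experience without banning it."""
    if not account.get("twilight_zone"):
        return request  # unflagged users are untouched

    if random.random() < FRICTION["force_logout"]:
        return {"action": "logout"}
    if random.random() < FRICTION["redirect_home"]:
        return {"action": "redirect", "target": "/"}
    if request.get("type") == "upload" and random.random() < FRICTION["fail_upload"]:
        return {"action": "error", "message": "Upload failed. Please try again."}

    # Otherwise serve the request, but at a throttled rate for media.
    request["max_kbps"] = FRICTION["bandwidth_kbps"]
    return request
```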

Another tactic is the use of shame. It suggests Facebook could tell a user's friends when they've been suspended for something "egregious," displaying a message like "John Smith's account has been suspended for 7 days because he shared hate speech in [a group]."

It adds: "Fear of being outed as a miscreant is what regulates behavior in real life and we should re-introduce that to the online world."

Other ideas include creating a "toxic meme cache," a registry of problematic images, and a "troll classifier" that detects if a user is a troll based on their vocabulary.
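
The presentation doesn't explain how a vocabulary-based classifier would work. At its simplest, the signal could be a bag-of-words score against a lexicon of in-group troll slang. Below is a rough sketch of that idea in Python; the word list, threshold, and function names are made up for illustration and are not from the documents.

```python
import re
from collections import Counter

# Hypothetical lexicon of troll in-group vocabulary; purely illustrative.
TROLL_LEXICON = {"kek", "normie", "shadilay"}

def troll_score(posts):
    """Fraction of a user's recent words that come from the troll lexicon."""
    words = []
    for post in posts:
        words.extend(re.findall(r"[a-z']+", post.lower()))
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in TROLL_LEXICON)
    return hits / len(words)

def is_probable_troll(posts, threshold=0.02):
    """Flag accounts whose vocabulary overlaps the lexicon above a threshold."""
    return troll_score(posts) >= threshold
```

A production system would presumably use a trained model over far richer features, but the lexicon-overlap version shows the basic shape of classifying users by the words they post.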

Facebook has not disputed the authenticity of the documents, though it's not clear whether any of these tactics were ever tested or incorporated into Facebook's systems. Facebook spokesperson Andy Stone did not respond to Business Insider's request for comment on them.

Do you work at Facebook? Contact this reporter via Signal or WhatsApp at +1 (650) 636-6268 using a non-work phone, email at rprice@businessinsider.com, Telegram or WeChat at robaeprice, or Twitter DM at @robaeprice. (PR pitches by email only, please.) You can also contact Business Insider securely via SecureDrop.

Original author: Rob Price
