VANITY FAIR: HOW THE NEW ZEALAND SHOOTER HIJACKED OUR SOCIAL MEDIA HELL-SCAPE
Source: Vanity Fair, by Eric Lutz
Facebook, YouTube, and Twitter raced to contain the perpetrator’s 17-minute video and manifesto. But the Christchurch massacre underscores the ways in which Silicon Valley is still struggling to combat hate speech and violence.
The right-wing attack on two New Zealand mosques Friday, which left nearly 50 dead and dozens wounded, seemed specifically designed to go viral. In the wake of the shooting, Facebook, Twitter, and YouTube were left scrambling to stop the spread of a 17-minute video of the attack, recorded by the 28-year-old Australian gunman, that was live-streamed and shared across all three platforms. Tech giants have faced versions of this same struggle before, having repeatedly failed to tamp down on conspiracy theories after other large-scale tragedies. But the Christchurch video, and companies’ efforts to contain it, have intensified scrutiny of the platforms’ handling of offensive and dangerous content—particularly when the perpetrator is Web-savvy.
“This is just going to keep happening until tech companies step up and realize that this is a singular problem, repeated in deadly ways near-monthly, spurred by their machines,” reporter Ben Collins wrote on Twitter in the wake of the attack. “It will take a realignment of priorities for tech companies to snuff out white supremacists seizing on faulty algorithms to incite violence.”
As Collins notes, the purposeful spread of violence on social media is a long-standing problem. Other, less calculated perpetrators have posted or live-streamed violent acts before. In 2017, a 37-year-old man recorded himself shooting and killing another man, posted it on Facebook, and then took to Facebook Live to describe the attack. But, as tech writer Charlie Warzel wrote in The New York Times, the Christchurch shooter’s “apparent familiarity with the darkest corners of the Internet” may become the new normal. “Not only has conspiratorial hate spread from the Internet to real life,” he wrote, “it’s also weaponized to go viral.”
In the hours after the shooting, tech companies raced to assure the public that they were acting appropriately. “New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced, and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, told CNN in a statement. Twitter told the network it had suspended “an account related to the shooting,” and was “working to remove the video from its platform,” while YouTube tweeted, “Our hearts are broken over today’s terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.” Yet copies of the video and the shooter’s manifesto—which was riddled with in-references to memes and Internet culture—continued to circulate. (YouTube declined to comment to CNN about how long it took to remove the video initially.)
But, others argue, even an instant response is no match for the platforms’ routine failure to, say, effectively monitor hate speech or Islamophobia, and to adapt to the new ways in which they’re spread. “The pattern continues,” wrote Times tech columnist Kevin Roose. “People become fluent in the culture of online extremism, they make and consume edgy memes, they cluster and harden. And once in a while, one of them erupts.”
Lucinda Creighton, a senior adviser at the Counter Extremism Project, expressed similar sentiments to CNN. “While Google, YouTube, Facebook, and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not, because they’re allowing these videos to reappear all the time,” she said. “The tech companies basically don’t see this as a priority.”