Pro-Palestinian social networkers use ‘algo-speak’ to avoid detection



A week into Israel’s Operation Iron Swords, the satirical news site The Onion joked that “hundreds of multipronged Israel-Palestine proxy wars are currently being fought across local Facebook groups.” In fact, commentary about the war has become a battleground, both in person and online, over who can say what, and how.

For better or worse, it is notoriously difficult to quash speech on the internet, and the war in Gaza has drawn attention to a new front in that perpetual conflict: ‘algo-speak,’ a term coined in recent years to describe the strange, coded language used on platforms like Instagram and TikTok to evade automated censorship algorithms.

Josh Joffe, a 23-year-old Jewish American, looks at social media posts about the Israel-Palestinian conflict as he poses for a photo with the phone he uses to access social media at his home in Washington, U.S., October 15, 2023. (credit: REUTERS/ELIZABETH FRANTZ)

A report in The Washington Post this week describes measures that pro-Palestinian users have adopted since the start of the war: “In some cases,” the paper reports, “users may begin their post with ‘I stand with Israel’ only to start talking about their support for Palestinians. Others are finding creative ways to spell critical words about the conflict in both Arabic and English,” or using a convoluted mix of numbers, symbols, and punctuation marks that are legible to a human being but incoherent to a computer: users post about ‘P*les+in1ans,’ or about who is committing acts of ‘t*rr0rism’ in ‘Pa&lesti*ne.’
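To see why these spellings work, consider how a naive keyword filter behaves. The sketch below is purely illustrative, assuming a hypothetical exact-match blocklist rather than any platform’s actual moderation system; the blocklist terms and posts are invented examples.

```python
# A minimal sketch (not any platform's real moderation code) showing why
# symbol-substituted spellings slip past a naive keyword blocklist.
# The blocklist terms and example posts are illustrative assumptions.

BLOCKLIST = {"palestinians", "terrorism"}

def naive_filter(post: str) -> bool:
    """Flag a post if any blocklisted term appears verbatim as a word."""
    words = post.lower().split()
    return any(term in words for term in BLOCKLIST)

plain = "who is committing terrorism against palestinians"
obfuscated = "who is committing t*rr0rism against P*les+in1ans"

print(naive_filter(plain))       # True  -- the exact spelling is caught
print(naive_filter(obfuscated))  # False -- one swapped character breaks the match
```

A single altered character is enough to defeat an exact-match comparison, which is why the misspellings can be almost arbitrary so long as a human reader can still decode them.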

Yet another common tactic is to replace a word that’s likely to raise red flags with an innocuous-sounding stand-in. For years now, Gen-Zers have used ‘seggs’ in place of ‘sex,’ for example, or ‘un-alive’ in place of ‘suicide.’ Those topics, while not necessarily ban-worthy, are usually flagged as advertiser-unfriendly. In wartime, however, the stakes are a lot higher, and the game of whack-a-mole that much more chaotic. A video about terrorism may carry captions about terriers instead, and an argument about violence may appear as one about violins.
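Platforms can push back by normalizing text before matching, folding common digit-and-symbol swaps back into letters. The sketch below is an assumption about how such a countermeasure might look, not a documented feature of any moderation system; it also shows why sound-alike swaps like ‘terriers’ escape it entirely.

```python
# A sketch of one plausible countermeasure -- folding common character
# substitutions back to letters before matching against a blocklist.
# This is an assumption about how such a filter might work, not a
# documented platform API; the substitution table is illustrative.

SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})
NOISE = "*+&^#"  # decorative symbols to strip out entirely

def normalize(post: str) -> str:
    """Undo common digit/symbol swaps and strip decorative characters."""
    folded = post.lower().translate(SUBSTITUTIONS)
    return "".join(ch for ch in folded if ch not in NOISE)

print(normalize("t*rr0rism in Pa&lesti*ne"))  # 'terrorism in palestine' -- caught
print(normalize("a video about terriers"))    # unchanged -- a sound-alike swap
                                              # leaves nothing for folding to undo
```

Character folding catches mechanical substitutions, but a sound-alike replaces the whole word, leaving nothing for normalization to reverse; that gap is what keeps the whack-a-mole going.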

A challenge for social networks that goes well beyond algo-speak

The war in Gaza has been a nightmare for social media moderators in more ways than one. A report from The New York Times in the early days of the war described “a flood of misinformation and violent images” on X, formerly Twitter. The site ‘has become a war zone with no ethics,’ Achiya Schatz, director of the pro-Israel media monitor FakeReporter, was quoted as saying. NewsGuard, an online content watchdog group, found that accounts with ‘blue checkmarks’ were some of the worst culprits, calling them “misinformation superspreaders.”

And because of laws passed in recent years to curb the spread of misinformation, some of these social networks could face heavy penalties if they fail to get such content under control. On October 11, the European Union gave Meta, Facebook’s parent company, 24 hours to answer for the spread of Hamas propaganda on its platform, and today, October 25, is the deadline for both Meta and TikTok to account for their handling of misinformation. The EU also challenged Alphabet, Google’s parent company, in a letter to Google’s CEO about a “surge of illegal content and disinformation.”

“As you know,” concluded the letter, which was posted on X, “following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed.”
