
Facebook Is Actively Promoting ISIS Propaganda, Whistleblower Says

  • ISIS terrorists are likely using Facebook’s auto-generated video features to spread propaganda, a new complaint alleges
  • Facebook’s AI tools failed to flag a large amount of easily identifiable ISIS-related content, according to a new complaint to the Securities and Exchange Commission, which arrives as the company focuses on eliminating white nationalist material
  • Researchers say a new study shows Facebook’s AI failed to flag a trove of ISIS-related material

Facebook is auto-generating and helping to mass promote Islamic State-created propaganda as CEO Mark Zuckerberg cheers the gains his company has made taking extremist content offline, a whistleblower alleges in a complaint Thursday to the Securities and Exchange Commission.

The social media company is likely violating securities laws that prohibit companies from misleading shareholders and the public, according to a petition filed by the National Whistleblower Center (NWC). The complaint includes a study showing that Facebook’s auto-generation features produced videos for ISIS terrorists detailing their exploits over the past year.

One video begins with a photo of the black flags of jihad and then flashes highlights of a year of social media posts from a user calling himself “Abdel-Rahim Moussa, the Caliphate.” It then shows plaques of anti-Semitic verses and a picture of men carrying more jihadi flags while they burn the American flag.

One profile of an al-Qaida-affiliated group listed the user’s employer as Facebook. The video ends with Facebook’s famous salutation, “Thanks for being here, from Facebook,” before flashing the company’s “thumbs up” image.

Researchers monitored the Facebook pages of users in 2018 who affiliated themselves with groups the U.S. has designated as terrorist groups. Nearly 38 percent of the posts with symbols of extremist groups were removed, their research showed. Much of the banned content cited in the study — an execution video, and images of severed heads — remained on the platform as of May, media reports show.

The complaint comes as Zuckerberg claims his company has made a big dent in ISIS material. “In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category, our systems flag proactively before anyone sees it,” he said during an earnings call in April. Zuckerberg added: “That’s what really good looks like.”

The researchers involved in the project argue that many more such profiles likely dot the platform. “I mean, that’s just stretching the imagination to beyond incredulity,” Amr Al Azm, one of the researchers, told an AP reporter. “If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”

Facebook said it’s working on the problem. “After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” a company representative said in a statement. “We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”

Al Azm’s researchers in Syria looked at 63 profile accounts that liked the auto-generated page for Hay’at Tahrir al-Sham, an al-Qaida group affiliated with al-Nusra Front. Researchers confirmed that 31 of the profiles matched real people in Syria. Experts believe Facebook’s algorithmic tools are not up to the task of effectively moderating the company’s massive platform, a platform that registers more than 2 billion users per month.

Facebook’s artificial intelligence system is failing, according to Hany Farid, a digital forensics expert at the University of California, Berkeley, who advises the Counter-Extremism Project.

“The whole infrastructure is fundamentally flawed,” he told the AP. “And there’s very little appetite to fix it because what Facebook and the other social media companies know is that once they start being responsible for material on their platforms it opens up a whole can of worms.” He’s not the only one warning people about ineffective AI systems.

Emily Williams, a data scientist and founder of Whole Systems Enterprises, for one, argues that Facebook’s lack of transparency about the frailties of its AI and deep-learning tools makes it difficult for conservatives to understand why and how their content is being throttled.

Conservatives, meanwhile, argue that Facebook is targeting them because of their politics. President Donald Trump’s social media director Dan Scavino Jr., for instance, was temporarily blocked in March from making public Facebook comments. The president later told his Twitter followers that he is looking into complaints that big tech companies are targeting conservatives.

Williams believes that Facebook’s algorithm likely has a 70 percent success rate, meaning that roughly 30 percent of the time the company’s moderators are nixing conservatives who share provocative content, but not content that would actually be prohibited by the Silicon Valley company.

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact [email protected]
