Facebook Users Look For Answers As Company’s AI Goes Haywire After Moderators Were Sent Home

Facebook said Tuesday that a bug in the company’s anti-spam system, which was randomly and mistakenly flagging user content, is unrelated to any changes in its workforce due to coronavirus.

Twitter users tweeted images of a warning they received from Facebook suggesting their content violated company policies against spam. The content was flagged due to a bug rather than a lack of human oversight caused by social distancing, according to one Facebook security official.

“We’re on this – this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We’re in the process of fixing and bringing all these posts back. More soon,” Guy Rosen, Facebook’s vice president of safety and integrity, said in a tweet addressing the complaints.

Rosen was responding to a tweet Tuesday night from Facebook’s former head of security, Alex Stamos, who said from his vantage point the problem looks like “an anti-spam rule at FB is going haywire.”

Stamos added: “We might be seeing the start of the ML going nuts with less human oversight.” He also reminded people on Twitter that Facebook sent home its content moderators on Monday over concerns related to the coronavirus.

Facebook spokesman Andy Stone directed the Daily Caller News Foundation to Rosen’s tweet for further explanation.

https://twitter.com/sfmnemonic/status/1240059769295011841

https://twitter.com/ProfessorShaw/status/1240059624906162177

https://twitter.com/rudoren/status/1240063723408101377

Twitter and Google’s YouTube were among the big tech companies to announce Monday that their artificial intelligence tools will now be taking on more responsibility for content moderation due to social distancing.

“We’re working to improve our tech,” Twitter noted in a statement, adding that “this might result in some mistakes.” Big tech companies often blame artificial intelligence systems for mistakenly nixing or impacting user content that does not in any way violate their policies.

Twitter, for instance, suggested in April 2019 that its automated system was partially to blame for the suspension of a pro-life group.

“When an account violates the Twitter Rules, the system looks for linked accounts to mitigate things like ban evasion,” a company spokeswoman told the Daily Caller News Foundation in April 2019. “In this case, the account was mistakenly caught in our automated systems for ban evasion.”

The spokeswoman was referring to an account called “Unplanned,” which promoted a movie about a former abortion clinic director who became pro-life. The system is designed to suspend so-called sock-puppet accounts connected to a profile that violated company policies, according to the spokeswoman.

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact licensing@dailycallernewsfoundation.org

Chris White

