Facebook footed $13 billion security bill since 2016, company says

Facebook disclosed that since 2016 the social networking giant has invested more than $13 billion and brought on 40,000 staffers to work on the platform’s safety and security, following a series of leaks reported by the Wall Street Journal (WSJ).

Weeks after the Wall Street Journal published “The Facebook Files,” the social media giant returned to the subject and laid out in a blog post the lengths it says it is going to in order to maintain safety and security on its platform.

“In the past, we didn’t address safety and security challenges early enough in the product development process,” Facebook said.

“But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it,” the post added.

In its series of reports, the WSJ alleged that Facebook deliberately delayed taking action during the COVID-19 pandemic, even though the Big Tech giant was aware of the severity of the situation and of the harm that misinformation and emotional distress were inflicting on its user base.

According to the newspaper, Facebook knew of the harm it was exposing its users to, yet the platform refrained from dealing with or fixing these issues out of concern that doing so might hurt user engagement.

In an effort to shore up the Big Tech giant’s public image, Facebook executive Nick Clegg issued a counterstatement accusing the publication of deliberately mischaracterizing what the company was trying to accomplish during the pandemic.

To highlight the Big Tech mogul’s efforts, the post noted that the platform’s security teams have taken down more than 150 covert influence operations. Facebook’s artificial intelligence (AI) systems also played a major role in blocking 3 billion fake accounts in the first half of 2021 and have continued to improve since.

“Products also have to go through an Integrity Review process, similar to the Privacy Review process, so we can anticipate potential abuses and build in ways to mitigate them. Here are a few examples of how far we’ve come,” the post continued.

In parallel, the company sought to distance itself from The Facebook Files by pointing out that it proactively removes content that directly violates its hate speech standards, and that it now takes down 15 times more such content across Facebook and Instagram than it did in 2017.

The company attributes this improvement to advanced technology that learns from enforcement in one language and applies the same tactics to posts in other languages, boosting its overall detection performance.