Ethical Tech

Facebook employees moved to restrain right-wing news outlets, WSJ reports

Once more, the social networking giant is under the spotlight.

Facebook employees have repeatedly moved to restrain right-wing outlets, brushing aside objections from managers who feared a political clash on the platform, the Wall Street Journal (WSJ) reported.

The internal debates between the tech giant’s managers and employees were driven by growing concerns that Facebook treats news outlets differently depending on their political stance.

The WSJ report highlighted Facebook’s handling of Breitbart, with employees pushing to remove the outlet from the platform’s News Tab over its coverage of the protests that followed George Floyd’s death last year.

“I can also tell you that we saw drops in trust in CNN 2 years ago: would we take the same approach for them too?” a senior researcher responded to an employee’s question about removing Breitbart from Facebook.

Facebook’s vice president of global affairs, Nick Clegg, informed employees that “we need to steel ourselves for more bad headlines in the upcoming days, I’m afraid.”

Clegg’s statement follows the WSJ’s latest report, part of a series of damaging revelations about how Facebook manages news on its platform and about its ever-growing appetite for profit at the expense of its users.

Voices are rising against the titan’s conduct toward its users: a new whistleblower came forward on Friday, telling the Securities and Exchange Commission (SEC) that the company has repeatedly dismissed concerns about the spread of hate speech and misinformation out of fear that acting on them would jeopardize its financial growth.

While the new whistleblower’s name has yet to be revealed, the individual submitted the testimony under oath. The testimony added that one Facebook communications official, Tucker Bounds, dismissed hate speech concerns as a “flash in the pan” and went on to say that even though “some legislators will get pissy,” Facebook is “printing money in the basement.”

In parallel, a former employee of the company told The Post that the whistleblower’s statements about Tucker Bounds are accurate.

“That’s how Tucker talks,” the former employee stated.

“The Tucker quote, as much as I disagree with it, really does reflect the attitude during 2017,” he added.

Facebook whistleblower Frances Haugen’s statement to the SEC encouraged other employees to come forward and speak out against the company’s pursuit of financial growth at the expense of its users. At the end of the day, the social networking giant managed to grow its supremacy while operating in the dark.

Daryn is a technical writer with a thorough background and extensive experience in both academic and digital writing.

Ethical Tech

Report: ‘Whole of society’ effort must fight misinformation

Misinformation is jeopardizing efforts to solve some of humanity’s greatest challenges, be it climate change, COVID-19 or political polarization, according to a new report from the Aspen Institute that’s backed by prominent voices in media and cybersecurity.

Recommendations in the 80-page analysis, published Monday, call for new regulations on social media platforms; stronger, more consistent rules for misinformation “superspreaders” who amplify harmful falsehoods; and new investments in authoritative journalism and in organizations that teach critical thinking and media literacy.

The report is the product of the Aspen Institute’s Commission on Information Disorder, a 16-person panel that includes experts on the internet and misinformation, as well as prominent names such as Prince Harry, the duke of Sussex.

“Hundreds of millions of people pay the price, every single day, for a world disordered by lies,” reads the report’s introduction, written by the commission’s three co-chairs: journalist Katie Couric, former White House cybersecurity official Christopher Krebs and Rashad Robinson, president of the organization Color of Change.

Specifically, the report calls for a national strategy for confronting misinformation, and urges lawmakers to consider laws that would make social media platforms more transparent and accountable — to officials, researchers and consumers.

Another recommendation would strip some of the platforms’ legal immunity when it comes to content promoted by ads, or for lawsuits regarding the implementation of their platform’s designs and features.

The authors of the report blame the proliferation of misinformation on factors including the rapid growth of social media, a decline in traditional local journalism and a loss of trust in institutions.

Falsehoods can prove deadly, as shown by the conspiracy theories and bogus claims about COVID-19 and vaccines that have set back attempts to stop the coronavirus. The report’s authors said misinformation is proving just as damaging when it comes to faith in elections or efforts to fight climate change.

During a briefing on the report’s findings Monday, Couric, Krebs and Robinson stressed that every American has a role to play in fighting misinformation, by reviewing where they get their information, by ensuring that they don’t spread harmful falsehoods, and by fighting the polarization that fuels misinformation.

“The path to making real change is going to require all of us,” Robinson said.

The Aspen Institute has shared its findings with several social media platforms including Facebook. A message seeking a response from that company was not immediately returned on Monday.

The Aspen Institute is a nonpartisan nonprofit based in Washington, D.C. The report was funded by Craig Newmark Philanthropies, a charity founded by the creator of Craigslist.

Ethical Tech

Meta’s networking platforms struggle to curb hate speech, report shows

Meta on Wednesday shared its latest statistics report on the prevalence of bullying, hate speech, and harassment on its platforms, as public scrutiny of the social network intensifies.

The data in the Big Tech giant’s latest quarterly transparency report surfaced as Facebook’s parent company, Meta, faces growing scrutiny over its ability to safeguard its user base and over the effectiveness of its policies.

“As a social technology company, helping people feel safe to connect and engage with others is central to what we do. But one form of abuse that has unfortunately always existed when people interact with each other is bullying and harassment,” the blog post stated.

“While this challenge isn’t unique to social media, we give people tools to protect themselves and also measure how we are doing,” it added.

The report marks the first time Facebook has shared “prevalence” metrics for bullying and harassment, showing how the social network uses statistics to track and measure violating content that typically slips past its systems.

According to the giant’s report, the prevalence of bullying content ranged from 0.14 to 0.15 percent on Facebook and from 0.05 to 0.06 percent on Instagram, as Engadget noted.

“This means bullying and harassment content was seen between 14 and 15 times per every 10,000 views of content on Facebook and between 5 and 6 times per 10,000 views for content on Instagram,” the post added.
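To make the arithmetic behind that conversion concrete, here is a minimal sketch in Python; the function name is ours, not Meta’s, and the figures are simply the endpoints of the ranges reported above:

```python
def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into views per 10,000 content views."""
    return prevalence_pct / 100 * 10_000

# Endpoints of the ranges Meta reported above
print(views_per_10k(0.15))  # Facebook upper bound: 15.0 views per 10,000
print(views_per_10k(0.05))  # Instagram lower bound: 5.0 views per 10,000
```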

The Wall Street Journal’s earlier reporting on the social network’s conduct toward its users, the Facebook Files, had unmasked the platforms’ detrimental effect on users, teens in particular.

The documents revealed that the tech titan knew about this damaging effect, particularly on teens’ mental well-being, but took few of the steps needed to counteract the issues emerging from its platform.

This emphasized the wide gap between how Facebook presents itself to the public and the actual efforts it makes to reduce the disturbing impact it has on its user base.

According to Facebook whistleblower Frances Haugen, the Menlo Park-based firm is simply incapable of addressing the massive volume of hate speech on its platform; Haugen revealed that the titan catches only three to five percent of it. Most violating content therefore goes unnoticed, playing a significant role in contaminating users’ News Feeds.

In its defense, the company has played every card it holds to deflect the public’s wrath over its treatment of its users, and the latest prevalence report can be seen as one more tactic to polish its image in the public eye.

Yet one thing the social network’s own research team has proved to the world is that Facebook itself cannot handle the ever-growing damage it has caused at a societal level.

That is because the platform’s automated systems cannot be relied on to identify harmful content, such as bullying and material that feeds body image issues, especially when it is not in English.

While the tech giant’s report shows hate speech prevalence dropping quarter over quarter, from 0.05 percent in the second quarter to 0.03 percent in the third, that progress remains modest against the mass of hate speech generated on both social platforms.

Experts believe that Meta, the new façade Facebook hopes will embellish its image, lacks the capacity to refurbish its ecosystem to meet the public’s and regulators’ demands for more ethical products without continuing to sacrifice users’ well-being in pursuit of popularity, supremacy, and wealth.

Ethical Tech

Meta to remove ad targeting options tied to sensitive topics

Facebook and Instagram’s parent company, Meta, announced Tuesday that it will remove detailed targeting options that let advertisers reach users based on their engagement with sensitive topics.

In a post on the Meta for Business blog, the recently rebranded company said it will rein in advertisers’ reach by blocking targeting tied to topics such as race or ethnicity, religious views, political beliefs, sexual orientation, and health.

“We’ve heard concerns from experts that targeting options like these could be used in ways that lead to negative experiences for people in underrepresented groups,” Meta’s vice president for marketing and ads, Graham Mudd, wrote in the post.

The removal of detailed targeting based on sensitive topics will take effect on January 19, 2022. The move will reshape Meta’s advertising business, which accounts for approximately 98 percent of its global revenue.

According to Mudd, the unsurprising change of tactics for the Menlo Park company stems from concerns voiced by civil rights specialists and lawmakers, who warned that a remarkable number of advertisers were exploiting the platform’s targeting options unethically.

Most significantly, these targeting options draw on users’ interactions with content across Meta’s products, including Facebook, Instagram, and Messenger.

“The decision to remove these Detailed Targeting options was not easy and we know this change may negatively impact some businesses and organizations,” Mudd explained.

“Some of our advertising partners have expressed concerns about these targeting options going away because of their ability to help generate positive societal change, while others understand the decision to remove them,” he added.

From another angle, digital ad-buying specialists said Meta’s latest maneuver will hurt both for-profit and nonprofit groups that depend on ad targeting to raise funds, according to The New York Times.

The conglomerate is drawing up plans to remove a range of its “sensitive” detailed targeting options, having been forced to pull controversial categories on several occasions in the past.

That followed revelations that Facebook had armed advertisers with the means to discriminate against specific demographic groups or provoke violence through its network.

In the past, advertisers could direct their ads at radical interest groups, such as anti-Semitic categories and pseudoscience, beliefs dressed up as scientific claims and facts but incompatible with scientific methods.

Meta’s latest adjustment to its sensitive-content targeting policy does not stop advertisers from addressing specific audiences on its platform; it simply restructures the framework to keep ad delivery safe and effective for both users and advertisers of its products.

The behemoth’s marketing vice president revealed that even though Meta is ousting detailed ad targeting, advertising groups can still use a feature labeled “Engagement Custom Audiences,” which helps them reach users who have already shown interest in their page.

In parallel, advertisers can also build on that audience to create a Lookalike Audience.

“A lookalike audience is a way your ads can reach new people who are likely to be interested in your business because they share similar characteristics to your existing customer,” the blog post explained.

Meta also revealed that it will give its user base more control over the ads they are shown, allowing them to see fewer ads about specific topics, including gambling and weight loss.
