
Ethical Tech

Why we need more than transparency reports from social media

Inside Telecom Staff


Social media giants Facebook and YouTube each have more than 2 billion users, and with a user base of that size come many violations. Reports recently released by both companies indicate just how much wrongdoing occurs. YouTube in particular has come under fire for its slow and murky appeals process, its lack of statistics and its failure to remove graphic and inappropriate content quickly enough.

In the second and third quarters of 2019, Facebook announced that it removed or labeled more than 54 million pieces of content that it considered violent and graphic. This included 11.4 million posts that violated its rules on hate speech, 5.7 million uploads that violated its bullying and harassment policies and 18.5 million items determined to be child nudity or sexual exploitation. YouTube’s removal of content it considered ‘violations of children’s safety’ also spiked at the end of last year.

Facebook’s report also contained information on its efforts to police Instagram. Over the last six months, it removed 1.2 million photos or videos involving child nudity or exploitation and 3 million that violated its policies prohibiting the sale of illegal drugs.

What is glaringly obvious is that these numbers are growing, yet the rate of removal still lags the rate at which inappropriate content is posted. Even a single incident can create chaos for content moderators. The Christchurch shooting, for example, which was covered in the report, generated 4.5 million pieces of content that required removal between March 15 and September 30, 2019.

Facebook appears to be catching more of these problems via AI and automated systems, while YouTube is under fire for its failure to remove inappropriate content quickly enough.

Guy Rosen, Facebook’s vice president of integrity, described Facebook’s progress:

“Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy”.
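
Rosen’s description is not accompanied by published code, but the core idea of flagging content that is identical or near-identical to previously removed material can be sketched with a simple similarity check. The snippet below is a minimal, hypothetical Python illustration using word shingles and Jaccard similarity; the function names and the 0.9 threshold are assumptions for illustration, not Facebook’s actual system, which also matches images and video at vastly larger scale.

    # Illustrative sketch only: not Facebook's published code. Flags a post
    # that is identical or near-identical to previously removed content by
    # comparing word shingles with Jaccard similarity. The 0.9 threshold and
    # all names here are hypothetical.
    import re

    def shingles(text, k=3):
        # Normalize to lowercase alphanumeric words, then take k-word shingles.
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        # Similarity of two shingle sets: |intersection| / |union|, 0.0 to 1.0.
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    def is_near_duplicate(candidate, removed_posts, threshold=0.9):
        # True if the candidate closely matches any previously removed post.
        cand = shingles(candidate)
        return any(jaccard(cand, shingles(old)) >= threshold for old in removed_posts)

    removed = ["example text of a post removed for violating policy"]
    print(is_near_duplicate("Example text of a post removed for violating policy!", removed))  # True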

Obviously, the quicker social media giants can detect hate speech, drug and weapon sales, child exploitation and other concerns, the sooner they can alert the relevant law enforcement agencies to react effectively. For Facebook, this is a positive story conveyed in its latest report.

However, as usual with social media, there is a darker side to the story: how frequently governments compel Facebook to release user data, typically without informing the person in question, and how often they shut off the service in a country completely.

The U.S. government leads the way with 50,741 separate demands for user data. Facebook complied with around 88% of requests, and the company reported that two-thirds of them came with a ‘gag’ order preventing it from informing the user about the government request.

The report also found that 15 countries disrupted Facebook’s service 67 times in the first half of the year, compared with nine countries disrupting service 53 times in the same period of the previous year. In most cases, disrupting Facebook is seen as an attempt to quash anti-government dissent.

Facebook takes a more rigorous approach to transparency than its peers (Google, YouTube, etc.), as CEO Mark Zuckerberg pointed out in a press call addressing the report. These reports do highlight some of the work done to keep users safe, but they also demonstrate how little recourse people have if they are accidentally caught up in automated systems. The YouTube appeals process is murky, and human language and social norms can change faster than machine learning systems can catch up to them. It is also worth noting that YouTube only recently included appeals data in its reports, and even then only for the last quarter of 2019. Furthermore, the data it did release showed that 78% of appeals are declined.

If what you want from a social media platform is fairness and justice, then transparency reports are informative and necessary, but still not sufficient. There have been calls for oversight boards or third-party intermediaries to intervene, as the average user has no way to hold a platform accountable when it does make a mistake.



Ethical Tech

Lawmakers call YouTube Kids a ‘wasteland of vapid’ content

Associated Press


A House subcommittee is investigating YouTube Kids, saying the Google-owned video service feeds children inappropriate material in “a wasteland of vapid, consumerist content” so it can serve them ads.

The inquiry comes despite Google agreeing to pay $170 million in 2019 to settle allegations that YouTube collected personal data on children without their parents’ consent.

In a letter sent Tuesday to YouTube CEO Susan Wojcicki, the U.S. House Oversight and Reform subcommittee on economic and consumer policy said YouTube does not do enough to protect kids from material that could harm them. Instead it relies on artificial intelligence and creators’ self-regulation to decide what videos make it onto the platform, according to the letter from the committee’s chairman, Illinois Democrat Raja Krishnamoorthi.

And despite changes in the wake of the 2019 settlement, the letter notes, YouTube Kids still shows ads to children. But instead of targeting them based on kids’ online activity, it now targets them based on the videos they are watching.

YouTube said it has sought to provide kids and families with protections and controls enabling them to view age-appropriate content. It also emphasized that the 2019 settlement was over the regular YouTube platform, not the kids version.

“We’ve made significant investments in the YouTube Kids app to make it safer and to serve more educational and enriching content for kids, based on principles developed with experts and parents,” the company said.

The congressional investigation comes a year into the pandemic that has shuttered schools and left parents who are working from home increasingly reliant on services such as YouTube to keep kids occupied. This has led to a rethinking of “screen time” rules and guilt over the amount of time kids spend in front of screens, with some experts recommending that parents focus on quality, not quantity.

But lawmakers say YouTube Kids is anything but quality.

“YouTube Kids spends no time or effort determining the appropriateness of content before it becomes available for children to watch,” the letter says. “YouTube Kids allows content creators to self-regulate. YouTube only asks that they consider factors including the subject matter of the video, whether the video has an emphasis on kids characters, themes, toys or games, and more.”

Kids under 13 are protected by a 1998 federal law that requires parental consent before companies can collect and share their personal information.

Under the 2019 settlement, Google agreed to work with video creators to label material aimed at kids. It said it would limit data collection when users view such videos, regardless of their age.

But lawmakers say even after the settlement, YouTube Kids, which launched in 2015, continued to exploit loopholes and advertise to children. While it does not target ads based on viewer interests the way the main YouTube service does, it tracks information about what kids are watching in order to recommend videos. It also collects personally identifying device information.

There are also other, sneaky ways ads are reaching children. A “high volume” of kids’ videos, the letter says, smuggle hidden marketing and advertising with product placements by “children’s influencers,” who are often children themselves.

“YouTube does not appear to be trying to prevent such problematic marketing,” the letter says. The House research team found that only 4% of videos it looked at had a “high educational value” offering developmentally appropriate material.

The kids app has helped turn YouTube into an increasingly attractive outlet for the advertising sales that generate most of the profits for Google and its corporate parent, Alphabet, which is based in Mountain View, California.

YouTube brought in nearly $20 billion in ad revenue last year, more than doubling from its total just three years ago. The video site now accounts for about 13% of Google’s total ad sales, up from slightly more than 8% in 2017.

The House subcommittee is recommending YouTube turn off advertisements completely for kids aged 7 and under. It also asks that it give parents the ability to turn off the “autoplay” feature, which is not currently possible (though parents are able to set a timer to limit their kids’ video watching).

The lawmakers are asking YouTube to provide them with information on YouTube Kids’ top videos, channels and revenue information, as well as average time spent and number of videos watched, per user, among other information.


By BARBARA ORTUTAY.


Ethical Tech

Lawmakers press Big Tech CEOs on speech responsibility

Associated Press


The CEOs of tech giants Facebook, Twitter and Google faced a grilling in Congress Thursday as lawmakers tried to draw them into acknowledging their companies’ roles in fueling the January insurrection at the U.S. Capitol and rising COVID-19 vaccine misinformation.

In a hearing by the House Energy and Commerce Committee, lawmakers pounded Facebook CEO Mark Zuckerberg; Sundar Pichai, the CEO of Google, which owns YouTube; and Twitter chief Jack Dorsey over their content policies, use of consumers’ data and children’s media use.

Republicans raised long-running, unproven conservative grievances that the platforms are biased against conservative viewpoints and censor material based on political or religious views.

There is increasing support in Congress for legislation to rein in Big Tech companies.

“The time for self-regulation is over. It’s time we legislate to hold you accountable,” said Rep. Frank Pallone, D-N.J., the committee’s chairman.

That legislative momentum, plus the social environment of political polarization, hate speech and violence against minorities, was reflected in panel members’ impatience as they questioned the three executives. Several lawmakers demanded yes-or-no answers and repeatedly cut the executives off.

“We always feel some sense of responsibility,” Pichai said. Zuckerberg used the word “nuanced” several times to insist that the issues can’t be boiled down. “Any system can make mistakes” in moderating harmful material, he said.

Shortly after the hearing began, it became clear that most of the lawmakers had already made up their minds that the big technology companies need to be regulated more rigorously to rein in their sway over what people read and watch online.

In a round of questioning that served as both political theater and a public flogging, lawmakers called out the CEOs for creating platforms that enabled the spread of damaging misinformation about last year’s U.S. presidential election and the current COVID-19 vaccine, all in a relentless pursuit of profit and higher stock prices.

Lawmakers also blamed the companies’ services for poisoning the minds of children and inciting the deadly insurrection at the Capitol, as well as contributing to the more recent mass murders in Atlanta and Boulder, Colorado.

The three CEOs staunchly defended their companies’ efforts to weed out the increasingly toxic content posted and circulated on services used by billions of people, while noting their efforts to balance freedom of speech.

“I don’t think we should be the arbiters of truth and I don’t think the government should be either,” Dorsey said.

Democrats are laying responsibility on the social media platforms for disseminating false information on the November election and the “Stop the Steal” voting fraud claims fueled by former President Donald Trump, which led to the deadly attack on the Capitol. Rep. Mike Doyle, a Pennsylvania Democrat, told the CEOs that the riot “started and was nourished on your platforms.”

Support is building for Congress to impose new curbs on legal protections regarding speech posted on their platforms. Both Republicans and Democrats — including President Joe Biden as a candidate — have called for stripping away some of the protections under so-called Section 230 of a 25-year-old telecommunications law that shields internet companies from liability for what users post.

The tech CEOs defended the legal shield under Section 230, saying it has helped make the internet the forum of free expression that it is today. Zuckerberg, however, again urged the lawmakers to update that law to ensure it’s working as intended. He added a specific suggestion: Congress could require internet platforms to gain legal protection only by proving that their systems for identifying illegal content are up to snuff.

Trump enjoyed special treatment on Facebook and Twitter until January, despite spreading misinformation, pushing false claims of voting fraud, and promulgating hate. Facebook banned Trump indefinitely a day after rioters egged on by Trump swarmed the Capitol. Twitter soon followed, permanently disabling Trump’s favored bullhorn.

Facebook hasn’t yet decided whether it will banish the former president permanently. The company punted that decision to its quasi-independent Oversight Board — sort of a Supreme Court of Facebook enforcement — which is expected to rule on the matter next month.

Researchers say there’s no evidence that the social media giants are biased against conservative news, posts or other material, or that they favor one side of political debate over another.

Democrats, meanwhile, are largely focused on hate speech and incitement that can spawn real-world violence. An outside report issued this week found that Facebook has allowed groups — many tied to QAnon, boogaloo and militia movements — to extol violence during the 2020 election and in the weeks leading up to the deadly riots on the Capitol.

With the tone and tenor of Thursday’s hearing set early, many internet and Twitter users seemed more interested in Dorsey’s fresh buzz cut and trimmed beard. His newly groomed appearance captured immediate attention because it was a stark contrast to the scraggly beard that drew comparisons to Rasputin in last year’s remote appearances before Congress.

Another point of curiosity: a mysterious clock in Dorsey’s kitchen that displayed sets of figures that seemed to be randomly changing in a way that made it clear it had nothing to do with the time of day. The tech blog Gizmodo eventually revealed the device was a “BlockClock” that shows the latest prices of cryptocurrencies like bitcoin and ethereum.


WASHINGTON (AP) — By MARCY GORDON and BARBARA ORTUTAY


Ethical Tech

Betting sites offer software blocks for compulsive gamblers

Associated Press


Some sports betting companies are offering tools that allow compulsive gamblers to block themselves from most online sites.

Unibet last week announced it was making software from U.K.-based Gamban available to customers in the U.S. The tools allow customers, in effect, to ban themselves from gambling sites across multiple devices.

On Wednesday, FanDuel followed suit. The software blocks thousands of licensed and unlicensed gambling sites and is constantly updated to add new ones as they appear.
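
Gamban has not published the internals of its blocking software, but the basic mechanism of self-exclusion can be sketched as a regularly refreshed domain blocklist consulted before a page loads. The snippet below is a hypothetical Python illustration; the domain names and the lookup function are assumptions, not Gamban’s actual design, which works device-wide rather than inside a single program.

    # Hypothetical sketch of a self-exclusion blocklist check: not Gamban's code.
    # A real product intercepts traffic across the whole device; this only shows
    # the core lookup against a refreshable set of blocked domains.
    from urllib.parse import urlparse

    BLOCKED_DOMAINS = {"example-casino.test", "example-sportsbook.test"}  # refreshed as new sites appear

    def is_blocked(url):
        # Block the exact domain and any of its subdomains.
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

    print(is_blocked("https://bets.example-casino.test/signup"))  # True
    print(is_blocked("https://news.example.test/"))               # False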

“Educating customers about the importance of gambling responsibly and within limits is a business imperative and ethically the right thing to do,” said Carolyn Renzin, chief risk and compliance officer with FanDuel Group. “Offering Gamban’s software to those customers signaling they need help adds another layer of protection for our customers, our program, and to the industry.”

“This is a massive moment for the industry and one we’ve been pushing to achieve since the launch of Gamban,” added Jack Symons, Gamban’s co-founder. “As the largest real-money gaming provider in the United States, FanDuel Group is making a statement of intent and throwing down the gauntlet to operators across the industry to offer self-exclusion support for their vulnerable customers.”

Most licensed sports betting and online casino companies already offer ways for compulsive gamblers to either pause or halt their behavior, including “cool-down” periods in which customers can have their accounts suspended for a length of time.

And states including New Jersey offer state-administered self-exclusion lists where gamblers can prohibit themselves from gambling for differing periods, or permanently. While they are on the list, casinos and sports books cannot accept bets from them or send them marketing materials enticing them to gamble.

Unibet’s parent company, Kindred Group, said last week that its provision of blocking software to customers is “an important step for the industry.”

Keith Whyte, executive director of the National Council on Problem Gambling, praised the companies’ moves.

“We strongly support the ability of gamblers to self-exclude through both the operator and on their own personal devices,” he said. “Self-exclusion is one part of what should be a comprehensive network of problem gambling prevention, education, treatment, enforcement, research and recovery services in every state.”


ATLANTIC CITY, N.J. (AP) — By WAYNE PARRY

