
Ethical Tech

What’s in a tag? Twitter revamps misinformation labels

Associated Press


Last May, as Twitter was testing warning labels for false and misleading tweets, it tried out the word “disputed” with a small focus group. It didn’t go over well.

“People were like, well, who’s disputing it?” said Anita Butler, a San Francisco-based design director at Twitter who has been working on the labels since December 2019. The word “disputed,” it turns out, had the opposite effect of what Twitter intended, which was to “increase clarity and transparency,” she said.

The labels are an update from those Twitter used for election misinformation before and after the 2020 presidential contest. Those labels drew criticism for not doing enough to keep people from spreading obvious falsehoods. Now, Twitter is overhauling them in an attempt to make them more useful and easier to notice, among other things. Beginning Thursday, the company will start testing the redesigns with some U.S. users on the desktop version of its app.

Experts say such labels — used by Facebook as well — can be helpful to users. But they can also allow social media platforms to sidestep the more difficult work of content moderation — that is, deciding whether or not to remove posts, photos and videos that spread conspiracies and falsehoods.

“It’s the best of both worlds” for the companies, said Lisa Fazio, a Vanderbilt University psychology professor who studies how false claims spread online. “It’s seen as doing something about misinformation without making content decisions.”

While there is some evidence that labels can be effective, she added, social media companies don’t make enough data public for outside researchers to study how well they work. Twitter labels only three types of misinformation: “manipulated media,” such as videos and audio that have been deceptively altered in ways that could cause real-world harm; election and voting-related misinformation; and false or misleading tweets related to COVID-19.

This photo provided by Twitter shows a screen with labels warning about misinformation. Beginning Thursday, July 1, 2021, the company is taking comments from its U.S. users on the redesigns. (Twitter via AP)

One thing that’s clear, though, is that they need to be noticeable in a way that prevents eyes from glossing over them in a phone scroll. It’s a problem similar to the one faced by designers of cigarette warning labels. Twitter’s election labels, for instance, were blue, which is also the platform’s regular color scheme. So they tended to blend in.

The proposed designs added orange and red so the labels stand out more. While this can help, Twitter says its tests also showed that if a label is too eye-catching, it leads more people to retweet and reply to the original tweet, which is not what you want with misinformation.

Then there’s the wording. When “disputed” didn’t go over well, Twitter went with “stay informed.” In the current test, tweets that get this label will carry an orange icon, and people will still be able to reply to or retweet them. Such a label might go on a tweet containing an untruth that could be harmful, but isn’t necessarily immediately so.

More serious misinformation — for instance, a tweet claiming that vaccines cause autism — would likely get a stronger label, with the word “misleading” and a red exclamation point. It won’t be possible to reply to, like or retweet these messages.

“One of the things we learned was that words that build trust were important, and also words that were not judgmental, non-confrontational, friendly,” Butler said.

This makes sense from Twitter’s perspective, Fazio said. After all, “a lot of people don’t like to see the platforms have a heavy hand,” she added.

As a result, she said, it’s hard to tell whether Twitter’s main goal is to avoid angering and alienating its users, rather than simply helping them understand “what is and isn’t misinformation.”

Ethical Tech

PayPal, ADL announce initiative against criminal funding

Daryn Kara Ali


In its most recent effort to fight racism and extremism across the industry, PayPal is partnering with the Anti-Defamation League to investigate how extremists use financial platforms to fund their activities.

The atypical collaboration between PayPal and the Anti-Defamation League (ADL) is a first step toward focusing attention on how extremists leverage financial platforms for criminal funding.

The initiative will be led by ADL’s Center on Extremism, one of the leading authorities on extremism, terrorism, and hate.

“By identifying partners across sectors with common goals and complementary resources, we can make an even greater impact than any of us could do on our own,” said PayPal’s chief compliance officer Aaron Karczmer in a statement.  

“We are excited to partner with the ADL, other non-profits and law enforcement in our fight against hate in all its forms,” he added.

The financial platform, alongside the ADL, will also partner with civil rights organizations to protect marginalized communities from extremists.

The ADL has fought extremism for decades, with a team of investigators, analysts, researchers, and technical experts who constantly monitor and work to expose radical threats, whether on the internet or on the ground.

Various civil rights organizations have encouraged the efforts PayPal and the ADL are making to spread awareness and develop key insights that could help keep extremists from funding their activities through financial platforms.

One of the initiative’s biggest advocates is the League of United Latin American Citizens (LULAC), which sees the partnership between the financial platform and the ADL as a stepping stone that could motivate other organizations to join or launch their own fights against extremism.

“All of us, including in the private sector, have a critical role to play in fighting the spread of extremism and hate. With this new initiative, we’re setting a new standard for companies to bring their expertise to critical social issues,” said ADL’s CEO Jonathan Greenblatt. 

The initiative is a clear demonstration of how PayPal is working to broaden its financial-crime capabilities through multi-sector collaborations on vital societal and community issues.


Ethical Tech

Why the Anthony Bourdain voice cloning creeps people out

Associated Press


The revelation that a documentary filmmaker used voice-cloning software to make the late chef Anthony Bourdain say words he never spoke has drawn criticism amid ethical concerns about use of the powerful technology.

The movie “Roadrunner: A Film About Anthony Bourdain” appeared in cinemas Friday and mostly features real footage of the beloved celebrity chef and globe-trotting television host before he died in 2018. But its director, Morgan Neville, told The New Yorker that a snippet of dialogue was created using artificial intelligence technology.

That’s renewed a debate about the future of voice-cloning technology, not just in the entertainment world but in politics and a fast-growing commercial sector dedicated to transforming text into realistic-sounding human speech.

“Unapproved voice cloning is a slippery slope,” said Andrew Mason, the founder and CEO of voice generator Descript, in a blog post Friday. “As soon as you get into a world where you’re making subjective judgment calls about whether specific cases can be ethical, it won’t be long before anything goes.”

Before this week, most of the public controversy around such technologies focused on the creation of hard-to-detect deepfakes using simulated audio and/or video and their potential to fuel misinformation and political conflict.

But Mason, who previously founded and led Groupon, said in an interview that Descript has repeatedly rejected requests to bring back a voice, including from “people who have lost someone and are grieving.”

“It’s not even so much that we want to pass judgment,” he said. “We’re just saying you have to have some bright lines in what’s OK and what’s not.”

Angry and uncomfortable reactions to the voice cloning in the Bourdain case reflect expectations and issues of disclosure and consent, said Sam Gregory, program director at Witness, a nonprofit working on using video technology for human rights. Obtaining consent and disclosing the technowizardry at work would have been appropriate, he said. Instead, viewers were stunned — first by the fact of the audio fakery, then by the director’s seeming dismissal of any ethical questions — and expressed their displeasure online.

“It touches also on our fears of death and ideas about the way people could take control of our digital likeness and make us say or do things without any way to stop it,” Gregory said.

Neville hasn’t identified what tool he used to recreate Bourdain’s voice but said he used it for a few sentences that Bourdain wrote but never said aloud.

“With the blessing of his estate and literary agent we used AI technology,” Neville said in a written statement. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.”

Neville also told GQ magazine that he got the approval of Bourdain’s widow and literary executor. The chef’s wife, Ottavia Busia, responded by tweet: “I certainly was NOT the one who said Tony would have been cool with that.”

Although tech giants like Microsoft, Google and Amazon have dominated text-to-speech research, there are now also a number of startups like Descript that offer voice-cloning software. The uses range from talking customer service chatbots to video games and podcasting.

Many of these voice cloning companies prominently feature an ethics policy on their website that explains the terms of use. Of nearly a dozen firms contacted by The Associated Press, many said they didn’t recreate Bourdain’s voice and wouldn’t have if asked. Others didn’t respond.

“We have pretty strong policies around what can be done on our platform,” said Zohaib Ahmed, founder and CEO of Resemble AI, a Toronto company that sells a custom AI voice generator service. “When you’re creating a voice clone, it requires consent from whoever’s voice it is.”

Ahmed said the rare occasions where he’s allowed some posthumous voice cloning were for academic research, including a project working with the voice of Winston Churchill, who died in 1965.

Ahmed said a more common commercial use is to edit a TV ad recorded by real voice actors and then customize it to a region by adding a local reference. It’s also used to dub anime movies and other videos, by taking a voice in one language and making it speak a different language, he said.

He compared it to past innovations in the entertainment industry, from stunt actors to greenscreen technology.

Just seconds or minutes of recorded human speech can help teach an AI system to generate its own synthetic speech, though getting it to capture the clarity and rhythm of Anthony Bourdain’s voice probably took a lot more training, said Rupal Patel, a professor at Northeastern University who runs another voice-generating company, VocaliD, that focuses on customer service chatbots.

“If you wanted it to speak really like him, you’d need a lot, maybe 90 minutes of good, clean data,” she said. “You’re building an algorithm that learns to speak like Bourdain spoke.”

Neville is an acclaimed documentarian who also directed the Fred Rogers portrait “Won’t You Be My Neighbor?” and the Oscar-winning “20 Feet From Stardom.” He began making his latest movie in 2019, more than a year after Bourdain’s death by suicide in June 2018.


Ethical Tech

Urgent investigation into WhatsApp, Facebook data sharing

Karim Husami


The EU’s national data protection regulators have called on the Irish watchdog to conduct an urgent investigation into the recent changes in WhatsApp’s privacy policy and how it shares data with parent company Facebook, amid concerns the companies have violated privacy law.

The regulators stopped short of taking action against Facebook, saying it was still too unclear how the company was using data on WhatsApp users.

The European Data Protection Board, a panel of EU authorities, said Facebook’s practices linked to WhatsApp data should be examined “as a matter of priority” by the Irish privacy watchdog, its main regulator in the region.

Updated terms had been set to be imposed upon users of the Facebook-owned messaging app early this year — but in January Facebook delayed the WhatsApp terms update until May after a major privacy backlash and ongoing confusion over the details of its user data processing.

“Considering the high likelihood of infringements in particular for the purpose of safety, security and integrity of WhatsApp” and other Facebook units, “the EDPB considered that this matter requires swift further investigations,” the EU body said in a statement.

In Thursday’s decision, the EDPB stopped short of imposing a provisional EU-wide ban on data access, as requested by the Hamburg data privacy commissioner.

The German authority in May imposed a three-month banning order on Facebook to stop it collecting German users’ data from its WhatsApp unit, and asked EU regulators to take a bloc-wide decision.

The Indian government, for example, has repeatedly ordered Facebook to withdraw the new terms. In Europe, meanwhile, privacy regulators and consumer protection organizations have raised objections about how opaque terms are being pushed on users, and in May a German data protection authority issued a temporary, national blocking order.

The Irish data protection commission “has already carried out an in-depth inquiry into WhatsApp’s privacy policy user-facing material in the context of its transparency inquiry,” it said in an emailed statement on Thursday.

A draft of its decision was sent to its EU counterparts in December; their approval is needed before the probe can be finalized.

That decision is currently stuck in an EU dispute resolution procedure, having failed to get the full backing of all European data watchdogs.
