
Ethical Tech

TikTok joins the Technology Coalition to fight online child abuse

Inside Telecom Staff




Popular Chinese video-sharing app TikTok announced last week that it has joined the Technology Coalition, an organization that works to protect children from online sexual exploitation and abuse.

The social media giant joins a roster of Big Tech names enrolled in the organization, including Adobe, Amazon, Apple, Discord, Dropbox, Google, Microsoft and Facebook, all of which have pledged to fight online child abuse.

“Community safety is our top priority, and we place utmost care on the safety of our teenage users in particular. Our membership reflects both TikTok’s zero tolerance of child sexual exploitation and that this global challenge requires a collective response,” a statement by TikTok read.

With this membership, the company hopes to deepen its evidence-based approach to intervention and contribute its own learnings from addressing child safety and exploitation.

TikTok is also joining the Technology Coalition's board, along with a number of committees that aim to advance protections for children both online and off, drive greater transparency around evolving threats to child safety, and crack down on online child abuse.

This membership builds on TikTok's previous partnerships with leading online safety organizations, including the Family Online Safety Institute, ConnectSafely, the National Center for Missing and Exploited Children (NCMEC), the WePROTECT Global Alliance, the DQ Institute, and the Internet Watch Foundation, which help ensure that its policies and features continue to promote a safe and welcoming environment for its community.

“At TikTok, we are deeply committed to the safety of teens on our platform. We do not tolerate content or behavior that perpetuates the abuse, harm, endangerment, or exploitation of minors, as outlined in our Community Guidelines. If we become aware of any such content, we will take immediate action to remove content, terminate accounts, and report cases to NCMEC and law enforcement, as appropriate,” the company statement read.

TikTok describes itself as a place where people come to express themselves creatively, build a community, and explore new ideas. While its creative community includes teens and grandparents alike, the company is working to design age-appropriate experiences for users ages 13-17.

“We offer a number of safeguards to support teens as they begin their digital journey. For instance, accounts of users ages 13-15 are set to private by default, and only people 16 and over can use direct messaging and live stream,” TikTok stressed.

TikTok also offers tools such as Family Pairing, which aims to encourage conversations among parents, caregivers, and teens as they decide which browsing and privacy settings are best for their family.

Through Family Pairing, parents and guardians can link their TikTok account to their teens’ to enable a variety of content and privacy settings, such as screen time management and search settings.

“There is no finish line when it comes to protecting the TikTok community. We work each day to learn, adapt, and strengthen our policies and practices to keep our community safe, and we look forward to building on all of these efforts through our partnership with the Technology Coalition,” the company said.



Ethical Tech

MIT’s anti-misinformation accuracy prompts being tested by tech companies

Inside Telecom Staff




The spread of inaccuracies on social media – including political “fake news,” and misinformation about COVID-19 and vaccines – has led to a plethora of problems in areas as disparate as politics and public health.

To combat the spread of false and misleading information, MIT Sloan School Professor David Rand and his colleagues have been developing effective interventions that technology companies can use to combat misinformation online.

A study published in the Harvard Kennedy School Misinformation Review by Rand and other researchers at MIT's Sloan School of Management, Political Science Department and Media Lab, as well as the University of Regina and Google's Jigsaw unit, introduces a suite of such interventions that prompt people to think about accuracy before sharing.

“Previous research had shown that most social media users are often fairly adept at spotting falsehoods when asked to judge accuracy,” says Rand. “The problem was that this didn’t always stop them from spreading misinformation because they would simply forget to think about accuracy when deciding what to share. So, we set out to develop prompts that could slow the spread of false news by helping people stop and reflect on the accuracy of what they were seeing before they click ‘share.'”

In April and May 2020, the team showed 9,070 social media users a set of 10 false and 10 true headlines about COVID-19 and measured how likely they were to share each one. The team tested a variety of approaches to help users remember to think about accuracy, and therefore be more discerning in their sharing.

These approaches included: first, having users rate the accuracy of a neutral (non-COVID) headline before starting the study, thereby making them more likely to think about accuracy when they went on to decide what news they'd share; second, providing a very short set of digital literacy tips; and third, asking users how important it was to them to share only accurate news (almost everyone thinks it's very important).

By shifting participants’ attention to accuracy in these various ways, the study showed that it was possible to increase the link between a headline’s perceived accuracy and the likelihood of it being shared – that is, to get people to pay attention to accuracy when deciding what to share – thereby reducing users’ intentions to share false news.

This work isn’t just consigned to the ivory tower.

MIT is working with Jigsaw to see how these approaches can be applied in new ways. Anti-misinformation accuracy prompts offer a fundamentally new approach to reduce the spread of misinformation online by getting out ahead of the problem.

Instead of just playing catch-up with corrections and warnings, anti-misinformation accuracy prompts can help people avoid unwittingly engaging with misinformation in the first place.

“Most internet users want help navigating information quality, but ultimately want to decide for themselves what is true. This approach avoids the challenges of ‘labeling’ information true or false. It’s inherently scalable. And in these initial studies, users found accuracy prompts helpful for navigating information quality, so we’re providing people with tools that help accomplish pre-existing goals,” according to Rocky Cole, one of the Jigsaw researchers on the team.

“From a practical perspective,” says Rand, “This study provides platform designers with a menu of effective accuracy prompts to choose from and to cycle through when creating user experiences to increase the quality of information online. We are excited to see technology companies taking these ideas seriously and hope that on-platform testing and optimization will lead to the adoption of new features to fight online misinformation.” 

Rand and Cole co-authored the paper “Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online,” which was published in the Harvard Kennedy School Misinformation Review, with MIT Ph.D. student Ziv Epstein, MIT Political Science professor Adam Berinsky, University of Regina Assistant Professor Gordon Pennycook, and Google Technical Research Manager Andrew Gully.


Ethical Tech

European privacy groups challenge facial scan firm Clearview

Associated Press



Privacy campaign groups filed legal complaints Thursday with European regulators against Clearview AI, alleging the facial recognition technology it provides to law enforcement agencies and businesses breaches stringent European Union privacy rules.

Four groups complained to data protection authorities in France, Austria, Greece, Italy and the U.K. about Clearview’s practices. They say the company stockpiled biometric data on more than 3 billion people without their knowledge or permission by “scraping” their images from websites.

The complaints say Clearview didn’t have any legal basis to collect and process this data under the European Union’s General Data Protection Regulation, which covers facial image data. Britain adopted its own version of the EU privacy rules after it left the bloc.

“Clearview AI has never had any contracts with any EU customer and is not currently available to EU customers,” CEO Hoan Ton-That said in a statement.

News of Clearview’s stockpile, first reported by The New York Times, raised concerns that the type of surveillance seen in China could happen in Western democracies.

Privacy International said European data protection laws clearly outline the purposes for which companies can use personal data.

“Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users,” said Ioannis Kouvakas, London-based Privacy International’s legal officer.

Italy’s Hermes Center for Transparency and Digital Human Rights, Greece’s Homo Digitalis and Austria’s noyb were also part of the challenge. The complaints are partly based on requests individuals can file to see what data a company holds on them. Ton-That said Clearview “voluntarily processed” the requests, which “only contain publicly available information, just like thousands of others we have processed.”

Clearview is already facing global scrutiny.

American civil liberties activists filed a similar legal challenge in March that sought to bar Clearview from collecting biometric information in California and force it to delete data on Californians collected from sites including Facebook, Twitter, Google and Venmo.

Meanwhile, privacy watchdogs in Britain, Australia and Canada have opened investigations into the company.

Reported from London by Kelvin Chan, AP Business Writer.


Ethical Tech

DC files antitrust case vs Amazon over treatment of vendors

Associated Press




The District of Columbia has sued Amazon, accusing the online retail giant of anticompetitive practices in its treatment of sellers on its platform. The practices have raised prices for consumers and stifled innovation and choice in the online retail market, the DC attorney general alleges in an antitrust suit.

The suit filed Tuesday in the District of Columbia court maintains that Amazon has fixed online retail prices through contract provisions and policies it applies to third-party sellers. It alleges these provisions and policies prevent sellers that offer products on Amazon.com from offering their products at lower prices or on better terms on any other online platform, including their own websites.

“We filed this antitrust lawsuit to put an end to Amazon’s illegal control of prices across the online retail market,” DC Attorney General Karl Racine said in a conference call with reporters. “We need a fair online marketplace that expands options available to (District of Columbia) residents and promotes competition, innovation and choice.”

Racine said Amazon, the world’s biggest online retailer, controls 50% to 70% of online market sales.

The suit seeks to end Amazon’s use of the allegedly illegal price agreements as well as unspecified damages and penalties.

Amazon rejected the allegations, saying the relief Racine is seeking “would force Amazon to feature higher prices to customers, oddly going against core objectives of antitrust law.”

“The DC attorney general has it exactly backwards — sellers set their own prices for the products they offer in our store,” the Seattle company said in a prepared statement. “Amazon takes pride in the fact that we offer low prices across the broadest selection, and like any store we reserve the right not to highlight offers to customers that are not priced competitively.”

Founded by Jeff Bezos, the world’s richest individual, Amazon runs an e-commerce empire and ventures in cloud computing, personal “smart” tech and beyond.

Its third-party marketplace, with independent merchants listing millions of their products on the site, is a huge part of Amazon’s business. It has about 2 million sellers on its marketplace, and the company has said that more than half the goods sold on Amazon.com come from third-party sellers. Amazon also makes money by charging third-party sellers fees, bringing in $24 billion in revenue in the first three months of this year, up 64% from the same period in 2020.

Like its Big Tech counterparts Facebook, Google and Apple, Amazon faces multiple legal and political offensives from Congress, federal and state regulators and European watchdogs.

A congressional investigation threw a spotlight on complaints by merchants that sell products on Amazon’s platform about the kinds of practices that the District of Columbia suit focuses on. Officials in California and Washington state also have been reviewing the practices.

European Union regulators filed antitrust charges in November accusing Amazon of using its access to data from sellers that use its platform to gain an unfair advantage over them.

Bezos is stepping down this summer as CEO, to be replaced by Andy Jassy, who runs the cloud-computing business. Bezos will become executive chairman.


AP Retail Writer Joseph Pisani in New York contributed to this report.
