
Ethical Tech

MIT’s anti-misinformation accuracy prompts being tested by tech companies


The spread of inaccuracies on social media – including political “fake news,” and misinformation about COVID-19 and vaccines – has led to a plethora of problems in areas as disparate as politics and public health.

To combat the spread of false and misleading information, MIT Sloan School Professor David Rand and his colleagues have been developing effective interventions that technology companies can use to combat misinformation online.

A study published in The Harvard Kennedy School Misinformation Review by Rand and other researchers at MIT’s Sloan School of Management, Political Science Department and Media Lab, as well as the University of Regina, and Google’s Jigsaw unit, introduces a suite of such interventions that prompt people to think about accuracy before sharing.          

“Previous research had shown that most social media users are often fairly adept at spotting falsehoods when asked to judge accuracy,” says Rand. “The problem was that this didn’t always stop them from spreading misinformation because they would simply forget to think about accuracy when deciding what to share. So, we set out to develop prompts that could slow the spread of false news by helping people stop and reflect on the accuracy of what they were seeing before they click ‘share.'”

In April and May 2020, the team showed 9,070 social media users a set of 10 false and 10 true headlines about COVID-19 and measured how likely they were to share each one. The team tested a variety of approaches to help users remember to think about accuracy, and therefore be more discerning in their sharing.

These approaches included: having users rate the accuracy of a neutral (non-COVID) headline before the study began, thereby making them more likely to think about accuracy when they went on to decide what news they would share; providing a very short set of digital literacy tips; and asking users how important it was to them to share only accurate news (almost everyone thinks it is very important).

By shifting participants’ attention to accuracy in these various ways, the study showed that it was possible to increase the link between a headline’s perceived accuracy and the likelihood of it being shared – that is, to get people to pay attention to accuracy when deciding what to share – thereby reducing users’ intentions to share false news.

This work isn’t just consigned to the ivory tower.

MIT is working with Jigsaw to see how these approaches can be applied in new ways. Anti-misinformation accuracy prompts offer a fundamentally new approach to reduce the spread of misinformation online by getting out ahead of the problem.

Instead of just playing catch-up with corrections and warnings, anti-misinformation accuracy prompts can help people avoid unwittingly engaging with misinformation in the first place.

“Most internet users want help navigating information quality, but ultimately want to decide for themselves what is true. This approach avoids the challenges of ‘labeling’ information true or false. It’s inherently scalable. And in these initial studies, users found accuracy prompts helpful for navigating information quality, so we’re providing people with tools that help accomplish pre-existing goals,” according to Rocky Cole, one of the Jigsaw researchers on the team.

“From a practical perspective,” says Rand, “This study provides platform designers with a menu of effective accuracy prompts to choose from and to cycle through when creating user experiences to increase the quality of information online. We are excited to see technology companies taking these ideas seriously and hope that on-platform testing and optimization will lead to the adoption of new features to fight online misinformation.” 

Rand and Cole co-authored the paper “Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online,” which was published in the Harvard Kennedy School Misinformation Review, with MIT Ph.D. student Ziv Epstein, MIT Political Science professor Adam Berinsky, University of Regina Assistant Professor Gordon Pennycook, and Google Technical Research Manager Andrew Gully.


Ethical Tech

New FTC memo will transform the way big tech operates


Federal Trade Commission (FTC) Chair Lina Khan recently publicized her policy priorities and vision in a memo that was sent out to staff members on Wednesday. 

In the recent FTC memo, Khan set out the main priorities of the agency, which is overseen by five commissioners who vote on enforcement actions and policy statements: fixing power imbalances, reducing harm to consumers, and targeting “rampant consolidation.”

Khan laid out the agency’s main focus, as well as how it can adjust its strategic approach to address issues raised by “next-generation technologies, innovations, and nascent industries across sectors.”

The FTC’s new list of priorities indicates that tech giants, even though none of them were named, will face intense scrutiny going forward.

The five principles outlined in the FTC memo are the following: 

  1. Take a “holistic approach to identifying harms.” Khan noted that the agency should acknowledge that workers and independent businesses, as well as consumers, can all be harmed by antitrust and consumer protection violations. Prominent antitrust lawsuits have previously focused strictly on consumer harm, chiefly on whether products are priced fairly. However, Khan argued in her memo that a broader approach is needed to assess harm by tech giants, which often offer free platforms in exchange for high levels of engagement.
  2. Focus on “targeting root causes rather than looking at one-off effects.” Khan explained that FTC staff should examine how business models or conflicts of interest run against the law.
  3. Incorporate more “analytical tools and skillsets” for an overall assessment of business methods.
  4. Be “forward-looking” and step in quickly when harm is done, including by focusing on “next-generation technologies, innovations, and nascent industries across sectors.”
  5. Democratize the FTC by ensuring it is “in tune with the real problems that Americans are facing in their daily lives.”

“Research documents how gatekeepers and dominant middlemen across the economy have been able to use their critical market position to hike fees, dictate terms, and protect and extend their market power,” Khan wrote in the memo, adding that “deeply asymmetric relationships between the controlling firm and dependent entities can be ripe for abuse.” 

The FTC chairwoman also addressed non-compete agreements in her memo, which she says can restrict which jobs workers are able to take, along with restrictions on consumers’ right to repair. Apple has been criticized in the past for limiting users’ ability to repair the Apple devices they purchased.

Earlier this year, the FTC signaled its intention to fight these restrictions.

“Consumers, workers, franchisees, and other market participants are at a significant disadvantage when they are unable to negotiate freely over terms and conditions,” Khan wrote in the memo.


Ethical Tech

Facebook’s Oversight Board demands answers on celebrity rules


Facebook’s oversight board said on Tuesday that it will set in motion an urgent review process to examine whether the social networking platform exempts posts by famous personages from its content rules, a direct breach of its stated policies, following an investigation by The Wall Street Journal.

Facebook’s oversight board is an independent group assigned by the platform to observe its moderation policies concerning politicians, athletes, celebrities, and other high-profile users.

The board revealed that it has already initiated an examination plan that requires Facebook executives to submit any data related to the Cross-Check program, commonly referred to as “XCheck.” It demanded clarity to determine whether the allegations are true, saying it would act accordingly based on the findings.

“In light of recent developments, we are looking into the degree to which Facebook has been fully forthcoming in its responses in relation to cross-check, including the practice of whitelisting,” the board wrote in a statement.

In this context, whitelisting refers to exempting certain accounts from the platform’s normal moderation processes.

Initially, the XCheck program was created to apply additional review to a limited set of distinguished and famed accounts, but it later grew to cover millions of accounts.

Presumably, Facebook established the program to prevent “PR fires,” the unwanted press that can follow the removal of photos, posts, and other content from high-profile accounts. In practice, users covered by XCheck are effectively immune to the moderation processes applied to everyone else.

If true, this means millions of celebrities are shielded from ordinary enforcement on their profiles, and that Facebook has repeatedly misled its oversight board about how its rules are applied.

“Mark Zuckerberg has publicly said Facebook allows its more than three billion users to speak on equal footing with the elites of politics, culture, and journalism, and that its standards of behavior apply to everyone, no matter their status or fame. In private, the company has built a system that has exempted high-profile users from some or all of its rules,” according to the Wall Street Journal’s report.

For now, Facebook’s oversight board will monitor the social network’s conduct by investigating its cross-check program and will eventually release its findings to the public.

The board’s findings will depend entirely on how transparent the tech giant is about its freedom of speech and human rights policies, whether those prove supportive of or at odds with the program’s own guidelines.

While Facebook has publicly vowed to follow the board’s decisions on user content, it is not legally compelled to abide by the board’s broader recommendations and may decline to adopt them.


Ethical Tech

U.S. Democrats push for tough data privacy regulations


The U.S. Congress returned to work in full force after its summer recess, as a group of Senate Democrats is now pushing the Federal Trade Commission (FTC) to construct new regulations around data privacy protection.

Democratic Senator Richard Blumenthal led the initiative after garnering eight signatures from his colleagues on a letter that was forwarded to agency Chair Lina Khan on Monday.

The letter calls for new rules to strengthen data security, which in turn would reinforce civil rights and give consumers back what is rightfully theirs: their privacy.

The letter pointed the finger at Big Tech for having “unchecked access to private personal information” that they use to “create in-depth profiles about nearly all Americans and to protect their market position against competition from startups.”

The senators explained in the letter that previous attempts to hold big tech firms accountable for violating existing data privacy rules have not been enough.

“We believe that a national standard for data privacy and security is urgently needed to protect consumers, reinforce civil rights, and safeguard our nation’s cybersecurity,” the senators wrote.

The news comes after U.S. President Joe Biden nominated Alvaro Bedoya, a vocal privacy advocate and critic of facial recognition, to serve as the third Democratic FTC commissioner. Bedoya’s work at Georgetown Law includes research into the impact of technologies like facial recognition on minority groups.

The professor at Georgetown Law’s Center for Privacy and Technology has also created a number of surveys aimed at investigating tech’s capacity for racial bias. Bedoya previously served as chief counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law. A confirmation hearing for Bedoya has yet to be scheduled; if it goes according to plan, he will be able to support the FTC’s effort to develop data privacy regulations.
