On Thursday, a bipartisan group of senators unveiled plans to introduce a nondiscrimination bill, as pressure rises on the U.S. Senate to pass new laws prohibiting tech platforms from favoring their own products and services over those of their rivals.
After months of negotiations and hearings, the Senate has finally spoken, and its latest bill targets global retail giant Amazon.
The American Innovation and Choice Online Act, introduced by Senators Amy Klobuchar and Chuck Grassley, would forbid Big Tech firms such as Apple, Google, and Amazon from abusing their dominance to the detriment of competitors that use their platforms to promote their products.
While the bill uses similar language and carries nearly the same name as the House Judiciary Committee's version, the Senate's bill is slightly different.
“When dominant tech companies exclude rivals and kill competition, it hurts small businesses and can increase costs for YOU,” Klobuchar said in a tweet.
“My new bipartisan legislation with [Grassley] will establish new rules of the road to prevent large companies from boxing out their smaller competitors,” she added.
News of the legislation surfaced after Reuters reported Wednesday that the e-commerce titan had been abusing marketplace search data to copy popular merchandise and to skew search results toward Amazon's own knockoff products.
In parallel, an investigation by The Markup revealed that the retail company ranks its own products above those of its rivals.
It is worth noting that this isn't the first time such allegations have surfaced. Third-party sellers have long voiced opposition to the way Amazon handles its business, and these competitive practices were part of an antitrust investigation the House has led for almost a year.
Apart from Amazon, the bill also aims to change how Apple and Google manage their app stores; both companies have long been accused of giving preference to their own first-party apps and software over those of rival firms.
The Epic Games butterfly effect
While Apple was the bigger beneficiary of the court's ruling in its case against Epic Games, one thing did not go according to plan for the iPhone maker, and the same goes for Alphabet's unit, Google.
Earlier this year, Apple was ordered to permit app developers to steer consumers toward payment options other than the one offered by Apple, a ruling that began to break down the competitive walls the company had built to protect its dominance of the market.
One month after the U.S. federal court delivered its verdict, the iOS developer appealed the ruling, a move that put further proceedings in the case on hold. Apple's appeal came after Judge Yvonne Gonzalez Rogers found that the company was abusing its position to engage in anticompetitive behavior.
On another note, Epic Games released court documents in August as part of its antitrust lawsuit against Google. In them, Epic laid out an elaborate scheme in which the search giant weighed acquiring the gaming firm to shore up the Play Store's supremacy after Epic refused to sign on to Google's Premier Device Program deal. That program offers Android manufacturers exclusive terms for shipping the Play Store as the default store, effectively sidelining third-party payment options.
Even though the ruling has been appealed, it demonstrated to tech companies that anticompetitive behavior on their part will face close scrutiny from the federal courts.
Pressure is mounting on Congress to exercise its authority over e-commerce platforms and Big Tech app stores, as tech companies feed their hunger for power at the expense of small and medium-sized enterprises.
Report: ‘Whole of society’ effort must fight misinformation
Misinformation is jeopardizing efforts to solve some of humanity’s greatest challenges, be it climate change, COVID-19 or political polarization, according to a new report from the Aspen Institute that’s backed by prominent voices in media and cybersecurity.
Recommendations in the 80-page analysis, published Monday, call for new regulations on social media platforms; stronger, more consistent rules for misinformation “superspreaders” who amplify harmful falsehoods; and new investments in authoritative journalism and organizations that teach critical thinking and media literacy.
The report is the product of the Aspen Institute’s Commission on Information Disorder, a 16-person panel that includes experts on the internet and misinformation, as well as prominent names such as Prince Harry, the Duke of Sussex.
“Hundreds of millions of people pay the price, every single day, for a world disordered by lies,” reads the report’s introduction, written by the commission’s three co-chairs: journalist Katie Couric, former White House cybersecurity official Christopher Krebs and Rashad Robinson, president of the organization Color of Change.
Specifically, the report calls for a national strategy for confronting misinformation, and urges lawmakers to consider laws that would make social media platforms more transparent and accountable — to officials, researchers and consumers.
Another recommendation would strip some of the platforms’ legal immunity when it comes to content promoted by ads, or for lawsuits regarding the implementation of their platform’s designs and features.
The authors of the report blame the proliferation of misinformation on factors including the rapid growth of social media, a decline in traditional local journalism and a loss of trust in institutions.
Falsehoods can prove deadly, as shown by the conspiracy theories and bogus claims about COVID-19 and vaccines that have set back attempts to stop the coronavirus. The report’s authors said misinformation is proving just as damaging when it comes to faith in elections or efforts to fight climate change.
During a briefing on the report’s findings Monday, Couric, Krebs and Robinson stressed that every American has a role to play in fighting misinformation, by reviewing where they get their information, by ensuring that they don’t spread harmful falsehoods, and by fighting the polarization that fuels misinformation.
“The path to making real change is going to require all of us,” Robinson said.
The Aspen Institute has shared its findings with several social media platforms including Facebook. A message seeking a response from that company was not immediately returned on Monday.
The Aspen Institute is a nonpartisan nonprofit based in Washington, D.C. The report was funded by Craig Newmark Philanthropies, a charity founded by the creator of Craigslist.
Meta details bullying and hate speech enforcement on its platforms, report shows
Meta on Wednesday shared its latest statistics report on the prevalence of bullying, hate speech, and harassment across its platforms, as public scrutiny of the social network intensifies.
The data, released in the Big Tech giant's latest quarterly transparency report, comes as Facebook's parent company, Meta, faces growing questions about its ability to safeguard its userbase and about the measures it takes to strengthen its policies.
“As a social technology company, helping people feel safe to connect and engage with others is central to what we do. But one form of abuse that has unfortunately always existed when people interact with each other is bullying and harassment,” the blog post stated.
The post added that while the challenge is not unique to social media, the company works to give people tools to protect themselves and to measure how it is doing.
The report, which marks the first time Facebook has shared “prevalence” metrics for bullying and harassment, shows how the social network uses statistics to track and measure violating content that its systems typically miss.
Per the report, the prevalence of bullying content ranged between 0.14 and 0.15 percent on Facebook, and between 0.05 and 0.06 percent on Instagram, according to Engadget.
“This means bullying and harassment content was seen between 14 and 15 times per every 10,000 views of content on Facebook and between 5 and 6 times per 10,000 views for content on Instagram,” the post added.
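As a quick check on the arithmetic, a prevalence percentage converts to views per 10,000 by dividing by 100 (to get a fraction of views) and multiplying by 10,000. A minimal sketch; the helper name is illustrative, not from Meta's report:

```python
def views_per_10k(prevalence_percent: float) -> float:
    """Convert a prevalence percentage of content views into
    violating views per 10,000 views."""
    return prevalence_percent / 100 * 10_000

# Figures cited in the report:
facebook = [views_per_10k(p) for p in (0.14, 0.15)]   # bullying on Facebook
instagram = [views_per_10k(p) for p in (0.05, 0.06)]  # bullying on Instagram

print(facebook)   # roughly 14 and 15 views per 10,000
print(instagram)  # roughly 5 and 6 views per 10,000
```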
The report follows The Wall Street Journal's Facebook Files, a series of leaked documents that revealed the platforms' detrimental effect on users, particularly teens. The documents showed that the tech titan knew about the damage to teens' mental well-being but took few meaningful steps to counteract the problems emerging from its platform.
This underscored the wide gap between how Facebook presents itself to the public and the efforts it actually makes to reduce the disturbing impact it has on its userbase.
According to Facebook whistleblower Frances Haugen, the Menlo Park-based firm is simply not capable of addressing the massive volume of hate speech on its platform; Haugen revealed that the titan catches only three to five percent of it. Most violating content therefore goes unnoticed, playing a significant role in contaminating users' News Feeds.
In its defense, the company has played every card it holds to deflect the public's wrath over its treatment of users, and the latest prevalence report could be perceived as one more tactic to polish its image in the public eye.
Yet one thing the social network's own research team has proved to the world is that Facebook cannot handle the ever-growing damage it has caused at a societal level.
This stems from the fact that the platform's automated systems cannot be relied upon to distinguish harmful content, such as bullying and body-image attacks, especially when it is not in English.
While the tech giant's report shows hate speech prevalence falling quarter over quarter, from 0.05 percent in the second quarter to 0.03 percent in the third, that progress remains modest against the sheer volume of hate speech generated on both platforms.
Experts believe that Meta – the new façade Facebook hopes will embellish its image – will not have the capacity to overhaul its ecosystem to satisfy the public's and regulators' demands for more ethical products without continuing to sacrifice users' well-being in pursuit of greater popularity, supremacy, and wealth.
Meta to remove ad targeting based on sensitive topics
Facebook and Instagram's parent company, Meta, announced Tuesday that it will remove detailed ad-targeting options tied to sensitive topics, prohibiting advertisers from targeting users based on their interactions with such content.
In a post on the Meta for Business blog, the recently rebranded company said it will block ads that target categories such as race or ethnicity, religious views, political beliefs, sexual orientation, and health, among others.
“We’ve heard concerns from experts that targeting options like these could be used in ways that lead to negative experiences for people in underrepresented groups,” Meta’s vice president for marketing and ads, Graham Mudd, wrote in the post.
The removal of detailed targeting based on sensitive topics will take effect on January 19, 2022, a change that will reshape Meta's advertising business, which accounts for approximately 98 percent of its global revenue.
According to Mudd, the not-so-surprising change of tactics for the Menlo Park company stems from concerns voiced by civil rights specialists and lawmakers, who warned that a remarkable number of advertisers were unethically exploiting the platform's targeting options.
Significantly, these targeting options draw on users' interactions with content across Meta's products, including Facebook, Instagram, and Messenger.
“The decision to remove these Detailed Targeting options was not easy and we know this change may negatively impact some businesses and organizations,” Mudd explained.
“Some of our advertising partners have expressed concerns about these targeting options going away because of their ability to help generate positive societal change, while others understand the decision to remove them,” he added.
From another angle, digital ad-buying specialists said Meta's latest maneuver will hurt both for-profit and nonprofit groups that depend on ad targeting to raise funds, according to The New York Times.
The conglomerate plans to remove a range of its “sensitive” detailed targeting options, following past occasions on which the platform was forced to pull controversial categories.
That followed revelations that Facebook's tools gave advertisers the means to discriminate against specific demographic groups or to provoke violence through its network.
In the past, advertisers could direct their ads at radical audiences, such as anti-Semitic categories, or at pseudoscience – statements or beliefs that masquerade as scientific fact but are incompatible with scientific methods.
Meta's latest adjustment to its sensitive-content targeting policy does not prevent advertisers from addressing specific audiences on its platform; it simply restructures the framework to keep ad delivery safe and effective for users and advertisers alike.
The behemoth's marketing vice president said that even though Meta will drop detailed ad targeting, advertisers can still use a feature labeled “Engagement Custom Audiences,” which helps them reach users who have already shown interest in their page.
In parallel, advertisers can also use such engagement audiences to build a Lookalike Audience.
“A lookalike audience is a way your ads can reach new people who are likely to be interested in your business because they share similar characteristics to your existing customer,” the blog post explained.
Meta also revealed that it will give its userbase more control over the ads they see, letting users reduce advertising on specific topics, including gambling and weight loss.