
Ethical Tech

UN urges moratorium on use of AI that imperils human rights



The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.

AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards.

While the report did not mention countries by name, China has been among those rolling out facial recognition technology, particularly for surveillance in the western region of Xinjiang, home to many of its minority Uyghurs. The report’s key authors said naming specific countries wasn’t part of their mandate and doing so could even be counterproductive.

“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that addresses particular communities,” said Hicks.

She cited several court cases in the United States and Australia where artificial intelligence had been wrongly applied.

The report also voices wariness about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.

The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.

U.S. President Joe Biden’s administration has voiced similar concerns, though it hasn’t yet outlined a detailed approach to curtailing risky AI uses. A newly formed group called the Trade and Technology Council, jointly led by American and European officials, has sought to collaborate on developing shared rules for AI and other tech policy.

Efforts to limit the riskiest uses of AI have been backed by Microsoft and other U.S. tech giants that hope to guide the rules affecting the technology. Microsoft has worked with and provided funding to the U.N. rights office to help improve its use of technology, but funding for the report came through the rights office’s regular budget, Hicks said.

Western countries have been at the forefront of expressing concerns about the discriminatory use of AI.

“If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” said U.S. Commerce Secretary Gina Raimondo during a virtual conference in June. “We have to make sure we don’t let that happen.”

She was speaking with Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested some AI uses should be off-limits completely in “democracies like ours.” She cited social scoring, which can close off someone’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”



Facebook employees pushed to restrain news



Once more, the social networking giant is under the spotlight.

Facebook employees have pushed to restrain right-wing outlets on the platform, over managers’ objections rooted in fears of political backlash, the Wall Street Journal (WSJ) reported.

In-house debates between the tech giant’s managers and employees were driven by concerns that Facebook treats news outlets differently based on their political stance.

The WSJ report highlighted how Facebook handled Breitbart’s content, with employees pushing to remove the outlet from the platform’s News Tab feature following protests over George Floyd’s death last year.

“I can also tell you that we saw drops in trust in CNN 2 years ago: would we take the same approach for them too?” a senior researcher responded to an employee’s question about removing Breitbart from Facebook.

Facebook’s vice president of global affairs, Nick Clegg, informed employees that “we need to steel ourselves for more bad headlines in the upcoming days, I’m afraid.”

Clegg’s statement follows the WSJ’s latest report, one in a series of damaging stories about how Facebook manages news on its platform and about its ever-growing pursuit of profit at its users’ expense.

Voices are rising against the titan’s conduct toward its users: a new whistleblower came forward on Friday, telling the Securities and Exchange Commission (SEC) that the company has repeatedly disregarded concerns about hate speech and the spread of false information for fear of jeopardizing its financial growth.

While the new whistleblower’s name has yet to be revealed, the individual submitted the testimony under oath. The testimony added that one Facebook communications official, Tucker Bounds, dismissed hate speech concerns as a “flash in the pan” and went on to say that even though “some legislators will get pissy,” Facebook is “printing money in the basement.”

In parallel, a former employee of the company told The Post that the whistleblower’s statements about Tucker Bounds are accurate.

“That’s how Tucker talks,” the former employee stated.

“The Tucker quote, as much as I disagree with it, really does reflect the attitude during 2017,” he added.

Facebook whistleblower Frances Haugen’s statement to the SEC encouraged other employees to come forward and speak out against the company’s practice of growing its profits at its users’ expense. At the end of the day, the social networking giant has expanded its dominance while operating in the dark.



U.S. Senate anti-discrimination bill weighs on Big Tech firms



On Thursday, a bipartisan group of senators unveiled plans for a nondiscrimination bill, as pressure mounts on the U.S. Senate to pass laws prohibiting tech platforms from favoring their own products and services over those of rivals.

After months of negotiations and hearings, the Senate has finally spoken, and its latest bill targets global retail giant Amazon.

The American Innovation and Choice Online Act, introduced by Senators Amy Klobuchar and Chuck Grassley, would forbid Big Tech firms such as Apple, Google, and Amazon from abusing their dominance to the detriment of competitors that use their platforms to sell products.

While the new bill uses similar language and a nearly identical name to the House Judiciary Committee’s version, the Senate’s version differs in some respects.

“When dominant tech companies exclude rivals and kill competition, it hurts small businesses and can increase costs for YOU,” Klobuchar said in a tweet.

“My new bipartisan legislation with [Grassley] will establish new rules of the road to prevent large companies from boxing out their smaller competitors,” she added.

News of the legislation surfaced after Reuters reported Wednesday that the e-commerce titan had been using marketplace search data to copy popular merchandise and skew search results toward Amazon’s own knockoff products.

In parallel, an investigation by The Markup found that the retail company ranks its own products ahead of those of rivals.

It is worth noting that this isn’t the first time such allegations have surfaced. Third-party sellers have long objected to the way Amazon handles its business, and these practices have been part of a House antitrust investigation for nearly a year.

Apart from Amazon, the bill also targets how Apple and Google run their app stores, where both companies have long granted preferential treatment to their own first-party apps and software over those of other firms.

The Epic Games butterfly effect

While Apple was the main beneficiary of the court’s ruling in that case, one thing did not go according to plan for the iPhone maker, and the same goes for Alphabet’s unit, Google.

Earlier this year, Apple was ordered to allow developers to point consumers to payment options other than the one Apple provides. The case broke down the competitive walls the company had built around itself to expand its dominance of the market.

One month after the U.S. federal court delivered its verdict, Apple appealed the ruling, a move that put any further legal action in the case on hold. The appeal came after Judge Yvonne Gonzalez Rogers ruled that the company was abusing its position to engage in anti-competitive behavior.

On another note, Epic Games revealed court documents in August during its antitrust lawsuit against Google. In them, Epic laid out what it described as Google’s tactical scheme to acquire the gaming firm to cement the Play Store’s supremacy after Epic refused to sign on to Google’s Premier Device Program, a deal that offers Android manufacturers exclusive benefits for adopting the Play Store as the default store, effectively sidelining third-party payment options.

Even though the ruling was appealed, it demonstrated to tech companies that anti-competitive behavior on their part will face close scrutiny from federal courts.

Pressure is mounting on Congress to exercise its authority over online e-commerce platforms and Big Tech app stores, as tech companies feed their hunger for power at the expense of small and medium-sized enterprises.



Android apps track users’ device interactions, research finds



Google, device manufacturers, and third-party apps could be probing deep into users’ on-device habits, building a surveillance trail of their every interaction with apps on the Android OS, a Trinity College study has found.

Data breaches and password leaks have swept the world, cementing the realization that no one is safe. Some of the most prominent names in the tech industry have been exposed in memorable breaches since the emergence of the digital era, including Facebook, Microsoft, Yahoo, and, of course, Google, with its 2018 data breach.

Now, digital privacy has become a fundamental safeguard for companies, government agencies, and users alike against malicious attacks on supposedly “secure” infrastructure and the personal accounts behind it.

If you prefer Android phones and are worried about your privacy – as you should be – you have probably covered the basic steps to defend your devices against security violations.

Securing your devices is rightfully your decision, but what if all the extra measures you take to shield them are not enough? What if those steps still cannot prevent a future hack?

Researchers at Dublin’s Trinity College published a paper detailing how the Android mobile OS – specifically on devices made by Samsung, Xiaomi, Huawei, and Realme – transmits a significant volume of information to the OS developer and to third parties, including Google, Microsoft, LinkedIn, and Facebook.

Image credit: Trinity College research

At heart, users always knew their devices were not safe from outside privacy violations, but what does it mean when the tracking comes from the manufacturers and OS developers themselves? Particularly given that Big Tech companies have promoted a “safeguarding users’ privacy” message while having a hand in breaching their own digital privacy rules.

“The analysis of whether mobile apps disclosed sensitive information to their associated back-end servers has been the focus of much research, especially with the view of risks such as user de-anonymization, location tracking, behavior profiling, and cross-linking of data by different stakeholders in the device/software supply chain,” the paper revealed.

The most acute part of this scenario is that even users willing to opt out can’t. The collection is embedded into the core of the device’s software, and users have no option to reconfigure its settings.

Most of the liability falls on so-called “system apps” that the hardware manufacturer pre-installs to provide services such as the camera or messaging. These apps reside in the device’s read-only memory (ROM) and cannot be deleted or modified.

Trinity’s research found that these applications are endlessly sending device data to the manufacturing company, which in turn passes it on to multiple third-party apps, even if users never open those apps.

For example, when Microsoft and Samsung partnered to bridge Android and Windows, the arrangement involved more than meets the eye.

When Samsung devices come bundled with Microsoft software, packaged with the third-party app LinkedIn, the hard-coded networking platform continuously relays detailed device data to Microsoft servers, such as unique identifiers and the number of Microsoft apps installed on the user’s phone.

In parallel, the aggregated data is pinged to any third-party analytics providers the apps integrate with. This means Google Analytics also has access to the device’s data, since its plug-in is pre-installed as a system app within the core Android software.

Hard-coded apps that require embedded credentials and are used on a day-to-day basis send an even larger mass of data revealing each interaction made on the platform, such as when and for how long users use the app, and share these specifics with Google Analytics.

The research paper documents a multitude of scenarios in which these platforms directly breach users’ privacy.

Now, it is true that no single one of these data records can pick out one device from a profusion of devices. When combined, however, they form the ultimate “fingerprint” that can be used to track a device, even if users opt out.
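The re-identification risk described here can be sketched in a few lines of Python: individually, fields such as a device model or an app count are shared by many devices, but hashed together they approximate a stable, unique identifier. This is only an illustrative sketch; the attribute names below are hypothetical and not taken from the Trinity study.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine individually non-unique device attributes into one
    stable identifier by hashing their sorted key=value pairs."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attribute names, for illustration only.
device_a = {"model": "SM-G991B", "os_build": "RP1A.200720.012",
            "ms_apps_installed": "4", "locale": "en_GB"}
# A second device differing in just one field still gets its own fingerprint.
device_b = dict(device_a, locale="en_US")

assert fingerprint(device_a) == fingerprint(device_a)  # stable over time
assert fingerprint(device_a) != fingerprint(device_b)  # distinguishable
```

Because the fingerprint is derived from attributes the user cannot reset, clearing a single identifier such as the advertising ID would not change it.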

In its defense, the search engine giant does include several “developer rules” aimed at thwarting invasive apps. These rules tell developers they cannot connect a device’s resettable advertising ID to a more persistent identifier for any ad-related purpose.

“If reset, a new advertising identifier must not be connected to a previous advertising identifier or data derived from a previous advertising identifier without the explicit consent of the user,” Google elaborated on the matter.

“You must abide by a user’s ‘Opt-out of Interest-based Advertising’ or ‘Opt-out of Ads Personalization’ setting. If a user has enabled this setting, you may not use the advertising identifier for creating user profiles for advertising purposes or for targeting users with personalized advertising,” the company added in its statement.

While these tracking features are embedded into the nucleus of Android devices, the question remains: what is the manufacturers’ role in this tracking framework? Is it simply to track users’ movements, or is there a larger purpose we are not aware of?

Chances are, as regulators continue probing these tech companies’ data privacy practices, this too could come under a much higher level of regulatory scrutiny, if federal authorities recognize the issue as an identifiable form of data breach.
