Risky uses of artificial intelligence that threaten people’s safety or rights, such as live facial scanning, should be banned or tightly controlled, European Union officials said Wednesday as they outlined an ambitious package of proposed regulations to rein in the rapidly expanding technology.
The draft regulations from the EU’s executive commission include rules for applications deemed high risk such as AI systems to filter out school, job or loan applicants. They would also ban artificial intelligence outright in a few cases considered too risky, such as government “social scoring” systems that judge people based on their behavior.
The proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation, as it tries to keep up with the world’s two big tech superpowers, the U.S. and China. EU officials say they are taking a four-level “risk-based approach” that seeks to balance important rights such as privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
To be sure, the draft rules have a long way to go before they take effect. They need to be reviewed by the European Parliament and the European Council and could be amended in a process that could take several years, though officials declined to give a specific timeframe.
Previous EU tech regulation efforts have been far reaching and influential, earning it a reputation as a pioneer. Vestager, also the bloc’s competition chief, filed aggressive antitrust challenges against Silicon Valley giants like Google years before such action became fashionable. The EU was also early to the data privacy battle with stringent rules known as General Data Protection Regulation, or GDPR, that became the de facto global standard.
However, results have been mixed: Google still retains its online dominance and EU privacy cases against global tech companies are backed up. Officials are also working on updating the EU’s digital rulebook to protect internet users from harmful material or rogue traders.
Under the AI proposals, unacceptable uses would also include manipulating behavior, exploiting children’s vulnerabilities or using subliminal techniques.
“It can be a case where a toy uses voice systems to manipulate a child into doing something dangerous,” Vestager told a media briefing. “Such uses have no place in Europe and therefore we propose to ban them.”
The proposals include a prohibition in principle on controversial “remote biometric identification,” such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society,” Vestager said.
There will, however, be an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person or preventing a terror attack. But some EU lawmakers and digital rights groups want the carve-out removed over fears it could be used by authorities to justify widespread future use of the technology, which they say is intrusive and inaccurate.
Biometric and mass surveillance technology “in our public spaces undermines our freedom and threatens our open societies,” said Patrick Breyer, an EU Pirate party lawmaker. “We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”
Other AI applications are considered high risk because they “interfere with important aspects of our lives,” Vestager said, including criminal courts, law enforcement, critical infrastructure such as transportation — think software for self-driving cars — and management of migration, asylum and border control. But their use is allowed provided operators follow rules including using high quality data to minimize discrimination and having a human in charge.
Herbert Swaniker, a technology lawyer at law firm Clifford Chance, compared the proposals to GDPR, which affects companies worldwide.
“With GDPR, we saw the EU’s rules reach every corner of the world and apply pressure on countries globally to reach a new international gold standard,” Swaniker said. “We can expect this too for AI regulation. This is just the beginning.”
The draft regulations also cover AI applications that pose “limited risk,” such as chatbots which should be labeled so people know they are interacting with a machine. Most AI applications, such as email spam filters, will be unaffected or covered by existing consumer protection rules, officials said.
To help develop standards and enforce the rules, which would apply to anyone providing an AI system in the EU or using one that affects people in the bloc, the commission proposes setting up a European Artificial Intelligence Board.
Violations could result in fines of up to 30 million euros (about $36 million) or, for companies, up to 6% of their global annual revenue, whichever is higher, although Vestager said authorities would first ask providers to fix their AI products or remove them from the market.
EU officials, trying to catch up with the Chinese and American tech industries, said the rules would encourage the industry’s growth by raising trust in artificial intelligence systems and by introducing legal clarity for companies.
LONDON (AP) — By KELVIN CHAN
Microsoft pledges to let EU users keep data inside bloc
Microsoft is pledging to let business and public sector customers in the European Union keep cloud computing data inside the 27-nation bloc to avert concerns about U.S. government access to sensitive information.
Microsoft “will go beyond our existing data storage commitments and enable you to process and store all your data in the EU,” said Brad Smith, the U.S. technology giant’s president.
“In other words, we will not need to move your data outside the EU,” Smith wrote in a blog post Thursday.
Microsoft is responding to customers that want stronger commitments on so-called data residency, Smith said. The updates will apply to the company’s core cloud services including Azure, Microsoft 365, and Dynamics 365.
Transatlantic data protection has been a growing concern since the European Union’s top court struck down a data sharing agreement last year known as Privacy Shield. The court said the agreement, which allowed businesses to transfer data to the U.S. under the EU’s strict data privacy rules, was invalid because it didn’t go far enough to prevent the American government from snooping on user data.
Microsoft, which operates data centers in 13 European countries including France, Germany and Switzerland, will challenge any government request for an EU public sector or commercial customer’s personal data when there’s a lawful basis for doing so, Smith said.
Lawmakers call YouTube Kids a ‘wasteland of vapid’ content
A House subcommittee is investigating YouTube Kids, saying the Google-owned video service feeds children inappropriate material in “a wasteland of vapid, consumerist content” so it can serve them ads.
The inquiry comes despite Google agreeing to pay $170 million in 2019 to settle allegations that YouTube collected personal data on children without their parents’ consent.
In a letter sent Tuesday to YouTube CEO Susan Wojcicki, the U.S. House Oversight and Reform subcommittee on economic and consumer policy said YouTube does not do enough to protect kids from material that could harm them. Instead it relies on artificial intelligence and creators’ self-regulation to decide what videos make it on to the platform, according to the letter from the committee’s chairman, Illinois Democrat Raja Krishnamoorthi.
And despite changes in the wake of the 2019 settlement, the letter notes, YouTube Kids still shows ads to children. But instead of targeting them based on kids’ online activity, it now targets them based on the videos they are watching.
YouTube said it has sought to provide kids and families with protections and controls enabling them to view age-appropriate content. It also emphasized that the 2019 settlement was over the regular YouTube platform, not the kids version.
“We’ve made significant investments in the YouTube Kids app to make it safer and to serve more educational and enriching content for kids, based on principles developed with experts and parents,” the company said.
The congressional investigation comes a year into the pandemic that has shuttered schools and left parents who are working from home increasingly reliant on services such as YouTube to keep kids occupied. This has led to a rethinking of “screen time” rules and guilt over the amount of time kids spend in front of screens, with some experts recommending that parents focus on quality, not quantity.
But lawmakers say YouTube Kids is anything but quality.
“YouTube Kids spends no time or effort determining the appropriateness of content before it becomes available for children to watch,” the letter says. “YouTube Kids allows content creators to self-regulate. YouTube only asks that they consider factors including the subject matter of the video, whether the video has an emphasis on kids’ characters, themes, toys or games, and more.”
Kids under 13 are protected by a 1998 federal law that requires parental consent before companies can collect and share their personal information.
Under the 2019 settlement, Google agreed to work with video creators to label material aimed at kids. It said it would limit data collection when users view such videos, regardless of their age.
But lawmakers say even after the settlement, YouTube Kids, which launched in 2015, continued to exploit loopholes and advertise to children. While it does not target ads based on viewer interests the way the main YouTube service does, it tracks information about what kids are watching in order to recommend videos. It also collects personally identifying device information.
There are also other, sneakier ways ads reach children. A “high volume” of kids’ videos, the letter says, smuggles in hidden marketing and advertising through product placements by “children’s influencers,” who are often children themselves.
“YouTube does not appear to be trying to prevent such problematic marketing,” the letter says. The House research team found that only 4% of videos it looked at had a “high educational value” offering developmentally appropriate material.
The kids app has helped turn YouTube into an increasingly attractive outlet for the advertising sales that generate most of the profits for Google and its corporate parent, Alphabet, which is based in Mountain View, California.
YouTube brought in nearly $20 billion in ad revenue last year, more than doubling from its total just three years ago. The video site now accounts for about 13% of Google’s total ad sales, up from slightly more than 8% in 2017.
The House subcommittee is recommending YouTube turn off advertisements completely for kids aged 7 and under. It also asks that it give parents the ability to turn off the “autoplay” feature, which is not currently possible (though parents are able to set a timer to limit their kids’ video watching).
The lawmakers are asking YouTube to provide them with information on YouTube Kids’ top videos, channels and revenue information, as well as average time spent and number of videos watched, per user, among other information.
By BARBARA ORTUTAY.
Lawmakers press Big Tech CEOs on speech responsibility
The CEOs of tech giants Facebook, Twitter and Google faced a grilling in Congress Thursday as lawmakers tried to draw them into acknowledging their companies’ roles in fueling the January insurrection at the U.S. Capitol and rising COVID-19 vaccine misinformation.
In a hearing by the House Energy and Commerce Committee, lawmakers pounded Facebook CEO Mark Zuckerberg; Sundar Pichai, the CEO of Google, which owns YouTube; and Twitter chief Jack Dorsey over their content policies, use of consumers’ data and children’s media use.
Republicans raised long-running, unproven conservative grievances that the platforms are biased against conservative viewpoints and censor material based on political or religious views.
There is increasing support in Congress for legislation to rein in Big Tech companies.
“The time for self-regulation is over. It’s time we legislate to hold you accountable,” said Rep. Frank Pallone, D-N.J., the committee’s chairman.
That legislative momentum, plus the social environment of political polarization, hate speech and violence against minorities, was reflected in panel members’ impatience as they questioned the three executives. Several lawmakers demanded yes-or-no answers and repeatedly cut the executives off.
“We always feel some sense of responsibility,” Pichai said. Zuckerberg used the word “nuanced” several times to insist that the issues can’t be boiled down. “Any system can make mistakes” in moderating harmful material, he said.
Shortly after the hearing began, it became clear that most of the lawmakers had already made up their minds that the big technology companies need to be regulated more rigorously to rein in their sway over what people read and watch online.
In a round of questioning that served as both political theater and a public flogging, lawmakers called out the CEOs for creating platforms that enabled the spread of damaging misinformation about last year’s U.S. presidential election and the current COVID-19 vaccine, all in a relentless pursuit of profit and higher stock prices.
Lawmakers also blamed the companies’ services for poisoning the minds of children and inciting the deadly insurrection at the Capitol, as well as contributing to the more recent mass murders in Atlanta and Boulder, Colorado.
The three CEOs staunchly defended their companies’ efforts to weed out the increasingly toxic content posted and circulated on services used by billions of people, while noting their efforts to balance freedom of speech.
“I don’t think we should be the arbiters of truth and I don’t think the government should be either,” Dorsey said.
Democrats are laying responsibility on the social media platforms for disseminating false information on the November election and the “Stop the Steal” voting fraud claims fueled by former President Donald Trump, which led to the deadly attack on the Capitol. Rep. Mike Doyle, a Pennsylvania Democrat, told the CEOs that the riot “started and was nourished on your platforms.”
Support is building for Congress to impose new curbs on legal protections regarding speech posted on their platforms. Both Republicans and Democrats — including President Joe Biden as a candidate — have called for stripping away some of the protections under so-called Section 230 of a 25-year-old telecommunications law that shields internet companies from liability for what users post.
The tech CEOs defended the legal shield under Section 230, saying it has helped make the internet the forum of free expression that it is today. Zuckerberg, however, again urged the lawmakers to update that law to ensure it’s working as intended. He added a specific suggestion: Congress could require internet platforms to gain legal protection only by proving that their systems for identifying illegal content are up to snuff.
Trump enjoyed special treatment on Facebook and Twitter until January, despite spreading misinformation, pushing false claims of voting fraud, and promulgating hate. Facebook banned Trump indefinitely a day after rioters egged on by Trump swarmed the Capitol. Twitter soon followed, permanently disabling Trump’s favored bullhorn.
Facebook hasn’t yet decided whether it will banish the former president permanently. The company punted that decision to its quasi-independent Oversight Board — sort of a Supreme Court of Facebook enforcement — which is expected to rule on the matter next month.
Researchers say there’s no evidence that the social media giants are biased against conservative news, posts or other material, or that they favor one side of political debate over another.
Democrats, meanwhile, are largely focused on hate speech and incitement that can spawn real-world violence. An outside report issued this week found that Facebook has allowed groups — many tied to QAnon, boogaloo and militia movements — to extol violence during the 2020 election and in the weeks leading up to the deadly riots on the Capitol.
With the tone and tenor of Thursday’s hearing set early on, many internet and Twitter users seemed more interested in Dorsey’s fresh buzz cut and trimmed beard. His newly groomed appearance captured immediate attention because it was a stark contrast to his scraggly beard that drew comparisons to Rasputin in last year’s remote appearances before Congress.
Another point of curiosity: a mysterious clock in Dorsey’s kitchen that displayed sets of figures that seemed to be randomly changing in a way that made it clear it had nothing to do with the time of day. The tech blog Gizmodo eventually revealed the device was a “BlockClock” that shows the latest prices of cryptocurrencies like bitcoin and ethereum.
WASHINGTON (AP) — By MARCY GORDON and BARBARA ORTUTAY