Top science advisers to President Joe Biden are calling for a new “bill of rights” to guard against powerful new artificial intelligence technology.
The White House’s Office of Science and Technology Policy on Friday launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character.
Biden’s chief science adviser, Eric Lander, and the deputy director for science and society, Alondra Nelson, also published an opinion piece in Wired magazine detailing the need to develop new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.
“Enumerating the rights is just a first step,” they wrote. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”
This is not the first time the Biden administration has voiced concerns about harmful uses of AI, but it’s one of its clearest steps toward doing something about it.
European regulators have already taken measures to rein in the riskiest AI applications that could threaten people’s safety or rights. European Parliament lawmakers took a step this week in favor of banning biometric mass surveillance, though none of the bloc’s nations are bound to Tuesday’s vote that called for new rules blocking law enforcement from scanning facial features in public spaces.
Political leaders in Western democracies have said they want to balance a desire to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.
A federal document filed Friday seeks public comments from AI developers, experts and anyone who has been affected by biometric data collection.
The software trade association BSA, backed by companies such as Microsoft, IBM, Oracle and Salesforce, said it welcomed the White House’s attention to combating AI bias but is pushing for an approach that would require companies to do their own assessment of the risks of their AI applications and then show how they will mitigate those risks.
“It enables the good that everybody sees in AI but minimizes the risk that it’s going to lead to discrimination and perpetuate bias,” said Aaron Cooper, the group’s vice president of global policy.
European Parliament backs facial recognition ban in non-binding motion
The European Parliament has taken its first concrete step toward curbing the influence of artificial intelligence (AI), voting in favor of banning the use of facial recognition by Europe’s law enforcement agencies until further legislation is adopted.
EU research has found that the use of facial recognition in law enforcement does more harm than good, and that AI decision-making still leaves considerable room for improvement.
According to the Parliament, deploying facial recognition systems in police agencies risks discrimination, privacy invasion, weakened personal data protection, affronts to human dignity, and threats to freedom of speech and information.
“These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental right to liberty and the security of the individual and an effective remedy and fair trial,” the European Parliament wrote in its statement of opposition to the technology.
Members of the European Parliament (MEPs) backed protective measures supporting a “permanent ban” on a range of “automated analysis and/or recognition” technologies used by law enforcement entities in criminal matters.
The ban, which passed 36 to 24 with six abstentions, focuses on outlawing biometric surveillance by public agencies on behalf of police forces.
In parallel, MEPs called for a moratorium on technologies that gather private data through measures such as “gait, fingerprints, DNA, voice, and other biometric and behavioral signs.” They also voted in favor of barring private companies from managing or exercising any control over facial recognition databases.
MEPs argue that this is simply not the right time to deploy facial recognition in government entities, since the technology has yet to be addressed from a regulatory standpoint: no legal framework has been built to safeguard personal privacy before police forces are permitted to use it.
As AI-driven facial recognition became globally available, some of the biggest companies in the field rushed to market their tools to governments.
One such biometric identification service is Clearview AI.
The Parliament’s unyielding opposition to this service – and similar ones – stems from how its data on individuals is collected. Clearview AI holds a database of more than three billion images, unlawfully scraped from social networks and other corners of the internet.
For its part, the European Commission will focus on building the required regulatory groundwork before the technology is deployed in government entities, drafting the necessary guidelines and laws to be presented to MEPs for approval or rejection.
From there, the Parliament will vote on a prospective piece of legislation, the Artificial Intelligence Act.
The AI Act would be the first-ever legal framework to regulate AI, addressing the threats posed by the technology’s widespread adoption while supporting a high level of innovation in the field.
It would strictly govern the use of algorithms and artificial intelligence across Europe.
Separately, the EC has opened an investigation into Apple’s integration of Apple Pay into apps and websites, over concerns that the practice violates the Commission’s rules against anti-competitive behavior.
European regulators began scrutinizing the iPhone maker for restricting access to the Near-Field Communication (NFC) chips in Apple products.
NFC chips are embedded in most smart devices and enable data exchange with other devices. In Apple’s case, the company’s own NFC chips allow data sharing between its devices only via Apple Pay, a proprietary technology reserved for its products and not shared with other companies.
What’s the difference between robotics and artificial intelligence?
When people hear the word robot, their interests shape the image in their head. Engineers may imagine a shiny factory line with hundreds of mechanical arms picking away at the conveyor belt, while sci-fi fans may picture Star Wars’ R2-D2 squeaking snarkily at its owner on the job. The difference between robotics and artificial intelligence is a foundational one.
AI does not have to exist in a robot, and a robot does not need an AI to exist. When they do, however, we get some fascinating results.
AI comes from computer science: programs that can learn on their own from sensors, input, or machine learning over databases and simulations. Robots, on the other hand, are a product of engineering.
Think of what makes a robot, well, a robot. A vacuum cleaner is just a machine until you put wheels on it, make it smaller, and give it the ability to move by itself. But a vacuum cleaner on wheels does not a smart cleaner-bot make. So, what does?
Robots do specific tasks: screwing caps onto toothpaste tubes, grabbing an ice cream cone and dipping it in molten chocolate, or fitting a door onto a car on a production line. Those are all robots, simple ones with no hint of intelligence. What makes a robot intelligent is its ability to take in information, assess and learn from it, and adjust its reaction accordingly.
If we take the previously mentioned robot vacuum cleaner and give it the ability not only to sense its surroundings but to learn from them, then that is an AI robot. Over time the cleaner will know the layout of the house by heart, which areas are risky to traverse without getting stuck, and even which areas haven’t been cleaned in a while.
Where robots are the body, the AI would be the brain.
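The sense–learn–act idea above can be sketched in a few lines of code. This is a hypothetical, illustrative toy, not any vendor’s actual software: the grid world, the stuck-cell memory, and the cleaning-history rule are all assumptions invented here to show what separates an AI robot from a plain machine.

```python
# Minimal sketch of a sense-learn-act loop for a hypothetical robot vacuum.
# The grid, the sensor, and the learning rules are illustrative assumptions,
# not a real product's API.

class LearningVacuum:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.position = (0, 0)
        self.stuck_cells = set()   # learned: areas risky to traverse
        self.last_cleaned = {}     # learned: when each cell was last cleaned

    def sense(self, cell, is_stuck):
        """Input: record what the environment tells us about a cell."""
        if is_stuck:
            self.stuck_cells.add(cell)

    def choose_next(self, step):
        """Act: move to a neighboring cell, skipping ones learned to be risky
        and preferring the cell cleaned longest ago (or never)."""
        x, y = self.position
        neighbors = [
            (x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < self.width and 0 <= y + dy < self.height
        ]
        safe = [c for c in neighbors if c not in self.stuck_cells]
        if not safe:
            return self.position  # boxed in: stay put
        target = min(safe, key=lambda c: self.last_cleaned.get(c, -1))
        self.last_cleaned[target] = step
        self.position = target
        return target
```

A plain machine would revisit the cell it got stuck in forever; this one updates its internal map from experience and routes around it, which is the learning the passage describes.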
The use of artificial intelligence in robotics is already spreading, with applications such as carrying and transporting goods in warehouses, factories, and hospitals; cleaning offices and large equipment; inventory management and item stocking; and exploring hazardous environments.
In short, the difference between robotics and artificial intelligence is profound, but the two overlap significantly, and that overlap will only grow as automation, digitization and Industry 4.0 in general proliferate.
U.S. Government agencies implement AI-based facial recognition
A U.S. Government Accountability Office (GAO) report revealed on Thursday that at least 10 federal agencies, including the Department of Defense and the Department of Homeland Security (DHS), have implemented facial recognition technology in their systems, disclosing a controversial use of artificial intelligence (AI).
In a previous report, the GAO also documented how federal agencies acquired and monitored privately contracted facial recognition systems.
Of the 24 agencies surveyed, 19 ran research and development (R&D) projects involving facial recognition, ranging from experiments with methods to improve staff identification to research on the technology’s accuracy and how it is affected by aging and race.
Among the most influential agencies using facial recognition are the departments of Defense and Justice. According to the report, these two departments are applying AI in their systems for domestic law enforcement, physical security, and national security.
“As the use of FRT continues to expand, members of Congress, academics, and advocacy organizations have highlighted the importance of developing a comprehensive understanding of how it is used by federal agencies,” the report stated.
One of the systems mentioned in the report is called “TacID Guard Dog,” adopted by the Department of Energy. The agency acquired the system from Secure Planet Inc. in December 2019, spending an estimated $150,000 to “monitor entry and exit from controlled locations.”
TacID Guard Dog is “a real time on-the-move biometric capture, screening and alerting solution for border control, defense, access control and perimeter security environments,” according to the company’s site.
The same system is also being piloted at the Department of Defense, which will test it for a year before fully acquiring it.
The report also noted that DHS entered the facial recognition sector through project arrangements with the British and Australian governments, with transactions also occurring with Mexican and Guatemalan state officials, according to DHS.
Among its facial recognition systems, DHS runs the Automated Biometric Identification System at border crossings, while the FBI’s Facial Analysis Comparison and Evaluation (FACE) service provides broad matching capacity in criminal investigations.
Experts believe racially biased facial recognition is a sizable reason for the government to reconsider its use across federal agencies. The states of Maine and Massachusetts, for example, have succeeded in limiting AI use, while Customs and Border Protection continues to expand facial recognition in its airports.
With mounting reports of bias in the world’s leading facial recognition algorithms, identification systems that skew along lines of age, race, and ethnicity risk misidentifying people, with heavy consequences for government institutions.