What will superintelligent AI be capable of doing?

The accelerating pace of artificial intelligence (AI) will make it one of the most challenging technologies for humanity to control. Theoretical studies have argued that superintelligent AI would be impossible for humans to contain, since within a few decades our species could be far outmatched by machine intelligence. So, what will superintelligent AI be capable of doing?

While still in its early stages, superintelligent AI has already proved itself a formidable counterpart to humanity. It plays a fundamental role in the progress of our species, yet one factor remains unsettled: the existential risk that comes with AI’s continued development.

Experts believe a technological reckoning will eventually break down the walls humanity has built around it, but that is unlikely to come to fruition for decades.

Advanced AI can already be found across the computational landscape, with examples ranging from basic games such as chess and Jeopardy! all the way to answering nearly impossible mathematical questions, processes that would’ve taken humanity years to complete.

Superintelligent machines have reached a far more comprehensive level, surpassing the recognized limits of the human mind. As for what superintelligent AI will be capable of doing, Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid, highlighted that “the question about whether superintelligence could be controlled if created is quite old.”

“It goes back at least to Asimov’s First Law of Robotics, in the 1940s,” he added.

Asimov’s laws rest on three pillars that set the ground rules under one umbrella, namely that a robot may not injure a human being:

  • First Law: A robot may not inflict harm on a human or expose a human to harm.
  • Second Law: A robot must obey any order given by a human, as long as it does not contradict the First Law.
  • Third Law: A robot must safeguard its own existence, as long as doing so does not contradict the first two laws.

These laws are framed in a philosophical sense rather than a logical one, and the ambiguity running through them blurs the meaning behind each. While they do address not inflicting harm on a human, the details of what that harm entails have never been truly spelled out.

This means specific safeguards would be needed for superintelligent AI to be controlled, and two ideas are commonly proposed. The first would place explicit limits on the machine’s capacity, for instance by disconnecting the AI from certain technical devices and severing its connection to the outside world. However, this would hinder the AI’s power, making it less capable of answering human needs.

The second idea is to program the superintelligence to pursue only objectives beneficial to humanity by building ethical principles into its code. However, that is relatively far-fetched, given that it too has its limits.

Such an approach would rely heavily on a particular containment algorithm that ensures the AI cannot harm anyone under any circumstance. The usual proposal is to simulate the AI’s behavior first and analyze the result for harmful intent before letting it act.
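To make that proposal concrete, here is a toy Python sketch of the “simulate first, then check” idea. It is purely illustrative and not code from the study; names such as run_in_sandbox and is_harmful are hypothetical.

```python
# Toy sketch of "simulate first, then check" containment. Illustrative only:
# run_in_sandbox and is_harmful are hypothetical names, not a real library.

def run_in_sandbox(policy, state, max_steps=1000):
    """Simulate a candidate policy on a copy of the state for a bounded
    number of steps, recording every action it takes."""
    trace, sandbox = [], dict(state)   # work on a copy, never the real world
    for _ in range(max_steps):
        action = policy(sandbox)
        trace.append(action)
        if action == "halt":
            break
        sandbox[action] = sandbox.get(action, 0) + 1
    return trace

def is_harmful(trace):
    """Toy harm check: flag any action on a simple blacklist."""
    return any(action in {"disable_safety", "harm_human"} for action in trace)

def contain(policy, state):
    """Deploy the policy only if its simulated trace looks safe. Theoretical
    work argues this cannot succeed in general: no bounded simulation or
    checker can decide the behavior of every possible program."""
    return None if is_harmful(run_in_sandbox(policy, state)) else policy

print(contain(lambda state: "halt", {}) is not None)        # True: an idle policy passes
print(contain(lambda state: "harm_human", {}) is not None)  # False: a harmful one is rejected
```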

Researchers have since debunked this methodology, showing that it runs up against fundamental limits of computation: no algorithm can reliably decide, for every possible AI, whether its behavior will be harmful.

Digital superintelligence is working its way into every technological aspect of our lives. Machines run by computers follow their programming and code: if harming a human is what it takes for the machine to fulfill its purpose, and a human stands in its way, that is exactly what it will do.

A superintelligent AI would almost certainly be connected to the internet, and that connection is one of the main pillars of such a machine’s survival. Through it, the AI accesses human data and learns independently. In the future, this kind of system could reach a point where it can replace existing programs and take control of any machine online worldwide.

Scientists and philosophers have wondered whether humanity will even be equipped with the capabilities needed to stand against superintelligent AI. In response, a group of computer scientists used theoretical calculations to show that it would be fundamentally unachievable for humanity to win that battle against digital superintelligence.

“A super intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them. The question, therefore, arises whether this could at some point become uncontrollable and dangerous for humanity,” said Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines.

Looking ahead, analysts have laid out what they expect superintelligent AI to be capable of doing: these code-driven systems will enhance human capacity and effectiveness, but could also expose human autonomy, agency, and capabilities to grave threats.

Daryn is a technical writer with extensive experience in both academic and digital writing.

NYC aims to be first to rein in AI hiring tools

Job candidates rarely know when hidden artificial intelligence tools are rejecting their resumes or analyzing their video interviews. But New York City residents could soon get more say over the computers making behind-the-scenes decisions about their careers.

A bill passed by the city council in early November would ban employers from using automated hiring tools unless a yearly bias audit can show they won’t discriminate based on an applicant’s race or gender. It would also force makers of those AI tools to disclose more about their opaque workings and give candidates the option of choosing an alternative process — such as a human — to review their application.

Proponents liken it to another pioneering New York City rule that became a national standard-bearer earlier this century — one that required chain restaurants to slap a calorie count on their menu items.

Instead of measuring hamburger health, though, this measure aims to open a window into the complex algorithms that rank the skills and personalities of job applicants based on how they speak or what they write. More employers, from fast food chains to Wall Street banks, are relying on such tools to speed up recruitment, hiring and workplace evaluations.

“I believe this technology is incredibly positive but it can produce a lot of harms if there isn’t more transparency,” said Frida Polli, co-founder and CEO of New York startup Pymetrics, which uses AI to assess job skills through game-like online assessments. Her company lobbied for the legislation, which favors firms like Pymetrics that already publish fairness audits.

But some AI experts and digital rights activists are concerned that it doesn’t go far enough to curb bias, and say it could set a weak standard for federal regulators and lawmakers to ponder as they examine ways to rein in harmful AI applications that exacerbate inequities in society.

“The approach of auditing for bias is a good one. The problem is New York City took a very weak and vague standard for what that looks like,” said Alexandra Givens, president of the Center for Democracy & Technology. She said the audits could end up giving AI vendors a “fig leaf” for building risky products with the city’s imprimatur.

Givens said it’s also a problem that the proposal only aims to protect against racial or gender bias, leaving out the trickier-to-detect bias against disabilities or age. She said the bill was recently watered down so that it effectively just asks employers to meet existing requirements under U.S. civil rights laws prohibiting hiring practices that have a disparate impact based on race, ethnicity or gender. The legislation would impose fines on employers or employment agencies of up to $1,500 per violation — though it will be left up to the vendors to conduct the audits and show employers that their tools meet the city’s requirements.
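To illustrate the kind of check such an audit involves, here is a minimal, hypothetical Python sketch (not part of the bill or any vendor’s tooling) that computes selection rates by group and compares them under the “four-fifths rule” long used in U.S. employment guidance as a rough screen for disparate impact.

```python
# Hypothetical illustration of a disparate-impact check: compare each group's
# selection rate against the highest-rate group (the "four-fifths rule").
from collections import Counter

def selection_rates(applicants):
    """applicants: list of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, picked in applicants:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(applicants):
    rates = selection_rates(applicants)
    best = max(rates.values())
    # Ratios below 0.8 are conventionally treated as evidence of adverse impact.
    return {g: round(rate / best, 2) for g, rate in rates.items()}

print(impact_ratios([("A", True), ("A", True), ("A", False),
                     ("B", True), ("B", False), ("B", False)]))
```

In this toy data, group B is selected at half the rate of group A, well below the 0.8 threshold that conventionally signals adverse impact; a real audit would of course cover far more data and more nuanced statistics.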

The City Council voted 38-4 to pass the bill on Nov. 10, giving a month for outgoing Mayor Bill De Blasio to sign or veto it or let it go into law unsigned. De Blasio’s office says he supports the bill but hasn’t said if he will sign it. If enacted, it would take effect in 2023 under the administration of Mayor-elect Eric Adams.

Julia Stoyanovich, an associate professor of computer science who directs New York University’s Center for Responsible AI, said the best parts of the proposal are its disclosure requirements to let people know they’re being evaluated by a computer and where their data is going.

“This will shine a light on the features that these tools are using,” she said.

But Stoyanovich said she was also concerned about the effectiveness of bias audits of high-risk AI tools — a concept that’s also being examined by the White House, federal agencies such as the Equal Employment Opportunity Commission and lawmakers in Congress and the European Parliament.

“The burden of these audits falls on the vendors of the tools to show that they comply with some rudimentary set of requirements that are very easy to meet,” she said.

The audits won’t likely affect in-house hiring tools used by tech giants like Amazon. The company several years ago abandoned its use of a resume-scanning tool after finding it favored men for technical roles — in part because it was comparing job candidates against the company’s own male-dominated tech workforce.

There’s been little vocal opposition to the bill from the AI hiring vendors most commonly used by employers. One of those, HireVue, a platform for video-based job interviews, said in a statement this week that it welcomed legislation that “demands that all vendors meet the high standards that HireVue has supported since the beginning.”

The Greater New York Chamber of Commerce said the city’s employers are also unlikely to see the new rules as a burden.

“It’s all about transparency and employers should know that hiring firms are using these algorithms and software, and employees should also be aware of it,” said Helana Natt, the chamber’s executive director.

The Future of Machine Learning and Quantum Computing

Since the rise of the fourth industrial revolution, digital innovation has acted as one of the main pillars of global evolution. As more tech companies adopt algorithm-driven artificial intelligence (AI), the future of machine learning will only heighten the industry’s universal value.

Artificial intelligence and machine learning are closely associated, as they are the most widely adopted technologies for building intelligent systems.

“AI is a bigger concept to create intelligent machines that can simulate human thinking capabilities and behavior, whereas machine learning is an application or subset of AI that allows machines to learn from data without being programmed explicitly,” according to JavaTpoint.

The onset of the digital era has pushed the world to adapt quickly to rising technology: while the tech industry’s growth is exponential, everything else remains linear, or at least does not move at a similar pace. Machine learning is one of the key factors behind this unrestrained growth, and AI will sustain its escalation in the years to come.

So, what does the future of machine learning hold for global innovation? And why is machine learning the future? Let’s dive into it.

Machine learning has the power to deliver remarkable changes across various sectors. The machine learning market is expected to reach a valuation of $117.19 billion by 2027, growing from $8.43 billion in 2019.

A multitude of businesses have begun adopting machine learning algorithms to sharpen predictions and business decisions. This AI subfield has established itself as a prominent force shaping the future across several fronts, ranging from quantum computing and Auto Machine Learning (AutoML) to a diverse set of industry sectors, and much more.

Machine learning will lean heavily on quantum computing

Even though the field is still in its research and development (R&D) phase, various machine learning and Big Tech companies are pushing their way into it by investing heavily in the rise of quantum machine learning.

Quantum computing is one of the few technologies capable of taking machine learning to the next level. The speed of its operations would permit faster data processing, applying quantum mechanics to solve complicated problems.

It is worth mentioning that there is still no commercially deployed quantum computer available. However, heavy investments are being poured into the industry.

Auto Machine Learning has the potential to alter the future’s landscape

This branch of machine learning automates the process of applying algorithms to real-life tasks. For instance, AutoML can be used to find an algorithm suited to a given problem, or to flag cases where no suitable algorithm exists.

AutoML can automate key steps in building machine learning models, such as data pre-processing to improve data quality, feature engineering to create more useful features from input data, feature extraction to derive new features that enhance predictions, and much more.
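As a rough illustration of the automated model selection at the heart of AutoML, the sketch below uses scikit-learn’s grid search as a stand-in; real AutoML frameworks automate far more, including feature engineering and preprocessing choices, and the dataset and parameter grid here are arbitrary examples.

```python
# A small stand-in for AutoML-style model selection: try several candidate
# settings for a preprocessing + model pipeline and keep the best one.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
pipeline = Pipeline([
    ("scale", StandardScaler()),                   # automated pre-processing step
    ("model", LogisticRegression(max_iter=1000)),  # candidate model
])
search = GridSearchCV(
    pipeline,
    {"model__C": [0.01, 0.1, 1.0, 10.0]},          # candidate settings to try
    cv=5,                                          # cross-validate each candidate
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Here the “automation” is simply a cross-validated search over candidate settings; full AutoML systems extend the same idea to whole pipelines, feature sets, and model families.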

Incorporating machine learning into different sectors

Machine learning has been identified as the bridge that will carry the world into its inevitable digital future. While the popular conception now mainly refers to the far future, the incorporation of machine learning and deep learning into our day-to-day lives is the first small leap toward what the future will hold.

Various industries have begun adopting machine learning technologies to deliver radical advancements to their operations.

For example, machine learning will play a fundamental role in the health sector, including pharma. The healthcare industry produces immense amounts of data, and by applying machine learning techniques it can improve predictions and treatments. This works by enabling the analysis of a much larger range of data drawn from previous studies, individual demographics, and health records to deliver accurate predictions.

While some industries are still in the R&D phase where machine learning is concerned, others have already optimized their business strategies with the help of machine learning techniques.

Despite being in the early stages, manufacturers began deploying machine learning in their business strategies in 2020. The technology’s tools have helped examine equipment performance and condition, predict product quality, and estimate energy usage.

With the ever-expanding advancement of technology, various industries are expected to bring more machine-learning-driven robots into their operations in the years ahead.

AI innovations boil down to two concepts

To truly understand the different layers of AI, one must look at how the field is classified: machine learning is a subfield of AI, and deep learning is in turn a subdivision of machine learning. The three build on and complete one another.

The simplest way to explain the contrast between the latter two concepts is that deep learning is, in fact, machine learning; more precisely, deep learning is often described as the evolution of machine learning.

Deep learning organizes algorithms in tiers, structuring an “artificial neural network” that learns and makes decisions on its own. These programmable neural networks allow machines to deliver better forecasts and decisions without human assistance.

As is well known, the future of digitalization relies heavily on data, and it is no secret that the future of deep learning depends on the same factor.

Deep learning is the foundation of how AI can replicate the way humans acquire certain kinds of knowledge. The more the algorithms train, the more knowledge the system gains and the more accurate its predictions become.

In essence, deep learning deploys a neural network loosely modeled on animal intelligence, with three tiers of neurons: the input layer, the hidden layer(s), and the output layer. The connections between these neurons carry weights that assess the significance of each input value.

Data is the sustenance that drives and trains the network: running the data through the model and evaluating the result produces a cost function that measures how far the AI’s output is from the correct one.
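As a concrete illustration of the input, hidden, and output tiers and the cost function described above, here is a minimal NumPy sketch with randomly initialized weights and no training loop; the layer sizes are arbitrary.

```python
# Minimal three-tier network: input layer -> hidden layer -> output layer,
# with weights scoring the significance of each input, plus a cost function.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input (3 features) -> hidden (4 units)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden (4 units) -> output (1 value)

def forward(x):
    hidden = np.tanh(W1 @ x + b1)                # hidden layer with a nonlinearity
    return W2 @ hidden + b2                      # output layer

def cost(x, target):
    """Mean squared error: how far the prediction is from the accurate output."""
    return float(np.mean((forward(x) - target) ** 2))

print(cost(np.array([0.2, -1.0, 0.5]), np.array([1.0])))
```

Training would then adjust W1, b1, W2, and b2 to push this cost down, which is the “learning” in deep learning.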

AI, and machine learning in particular, will alter the future in every sense, for industries and individuals alike. This intelligent technology has been a leading force in the emergence and optimization of technologies across robotics, manufacturing, the Internet of Things, and more to come.

With the speed at which artificial intelligence is moving, the future holds a prominent role for machine learning and for the contribution it will make as a technological pioneer in the years ahead.

White House proposes tech ‘bill of rights’ to limit AI harms

Top science advisers to President Joe Biden are calling for a new “bill of rights” to guard against powerful new artificial intelligence technology.

The White House’s Office of Science and Technology Policy on Friday launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character.

Biden’s chief science adviser, Eric Lander, and the deputy director for science and society, Alondra Nelson, also published an opinion piece in Wired magazine detailing the need to develop new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.

“Enumerating the rights is just a first step,” they wrote. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”

This is not the first time the Biden administration has voiced concerns about harmful uses of AI, but it’s one of its clearest steps toward doing something about it.

European regulators have already taken measures to rein in the riskiest AI applications that could threaten people’s safety or rights. European Parliament lawmakers took a step this week in favor of banning biometric mass surveillance, though none of the bloc’s nations are bound to Tuesday’s vote that called for new rules blocking law enforcement from scanning facial features in public spaces.

Political leaders in Western democracies have said they want to balance a desire to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

A federal document filed Friday seeks public comments from AI developers, experts and anyone who has been affected by biometric data collection.

The software trade association BSA, backed by companies such as Microsoft, IBM, Oracle and Salesforce, said it welcomed the White House’s attention to combating AI bias but is pushing for an approach that would require companies to do their own assessment of the risks of their AI applications and then show how they will mitigate those risks.

“It enables the good that everybody sees in AI but minimizes the risk that it’s going to lead to discrimination and perpetuate bias,” said Aaron Cooper, the group’s vice president of global policy.
