Regulating AI: striking while the iron is hot

Bias has accompanied us throughout human history.

It has ushered us toward safety and steered us away from the potential dangers of the elements surrounding us, be they natural or otherwise.

Yet, years, decades, and centuries later, humanity has stumbled into a gigantic faux pas in how we apply bias in modern times, and how we translate it into the technologies we use daily.

And the most threatening technology suffering from this misuse is artificial intelligence (AI).

Multiple studies have found that AI will elevate worldwide economies and the way we live our lives, help governments harness its power to better serve their citizens, and allow modern medicine to reduce human error, leading to longer life expectancies.

While adoption has already started, the technology is not yet accessible to the general public; in other words, there is still a lot of work to do before integrating it into our day-to-day lives.

A survey conducted by McKinsey reported that half of respondents from various industries say their organizations have adopted AI in at least one function. The business functions in which organizations adopt AI remain largely unchanged from the 2019 survey, with service operations, product or service development, and marketing and sales again taking the top spots.

“By industry, respondents in the high-tech and telecom sectors are again the most likely to report AI adoption, with the automotive and assembly sector falling just behind them (down from sharing the lead last year),” McKinsey’s survey reported.

Currently, the most common uses of AI are inventory and parts optimization, pricing and promotion, customer-service analytics, and sales and demand forecasting.

While these functions are not as ‘sexy’ as we might want them to be, they have driven increases in company revenue streams as well as decreases in costs across the board, from talent-management optimization to contact-center and warehouse automation.

“More than two-thirds of respondents who reported AI adoption have witnessed an increase in revenue and decreasing operational costs,” the survey highlighted.

The sluggish rate at which AI is being adopted, as well as its primary use to replace, optimize, and deliver analytics, is a good thing: it means the tech hasn’t yet reached its full mainstream appeal.

This provides valuable time for the governments of the world to sit down with Big Tech and start a conversation over the pressing need to form proper regulations that would protect humanity from the potential disasters AI could be used to achieve.

Sundar Pichai, chief executive of Google and its parent company Alphabet, warned in an op-ed published by the Financial Times that now is the time for the world to be clear-eyed about what could go wrong if decisions over how to properly use AI are left to market forces.

“There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone,” Pichai wrote.

The challenges mentioned by the Google CEO are worrisome to say the least, especially since many of them have started to appear worldwide.

According to a survey by Capgemini, a Paris-based technology services and consulting company, 65 percent of executives said they were aware of “the issue of discriminatory bias” with these systems, and several respondents said their company had been negatively impacted by its AI systems.

“Six-in-10 organizations had attracted legal scrutiny and nearly one-quarter (22 percent) have experienced consumer backlash within the last three years due to decisions reached by AI systems,” the survey highlighted.

In parallel, it is cause for concern that many companies lack employees responsible for the ethical implementation of AI systems, despite the backlash, legal scrutiny, and awareness of potential bias.

“About half (53 percent) of respondents said that they had a dedicated leader responsible for overseeing AI ethics. Moreover, about half of organizations have a confidential hotline where employees and customers can raise ethical issues with AI systems,” the survey added.

Be that as it may, consumers have high expectations of AI and its accountability within companies: nearly seven in 10 expect a company’s AI models to be “fair and free of prejudice and bias against me or any other person or group.”

Similarly, 67 percent of customers said they expect a company to “take ownership of their AI algorithms” when these systems “go wrong.”

Bias is a nuanced issue in AI development.

The tricky part is that we cannot condemn the biases shown by AI systems as the malicious or personal views of their developers, or as the result of deliberately feeding malicious information to the algorithm during the machine-learning process.

Because these AI systems and programs are trained on vast amounts of data, the algorithms easily pick up patterns from the existing range of published material, in which linked words lead to bias in gender, race, and more.

A prior incident with Apple serves as a perfect example.

Users noticed that typing words like ‘CEO’ resulted in iOS offering up the ‘male businessman’ emoji by default. While the algorithms Apple uses are a closely guarded secret, similar gender assumptions have surfaced in other AI platforms.

This behavior points to a machine-learning technique called word embedding: by looking at specific words and coupling them with a specific gender, race, or other attribute based on existing data, the algorithm draws patterns that shape its output.

“If these machine learning algorithms find more examples of words like ‘men’ in close proximity within these text data sets, they then use this as a frame of reference to associate these positions with males going forward,” explained John Murray, Senior Editor at Binary District.

Murray argued that this word-embedding model of machine learning can surface existing societal prejudices and cultural assumptions that have a history of being published, and that data engineers can also introduce other avenues of bias through their use of restrictive data sets.
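To make the mechanism concrete, here is a minimal sketch of how such an association can be measured, in the spirit of the gender-direction analysis used in word-embedding bias research. The tiny four-dimensional vectors below are hypothetical stand-ins for embeddings a real model such as word2vec or GloVe would learn; only the arithmetic is the point.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings standing in for vectors a real
# model (e.g., word2vec or GloVe) would learn from published text.
embeddings = {
    "he":       np.array([ 0.9, 0.1, 0.0, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.0, 0.2]),
    "ceo":      np.array([ 0.6, 0.8, 0.1, 0.0]),
    "nurse":    np.array([-0.7, 0.5, 0.2, 0.1]),
    "engineer": np.array([ 0.5, 0.7, 0.3, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors (1 = same direction)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# A crude "gender direction": the difference between "he" and "she".
gender_axis = embeddings["he"] - embeddings["she"]

for word in ("ceo", "nurse", "engineer"):
    score = cosine(embeddings[word], gender_axis)
    # Positive scores lean toward "he", negative toward "she".
    print(f"{word:>8}: {score:+.2f}")
```

With these made-up numbers, “ceo” and “engineer” project toward “he” while “nurse” projects toward “she.” In a real model trained on published text, such scores emerge from word co-occurrence statistics rather than hand-set values, which is exactly how ‘CEO’ can end up closer to ‘he’ than to ‘she’ without any developer intending it.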

As Pichai wrote in his op-ed, “there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”

While governments of the world are starting to open these debates to better regulate AI, their approaches widely vary.

For example, the EU is currently debating whether to impose a five-year ban on facial recognition software in public spaces, fearing an emulation of China’s concerning approach of using the technology to surveil its people, which impedes the public’s right to move anonymously.

Banning the tech, however, would merely press pause on an issue that will eventually need thorough regulation as the technology grows ever more sophisticated.

The United States, meanwhile, is considering a much softer approach, having issued a memorandum outlining regulatory principles designed to avoid government “overreach” that impinges on innovation.

For his part, the Google CEO stressed in his column the need for “international alignment,” adding that “to get there, we need agreement on core values,” which will prove vital to developing and enforcing regulatory standards.

Google has already taken its stance on the matter by refusing to sell facial recognition software, in contrast to its rival Microsoft, as Pichai pointed to the dangers of the tech if misused.

However, Lenovo seems to have a different opinion.

At a media event in Italy, Per Overgaard, executive director of the EMEA region for Lenovo’s Data Center Group (DCG), argued that the view and use of AI will also change as Millennials and ensuing generations become a larger part of the workforce.

“Having grown up with technology, they will change the way business uses and thinks about AI and other emerging innovations. They will see it as simply another tool in their toolbox that can be used to change the way they work and to reinvent their companies,” Overgaard was quoted as saying.

He added that “this is what the Millennial generation is going to give us, a new perspective in how to work with technology for the good.”

While humanity is considered the planet’s apex predator, we are faced with taming technologies that will forever dictate how human evolution moves forward.

Now is the time to ignite this conversation, and as the age-old saying goes, “do not wait to strike till the iron is hot; but make it hot by striking.”