

It’s time to start regulating AI from within

Yehia El Amine




This year has been a roller coaster ride for the United States, from the emergence and swift spread of the Covid-19 pandemic to dreadful natural disasters, and the ongoing trade war the U.S. has been waging against China.

The main headline that has dominated the news and even shaped the narrative of the U.S. presidential elections is racism. Following the killing of George Floyd due to police brutality, a massive wave of protests touched almost every city, calling for the end of racism in America.

Even during the protests, violence erupted far and wide between riot police and demonstrators amid rising tensions, which later led to local police individually targeting activists once the storm had passed.

But how? How were local police able to recognize or even target specific individuals within an ocean of people? The answer is simple: Artificial Intelligence (AI) and facial recognition.  

The malicious use of the technology was even noticed by tech giants far and wide, to the point that companies such as Microsoft, Amazon, and IBM publicly announced they would no longer allow police departments access to their facial recognition technology.

Microsoft President Brad Smith was widely quoted as saying his company wouldn’t sell facial-recognition technology to police departments in the U.S., “until we have a national law in place, grounded in human rights, that will govern this technology.”

It has become common knowledge that inventors of the tech including Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence.

What concerns these companies is AI’s propensity for errors, particularly in recognizing people of color and members of other underrepresented groups.

The reality of now

According to a survey by Capgemini, a Paris-based technology services and consulting company, 65 percent of executives overall said “they were aware of the issue of discriminatory bias” in these systems, and several respondents said their company had been negatively impacted by its AI systems.

“Six-in-10 organizations had attracted legal scrutiny and nearly one-quarter (22 percent) have experienced consumer backlash within the last three years due to decisions reached by AI systems,” the survey highlighted.

In parallel, it is cause for concern that many companies still lack employees responsible for the ethical implementation of AI systems, despite the backlash, legal scrutiny, and awareness of potential bias.

“About half (53 percent) of respondents said that they had a dedicated leader responsible for overseeing AI ethics. Moreover, about half of organizations have a confidential hotline where employees and customers can raise ethical issues with AI systems,” the survey added.

Be that as it may, consumers hold high expectations for AI and its accountability within companies; nearly seven-in-10 expect a company’s AI models to be “fair and free of prejudice and bias against me or any other person or group.”

In parallel, 67 percent of customers said they expect a company to “take ownership of their AI algorithms” when these systems “go wrong.”

Learning from Singapore

Many have praised the actions and steps that Singapore has taken in its adoption of artificial intelligence, as the country has played it right since the very beginning.

The country launched a reference guide for AI use within the country to maintain its ethical use on both business and consumer ends. The AI Ethics & Governance Body of Knowledge (BoK) provides a reference guide for business leaders and IT professionals on the ethical aspects related to the development as well as deployment of AI technologies.

The book, which was powered by industry group Singapore Computer Society (SCS), was put together based on the expertise of more than 60 individuals from multi-disciplinary backgrounds, with the aim of aiding the “responsible, ethical, and human-centric” deployment of AI for competitive advantage.

The BoK was developed based on Singapore’s latest Model AI Governance Framework, which was updated in January 2020, and will be regularly updated as the local digital landscape evolves, SCS said during its launch.

Many experts from both the private and public sectors have hailed this step as an ideal model, one that carefully outlines the positive and negative outcomes of AI adoption and examines the technology’s potential to support a “safe” ecosystem when utilized properly.

The wrong side of Pandora’s Box

First things first: AI ethics do not come in a box with your order.

Take a moment to think about how many uses AI already has and will have later down the line as we edge closer to 5G.

Values vary across companies and industries, which is why data practices and an AI ethics program must be tailored to each specific business, along with the regulatory needs relevant to that company.

Let’s examine a few problems that would and have already surfaced.

1. Affecting our behavior and interactions

It is only natural that AI-powered bots have become better and better at emulating human conversations and relationships.

The case can be made with a bot named Eugene Goostman.

Back in 2014, Goostman became the first chatbot widely reported to have passed the Turing test, a challenge in which human judges chat via text with an unknown entity and then guess whether that entity is human or a machine.

The AI successfully fooled around a third of the judges – enough to clear the test’s 30 percent threshold – into thinking that they had been talking to an actual human being.

This is somewhat scary, since this achievement is merely the beginning of an age that looks to increase interactions between humans and well-spoken machines, especially for customer service or sales.

“While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships,” a report by the World Economic Forum (WEF) highlighted.

This can already be seen in the ways technology triggers the reward centers in the brain without us being fully aware of it; just think of the myriad click-bait headlines we seamlessly scroll through.

“These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive,” WEF explained.
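
To make that concrete, a rudimentary A/B test boils down to splitting traffic between headline variants and promoting whichever one earns the higher click-through rate. The sketch below is purely illustrative; `pick_headline` and its numbers are made up for this example, not drawn from the article or any real system.

```python
def pick_headline(stats):
    """Return the variant with the highest observed click-through rate (CTR)."""
    return max(stats, key=lambda v: stats[v]["clicks"] / stats[v]["views"])

# Hypothetical counts after showing each headline to 1,000 readers.
stats = {
    "A": {"views": 1000, "clicks": 48},  # CTR 4.8%
    "B": {"views": 1000, "clicks": 61},  # CTR 6.1%
}
print(pick_headline(stats))  # prints "B"
```

Real systems extend this with statistical significance checks or multi-armed bandits so that a variant is not promoted on noise alone.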

This is a big cause for concern, since tech addiction is slowly but surely becoming the new frontier of human dependency.

2. Bias in AI

It is a known fact that AI processes information much faster than humans can, but that doesn’t mean it’s fair and neutral.

A known pioneer of AI solutions and tech is Google, the most basic example being its flagship Google Lens feature, which allows your smartphone camera to identify objects, people, locations, scenes, and more.

This feature in and of itself holds great power that could be twisted in both extremes of the moral spectrum. Missing the name of an object is one thing but missing a mark on racial sensitivity is another.

An AI system that mistakes a Samsung for a OnePlus is no big deal, but a software that’s used to predict future criminals showing bias against black people is something completely different.

This is why the role of human judgment will prove integral to how the technology should be used.

“We shouldn’t forget that AI systems are created by humans, who can be biased and judgmental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change,” the report by WEF said.

3. Guarding AI from the forces of evil

As technology evolves with a primary reason to aid humanity, there will always exist some who seek to use it to wreak havoc on others.

These fights won’t take place on the battlefield but in cyberspace, making them even more damaging due to their ability to individually tap into anyone.

With this in mind, cybersecurity’s importance will become paramount in the battles to come; after all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

4. Keeping humanity at the wheel

Humans have always been nature’s apex predator; our dominance is not merely physical, but one of ingenuity and ever-expanding intelligence.

We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

However, since AI is miles ahead when it comes to handling different outcomes, it poses a serious question that must be answered sooner rather than later.

Will it, one day, have the same advantage over us?

Humanity needs to find a fallback before this scenario develops, since simply pulling the plug won’t be enough to halt an advanced machine that could anticipate the move and attempt to defend itself.

As we continue to evaluate the risks that accompany such life-changing tech, humanity must not stop at asking questions, but define and act upon the countermeasures that assure us of a safer approach to the technology.


Yehia is an investigative journalist and editor with extensive experience in the news industry as well as digital content creation across the board. He strives to bring the human element to his writing.


How 5G gaming might be every gamer’s dream come true

Adnan Kayyali




Think of the most graphically stunning, highest-resolution video game you can.

Now think of being immersed in such a fantasy world with razor-sharp response times, exploring and playing with your friends with zero delays in connection. This is the world that 5G gaming technology is promising both players and esports audiences.

With the advancements in cloud and edge computing, gamers come ever closer to realizing an experience beyond realism: free of lag, with no latency between the player’s decision and the in-game action.

Such are the results of advancing cloud and edge computing technologies, and of partnerships such as that between Bethesda and Verizon, whose one-of-a-kind Project Orion is pushing to accelerate this new horizon of entertainment technology.

Mobile games’ expanding capabilities

Not every gamer boasts a power-hungry gaming PC or the latest console.

The joy of playing video games has long been accessible through mobile titles, which have been on a steady rise in popularity. With 5G gaming technology, and cloud and edge computing infrastructure becoming more widespread and mainstream, phone users could play AAA-quality games on the electronics they already have in their pockets.

With most of the heavy processing, computational power and graphical rendering happening outside the physical phone, on the cloud platform, you will not have to worry about your phone melting in the middle of a point streak.

This could be the start of something big in the mobile games industry: if creators are no longer limited by processing power or slow mobile connections, they can deliver the perfect gaming experience to a more casual audience, which would only grow that demographic.

Putting the R in AR

The barely detectable delay between your perception of a falling object, your brain’s analysis and decision, and your muscles’ reaction to catch the object is around 20 milliseconds. This is what we call latency.
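
As a rough illustration, a cloud-gaming pipeline has to fit its entire motion-to-photon delay near that roughly 20-millisecond human threshold: network round trip, remote rendering, video encode/decode, and the display itself all eat into the budget. The function and every figure below are assumptions chosen for illustration, not measured numbers from any real deployment.

```python
def total_latency(network_rtt_ms, render_ms, encode_decode_ms, display_ms):
    """Sum the main contributors to motion-to-photon delay, in milliseconds."""
    return network_rtt_ms + render_ms + encode_decode_ms + display_ms

# Same hypothetical rendering pipeline; only the network hop differs.
lte = total_latency(network_rtt_ms=50, render_ms=8, encode_decode_ms=10, display_ms=7)
edge_5g = total_latency(network_rtt_ms=5, render_ms=8, encode_decode_ms=10, display_ms=7)
print(lte, edge_5g)  # 75 30 – 75 ms over a congested LTE link vs. 30 ms at the 5G edge
```

Only the network term changes between the two calls, which is why edge computing – moving the render servers physically closer to the player – dominates the improvement.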

In an AR/VR setting, what developers and gamers want more than anything is to make their already captivating and engrossing worlds even more seamless and immersive. With cloud-enabled 5G gaming technology, the delay between snapping your fingers in real life and in the game world will be next to none, allowing you to catch and throw objects, and to slap your virtual friends in real-time.

On top of that, the tools needed to dive into these immersive virtual worlds – a headset, and sometimes sensory gloves and controllers – will not only interact seamlessly and in cohesion, but with no wire in sight. This is something anyone with a headset would hope for: to have that annoying cord hanging down your side unplugged and tucked away.



Google Cloud, SpaceX team up to merge data centers

Inside Telecom Staff




SpaceX announced last week that it will be teaming up with Google Cloud to integrate its Starlink ground stations within Google’s data center properties, providing businesses with seamless, secure access to the cloud and Internet over Google Cloud infrastructure.

The partnership will see both companies deliver data, cloud services, and applications to customers at the network edge, leveraging Starlink’s ability to provide high-speed broadband internet around the world and Google Cloud’s infrastructure.

According to a joint statement, SpaceX will begin to locate Starlink ground stations within Google data center properties, enabling the secure, low-latency, and reliable delivery of data from the more than 1,500 Starlink satellites launched to orbit to date to locations at the network edge via Google Cloud.

In parallel, Google Cloud’s high-capacity private network will support the delivery of Starlink’s global satellite internet service, bringing businesses and consumers seamless connectivity to the cloud and Internet, and enabling the delivery of critical enterprise applications to virtually any location.

“Applications and services running in the cloud can be transformative for organizations, whether they’re operating in a highly networked or remote environment,” said Urs Hölzle, Senior Vice President, Infrastructure at Google Cloud. “We are delighted to partner with SpaceX to ensure that organizations with distributed footprints have seamless, secure, and fast access to the critical applications and services they need to keep their teams up and running.”

Organizations with broad footprints, like public sector agencies, businesses with presences at the network edge, or those operating in rural or remote areas, often require access to applications running in the cloud, or to cloud services like analytics, artificial intelligence, or machine learning.

Connectivity from Starlink’s constellation of low-Earth-orbit satellites provides a path for these organizations to deliver data and applications to teams distributed across countries and continents, quickly and securely, the statement said.

“Combining Starlink’s high-speed, low-latency broadband with Google’s infrastructure and capabilities provides global organizations with the secure and fast connection that modern organizations expect,” said SpaceX President and Chief Operating Officer Gwynne Shotwell. “We are proud to work with Google to deliver this access to businesses, public sector organizations, and many other groups operating around the world.”

According to both companies, these new capabilities are expected to be available in the second half of 2021.



Officials: Tesla Autopilot probed in fatal California crash

Associated Press




A Tesla involved in a fatal crash on a Southern California freeway last week may have been operating on Autopilot before the wreck, according to the California Highway Patrol.

The May 5 crash in Fontana, a city 50 miles (80 kilometers) east of Los Angeles, is also under investigation by the National Highway Traffic Safety Administration. It is the 29th case involving a Tesla that the federal agency has probed.

In the Fontana crash, a 35-year-old man was killed when his Tesla Model 3 struck an overturned semi on a freeway about 2:30 a.m. The driver’s name has not yet been made public. Another man was seriously injured when the electric vehicle hit him as he was helping the semi’s driver out of the wreck.

The CHP announced Thursday that its preliminary investigation had determined that the Tesla’s partially automated driving system, called Autopilot, “was engaged” prior to the crash.

However, on Friday, the agency walked back its previous declaration.

“To clarify,” a new CHP statement said, “There has not been a final determination made as to what driving mode the Tesla was in or if it was a contributing factor to the crash.”

At least three people have died in previous U.S. crashes involving Autopilot.

The CHP initially said it was commenting on the Fontana crash because of the “high level of interest” about Tesla crashes and because it was “an opportunity to remind the public that driving is a complex task that requires a driver’s full attention.”

The federal safety investigation comes just after the CHP arrested another man who authorities have said was in the back seat of a Tesla that was driving this week on Interstate 80 near Oakland with no one behind the wheel.

CHP has not said if officials have determined whether the Tesla in the I-80 incident was operating on Autopilot, which can keep a car centered in its lane and a safe distance behind vehicles in front of it.

But it’s likely that either Autopilot or “Full Self-Driving” was in operation for the driver to be in the back seat. Tesla is allowing a limited number of owners to test its self-driving system.

Tesla, which has disbanded its public relations department, did not respond Friday to an email seeking comment. The company says in owner’s manuals and on its website that both Autopilot and “Full Self-Driving” are not fully autonomous and that drivers must pay attention and be ready to intervene at any time.

Autopilot at times has had trouble dealing with stationary objects and traffic crossing in front of Teslas.

In two Florida crashes, from 2016 and 2019, cars with Autopilot in use drove beneath crossing tractor-trailers, killing the men driving the Teslas. In a 2018 crash in Mountain View, California, an Apple engineer driving on Autopilot was killed when his Tesla struck a highway barrier.

Tesla’s system, which uses cameras, radar and short-range sonar, also has trouble handling stopped emergency vehicles. Teslas have struck several firetrucks and police vehicles that were stopped on freeways with their flashing emergency lights on.

For example, the National Highway Traffic Safety Administration in March sent a team to investigate after a Tesla on Autopilot ran into a Michigan State Police vehicle on Interstate 96 near Lansing. Neither the trooper nor the 22-year-old Tesla driver was injured, police said.

After the Florida and California fatal crashes, the National Transportation Safety Board recommended that Tesla develop a stronger system to ensure drivers are paying attention, and that it limit use of Autopilot to highways where it can work effectively. Neither Tesla nor the safety agency took action.

In a Feb. 1 letter to the U.S. Department of Transportation, NTSB Chairman Robert Sumwalt urged the department to enact regulations governing driver-assist systems such as Autopilot, as well as testing of autonomous vehicles. NHTSA has relied mainly on voluntary guidelines for the vehicles, taking a hands-off approach so it won’t hinder development of new safety technology.

Sumwalt said that Tesla is using people who have bought the cars to test “Full Self-Driving” software on public roads with limited oversight or reporting requirements.

“Because NHTSA has put in place no requirements, manufacturers can operate and test vehicles virtually anywhere, even if the location exceeds the AV (autonomous vehicle) control system’s limitations,” Sumwalt wrote.

He added: “Although Tesla includes a disclaimer that ‘currently enabled features require active driver supervision and do not make the vehicle autonomous,’ NHTSA’s hands-off approach to oversight of AV testing poses a potential risk to motorists and other road users.”

NHTSA, which has authority to regulate automated driving systems and seek recalls if necessary, seems to have developed a renewed interest in the systems since President Joe Biden took office.

