
Feature Articles

It’s time to start regulating AI from within

Yehia El Amine




This year has been a roller coaster ride for the United States, from the emergence and swift spread of the Covid-19 pandemic to dreadful natural disasters and the ongoing trade war the U.S. has been waging against China.

The main headline that has dominated the news and even shaped the narrative of the U.S. presidential elections is racism. Following the killing of George Floyd due to police brutality, a massive wave of protests touched almost every city, calling for the end of racism in America.

Even during the protests, violence erupted far and wide between riot police and demonstrators due to increased tensions, which later led to local police individually targeting activists once the storm had passed.

But how? How were local police able to recognize or even target specific individuals within an ocean of people? The answer is simple: Artificial Intelligence (AI) and facial recognition.  

The malicious use of the technology was even noticed by tech giants far and wide, to the extent where companies such as Microsoft, Amazon and IBM publicly announced they would no longer allow police departments access to their facial recognition technology.

Microsoft President Brad Smith was widely quoted as saying his company wouldn’t sell facial-recognition technology to police departments in the U.S., “until we have a national law in place, grounded in human rights, that will govern this technology.”

It has become common knowledge that inventors of the tech including Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence.

What concerns these companies is AI's propensity for errors, particularly in recognizing people of color and members of other underrepresented groups.

The reality of now

According to a survey done by Capgemini, a Paris-based technology services consulting company, 65 percent of executives said they were "aware of the issue of discriminatory bias" with these systems, and several respondents said their company had been negatively impacted by its AI systems.

“Six-in-10 organizations had attracted legal scrutiny and nearly one-quarter (22 percent) have experienced consumer backlash within the last three years due to decisions reached by AI systems,” the survey highlighted.

In parallel, there is cause for concern in companies that lack employees responsible for the ethical implementation of AI systems, despite backlash, legal scrutiny, and awareness of potential bias.

“About half (53 percent) of respondents said that they had a dedicated leader responsible for overseeing AI ethics. Moreover, about half of organizations have a confidential hotline where employees and customers can raise ethical issues with AI systems,” the survey added.

Be that as it may, there are high consumer expectations with regards to AI and its accountability within companies: nearly seven in 10 expect a company's AI models to be "fair and free of prejudice and bias against me or any other person or group."

In parallel, 67 percent of customers said they expect a company to “take ownership of their AI algorithms” when these systems “go wrong.”

Learning from Singapore

Many have praised the actions and steps Singapore has taken in its adoption of artificial intelligence, as the country has played it right from the very beginning.

The country launched a reference guide for AI use within the country to maintain its ethical use on both business and consumer ends. The AI Ethics & Governance Body of Knowledge (BoK) provides a reference guide for business leaders and IT professionals on the ethical aspects related to the development as well as deployment of AI technologies.

The book, which was put together by industry group Singapore Computer Society (SCS) based on the expertise of more than 60 individuals from multi-disciplinary backgrounds, aims to aid in the "responsible, ethical, and human-centric" deployment of AI for competitive advantage.

The BoK was developed based on Singapore’s latest Model AI Governance Framework, which was updated in January 2020, and will be regularly updated as the local digital landscape evolves, SCS said during its launch.

Many experts from both the private and public sectors have hailed this step as an ideal model: it carefully outlines the positive and negative outcomes of AI adoption and examines the technology's potential to support a "safe" ecosystem when utilized properly.

The wrong side of Pandora’s Box

First things first: AI ethics do not come in a box with your order.

Take a moment to think about how many uses AI already has and will have later down the line as we edge closer to 5G.

Values vary across companies and industries, which is why data and an AI ethics program must be tailored to the specific business and the regulatory needs relevant to it.

Let's examine a few problems that have already surfaced or soon will.

1. Affecting our behavior and interactions

Naturally, AI-powered bots have become better and better at emulating human conversations and relationships.

The case can be made with a bot named Eugene Goostman.

Back in 2014, Goostman was reported to have passed a Turing test for the first time – a challenge in which humans chat via text with an unknown entity and then guess whether the entity is human or machine.

The AI successfully fooled about a third of the judges into thinking they had been talking to an actual human being.

This is somewhat scary, since this achievement is merely the beginning of an age that looks to increase interactions between humans and well-spoken machines, especially for customer service or sales.

“While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships,” a report by the World Economic Forum (WEF) highlighted.

This has already been seen in different forms, through the ability to trigger the reward centers in the brain without us being fully aware of it; just think of the myriad click-bait headlines we seamlessly scroll through.

“These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive,” WEF explained.
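The A/B testing WEF describes can be sketched in a few lines: serve each headline variant, count impressions and clicks, and keep whichever earns the higher click-through rate. The variant names and numbers below are invented for illustration; real systems also apply statistical significance tests before declaring a winner.

```python
def ab_winner(stats: dict) -> str:
    """Return the headline variant with the highest click-through rate."""
    return max(stats, key=lambda v: stats[v]["clicks"] / stats[v]["impressions"])

# Hypothetical results after showing each headline 1,000 times.
headline_stats = {
    "Headline A": {"impressions": 1000, "clicks": 48},
    "Headline B": {"impressions": 1000, "clicks": 61},
}
```

Here `ab_winner(headline_stats)` selects "Headline B", since a 6.1 percent click-through rate beats 4.8 percent – the same optimization loop, repeated at scale, is what tunes the headlines we scroll past.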

This is a big cause for concern, since tech addiction is slowly but surely becoming the new frontier of human dependency.

2. Bias in AI

AI is known to process information much faster than humans can, but that doesn't mean it's fair and neutral.

A known pioneer of AI solutions and tech is Google, the most basic example being its flagship Google Lens feature, which allows your smartphone camera to identify objects, people, locations, scenes and more.

This feature in and of itself holds great power that could be twisted in both extremes of the moral spectrum. Missing the name of an object is one thing but missing a mark on racial sensitivity is another.

An AI system that mistakes a Samsung for a OnePlus is no big deal, but software used to predict future criminals showing bias against black people is something completely different.

This is why human judgment will prove integral to how the technology should be used.

“We shouldn’t forget that AI systems are created by humans, who can be biased and judgmental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change,” the report by WEF said.

3. Guarding AI from the forces of evil

As technology evolves with a primary reason to aid humanity, there will always exist some who seek to use it to wreak havoc on others.

These fights won't take place on the battlefield but in cyberspace, making them even more damaging given their ability to reach any individual directly.

With this in mind, cybersecurity’s importance will become paramount in the battles to come; after all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

4. Keeping humanity at the wheel

Humans have always been nature's apex predators; our dominance is not merely a physical one, but one of ingenuity and ever-expanding intelligence.

We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

However, since AI is miles ahead of us when it comes to evaluating different outcomes, it poses a serious question that must be answered sooner rather than later.

Will it, one day, have the same advantage over us?

Humanity needs a fallback should this scenario develop, since simply pulling the plug won't be enough to halt an advanced machine that could anticipate the move and attempt to defend itself.

As we continue to evaluate the risks that accompany such life-changing tech, humanity must move beyond merely asking questions and begin defining and acting upon the countermeasures that assure a safer approach to the technology.


Yehia is an investigative journalist and editor with extensive experience in the news industry as well as digital content creation across the board. He strives to bring the human element to his writing.


Ways for remote workers to stop cybercriminals

Yehia El Amine




The COVID-19 pandemic has drastically changed the way humans interact with each other across the board: handshakes have given way to fist bumps, massive conferences have gone digital in the form of webinars, and, more importantly, employees have built makeshift offices within the comfort of their own homes.

According to Shefali Roy, former CCO & COO at TrueLayer, a UK-based FinTech firm, working from home has become the new norm.

"People are working longer and harder, which can be a big cause for concern with regards to employee burnout since they're on high alert at all times due to the sudden merge of workstations and home comfort," Roy said during the MoneyFest 2020 webinar.

Thus, it isn’t strange for employees to start asking their employers about their work-from-home policy.

While remote working offers safety from a physical virus, it exposes employees to threatening digital viruses. Cybercriminals have taken advantage of this shift in the workplace and have set their sights on remote employees across the board.

According to a report published by Kaspersky, there have been almost 726 million confirmed cyberattacks since the beginning of the year; "This has put 2020 on course to rack up somewhere in the region of 1.5 billion cyberattacks for the year," the report stated.

While some companies have reinforced their IT security teams to deal with threats, many others haven't, leaving a large number of businesses exposed to these breaches every day.

This leaves workers to fend for themselves against sophisticated cybercriminals intent on stealing their information and wreaking havoc on businesses.

Fret not, according to the National Cyber Security Alliance, a U.S.-based cybersecurity non-profit, there are a number of ways that can help you protect your sensitive company information while venturing out of the digital safety of the office:

  • Think before you click. Cybercriminals are taking advantage of people seeking information on COVID-19. They are distributing malware campaigns that impersonate organizations like WHO, CDC, and other reputable sources by asking you to click on links or download outbreak maps. Slow down. Don’t click. Go directly to a reputable website to access the content.
  • Lock down your login. Create long and unique passphrases for all accounts and use multi-factor authentication (MFA) wherever possible. MFA will fortify your online accounts by enabling the strongest authentication tools available, such as biometrics or a unique one-time code sent to your phone or mobile device.
  • Connect to a secure network and use a company-issued Virtual Private Network (VPN) to access any work accounts. Home routers should be updated to the most current software and secured with a lengthy, unique passphrase. Employees should not be connecting to public Wi-Fi to access work accounts unless using a VPN.
  • Separate your network so your company devices are on their own Wi-Fi network, and your personal devices are on their own.
  • Always keep devices with you or stored in a secure location when not in use. Set auto log-out if you walk away from your computer and forget to log out.
  • Limit access to the device you use for work. Only the approved user should use the device (family and friends should not access a work-issued device).
  • Use company-approved/vetted devices and applications to collaborate and complete your tasks. Don't substitute tools that have been vetted by the company's security team with your own preferred ones.
  • Update your software. Before connecting to your corporate network, be sure that all Internet-connected devices – including PCs, smartphones, and tablets – are running the most current versions of software. Updates include important changes that improve the performance and security of your devices.

Beyond arming themselves with these helpful tips to fend off cyberattacks and breaches, remote workers can also educate themselves on how to spot phishing and ransomware attempts.

There are more than a handful of hints that could flag emails as suspicious or malicious, such as:

  1. Strange requests: these emails tend to contain something out of the ordinary – perhaps an unexpected request, or one that isn't directly relevant to you. Most likely it's a typical phishing email; even if the domain appears to come from within your own organization, call the sender and ask.
  2. Generic salutations: if someone emails you without addressing you personally, chances are the sender doesn't know who you are. Best case, it's a marketing campaign; worst case, you're being targeted.
  3. Spelling errors: people usually double- and triple-check professional emails for typos and spelling errors, so finding them is 'phishy' – beware!
  4. Be wary of attachments: this is exactly how cybercriminals worm their way into computers. If the sender or email seems suspicious, chances are a virus is lying in wait in the attachment.
  5. Shady URLs: hiding or spoofing links is easy to pull off, since a URL can take you somewhere different from where the link text reads. Staying away is the best course of action, but you can always hover over the link to check whether the destination leads where you expect.
  6. You've won our competition: while these traps can obviously be spotted, people are still falling for them in 2020. Always remember: if it's too good to be true, it most likely is, so stay away.
  7. Scaremongering: A common approach used by cybercriminals is to claim something like “your account has been breached!”. This creates a sense of urgency and vulnerability and can prevent people from thinking clearly. If the claims in the email were true, would the sender really tell you in this way? Always check through a different means of communication.
  8. Change of behavior: Maybe you’ve received an email from somebody you trust such as your boss, or colleague, but the language used is different from normal. Maybe it’s too formal or informal. Maybe the email signature isn’t the normal one used. You’re probably used to the way these individuals talk to you, so if it’s not normal, something weird might be going on.
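The hover-over check from point 5 can be approximated in code: compare the host named in a link's visible text with the host the link actually points to. This is a simplified sketch – the function names are invented for the illustration, and real mail filters use far richer signals (lookalike domains, reputation lists, and so on).

```python
from urllib.parse import urlparse

def host_of(url: str):
    """Extract the hostname; tolerate bare domains like 'www.mybank.com'."""
    if "://" not in url:
        url = "http://" + url       # add a scheme so urlparse finds the netloc
    return urlparse(url).hostname

def looks_spoofed(link_text: str, href: str) -> bool:
    """Flag a link whose visible text names a different host than its target."""
    text_host = host_of(link_text.strip())
    if text_host is None or "." not in text_host:
        return False                # visible text isn't URL-like, nothing to compare
    return text_host != host_of(href)
```

For example, a link displaying `www.mybank.com` but pointing at `phish.example.net` is flagged, while a link whose text and target hosts match passes – the same comparison your eyes make when hovering over a suspicious link.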

As time passes and technologies become more advanced, so do cybercriminals, who stay up to date with the technological winds of change to find new weak points. Thus, employees who choose to stay remote have a responsibility toward their employers to remain safe online, as the damage is no longer measured on an individual level but can take down entire organizations.



The importance of IoMT security across the healthcare system

Karim Hussami




In our hyper-connected world, advancing technology in IoT is bringing promise to many systems across industry sectors.

The Internet of Medical Things (IoMT), a subset of the Internet of Things, is one of the many emerging technologies that has impacted the healthcare system and our lives.

Hospitals and medical centers depend on smart devices that let doctors monitor their patients and their conditions quickly and efficiently. In addition, these devices offer more precise analysis and earlier recognition of medical issues thanks to the flow of information.

According to a report published by Deloitte, “Hospitals in the U.S. have an average of 15 smart medical devices per bed, while the IoMT market is expected to reach $52 billion by 2022.”

Security risks for smart devices

IoMT, like any other technology, is subject to security risks such as cyberattacks. Malicious activity targeting medical institutions has increased in recent years, causing major disruption to the healthcare system and financial losses, and lowering patients' confidence in healthcare.

For example, hackers disabled computer systems at Düsseldorf University Hospital in Germany last September, leading to the death of a patient while doctors attempted to transfer her to another hospital. The ransomware attack scrambled data, making the hospital's computer systems inoperable.

Arne Schönbohm, president of Germany's Federal Office for Information Security (BSI), said hackers took advantage of a well-known vulnerability in a piece of VPN (virtual private network) software developed by Citrix, and warned other organizations to protect themselves from the flaw.

The need to implement robust IoMT security solutions in the medical industry has never been greater. Encryption and secure boot – making sure that when a device is turned on, none of its configurations have been modified – are some of the basic yet fundamental security measures providers and manufacturers of IoT devices can take.
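At its core, the secure boot described above is an integrity check: before running, the device hashes its firmware or configuration and compares the result against a digest recorded at manufacturing time. The sketch below shows that check in miniature – the image bytes and digest are hypothetical, and a real implementation anchors trust in hardware and cryptographic signatures rather than a stored hash alone.

```python
import hashlib
import hmac

# Hypothetical "known good" digest recorded when the device was provisioned.
KNOWN_GOOD = hashlib.sha256(b"trusted-firmware-image").hexdigest()

def verify_image(image: bytes, expected_hex: str) -> bool:
    """Allow boot only if the image hashes to the expected digest."""
    digest = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(digest, expected_hex)  # constant-time comparison
```

Any modification to the image – even a single flipped byte – produces a different digest and fails the check, which is what lets the device refuse to start with tampered configuration.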

Other important security measures:

  • A defense strategy should be put in place and implemented with multiple layers of security available to protect against any risk. Make sure that authentication is properly followed, device access is limited, and device-to-device communication is monitored carefully.
  • The IoT device should be tested before it is put into production, and its security should be monitored throughout its life cycle to minimize vulnerabilities. Security measures should be incorporated into the device's design, such as conducting a risk assessment before it is released to the market, and authentication measures should be built into the device.
  • Create an environment that teaches a culture of security, where the IT department can inform employees about issues and the dangers they pose to the systems or company they work for. In addition, conducting regular training sessions to recognize vulnerabilities, cyber threats, risks and anomalies will speed up breach response.

Cyberattacks will never simply vanish. No matter the level of precautions we take, there will always be a degree of risk, but making sure devices are secure and teams are vigilant and prepared may help reduce the overall disruption caused by cybercrime.



Taiwan: plans that will enable Fintech firms to access more customer data

Karim Hussami




An open database of information helps enterprises understand people's needs and preferences, giving companies a chance to improve the quality of their products and services and to cultivate new ones.

The Joint Credit Information Center (JCIC) in Taiwan is planning to establish a database for local financial technology firms to obtain information on consumers’ credit risk information, the Financial Supervisory Commission (FSC) reported.

One of the ways financial service providers deliver innovative services is by adopting new technology. This led Taiwan's financial industry to spend over $700 million on FinTech R&D and solutions in 2017, in areas including AI, AML, biometrics, blockchain, cloud services, cybersecurity, data analytics, and payments, among other tech initiatives.

More info, better service

Taiwan’s information technology infrastructure is well-developed, with 90% 4G penetration and 80% mobile penetration, according to the International Trade Administration. “Taiwan is a strong market for e-commerce, online entertainment, mobile payment, and other technology-driven services.”

According to FSC Banking Bureau, electronic payment users exceeded eight million people in April 2020.

Accordingly, information about consumers is crucial to a company's business, continuity, and success, which is why data sharing is essential to progress.

After fintech companies met with Taiwan's Financial Supervisory Commission (FSC) Chairman Thomas Huang in June 2020, suggestions circulated during the discussion that the center should make its data accessible to fintech firms, since the type of information it provides could help with developing various financial products and services.

As plans go ahead, the database would be launched in October 2021, according to the Banking Bureau, adding that fintech companies could also use the National Development Council’s open data service.

According to the Taipei Times, 426 financial-sector companies to date – including local banks, securities firms, credit cooperatives, insurance providers, and credit card issuers – have benefitted from JCIC's raw data; fintech enterprises are currently not among them.

Consumer approval before gaining access

Accessing information related to consumers is not as simple as one might think because it depends on customer approval and whether they agree to share their personal preferences online for a specific service.

Banking Bureau Chief Secretary Phil Tong said, “With consumers’ approval, the agency (JCIC) would provide their lending and repayment data to the companies, including how much money they have borrowed, what kind of loans they have taken and whether they have repaid on time.”

According to sources, the new database will not include consumers' raw data and will follow personal data protection rules. The JCIC doesn't share customers' actual financial records.

Obviously, the new normal in business practice is for companies to obtain information about their customers, whether by their own efforts or by the help of a third party. Today, data enables growth.
