Ethical Tech

Apple to scan U.S. iPhones for images of child sexual abuse

Apple unveiled plans to scan U.S. iPhones for images of child sexual abuse, drawing applause from child protection groups but raising concern among some security researchers that the system could be misused, including by governments looking to surveil their citizens.

The tool, called “neuralMatch,” is designed to detect known images of child sexual abuse and will scan images before they are uploaded to iCloud. If it finds a match, the image will be reviewed by a human. If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children notified.

Separately, Apple plans to scan users’ encrypted messages for sexually explicit content as a child safety measure, which also alarmed privacy advocates.

The detection system will only flag images that are already in the center’s database of known child pornography. Parents snapping innocent photos of a child in the bath presumably need not worry. But researchers say the matching tool — which doesn’t “see” such images, just mathematical “fingerprints” that represent them — could be put to more nefarious purposes.
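The matching approach described above can be sketched in a few lines: compute a “fingerprint” of each image and check it against a database of known fingerprints, without ever storing or viewing the images themselves. This is a simplified illustration with hypothetical names (`KNOWN_FINGERPRINTS`, `flag_for_review`); real systems such as PhotoDNA and Apple’s NeuralHash use perceptual hashes that tolerate resizing and re-encoding, whereas the cryptographic hash below matches only byte-identical files.

```python
import hashlib

# Hypothetical database of known-image fingerprints (hex digests).
# In deployed systems this would hold perceptual hashes supplied by
# the National Center for Missing and Exploited Children.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_for_review(image_bytes: bytes) -> bool:
    """Return True if the image matches a known fingerprint.

    A flagged image would then go to human review before any
    enforcement action is taken.
    """
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS

flag_for_review(b"known-image-bytes")    # matches the database
flag_for_review(b"holiday-photo-bytes")  # no match, nothing flagged
```

The design choice critics highlight is visible here: whoever controls the contents of the fingerprint database controls what gets flagged.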

Matthew Green, a top cryptography researcher at Johns Hopkins University, warned that the system could be used to frame innocent people by sending them seemingly innocuous images designed to trigger matches for child pornography. That could fool Apple’s algorithm and alert law enforcement. “Researchers have been able to do this pretty easily,” he said of the ability to trick such systems.

Other abuses could include government surveillance of dissidents or protesters. “What happens when the Chinese government says, ‘Here is a list of files that we want you to scan for,'” Green asked. “Does Apple say no? I hope they say no, but their technology won’t say no.”

Tech companies including Microsoft, Google, Facebook and others have for years been sharing digital fingerprints of known child sexual abuse images. Apple has used those to scan user files stored in its iCloud service, which is not as securely encrypted as its on-device data, for child pornography.

Apple has been under government pressure for years to allow for increased surveillance of encrypted data. Coming up with the new security measures required Apple to perform a delicate balancing act between cracking down on the exploitation of children and keeping its high-profile commitment to protecting the privacy of its users.

But a dejected Electronic Frontier Foundation, the online civil liberties pioneer, called Apple’s compromise on privacy protections “a shocking about-face for users who have relied on the company’s leadership in privacy and security.”

Meanwhile, the computer scientist who more than a decade ago invented PhotoDNA, the technology used by law enforcement to identify child pornography online, acknowledged the potential for abuse of Apple’s system but said it was far outweighed by the imperative of battling child sexual abuse.

“Is it possible? Of course. But is it something that I’m concerned about? No,” said Hany Farid, a researcher at the University of California at Berkeley, who argues that plenty of other programs designed to secure devices from various threats haven’t seen “this type of mission creep.” For example, WhatsApp provides users with end-to-end encryption to protect their privacy, but also employs a system for detecting malware and warning users not to click on harmful links.

Apple was one of the first major companies to embrace “end-to-end” encryption, in which messages are scrambled so that only their senders and recipients can read them. Law enforcement, however, has long pressured the company for access to that information in order to investigate crimes such as terrorism or child sexual exploitation.

Apple said the latest changes will roll out this year as part of updates to its operating software for iPhones, Macs and Apple Watches.

“Apple’s expanded protection for children is a game changer,” John Clark, the president and CEO of the National Center for Missing and Exploited Children, said in a statement. “With so many people using Apple products, these new safety measures have lifesaving potential for children.”

Julia Cordua, the CEO of Thorn, said that Apple’s technology balances “the need for privacy with digital safety for children.” Thorn, a nonprofit founded by Demi Moore and Ashton Kutcher, uses technology to help protect children from sexual abuse by identifying victims and working with tech platforms.

But in a blistering critique, the Washington-based nonprofit Center for Democracy and Technology called on Apple to abandon the changes, which it said effectively destroy the company’s guarantee of “end-to-end encryption.” Scanning of messages for sexually explicit content on phones or computers effectively breaks the security, it said.

The organization also questioned Apple’s technology for differentiating between dangerous content and something as tame as art or a meme. Such technologies are notoriously error-prone, CDT said in an emailed statement. Apple denies that the changes amount to a backdoor that degrades its encryption. It says they are carefully considered innovations that do not disturb user privacy but rather strongly protect it.

Separately, Apple said its messaging app will use on-device machine learning to identify and blur sexually explicit photos on children’s phones and can also warn the parents of younger children via text message. It also said that its software would “intervene” when users try to search for topics related to child sexual abuse.

In order to receive the warnings about sexually explicit images on their children’s devices, parents will have to enroll their child’s phone. Kids over 13 can unenroll, meaning parents of teenagers won’t get notifications.

Apple said neither feature would compromise the security of private communications or notify police.


January 6th Committee subpoenas tech giants over Capitol riot



The House committee investigating the January 6th Capitol riot issued subpoenas to four tech giants concerning their platforms’ role in enabling the attack and its causes.

Alphabet Inc., Meta Platforms Inc., Reddit Inc., and Twitter Inc. were subpoenaed by the committee following what it considered insufficient cooperation in providing adequate answers about the events of January 6th, 2021.

The committee is requesting a fuller, more accurate flow of information from the tech giants, demanding official records concerning the effect of misinformation on the 2020 elections, domestic violent extremism, and foreign influence in the elections, according to The Wall Street Journal.

“Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps – if any – social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence,” said Representative Bennie Thompson (D-Miss.), the committee’s chairman.

“It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,” he added. 

The investigation mainly aims to assess the role these social networking platforms play in shaping public opinion and in amplifying particular political narratives.

The committee’s attention will be primarily directed toward the role the Big Tech titans played in efforts to overturn the 2020 election results, and toward establishing which companies may have been aware of the spread of extremism that led to the attack on the Capitol.

In a letter sent to Google’s parent company, Alphabet Inc., Thompson revealed that the conglomerate’s video platform was used as a means to communicate plans concerning the riot, highlighting the platform’s role in encouraging the spread of misinformation before the elections.

“We’ve been actively cooperating with the Select Committee since they started their investigation, responding substantially to their requests for documents, and are committed to working with Congress through this process,” Alphabet said in its statement.

In parallel, in another letter sent to Facebook and Instagram’s parent company, Meta Platforms Inc., Thompson referenced public reports indicating that its social networking platforms were heavily relied on to propagate messages of violence and to rally individuals to question the election’s outcome.

“Meta has produced documents to the committee on a schedule committee staff requested – and we will continue to do so,” Andy Stone, a Meta spokesman, said in a statement.

As for Reddit, a spokesperson for the social news aggregator said the company was ready to cooperate with the committee’s demands after receiving the subpoena.

Twitter declined to comment on the matter.

The latest round of subpoenas came a day after the committee asked House Republican leader Kevin McCarthy of California to voluntarily provide records of conversations he had with former President Donald Trump before, during, and after the January 6th events at the Capitol.

McCarthy refused to submit these records, saying the demand was politically motivated.



French regulatory authority fines Google, Facebook over cookie tracking



On Tuesday, Alphabet’s unit, Google, was hit with a $169 million fine by France’s data privacy watchdog, the Commission Nationale de l’Informatique et des Libertés (CNIL), for making it difficult for users to decline cookies – online trackers.

Facebook’s parent company, Meta Platforms Inc., was also caught in the regulatory crossfire, fined $67.82 million for a similar reason, according to the Commission’s statement.

“In April 2021, the CNIL conducted an online investigation on this website and found that, while it offers a button to accept cookies immediately, it does not offer an equivalent solution (button or other) enabling the user to refuse the deposit of cookies as easily,” according to the regulatory document.

“Several clicks are required to refuse all cookies, as opposed to a single one to accept them. The CNIL also noted that the button allowing the user to refuse cookies is located at the bottom of the second window and is entitled ‘Accept Cookies,’” it added.

The tech giants were given three months to alter their cookie policies in the country.

In Google’s case, the CNIL found that Alphabet’s sites, including YouTube, present the same problem as Facebook’s: users can accept all cookies with a single click, but must navigate through several menu items to refuse them.

The regulator argues this intentionally steers users toward the choice that benefits the company.

The European Union (EU) and the CNIL see cookie consent as a cornerstone of the framework on which their data privacy regulation is built.

To tech companies, on the other hand, cookies are a key pillar of precisely targeted digital ad campaigns.

“When you accept cookies, it’s done in just one click,” said CNIL’s head for data protection and sanctions, Karin Kiefer, in a statement.

“Rejecting cookies should be as easy as accepting them,” she added.

Under EU law, users must hand over their data freely and with full understanding of the decision. The French regulator judged that Facebook and Google misled users for their own benefit by deploying what are referred to as “dark patterns.”

Dark patterns are user interfaces designed to nudge users into accepting a policy or cookies – a decision they would not otherwise make. In this case, the only way for consumers to withhold consent was to leave the page entirely.

This presents a direct breach of EU law, because it coerces users’ consent.

If the tech giants fail to comply with – or disregard – the regulator’s order, they risk a daily fine of almost $113,000.

The order also requires Google and Facebook to provide French users with more straightforward tools to decline cookies and to properly secure consumers’ consent; the CNIL said refusing cookies must be as simple as accepting them.

“People trust us to respect their right to privacy and keep them safe. We understand our responsibility to protect that trust and are committing to further changes and active work with the CNIL in light of this decision,” a Google spokesperson said in a statement.

As for Facebook, the company declined to comment on the matter.



How doomscrolling shaped the path of the future




COVID-19 came, and with it came the reshaping of human habits. We went from constantly checking our phones for entertainment to endlessly scrolling for news updates – pandemic-related, political, or anything else that promised clarity about where the world was going.

November 19 – a date that will forever be remembered as the day the world’s dynamic drastically shifted toward a new norm. From then on, our days started just as they ended: clutching our phones, scrolling our favorite social media platforms, hoping for a glimpse of reassurance that the world was not about to obliterate itself.

While that may seem like a far-fetched scenario, one cannot deny that at the time, the shock of the pandemic left humanity in a state of limbo – living, but not living.

And who capitalized on this? Tech companies – specifically, social media platforms.

From the minute we opened our eyes, we sought updates and some semblance of clarity through social media. Facebook, Instagram, Twitter, and other platforms emerged as substitutes for news outlets, offering a more direct source of information.

And without realizing it, we, as users, gave rise to a new phenomenon: “doomscrolling.”

Sitting at home, in our living rooms or bedrooms, with our families or alone, one companion never left our side: our smartphones.

Doomscrolling has been described as “falling into deep, morbid rabbit holes filled with coronavirus content, agitating myself to the point of physical discomfort, erasing any hope of a good night’s sleep.”

Typically, everyone has their late-night scroll, where those last 15 to 30 minutes of lying in bed and scrolling through a favorite social networking platform bring some unconventional relief before we call it quits for the day. The difference between that scroll and doomscrolling lies in the content recommended to us – content that feeds our curiosity and triggers a psychological reaction.

The constant observation of the world collapsing around us has left individuals in a state of uncontrollable dread, as users endlessly seek news about COVID-19 deaths, unemployment, climate change, racial injustice, and much more.

Nicole Ellison, who studies communication and social media at the University of Michigan’s School of Information, notes that while these platforms provide endless facts – constantly changing and being updated – none actually offer a solution to the problems presented through them.

So, where does that leave us? In a constant state of cognitive processing, trying our best to make sense of what is happening.

But that never comes to fruition.

Now, many people are questioning the benefit of platforms such as Instagram, Facebook, and Twitter if they provide no answers and only raise more questions in our minds.

Various studies have highlighted that while social media can have detrimental effects on users, it can also trigger positive brain responses. But one must not forget that these platforms also play a fundamental role in triggering feelings of anxiety and depression.

“In a situation like that, we engage in these more narrow immediate survival-oriented behaviors. We’re in a fight-or-flight mode,” Ellison told Wired.

“Combine that with the fact that, socially, many of us are not going into work and standing around the coffee maker engaging in collective sense-making, and the result is we don’t have a lot of those social resources available to us in the same way.”

One thing to keep in mind, though, is that as humans we have a much greater tendency to focus on the negative than the positive. While these platforms surface bad news in our feeds, those recommendations come from an algorithm that learns from users’ behavior.

The only reason this content is presented to users is that they focused their attention on it, triggering recommendations of a similar nature.

“Since the 1970s, we know of the ‘mean world syndrome’ – the belief that the world is a more dangerous place to live in than it actually is – as a result of long-term exposure to violence-related content on television,” research scientist Mesfin Bekalu said.

“So, doomscrolling can lead to the same long-term effects on mental health unless we mount interventions that address users’ behavior and guide the design of social media platforms in ways that improve mental health and well-being,” he added.

While the effect of doomscrolling depends primarily on the content presented on the device, it also depends on who is interacting with that content. Given our overwhelming attachment to our devices, each person’s personality shapes how doomscrolling affects them.

Some have already recognized the issue and stepped back from the behavior, refusing to participate in an activity that hurts them through a tiny device.

While social media has helped the world stay connected, building strong associations between people from all around the globe, it has also become a means for people to highlight the injustice happening in the world.

From the Black Lives Matter movement sparked by the tragic murder of George Floyd to the siege on Gaza, people are now informed about what is happening in the world. But the societal awakening these platforms deliver comes at the expense of users’ mental health, and it can be very draining.

The compulsion began with the onset of the pandemic and has magnified in recent months, as humanity searched for coping mechanisms at a time when the existing ones had been stripped away by confinement.

Knowledge is power, granted. But what if this knowledge is one of the primary triggers dooming our mental state? Being informed can enslave people, sparking mental health issues they are not ready to deal with – or do not even know they are suffering from. The overwhelming weight of tragedy can quickly take over, and it serves no purpose. The past two years were challenging, but to prevent mental burnout, people must be aware of what is causing the problem.

And here comes the responsibility of social media platforms in controlling, or rather reforming, the kind of content they deliver to their users. One thing is clear: social networking companies will not impose the right reforms on themselves, so regulatory approaches are sorely needed.
