
Ethical Tech

How Doomscrolling shaped the path of the future


COVID-19 arrived, and with it came the reshaping of human habits. We went from constantly checking our phones for entertainment to endlessly scrolling for news updates, whether pandemic-related, political, or any other news that promised greater clarity about where the world was heading.

November 19 will forever be remembered as the day the world’s dynamic drastically shifted into a new norm. From then on, our days began just as they ended: clutching our phones, scrolling through our favorite social media platforms in search of some reassurance that the world was not going to obliterate itself.

While that may seem like a far-fetched scenario, one cannot deny that, at the time, the shock of the pandemic left humanity in a state of limbo, living but not living.

And who capitalized on this? Tech companies, and specifically social media platforms.

From the minute we opened our eyes, our attention was focused on informing ourselves, seeking some glimpse of logical clarity through social media. That is when Facebook, Instagram, Twitter, and other platforms emerged as substitutes for traditional news outlets, offering a more direct source of information.

And without realizing it, we, as users, gave birth to a new phenomenon: “doomscrolling.”

Sitting at home, in our living rooms or bedrooms, with our families or alone, one companion never left our side: our smartphones.

Doomscrolling has been described as “falling into deep, morbid rabbit holes filled with coronavirus content, agitating myself to the point of physical discomfort, erasing any hope of a good night’s sleep.”

Typically, everyone has a late-night scroll, those last 15 to 30 minutes of lying in bed and browsing a favorite social networking platform for some unconventional relief before calling it quits for the day. The difference between that scroll and doomscrolling lies in the content recommended to us: content that feeds our curiosity and triggers a psychological reaction.

The constant observation of the world collapsing around us has left individuals in a state of uncontrollable dread, as users endlessly seek news about COVID-19 deaths, unemployment, climate change, racial injustice, and much more.

Nicole Ellison, who studies communication and social media at the University of Michigan’s School of Information, notes that while these platforms provide endless facts, constantly changing and being updated, none actually offers a solution to the problems presented through them.

So, where does that leave us? In a state of cognitive processing, our minds trying their best to produce some sort of analysis that helps us comprehend what is happening.

But that never comes to fruition.

Now, many people are questioning the benefit of platforms such as Instagram, Facebook, and Twitter if they do not provide answers but only raise hypotheses in our minds.

Various studies have highlighted that while social media can have detrimental effects on humans, it can also trigger positive brain responses. But one must not forget that these platforms also play a fundamental role in triggering feelings of anxiety and depression.

“In a situation like that, we engage in these more narrow immediate survival-oriented behaviors. We’re in a fight-or-flight mode,” Ellison told Wired.

“Combine that with the fact that, socially, many of us are not going into work and standing around the coffee maker engaging in collective sense-making, and the result is we don’t have a lot of those social resources available to us in the same way.”

One thing that needs to be taken into consideration, though, is that as humans we have a much greater tendency to focus on the negative than on the positive. While these platforms fill our feeds with bad news, we must not forget that these recommendations come from an algorithm that learns from users’ behavior.

The only reason this content is presented to users is that they focused their attention on this type of content, triggering recommendations of a similar nature.
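This feedback loop can be sketched as a toy engagement-based ranker. The topic labels and scoring below are purely illustrative assumptions, not any platform’s actual algorithm:

```python
from collections import Counter

def rank_feed(candidates, engagement_history):
    """Rank candidate posts by how often the user previously
    engaged with each post's topic (toy model of a feedback loop)."""
    topic_counts = Counter(post["topic"] for post in engagement_history)
    return sorted(candidates,
                  key=lambda post: topic_counts[post["topic"]],
                  reverse=True)
```

The more a user taps on one kind of story, the higher that kind ranks the next time, which is exactly the self-reinforcing loop described above.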

“Since the 1970s, we have known of the ‘mean world syndrome’ – the belief that the world is a more dangerous place to live in than it actually is – as a result of long-term exposure to violence-related content on television,” research scientist Mesfin Bekalu said.

“So, doomscrolling can lead to the same long-term effects on mental health unless we mount interventions that address users’ behavior and guide the design of social media platforms in ways that improve mental health and well-being,” he added.

While the effect of doomscrolling depends primarily on the content presented on the device, it also depends on who is interacting with that content. Given our overwhelming attachment to our devices, each person’s personality shapes how doomscrolling affects them.

Some have already recognized the issue and stepped back from the behavior, saying they will not indulge in an activity that hurts them through a tiny device.

While social media has helped the world stay connected, building strong associations between people from all around the globe, it has also become a means for people to highlight the injustices happening in the world.

From the Black Lives Matter movement, triggered by the tragic murder of George Floyd, to the siege on Gaza, people are now informed about what is happening in the world. But the societal awakening these platforms deliver comes at the expense of users’ mental health, and it can be very draining.

The compulsion began with the onset of the pandemic and has magnified in recent months, as humanity tried to find a coping mechanism at a time when the existing ones had been stripped away by confinement.

Knowledge is power, granted. But what if this knowledge is one of the primary triggers dooming our mental state? Being informed can enslave people, sparking mental health issues a person is not ready to deal with or does not even know they are suffering from. The weight of tragedy can quickly overwhelm someone, and it serves no purpose. The past two years were challenging, but to prevent mental burnout, people must be aware of what is causing the problem.

And here comes the responsibility of social media platforms in controlling, or rather reforming, the kind of content they deliver to their users. One thing is clear: tech companies in the social networking business will not impose the right reforms on themselves, so regulatory approaches are sorely needed.

Ethical Tech

January 6th Committee subpoenas tech giants over Capitol riot


The House committee assigned to investigate the January 6th Capitol riot has issued subpoenas to four tech giants concerning their platforms’ role in fueling the attack and its causes.

Alphabet Inc., Meta Platforms Inc., Reddit Inc., and Twitter Inc. were subpoenaed by the committee following what it considered insufficient collaboration in providing adequate answers about the events that took place on January 6th, 2021.

The committee is requesting a fuller, more accurate flow of information from the tech giants, requiring official records connected to the effect of misinformation on the 2020 elections, domestic extremism, and foreign influence in the elections, according to The Wall Street Journal.

“Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps – if any – social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence,” said Representative Bennie Thompson (D-Miss.), the committee’s chairman.

“It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,” he added. 

The reason behind the investigation is mainly to assess the role these social networking platforms play in shaping public opinion and in enforcing specific political narratives.

The Committee’s attention will be directed primarily toward the role the Big Tech titans played in efforts to overturn the 2020 election results, and toward establishing which companies may have been aware of the spread of extremism and falsehoods that led to the attack on the Capitol.

In a letter sent to Google’s parent company, Alphabet Inc., Thompson revealed that the conglomerate’s video platform was used as a means to communicate plans concerning the riot, highlighting the platform’s role in enabling the spread of misinformation before the elections.

“We’ve been actively cooperating with the Select Committee since they started their investigation, responding substantially to their requests for documents, and are committed to working with Congress through this process,” Alphabet said in its statement.

In parallel, in another letter sent to Facebook and Instagram’s parent company, Meta Platforms Inc., Thompson referenced public records showing that its social networking platforms were heavily relied on to propagate messages of violence and were used to rally individuals to question the election’s outcome.

“Meta has produced documents to the committee on a schedule committee staff requested – and we will continue to do so,” Andy Stone, a Meta spokesman, said in a statement.

As for Reddit, the social news aggregator’s spokesperson expressed the company’s readiness to accommodate the Committee’s demands after receiving the subpoena.

Twitter declined to release an official statement or comment on the matter.

The latest round of subpoenas came a day after the committee asked House Republican leader Kevin McCarthy of California to voluntarily deliver documentation of conversations he had with former President Donald Trump before, during, and after the January 6th Capitol events.

McCarthy refused to submit the documentation, claiming the demand was politically motivated.


Ethical Tech

French regulatory authority fines Google, Facebook over cookie tracking


On Tuesday, Alphabet’s unit Google was hit with a $169 million fine by France’s data privacy watchdog, the Commission Nationale de l’Informatique et des Libertés (CNIL), for making it harder for users to decline cookies – online trackers – than to accept them.

Facebook’s parent company, Meta Platforms Inc., was also caught in the regulatory crossfire, fined $67.82 million for a similar reason, according to the Commission’s statement.

“In April 2021, the CNIL conducted an online investigation on this website and found that, while it offers a button to accept cookies immediately, it does not offer an equivalent solution (button or other) enabling the user to refuse the deposit of cookies as easily,” according to the regulatory document.

“Several clicks are required to refuse all cookies, as opposed to a single one to accept them. The CNIL also noted that the button allowing the user to refuse cookies is located at the bottom of the second window and is entitled ‘Accept Cookies,’” it added.

The tech giants were given a deadline of three months to alter their cookie policies in the country.

In the search engine’s case, the CNIL found that Alphabet’s sites, including YouTube, present the same issue as Facebook’s: users can easily accept all cookies with one click, but they must go through various menu items to refuse them.

This showcases that the company is intentionally steering users toward what is most beneficial to it.

The European Union (EU) and the CNIL consider the use of cookies one of the most significant elements on which to build the framework for their data privacy regulation.

To tech companies, on the other hand, cookies are considered the key pillar that helps them develop accurately targeted digital ad campaigns.

“When you accept cookies, it’s done in just one click,” said CNIL’s head for data protection and sanctions, Karin Kiefer, in a statement.

“Rejecting cookies should be as easy as accepting them,” she added.

Under EU law, when users submit their data online, it must happen of their own free will and with complete understanding of the decision. In this case, the French regulatory authority judged Facebook and Google’s behavior as trickery, misleading citizens for the companies’ own benefit by forcefully deploying what are referred to as “dark patterns.”

Dark patterns are user interfaces that push users into accepting a policy or agreeing to install cookies – a decision they would not typically make. In this case, for consumers to withhold their consent, they have to exit the page entirely.

This presents a direct breach of EU law, given that it coerces users’ consent.

In the event these tech giants fail – or refuse – to comply with the regulator’s demand, they risk a daily fine of almost $113,000.

The demand also covers Google and Facebook’s responsibility to provide French users with more straightforward tools to decline cookies and to secure consumers’ consent. In parallel, the CNIL stated that both tech giants must make refusing cookies as simple as accepting them.

“People trust us to respect their right to privacy and keep them safe. We understand our responsibility to protect that trust and are committing to further changes and active work with the CNIL in light of this decision,” a Google spokesperson said in a statement.

As for Facebook, the social networking mogul declined to comment on the matter.


Ethical Tech

Apple removes CSAM tool from blog, says plans did not change


Apple has discreetly removed all content related to its child sexual abuse material (CSAM) tool from its blog, following harsh criticism from various organizations and individuals. 

First presented in August, the CSAM tool is designed to detect and expose abusive images on iPhones and iPads. The safety measure incorporates several approaches to identifying such content: scanning users’ iCloud Photos libraries for known CSAM, a communication safety feature that alerts children and their parents when sexually explicit images are received or sent, and expanded CSAM guidance in Siri.

Once the announcement was publicized, the feature took heavy backlash from experts, particularly university and security researchers, as well as privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook’s former security chief Alex Stamos, policy groups, and politicians.

While numerous companies, individuals, and organizations vocally voiced their disgruntlement with the detection feature, the most prominent opposition came from Apple employees, who saw it as a direct breach of users’ privacy rights, even if those users were children.

The controversy surrounding the CSAM tool arose from the fact that the company would be extracting hashes of users’ iCloud Photos and comparing them against a database of known hashes of sexual abuse imagery.
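At its core, this kind of detection is a set lookup: hash the image, then check the hash against a known database. The sketch below is only an illustration of that matching idea; it uses a plain SHA-256 digest, whereas Apple’s actual design relies on a perceptual “NeuralHash” and cryptographic private set intersection:

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a real system must match
    # visually similar images, which a cryptographic hash cannot do.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes, known_hashes: set) -> bool:
    """Return True if the image's hash appears in the known database."""
    return image_hash(image_bytes) in known_hashes
```

Because only hashes are compared, the system never needs to look at photo content directly, which is the privacy argument Apple made for the design.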

According to MacRumors, the blog alteration happened between December 10 and 13. While some key information was removed from the iOS developer’s blog, two of the three detection features remained.

The first is designed to warn children when they receive photos containing nudity via SMS or iMessage. The second delivers supplementary information whenever a user searches for child exploitation content via Siri, Spotlight, or Safari Search.

Released earlier this week with the latest iOS 15.2 update, “Expanded Protections for Children” demonstrated a much more controversial approach to detecting CSAM.

“Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” the tech giant stated back in September. 

Now, Apple spokesperson Shane Bauer has explained that the company’s stance on the matter remains the same as in its initial announcement postponing the release of the CSAM tool.

It is worth highlighting that the iPhone parent’s statement did not announce a complete cancellation of CSAM detection, given that descriptions of its functionality are still available on the site.

To Apple, this method provides the means to report users to the authorities, by checking whether individuals are known to possess such photos, without being forced to jeopardize its customers’ privacy.

In parallel, the tech titan also said user data encryption is not affected, since the analysis runs on users’ devices.

One thing is certain, though: once the CSAM tool’s launch date is announced, Apple will have provided all three child-protection features, each serving a different purpose.
