U.S. Copyright Office endorses right to repair

The U.S. Copyright Office has strengthened the legal case for repairing digital devices by recommending its latest round of exemptions to the Digital Millennium Copyright Act’s (DMCA) anti-circumvention rules, which otherwise make it illegal to break software copy protection.

Section 1201 anti-circumvention exemptions are recommended by the Register of Copyrights every three years.

The triennial process grants legal protections on several fronts, from unlocking cellphones to ripping DVD videos for classroom use. This year’s renewal drew attention for the repair-focused proposals submitted by the Electronic Frontier Foundation, iFixit, and other organizations.

The DMCA makes it unlawful to circumvent technological measures used to prevent unauthorized access to copyrighted works, such as books, movies, video games, and computer software.

“The petitions did a good job of showing commonalities across different types of devices,” Kevin Amer, the Office’s Acting General Counsel, told reporters.

“We are also aware of some of the efforts that the executive branch has undertaken in this area. We think that this exemption will be useful and will help to facilitate that type of activity,” he added.

On another front, various government agencies, along with federal and state lawmakers, have introduced their own policies to defend the right to repair.

The Federal Trade Commission (FTC) has pledged to counter unlawful business practices that prevent independent repair shops from doing their work. While the DMCA does not address those practices, lawmakers are focusing on removing the legal obstacles that device manufacturers impose.

The new Section 1201 ruling also includes exemptions for converting videos into accessible formats for individuals with disabilities, allowing captions to be added without first requesting a subtitled version.

In parallel, the ruling expands a 2015 exemption covering access to medical device data, extending it to devices that are not implanted in the body and allowing authorized third parties to access the data on patients’ behalf.

Even though the Librarian of Congress’ Section 1201 process is controversial in concept, it remains the main avenue for relief from the digital rights management software that locks down copyrighted media. The changes in this round of exemptions mean that educators, security researchers, and repair technicians will all receive some welcome support.

The Library of Congress, the country’s oldest federal cultural institution, accepted the proposals in its latest ruling, which takes effect on October 28.

Daryn is a technical writer with extensive experience in both academic and digital writing.

Report: ‘Whole of society’ effort must fight misinformation

Misinformation is jeopardizing efforts to solve some of humanity’s greatest challenges, be it climate change, COVID-19 or political polarization, according to a new report from the Aspen Institute that’s backed by prominent voices in media and cybersecurity.

Recommendations in the 80-page analysis, published Monday, call for new regulations on social media platforms; stronger, more consistent rules for misinformation “superspreaders” who amplify harmful falsehoods; and new investments in authoritative journalism and organizations that teach critical thinking and media literacy.

The report is the product of the Aspen Institute’s Commission on Information Disorder, a 16-person panel that includes experts on the internet and misinformation, as well as prominent names such as Prince Harry, the duke of Sussex.

“Hundreds of millions of people pay the price, every single day, for a world disordered by lies,” reads the report’s introduction, written by the commission’s three co-chairs: journalist Katie Couric, former White House cybersecurity official Christopher Krebs and Rashad Robinson, president of the organization Color of Change.

Specifically, the report calls for a national strategy for confronting misinformation, and urges lawmakers to consider laws that would make social media platforms more transparent and accountable — to officials, researchers and consumers.

Another recommendation would strip some of the platforms’ legal immunity when it comes to content promoted by ads, or for lawsuits regarding the implementation of their platform’s designs and features.

The authors of the report blame the proliferation of misinformation on factors including the rapid growth of social media, a decline in traditional local journalism and a loss of trust in institutions.

Falsehoods can prove deadly, as shown by the conspiracy theories and bogus claims about COVID-19 and vaccines that have set back attempts to stop the coronavirus. The report’s authors said misinformation is proving just as damaging when it comes to faith in elections or efforts to fight climate change.

During a briefing on the report’s findings Monday, Couric, Krebs and Robinson stressed that every American has a role to play in fighting misinformation, by reviewing where they get their information, by ensuring that they don’t spread harmful falsehoods, and by fighting the polarization that fuels misinformation.

“The path to making real change is going to require all of us,” Robinson said.

The Aspen Institute has shared its findings with several social media platforms including Facebook. A message seeking a response from that company was not immediately returned on Monday.

The Aspen Institute is a nonpartisan nonprofit based in Washington, D.C. The report was funded by Craig Newmark Philanthropies, a charity founded by the creator of Craigslist.

Meta platforms struggle to enforce hate speech rules, report shows

Meta on Wednesday released its latest statistics on bullying, hate speech, and harassment across its platforms, as public scrutiny of the social network intensifies.

The data, published in the Big Tech giant’s latest quarterly transparency report, arrives as Facebook’s parent company faces mounting questions about its ability to safeguard its user base and enforce its own policies.

“As a social technology company, helping people feel safe to connect and engage with others is central to what we do. But one form of abuse that has unfortunately always existed when people interact with each other is bullying and harassment,” the blog post stated.

The post added that while this challenge is not unique to social media, the company gives people tools to protect themselves and also measures how it is doing.

The report marks the first time Facebook has shared “prevalence” metrics for bullying and harassment, showing how the social network uses sampling to estimate how much violating content slips past its systems.

According to the report, the prevalence of bullying content ranged from 0.14 to 0.15 percent on Facebook and from 0.05 to 0.06 percent on Instagram, as noted by Engadget.

“This means bullying and harassment content was seen between 14 and 15 times per every 10,000 views of content on Facebook and between 5 and 6 times per 10,000 views for content on Instagram,” the post added.
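
Meta’s “prevalence” metric is a simple ratio: the share of sampled content views that contain violating material, scaled to a base of 10,000 views. The short Python sketch below reproduces the arithmetic behind the figures quoted above; the function name and data layout are illustrative and are not drawn from Meta’s report.

```python
def prevalence_to_views(prevalence_pct: float, per: int = 10_000) -> float:
    # A prevalence percentage is a fraction of all content views,
    # so scaling it by the base gives views containing violations.
    return prevalence_pct / 100 * per

# Prevalence ranges quoted in the report (percent of content views)
ranges = {
    "Facebook bullying": (0.14, 0.15),
    "Instagram bullying": (0.05, 0.06),
}

for label, (low, high) in ranges.items():
    low_views = prevalence_to_views(low)
    high_views = prevalence_to_views(high)
    print(f"{label}: {low_views:.0f} to {high_views:.0f} views per 10,000")
# Facebook bullying: 14 to 15 views per 10,000
# Instagram bullying: 5 to 6 views per 10,000
```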

The report follows The Wall Street Journal’s Facebook Files, a series of investigations documenting how the company’s platforms harm users, particularly teens.

The documents revealed that the tech titan knew about the damaging effects, especially on teens’ mental well-being, but did little to counteract the issues emerging from its platforms.

This underscored the gap between how Facebook presents itself to the public and the effort it actually puts into reducing the harm it inflicts on its user base.

According to Facebook whistleblower Frances Haugen, the Menlo Park-based firm is simply unable to keep up with the volume of hate speech on its platform; she says the company acts on only three to five percent of it. Most violating content therefore goes unchecked, polluting users’ News Feeds.

In its defense, the company has done everything it can to deflect public anger over its treatment of users, and the latest prevalence report can be read as one more tactic to polish its public image.

Yet if the social network’s own research team proved anything to the world, it is that Facebook cannot manage the ever-growing damage it has caused at a societal level.

That is because the platform’s automated systems cannot be relied on to identify harmful content, such as bullying or posts promoting body-image issues, especially when the content is not in English.

While the tech giant reported that hate speech prevalence has fallen quarter over quarter, from 0.05 percent in the second quarter to 0.03 percent in the third, that progress remains modest against the sheer volume of hate speech generated on both platforms.

Experts doubt that Meta, the new identity Facebook hopes will refresh its image, has the capacity to rebuild its ecosystem to meet the public’s and regulators’ demands for more ethical products while it continues to prioritize popularity, supremacy, and wealth over users’ well-being.

Meta to remove ad targeting options tied to sensitive content

Facebook and Instagram’s parent company, Meta, announced Tuesday that it will remove ad targeting based on sensitive content, prohibiting advertisers from using detailed targeting options tied to users’ engagement with sensitive topics.

In a post on the Meta for Business blog, the recently rebranded company said it is seeking to rein in advertisers by blocking targeting options tied to topics such as race or ethnicity, religious views, political beliefs, sexual orientation, and health.

“We’ve heard concerns from experts that targeting options like these could be used in ways that lead to negative experiences for people in underrepresented groups,” Meta’s vice president for marketing and ads, Graham Mudd, wrote in the post.

The removal of detailed targeting based on sensitive topics takes effect on January 19, 2022. The move will reshape Meta’s advertising business, which accounts for approximately 98 percent of its global revenue.

According to Mudd, the unsurprising change of tactics for the Menlo Park company stems from concerns voiced by civil rights experts and lawmakers, who warned that a significant number of advertisers were exploiting the platform’s targeting options unethically.

Notably, these targeting options draw on users’ interactions with content across Meta’s products, including Facebook, Instagram, and Messenger.

“The decision to remove these Detailed Targeting options was not easy and we know this change may negatively impact some businesses and organizations,” Mudd explained.

“Some of our advertising partners have expressed concerns about these targeting options going away because of their ability to help generate positive societal change, while others understand the decision to remove them,” he added.

Meanwhile, digital ad-buying specialists told The New York Times that Meta’s latest maneuver will hurt both for-profit and nonprofit groups that depend on ad targeting to raise money.

The conglomerate plans to remove a range of its “sensitive” detailed targeting options, having been forced to pull controversial categories in the past.

Those earlier removals followed reports that Facebook had armed advertisers with the means to discriminate against specific demographic groups or provoke violence through its network.

In the past, advertisers could direct their ads at radical interest groups, such as anti-Semitic categories and pseudoscience, statements or beliefs dressed up as scientific fact despite being incompatible with the scientific method.

Meta’s latest adjustment to its sensitive-content ad policy does not stop advertisers from addressing specific audiences on its platform altogether; it simply restructures the framework to keep ad delivery safe for users and advertisers alike.

The behemoth’s marketing vice president noted that even though Meta will remove detailed ad targeting, advertisers can still use a feature called “Engagement Custom Audiences,” which helps them reach users who have already shown interest in their page.

In parallel, advertisers can also build on those audiences with the Lookalike Audience feature.

“A lookalike audience is a way your ads can reach new people who are likely to be interested in your business because they share similar characteristics to your existing customer,” the blog post explained.

Meta also said it will give its user base more control over the ads they see, allowing them to reduce ads about certain topics, including gambling and weight loss.
