Surge in Harmful Content on Meta Platforms After Policy Changes, Survey Finds

A recent survey has raised alarms about a significant increase in harmful content, including hate speech, across Meta's platforms like Facebook, Instagram, and Threads. The spike follows major changes Meta made to its content moderation policies earlier this year, particularly in the United States. In January, the company ended its partnerships with third-party fact-checkers in the U.S. and shifted to a system called "Community Notes," in which ordinary users are tasked with identifying and debunking false information. This approach, similar to the one used by X, has drawn criticism for being less effective at curbing misinformation.

The decision to move away from professional fact-checking has been viewed by some as an effort to align with the priorities of the current U.S. administration, which has voiced concerns about tech platforms limiting free speech, particularly for conservative viewpoints. Alongside this shift, Meta also relaxed its rules around certain topics, allowing users to make derogatory comments about others' gender or sexual orientation, such as accusing them of "mental illness" or "abnormality." These changes mark a significant departure from the stricter content moderation standards Meta had developed over nearly a decade.

The survey, conducted by digital and human rights organizations including UltraViolet, GLAAD, and All Out, gathered responses from around 7,000 active users of Meta's platforms. The findings paint a troubling picture: one in six respondents reported experiencing some form of gender-based or sexual violence on the platforms, while two-thirds said they had encountered hateful or violent content. An overwhelming 92 percent of respondents expressed concern about the rise in harmful material and said they felt less protected from being targeted by it. Additionally, 77 percent said they felt less safe sharing their opinions freely, leading to increased self-censorship.

Meta has downplayed the impact of these policy changes. In its May quarterly report, the company claimed that enforcement errors in the U.S. had been cut in half since January, with no significant increase in harmful content overall. However, the survey’s authors argue that Meta’s report fails to capture the real-world experiences of users facing targeted harassment and hate. Jenna Sherman, campaign director at UltraViolet, emphasized that social media is no longer just a platform but a space where people live, work, and connect. She criticized Meta’s leadership for reversing years of progress in content moderation, leaving vulnerable users exposed to harm.

The organizations behind the survey are calling for urgent action. They urge Meta to commission an independent review to assess the impact of its policy changes and to restore the stronger content moderation practices that were in place before January. The International Fact-Checking Network has also warned that extending the relaxed policies beyond the U.S. could have severe consequences for Meta's fact-checking programs, which span more than 100 countries; partners such as Agence France-Presse work with the scheme in 26 languages.

As concerns grow, the debate over balancing free speech and user safety on social media continues to intensify. The survey’s findings highlight the challenges of relying on community-driven moderation and the risks of loosening rules designed to protect users from harm. For now, many users feel less safe and less free to express themselves, raising questions about the future of online spaces.

Source: www.abs-cbn.com/news
