
Hate Speech On Facebook Declines For 4th Quarter In A Row

On Tuesday, Facebook released its Q3 2021 Transparency Report, including a report on compliance with key content and community standards.

The Widely Viewed Content Report shows a clear pattern in the top 20 most viewed posts, while the Community Standards Enforcement Report shows that the prevalence of hate speech on Facebook has declined for the fourth straight quarter.

The newly rebranded Meta revealed that the prevalence of hate speech on Facebook continued to decline for the fourth quarter in a row this year. The company also reported, for the first time, that the prevalence of hate speech on Instagram in Q3 was 0.02%.

In the September quarter (Q3), prevalence fell to 0.03%, or 3 hate-speech views per 10,000 views of Facebook content, down from 0.05%, or five hate-speech views per 10,000 content views, in Q2.
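The prevalence figures above are just ratios. A minimal sketch of how a prevalence percentage maps to the "views per 10,000" framing the report uses (the function name is illustrative, not from Meta):

```python
def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into views per 10,000 content views.

    0.01% of views is 1 view per 10,000, so multiply the percentage by 100.
    """
    return prevalence_pct * 100

# Q3: 0.03% prevalence -> 3 hate-speech views per 10,000 views
# Q2: 0.05% prevalence -> 5 hate-speech views per 10,000 views
print(views_per_10k(0.03))  # 3.0
print(views_per_10k(0.05))  # 5.0
```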

Meta said in a statement: “The prevalence of hate speech continues to decline due to improvements in our technology, including improved personalization, and ranking changes that reduce problematic content in News Feed.”

The social network removed 13.6 million pieces of Facebook content for violating its violence and incitement policy.

Meta said, “On Instagram, we removed 3.3 million pieces of this content with a 96.4% proactive detection rate.”


In Q3, the prevalence of bullying and harassment content was 0.14–0.15%, or 14–15 views of bullying and harassment content per 10,000 views of Facebook content. For Instagram, it was 0.05–0.06%, or 5–6 views per 10,000 views of Instagram content.

“We removed 9.2 million pieces of bullying and harassment content from Facebook with a 59.4% proactive detection rate. We removed 7.8 million pieces of such content from Instagram with an 83.2% proactive detection rate,” the company said.

Top 20 Most Viewed Posts

As in Q2, when Facebook released the first edition of its Widely Viewed Content Report, the top 20 posts accounted for about 0.1% of all US content views. “Our goal is to provide transparency into the most viewed content and show that while a single piece of content can be viewed by a huge number of people, Facebook is so large that even that content represents only a small fraction of US News Feed views. In short, different people don’t often see the same content in their news feeds,” the company said.

How Meta Helped Decrease Hate Speech Content On Facebook


  • We believe the most critical metric to focus on is prevalence: the amount of hate speech people actually see on the platform, and how to reduce it using every tool available.
  • Our technology is making a significant impact on reducing hate speech on Facebook. In the most recent Community Standards Enforcement Report, prevalence has decreased by more than half over the past three quarters, to 0.05% of content viewed, or about five views for every 10,000.
  • We use technology to reduce the prevalence of hate speech in several ways. It helps us proactively detect hate speech, route content to reviewers, and remove policy violations. It can also reduce the distribution of likely-violating content. All of these tools working together affect prevalence.
  • In 2016, our content moderation efforts relied primarily on user reports. Since then, we have built technology that proactively detects violating content before people report it. Our proactive detection rate reflects this, but we also provide prevalence data to show how much hate speech is actually seen on our apps.

Data from the leaked documents is being used to construct a narrative that the technology we use to combat hate speech is inadequate and that we deliberately misrepresent our progress. This is not true. Like our users and advertisers, we do not want to see hate on our platforms, and we are transparent about our work to remove it. These documents show that our integrity work is a long journey. We will never be perfect, but our teams constantly evolve our systems, identify problems, and build solutions.

A recent report suggests our approach to combating hate speech is far narrower than it actually is, ignoring that hate speech prevalence has dropped to 0.05%, or five views per 10,000, on Facebook. We think prevalence is the most important metric to use because it shows how much hate speech is actually seen on Facebook.

Focusing solely on content removals is the wrong way to look at how we fight hate speech, because removing content with technology is only one of the ways we counter it. Before we remove anything, we need to determine whether it is hate speech at all. When content is likely hateful but we are not confident it meets the bar for removal, our technology may reduce its distribution or stop recommending groups, pages, or people that regularly post content likely to violate our policies. We also use technology to flag content for further review.

We set a high threshold for automatically removing content. Otherwise, we would risk making more mistakes on content that looks like hate speech but is not, harming the very people we are trying to protect, such as those describing or condemning experiences of hate speech.

Another commonly misunderstood metric is the proactive detection rate. It tells us how good our technology is at finding content before people report it to us: of the content we removed, how much we found first. In 2016, most content removals were based on user reports. We decided we needed to do better, so we built technology to identify potentially violating content before anyone reported it.


When we started reporting our hate speech metrics, only 23.6% of the content we removed was detected proactively by our systems; most of what we removed was found by people first. Now that number is over 97%.
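The proactive detection rate described here is simply the share of removals that automated systems found first. A hypothetical sketch (the function name and counts are illustrative, not Meta's raw numbers):

```python
def proactive_rate(found_by_systems: int, total_removed: int) -> float:
    """Percentage of removed content that automated systems flagged
    before any user reported it."""
    if total_removed == 0:
        return 0.0
    return found_by_systems / total_removed * 100

# Illustrative counts: early reporting era vs. now
print(round(proactive_rate(236, 1000), 1))  # 23.6 -- roughly the early rate
print(round(proactive_rate(970, 1000), 1))  # 97.0 -- over 97% today
```

As the article notes, this rate says nothing about content that was missed entirely, which is why prevalence is treated as the headline metric.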

However, our proactive detection rate doesn’t tell us what we are missing, nor does it capture the full range of our efforts, including what we do to reduce the distribution of problematic content. That is why we focus on prevalence and consistently describe it as the most important metric. Prevalence tells us how much violating content people actually see because we missed it. It is how we most objectively evaluate our progress, as it provides the most complete picture. We report prevalence in our Community Standards Enforcement Report every quarter, and we explain it in our Transparency Center.

Prevalence is how we measure our work internally, which is why we use the same metric externally.

We know our work in this area will never be complete, but the fact that prevalence has decreased by nearly 50% over the last three quarters shows that our efforts are paying off. As noted in the Community Standards Enforcement Report, most of that decline can be attributed to improved and expanded AI systems.

We worked with international experts to develop these metrics. We are also the only company to have volunteered for an independent audit of them.

The quarterly report includes many of the most comprehensive metrics available, to give people a complete picture. We have also worked with international experts in measurement, statistics, and other fields to provide an independent public assessment of whether we are measuring the right things. They broadly agreed with our approach and offered guidance on how we can improve. You can read the full report here. We have additionally committed to an independent audit by EY, a global audit firm, to verify that we are measuring and reporting these metrics accurately.


How We Reduce the Prevalence of Hate Speech

Our efforts are having a significant impact on reducing how much hate speech people see on Facebook. The prevalence of hate speech is currently about 0.05% of content viewed, or about five views per 10,000, down by almost 50% over the last three quarters.

These points illustrate the efforts taken to combat hate speech:

  • Removing accounts, Groups, Pages, and events for violating our community standards
  • Filtering problematic Groups, Pages, and content from recommendations across our services
  • Reducing the distribution of likely violating content or content that borders on violating our Community Standards
  • Adding warning screens and checks to proactively prevent someone from posting something that may be hateful
  • Prioritizing content review so that the most viral or severe content is reviewed first
  • Removing content that violates our policies for hate speech


Why do we care?

Knowing which content types get the most views can help shape social media campaigns, but marketers still need to prioritize the content types that resonate most with their target audience. Additionally, declining hate speech on Facebook gives brands more peace of mind about maintaining a presence on the platform.

These transparency reports are part of a larger effort to rebuild Facebook’s reputation, including the rebrand to Meta. The platform has been criticized for years for its handling of misinformation and hate speech. In addition, the number of young Facebook users is expected to decline by 45% over the next two years, potentially affecting future revenue and advertiser reach. Time will tell whether these efforts pay off for the platform and increase its value for marketers.

Edited and proofread by: Nikita Sharma

 
