Fact Buster 101


We fight fake news, misinformation, and harmful content using a global network of fact-checking partners.

02/11/2022

The idea that we allow misinformation to fester on our platform, or that we somehow benefit from this content, is wrong. Facebook is the only major social media platform with a global network of more than 70 fact-checking partners, who review content in different languages around the world. Content identified as false by our fact-checking partners is labelled and down-ranked in News Feed. Misinformation that has the potential to contribute to imminent violence, physical harm, and voter suppression is removed outright, including misinformation about COVID-19.
We don’t want hate speech on our platform and work to remove it, despite what the film says. While even one post is too many, we’ve made major improvements on this. We removed over 22 million pieces of hate speech in the second quarter of 2020, over 94% of which we found before someone reported it — an increase from a quarter earlier, when we removed 9.6 million posts, over 88% of which we found before someone reported it to us.
We know our systems aren’t perfect and there are things that we miss. But we are not idly standing by and allowing misinformation or hate speech to spread on Facebook.

01/11/2022

How to spot Fake News

01/11/2022

Facebook is also taking steps now to make sure we’re prepared to deal with another type of misinformation: deepfakes. These videos, which use AI to show people doing and saying things they didn’t actually do or say, can be difficult for even a trained reviewer to spot. In addition to tasking our AI Red team to think ahead and anticipate potential problems, we’ve deployed a state-of-the-art deepfake detection model with eight deep neural networks, each using the EfficientNet backbone. It was trained on videos from a unique dataset commissioned by Facebook for the Deepfake Detection Challenge (DFDC). The DFDC is an open, collaborative initiative organized by Facebook together with other industry leaders and academic experts. The dataset's 100K videos have been shared with other researchers to help them develop new tools to address the challenge of deepfakes.

In order to identify new deepfake videos that our systems haven’t seen before, we use a new data synthesis technique to update our models in near real time. When a new deepfake video is detected, we generate new, similar deepfake examples to serve as large-scale training data for our deepfake detection model. We call this method GAN of GANs (GoG) because it generates examples using generative adversarial networks (GANs), a machine learning architecture where two neural networks compete with each other to become more accurate. GoG lets us continuously fine-tune our system so it is more robust and generalized for dealing with potential future deepfakes.
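The GoG loop described above can be sketched in miniature. Everything below is an illustrative stand-in: the real system trains deep neural networks on GAN-generated examples, while this sketch uses simple string perturbations and a lookup-based "detector" just to show the update flow (detect a new deepfake, synthesize similar variants, fine-tune the detector).

```python
# Hypothetical sketch of the "GAN of GANs" (GoG) fine-tuning loop.
# All names (make_variants, Detector, gog_update) are illustrative
# assumptions, not Facebook's actual code.
import random

def make_variants(sample: str, n: int = 4) -> list[str]:
    """Stand-in 'generator': produce n perturbed copies of a sample."""
    rng = random.Random(0)
    variants = []
    for _ in range(n):
        chars = list(sample)
        i = rng.randrange(len(chars))
        chars[i] = chars[i].swapcase()  # tiny perturbation
        variants.append("".join(chars))
    return variants

class Detector:
    """Stand-in detector: flags anything matching a known deepfake."""
    def __init__(self) -> None:
        self.known: set[str] = set()

    def predict(self, sample: str) -> bool:
        return sample.lower() in {k.lower() for k in self.known}

    def fine_tune(self, samples: list[str]) -> None:
        self.known.update(samples)

def gog_update(detector: Detector, new_deepfake: str) -> None:
    """On a newly confirmed deepfake, expand the training data with
    synthesized variants and fine-tune the detector on them."""
    detector.fine_tune([new_deepfake] + make_variants(new_deepfake))

detector = Detector()
gog_update(detector, "fake-video-001")
print(detector.predict("FAKE-video-001"))  # True: variant of a known fake
```

The point of the loop is that each missed deepfake becomes training data for catching its near-relatives, rather than a one-off fix.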

These systems -- and the others described in the companion blog posts linked above -- are in production now to help us catch misinformation on our platform. As we have said in previous blog posts and as we tell ourselves every day, there’s much more work to do. But AI research is advancing rapidly, and we are getting better and better at taking these new technologies and putting them to use quickly. We believe the new computer vision systems and language tools we’re developing in the lab today will help us do better at catching harmful content on our platforms tomorrow. It will take long-term investments and a coordinated effort from researchers, engineers, policy experts, and others across our company. But it will continue to be our priority, and we’re confident we can continue to find new ways to use AI to better protect people.

01/11/2022

Meta is deploying tools to detect deepfakes

01/11/2022

Are businesses doing enough to prevent the dangers of misinformation?

Most business leaders believed they were doing ‘fairly well’ at preventing the dangers of misinformation. Leading the way were those in the information technology and services sector, with 41% going a step further and saying they are doing very well. This was followed by professionals in financial services (34%) and healthcare (30%). The lowest proportion was among those in marketing and communications, at 21%. This shows it is not enough for businesses to actively take protective steps against the dangers of misinformation – they also need to communicate their expertise to be seen as credible, trustworthy experts.

So, what are businesses doing to combat the dangers of misinformation and disinformation? Businesses in the regions we surveyed said that they were focusing on building trust with employees, clients, and affiliates (89% in both North America and APAC, and 86% in Europe). Next in importance for businesses in APAC and North America is involvement in policy and governance (85% and 83%, respectively). Companies in Europe, on the other hand, prefer to invest in reliable brand messaging and communications (82%) to protect against the effects of misinformation.

Demystifying data followed next for those in the APAC region (82%), North America (78%), and Europe (74%). Only 35% of business leaders overall thought that their companies were taking an active approach to demystifying data to mitigate the dangers of misinformation.

01/11/2022

Here's how Facebook is using AI to help detect misinformation

Artificial Intelligence is a critical tool to help protect people from harmful content. It helps us scale the work of human experts and proactively take action before a problematic post or comment has a chance to harm people.

Facebook has implemented a range of policies and products to deal with misinformation on our platform. These include adding warnings and more context to content rated by third-party fact-checkers, reducing its distribution, and removing misinformation that may contribute to imminent harm. But to scale these efforts, we need to quickly spot new posts that may contain false claims and send them to independent fact-checkers — and then work to automatically catch new iterations, so fact-checkers can focus their time and expertise on fact-checking new content.

From March 1st through Election Day, we displayed warnings on more than 180 million pieces of content viewed on Facebook by people in the US that were debunked by third-party fact-checkers. Our AI tools both flag likely problems for review and automatically find new instances of previously identified misinformation. We’re making progress, but we know our systems are far from perfect.

As with hate speech, this poses difficult technical challenges. Two pieces of misinformation might contain the same claim but express it very differently, whether by rephrasing it, using a different image, or switching the format from graphic to text. And since current events change rapidly, especially in the run-up to an election, a new piece of misinformation might focus on something that wasn’t even in the headlines the day before.
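The claim-matching problem can be illustrated with a deliberately simple similarity measure. Production systems use learned embeddings; here a token-overlap (Jaccard) score stands in, and the 0.6 threshold is an illustrative assumption, not a tuned value.

```python
# Minimal sketch of near-duplicate claim matching: a post "matches" a
# debunked claim when its word overlap is high enough. Real systems
# compare learned text embeddings, but the matching logic is analogous.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two texts (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def matches_known_claim(post: str, debunked: list[str],
                        threshold: float = 0.6) -> bool:
    """True if the post is a close rephrasing of any debunked claim."""
    return any(jaccard(post, claim) >= threshold for claim in debunked)

debunked = ["Drinking bleach cures COVID-19"]
print(matches_known_claim("Drinking bleach cures COVID-19, doctors say",
                          debunked))  # True: close rephrasing
print(matches_known_claim("The weather is nice today", debunked))  # False
```

A word-overlap score catches simple rephrasings but misses paraphrases that share few words, which is exactly why embedding-based matching is needed at scale.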

To better apply warning labels at scale, we needed to develop new AI technologies to match near-duplications of known misinformation. Building on SimSearchNet, Facebook has deployed SimSearchNet++, an improved image matching model that is trained using self-supervised learning to match variations of an image with a very high degree of precision and improved recall. It’s deployed as part of our end-to-end image indexing and matching system, which runs on images uploaded to Facebook and Instagram.

SimSearchNet++ is resilient to a wider variety of image manipulations, such as crops, blurs, and screenshots. This is particularly important with a visuals-first platform such as Instagram. SimSearchNet++’s distance metric is more predictive of matching, allowing us to predict more matches and do so more efficiently. For images with text, it is able to group matches at high precision using optical character recognition (OCR) verification. SimSearchNet++ improves recall while still maintaining extremely high precision, so it’s better able to find true instances of misinformation while triggering few false positives. It is also more effective at grouping collages of misinformation.
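In the same spirit, embedding-based image matching can be sketched as follows. The embed() function and the 0.1 distance threshold are toy stand-ins (SimSearchNet++ uses a learned deep network and a tuned distance metric); the sketch only shows the match-by-distance logic that makes crops and blurs tolerable.

```python
# Toy sketch of embedding-based image matching: reduce each image to a
# vector, then declare a match when the embeddings are close. Here an
# "image" is just a list of pixel values and the "embedding" is plain
# L2 normalization — purely illustrative assumptions.
import math

def embed(pixels: list[float]) -> list[float]:
    """Toy 'embedding': L2-normalize the pixel vector."""
    norm = math.sqrt(sum(p * p for p in pixels)) or 1.0
    return [p / norm for p in pixels]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(img_a: list[float], img_b: list[float],
             threshold: float = 0.1) -> bool:
    """Match when embeddings fall within the distance threshold."""
    return distance(embed(img_a), embed(img_b)) < threshold

original = [0.9, 0.1, 0.4, 0.7]
slightly_blurred = [0.88, 0.12, 0.41, 0.69]  # near-duplicate
unrelated = [0.1, 0.9, 0.8, 0.05]

print(is_match(original, slightly_blurred))  # True
print(is_match(original, unrelated))         # False
```

The design point is that a good embedding maps manipulated copies of an image close to the original, so one distance threshold can catch crops, blurs, and screenshots alike.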
