

MediaReview Testing Expands to a Global Userbase

The Duke Reporters’ Lab is launching the next phase of development of MediaReview, a tagging system that fact-checkers can use to identify whether a video or image has been manipulated.

Conceived in late 2019, MediaReview is a sibling to ClaimReview, which allows fact-checkers to clearly label their articles for search engines and social media platforms. The Reporters’ Lab has led an open development process, consulting with tech platforms like Google, YouTube and Facebook, and with fact-checkers around the world.

Testing of MediaReview began in April 2020 with the Lab’s FactStream partners: PolitiFact, FactCheck.org and The Washington Post. Since then, fact-checkers from those three outlets have logged more than 300 examples of MediaReview for their fact-checks of images and videos.

We’re ready to expand testing to a global audience, and we’re pleased to announce that fact-checkers can now add MediaReview to their fact-checks through Google’s Fact Check Markup Tool, which many of the world’s fact-checkers already use to create ClaimReview. This will bring MediaReview testing to more fact-checkers around the world, the next step in an open process that will lead to a more refined final product.

ClaimReview was developed through a partnership of the Reporters’ Lab, Google, Jigsaw, and Schema.org. It provides a standard way for publishers of fact-checks to identify the claim being checked, the person or entity that made the claim, and the conclusion of the article. This standardization enables search engines and other platforms to highlight fact-checks, and can power automated products such as the FactStream and Squash apps being developed in the Reporters’ Lab.
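
To make that concrete, here is a minimal sketch of what ClaimReview markup looks like in JSON-LD, the form Schema.org tagging typically takes on a page. The outlet, URL, claim and rating below are invented placeholders, not a real fact-check:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.org/fact-checks/unemployment-claim",
  "datePublished": "2021-04-01",
  "author": { "@type": "Organization", "name": "Example Fact-Checker" },
  "claimReviewed": "The unemployment rate doubled last year.",
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Person", "name": "A. Politician" }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
```

The three elements described above map directly onto fields a machine can parse: the claim being checked is claimReviewed, the person or entity that made it is the author of itemReviewed, and the article’s conclusion is reviewRating.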

Likewise, MediaReview aims to standardize the way fact-checkers talk about manipulated media. The goal is twofold: to allow fact-checkers to inform the tech platforms that a piece of media has been manipulated, and to establish a common vocabulary for describing types of media manipulation. By communicating clearly in consistent ways, independent fact-checkers can play an important role in informing people around the world.

The Duke Reporters’ Lab has led the open process to develop MediaReview, and we are eager to help fact-checkers get started with testing it. Contact Joel Luther with questions or to set up a training session. International Fact-Checking Network signatories who have questions about the process can contact the IFCN.

For more information, see the new MediaReview section of our ClaimReview Project website.


What is MediaReview?

MediaReview is a schema – a tagging system that web publishers can use to identify different kinds of content. Built specifically so fact-checkers can flag manipulated images and videos, it is a sibling to ClaimReview, the schema developed by the Reporters’ Lab that allows fact-checkers to identify their articles for search engines and social media platforms.

By tagging their articles with MediaReview, publishers are essentially telling the world, “this is a fact-check of an image or video that may have been manipulated.” The goal is twofold: to allow fact-checkers to inform the tech platforms that a piece of media has been manipulated, and to establish a common vocabulary for describing types of media manipulation.
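
As a rough sketch of what that tag could look like in JSON-LD: MediaReview is still an evolving proposal (see below), so the property names here follow the pending draft on Schema.org at the time of writing and may change, and every value is an invented placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://example.org/fact-checks/doctored-rally-video",
  "datePublished": "2021-04-01",
  "author": { "@type": "Organization", "name": "Example Fact-Checker" },
  "mediaAuthenticityCategory": "TransformedContent",
  "originalMediaContextDescription": "Original footage is from a 2019 rally; the viral copy slows the audio.",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "VideoObject", "contentUrl": "https://example.com/viral-copy.mp4" }
    ]
  }
}
```

The rating lives in mediaAuthenticityCategory, which draws on a small controlled list of manipulation types rather than free text; that list is the “common vocabulary” described above.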

We hope these fact-checks will provide the tech companies with valuable new signals about misinformation. We recognize that the platforms are independent from the journalists doing the fact-checking, and it is entirely up to them whether, and how, they use the signals. Still, we’re encouraged by the tech companies’ interest in this important journalism. By communicating clearly with them in consistent ways, independent fact-checkers can play an important role in informing people around the world.

Who created MediaReview?

The idea for a taxonomy to describe media manipulation was first proposed at our 2019 Tech & Check conference by Phoebe Connelly and Nadine Ajaka of The Washington Post. Their work eventually became The Fact Checker’s Guide to Manipulated Video, which heavily inspired the first MediaReview proposal.

The development of MediaReview has been an open process. A core group of representatives from the Reporters’ Lab, the tech companies and The Washington Post led the development, issuing open calls for feedback throughout. We’ve worked closely with the International Fact-Checking Network to ensure that fact-checkers operating around the world have been able to provide feedback.

You can still access the first terminology proposal and the first structured data proposal, as well as comments offered on those documents.

What is the current status of MediaReview?

MediaReview is currently in “pending” status on Schema.org, the organization that oversees the tagging vocabulary publishers use, which means it is still under development.

The Duke Reporters’ Lab is testing the current version of MediaReview with several key fact-checkers in the United States: FactCheck.org, PolitiFact and The Washington Post.

You can see screenshots of our current MediaReview form, including working labels and definitions here: Claim Only, Video, Image.

We’re also sharing test MediaReview data as it’s entered by fact-checkers. You can access a spreadsheet of fact-checks tagged with MediaReview here.

How can I offer feedback?

Through our testing with fact-checkers and with an ever-expanding group of misinformation experts, we’ve identified a number of outstanding issues that we’re soliciting feedback on. Please comment on the linked Google Doc with your thoughts and suggestions.

We’re also proposing new Media Types and Ratings to address some of the outstanding issues, and we’re seeking feedback on those as well.


We want your feedback on the MediaReview tagging system

Last fall, we launched an ambitious effort to develop a new tagging system for fact-checks of fake videos and images. The idea was to take the same approach that fact-checkers use when they check claims by politicians and political groups, a system called ClaimReview, and build something of a sequel. We called it MediaReview.

For the past nine months, Joel Luther, Erica Ryan and I have been talking with fact-checkers, representatives of the tech companies and other leaders in the battle against misinformation. Our ever-expanding group has come up with a great proposal and would love your feedback.

Like ClaimReview, MediaReview is a schema – a tagging system that web publishers can use to identify different kinds of content. By tagging their articles, publishers are essentially telling the world, “This is a fact-check on this politician on this particular claim.” That can be a valuable signal to tech companies, which can decide whether to add labels to the original content, demote its standing in a feed, or do nothing. It’s up to them.
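
For readers wondering what tagging means mechanically: the markup typically travels inside the article’s own HTML, in a script block that crawlers read but visitors never see. A hypothetical, heavily trimmed example, with placeholder values throughout:

```html
<!-- Hypothetical fact-check page; every detail below is a placeholder -->
<article>
  <h1>Fact-check: Did the senator really say this?</h1>
  <p>Our reporting found the claim to be false. …</p>
</article>

<!-- The ClaimReview tag: invisible to readers, legible to crawlers -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.org/fact-checks/senator-quote",
  "claimReviewed": "The senator said X at a rally.",
  "itemReviewed": { "@type": "Claim",
                    "author": { "@type": "Person", "name": "A. Politician" } },
  "reviewRating": { "@type": "Rating", "ratingValue": "1",
                    "bestRating": "5", "worstRating": "1",
                    "alternateName": "False" }
}
</script>
```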

(Note: Google and Facebook have supported the work of The Reporters’ Lab and have given us grants to develop MediaReview.)

ClaimReview, which we developed with Google and Schema.org five years ago, has been a great success. More than half of the world’s fact-checkers use it, and it has been applied to more than 50,000 articles. Those articles get highlighted in Google News and in search results on Google and YouTube.

We’re hopeful that MediaReview will be equally successful. By responding quickly to fake videos and bogus images, fact-checkers can provide the tech platforms with vital information about false content that might be going viral. The platforms can then decide if they want to take action.

The details are critical. We’ve based MediaReview on a taxonomy developed by The Washington Post. We’re still discussing the names of the labels, so feel free to make suggestions about the labels, or about anything else.

You can get a deeper understanding of MediaReview in this article in Nieman Lab.

You can see screenshots of our current MediaReview form, including working labels and definitions here: Claim Only, Video, Image.

You can see our distillation of the current issues and add your comments here.
