Automated fact-checking

Pop-up fact-checking moves online: Lessons from our user experience testing

After it became clear pop-up fact-checking was too difficult to display on a TV, we've moved to the web.

By Jessica Mahone – June 11, 2020

We initially wanted to build pop-up fact-checking for a TV screen. But for nearly a year, people have told us in surveys and in coffee shops that they like live fact-checking but need more information than they can get on a TV.

The testing is a key part of our development of Squash, our groundbreaking live fact-checking product. We started by interviewing a handful of users of our FactStream app. We wanted to know how they found out about the app, how they find fact checks about things they hear on TV, and what they would need to trust live fact-checking. As we saw in our “Red Couch Experiments” in 2018, they were excited about the concept but they wanted more than a TV screen allowed. 

We supplemented those interviews with conversations in coffee shops – “guerrilla research,” in user experience (UX) terms. And again, the people we spoke with were excited about the concept but wanted more information than a 1740×90 pixel display could accommodate.

The most common request was the ability to access the full published fact-check. Some wanted to know if more than one fact-checker had vetted the claim, and if so, did they all reach the same conclusion? Some just wanted to be able to pause the video. 

Since those things weren’t possible with a conventional TV display, we pivoted and began to imagine what live fact-checking would look like on the web. 

Bringing Pop-Up Fact-Checking to the Web

In an online whiteboard session, our Duke Tech & Check Cooperative team discussed many possibilities for bringing live fact-checking online. Then our UX team — students Javan Jiang and Dora Pekec and me — designed a new interface for live fact-checking and tested it in a series of simple open-ended preference surveys.

In total, 100 people responded to these surveys, in addition to the eight interviews above and a large experiment with 1,500 participants we did late last year about whether users want ratings in on-screen displays (they do). 

A common theme emerged in the new research: Make live fact-checking as non-disruptive to the viewing experience as possible. More specifically, we found three things that users want and need from the live fact-checking experience.

  • Users prefer a fact-checking display beneath the video. In our initial survey, users could choose whether they preferred a display beside or beneath the video. About three-quarters of respondents said that a display beneath the video was less disruptive to their viewing, with several telling us that this placement was similar to existing video platforms such as YouTube. 
  • Users need “persistent onboarding” to make use of the content they get from live fact-checking. A user guide or FAQ is not enough. Squash can’t yet provide real-time fact-checking. It is a system that matches claims made during a televised event to claims previously checked. But users need to be reminded that they are seeing a “related fact-check,” not necessarily a perfect match to the claim they just heard. “Persistent onboarding” means providing users with subtle reminders in the display. For example, when a user hovers over the label “Related Fact Check,” a small box could explain that this is not a real-time fact check but an already published fact check about a similar claim made in the past. This was one of the features users liked most because it kept them from having to find the information themselves.
  • Users want all available information on the initial screen. Our first test allowed users to expand the display to see more information about the fact check, such as its publisher and an explanation of which statement triggered the system to display it. But users said that having to toggle the display to see this information was disruptive. 

More to Learn

Though we’ve learned a lot, some big questions remain. We still don’t know what live fact-checking looks like under less-than-ideal conditions. For example, how would users react to a fact check when the spoken claim is true but the relevant fact check is about a claim that was false? 

And we need to figure out timing, particularly for multi-speaker events such as debates. When is the right time to display a fact-check after a politician has spoken? And what if the screen is now showing another politician?

And how can we appeal to audiences that are skeptical of fact-checking? One respondent specifically said he’d want to be able to turn off the display because “none of the fact-checkers are credible.” What strategies or content would help make such audiences more receptive to live fact-checking? 

As we wrestle with those questions, moving live fact-checking to the web still opens up new possibilities, such as the ability to pause content (we call that “DVR mode”), read fact-checks, and return to the event. We are hopeful this shift in platform will ultimately bring automated fact-checking to larger audiences.


Squash report card: Improvements during State of the Union … and how humans will make our AI smarter

We've had some encouraging improvements in the AI powering our experimental fact-checking technology. But to make Squash smarter, we're calling in a human.

By Bill Adair – February 23, 2020

Squash, the experimental pop-up fact-checking product of the Reporters’ Lab, is getting better.

Our live test during the State of the Union address on Feb. 4 showed significant improvement over our inaugural test last year. Squash popped up 14 relevant fact-checks on the screen, up from just six last year.

That improvement matches a general trend we’ve seen in our testing. We’ve had a higher rate of relevant matches when we use Squash on videos of debates and speeches.

But we still have a long way to go. This month’s State of the Union speech also had 20 non-relevant matches, which means Squash displayed fact-checks that weren’t related to what the president said. If you’d been watching at that moment, you probably would have thought, “What is Squash thinking?”

We’re now going to try two ways to make Squash smarter: a new subject tagging system that will be based on a wonderfully addictive game developed by our lead technologist Chris Guess; and a new interface that will bring humans into the live decision-making. Squash will recommend fact-checks to display, but an editor will make the final judgment.

Some background in case you’re new to our project: Squash, part of the Lab’s Tech & Check Cooperative, is a revolutionary new product that displays fact-checks on a video screen during a debate or political speech. Squash “hears” what politicians say, converts their speech to text and then searches a database of previously published fact-checks for one that’s related. When Squash finds one, it displays a summary on the screen.

For our latest tests, we’ve been using Elasticsearch, a tool for building search engines that we’ve made smarter with two filters: ClaimBuster, an algorithm that identifies factual claims, and a large set of common synonyms. ClaimBuster helps Squash avoid wasting time and effort on sentences that aren’t factual claims, and the synonyms help it make better matches.
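The two-filter matching step can be sketched in a few lines of Python. Everything here is an invented stand-in for illustration — the scoring function, synonym table, and fact-check records are not the Lab's actual ClaimBuster or Elasticsearch code:

```python
# Illustrative sketch of Squash's matching pipeline: score each sentence for
# claim-worthiness, expand synonyms, then search prior fact-checks for the
# best overlap. All names and data below are hypothetical.

SYNONYMS = {"jobs": {"jobs", "employment"}, "border": {"border", "wall"}}

FACT_CHECKS = [
    {"summary": "Claim about employment growth", "terms": {"employment", "growth"}},
    {"summary": "Claim about the border wall", "terms": {"wall", "construction"}},
]

def claim_score(sentence: str) -> float:
    """Stand-in for ClaimBuster: crudely reward numbers and past-tense verbs,
    the kinds of signals that mark a checkable factual claim."""
    score = 0.1
    tokens = sentence.split()
    if any(tok.isdigit() for tok in tokens):
        score += 0.5
    if any(tok.endswith("ed") for tok in tokens):
        score += 0.3
    return min(score, 1.0)

def expand(tokens):
    """Add common synonyms so matching is less literal."""
    expanded = set()
    for tok in tokens:
        expanded |= SYNONYMS.get(tok, {tok})
    return expanded

def match_fact_check(sentence: str, threshold: float = 0.5):
    """Return the best overlapping fact-check, or None for non-claims."""
    if claim_score(sentence) < threshold:
        return None  # skip sentences that aren't factual claims
    terms = expand(t.lower().strip(".,") for t in sentence.split())
    best = max(FACT_CHECKS, key=lambda fc: len(terms & fc["terms"]))
    return best if terms & best["terms"] else None
```

With these toy inputs, a numeric claim about jobs matches the employment fact-check via the synonym table, while small talk is filtered out before any search happens.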

Guess, assisted by project manager Erica Ryan and student developers Jack Proudfoot and Sanha Lim, will soon be testing a new way of matching that uses natural language processing based on the subject of the fact-check. We believe that we’ll get more relevant matches if the matching is based on subjects rather than just the words in the politicians’ claims.

But to make that possible, we have to put subject tags on thousands of fact-checks in our ClaimReview database. So Guess has created a game called Caucus that displays a fact-check on your phone and then asks you to assign subject tags to it. The game is oddly addictive. Every time you submit one, you want to do another…and another. Guess has a leaderboard so we can keep track of who is tagging the most fact-checks. We’re testing the game with our students and staff, but hope to make it public soon.

We’ve also decided that Squash needs a little human help. Guess, working with our student developer Matt O’Boyle, is building an interface for human editors to control which matches actually pop up on users’ screens.

The new interface would let them review the fact-check that Squash recommends and decide whether to let it pop up on the screen, which should help us filter out most of the unrelated matches.

That should eliminate the slightly embarrassing problem when Squash makes a match that is comically bad. (My favorite: one from last year’s State of the Union when Squash matched the president’s line about men walking on the moon with a fact-check on how long it takes to get a permit to build a road.)

Assuming the new interface works relatively well, we’ll try to do a public demo of Squash this summer. 

Slowly but steadily, we are making progress. Watch for more improvements soon.


Beyond the Red Couch: Bringing UX Testing to Squash

As automated fact-checking gains ground, it's time to learn how to make pop-up content crystal clear on video screens.

By Andrew Donohue – October 28, 2019

Fact-checkers have a problem.

They want to use technology to hold politicians accountable by getting fact-checks in front of the public as quickly as possible. But they don’t yet know the best ways to make their content understood. At the Duke Reporters’ Lab, that’s where Jessica Mahone comes in.

Jessica Mahone is designing tests to help Duke Reporters’ Lab researchers figure out how to clearly share fact-checks live during broadcasts. Photo by Andrew Donohue

The Lab is developing Squash, a tool built to bring live fact-checking of politicians to TV. Mahone, a social scientist, was brought on board to design experiments and conduct user experience (UX) tests for Squash. 

UX design is the discipline focused on making new products easy to use. A clear UX design means that a product is intuitive and new users get it without a steep learning curve. 

“If people can’t understand your product or find it hard to use, then you are doomed from the start. With Squash, this means that we want people to comprehend the information and be able to quickly determine whether a claim is true or not,” Mahone said.

For Squash, fact-check content that pops up on screens needs to be instantly understood since it will only be visible for a few seconds. So what’s the best way?

Bill Adair, the director of the Duke Tech & Check Cooperative, organized some preliminary testing last year that he dubbed the red couch experiments. The tests revealed more research was needed to understand the best way to inform viewers. 

“I originally thought that all it would take is a Truth-O-Meter popping up on screen,” Adair said. “Turns out it’s much more complicated than that.”

Sixteen people watched videos of Barack Obama and Donald Trump delivering State of the Union speeches while fact-checks of some of what they said appeared on the screen. Ratings were true, false or something in between. Blink, a company specializing in UX testing, found that participants loved the concept of real-time fact-checking and would welcome it on TV broadcasts. But the design of the pop-up fact-checks often confused them.

It’s not just the quality of content that counts. Viewers must understand what they see very quickly. Squash may one day share fact-checks during live events, including State of the Union addresses.

Some viewers didn’t understand the fact-check ratings such as true or false when they were displayed. Others assumed the presidents’ statements must be true if no fact-check was shown. That’s a problem because Squash doesn’t fact-check all claims in speeches. It displays previously published fact-checks for only the claims that match Squash’s finicky search algorithm. 

The red couch experiments were “a very basic test of the concept,” Mahone said. “What they found mainly is that there was a need to do more diving in and digging into some questions about how people respond to this. Because it’s actually quite complex.”

Mahone has developed a new round of tests scheduled to begin this week. These tests will use Amazon Mechanical Turk, an online platform that relies on people who sign up to be paid research subjects.

“One thing that came out of the initial testing was that people don’t like to see a rating of a fact-check,” Mahone said. “I was a little skeptical of that. Most of the social science research says that people do prefer things like that because it makes it a lot easier for them to make decisions.”

In this next phase, Mahone will recruit about 500 subjects. A third will see a summary of a fact-check with a PolitiFact TRUE icon. Another third will see a summary with just the label TRUE. The rest will see just the summary text of a fact-check.

Each viewer will rank how interested they are in using an automated fact-checking tool after viewing the different displays. Mahone will compare the results.

After finding out if including ratings works, Mahone and three undergraduate students, Dora Pekec, Javan Jiang and Jia Dua, will look at the bigger picture of Squash’s user experience. They will use a company to find about 20 people to talk to, ideally individuals who consistently watch TV news and are familiar with fact-checking.

Participants will be asked what features they would want in real-time fact-checking.

“The whole idea is to ask people ‘Hey, if you had access to a tool that could tell you if what someone on TV is saying is true or false, what would you want to see in that tool?’ ” Mahone said. “We want to figure out what people want and need out of Squash.”

Figuring out how to make Squash intuitive is critical to its success, according to Chris Guess, the Lab’s lead technologist. Part of the challenge is that Squash is something new and viewers have no experience with similar products.

“These days, people do a lot more than just watch a debate. They’re cooking dinner, playing on their phone, watching over the kids,” Guess said. “We want people to be able to tune in, see what’s going on, check out the automated fact-checks and then be able to tune out without missing anything.”

Reporters’ Lab researchers hope to have Squash up and running for the homestretch of the 2020 presidential campaign. Adair, Knight Professor of the Practice of Journalism and Public Policy at Duke, has begun reaching out to television executives to gauge their interest in an automated fact-checking tool. 

“TV networks are interested, but they want to wait and see a product that is more developed,” Adair said. 



Tech & Check in the news

Coverage of the Duke Tech & Check Cooperative’s efforts to strengthen journalism

By Catherine Clabby – December 14, 2018

It’s been more than a year since the Reporters’ Lab received $1.2 million in grant funding to launch the Duke Tech & Check Cooperative.

Our goal is to link computer scientists and journalists to better automate fact-checking and expand how many people see this vital accountability reporting.

Here’s a sampling of some of the coverage about the range of projects we’re tackling:

Tech & Check:

Associated Press, Technology Near For Real-Time TV Political Fact-Checks
Digital Trends, Real-time fact-checking is coming to live TV. But will networks use it?
Nancy Watzman, Tech & Check: Automating Fact-Checking
Poynter, Automated fact-checking has come a long way. But it still faces significant challenges.
MediaShift, The Fact-Checking Army Waging War on Fake News

FactStream:
NiemanLab, The red couch experiments, early lessons in pop-up fact-checking.
WRAL, Fake news? App will help State of the Union viewers sort out fact, fiction
Media Shift, An Experiment in Live Fact-Checking the State of the Union Speech by Trump
American Press Institute, President Trump’s first State of the Union address is Tuesday night. Here’s how to prepare yourself, factually speaking.
WRAL, App will help viewers sort fact, fiction in State of the Union
NiemanLab, Automated, live fact-checks during the State of the Union? The Tech & Check Cooperative’s first beta test hopes to pull it off
NiemanLab, FactStream debuted live fact-checking with last night’s SOTU. How’d it go?

Tech & Check Alerts:
Poynter, This Washington Post fact check was chosen by a bot

Truth Goggles:
NiemanLab, Truth Goggles are back! And ready for the next era of fact-checking

And …
NiemanLab, So what is that, er, Trusted News Integrity Trust Project all about? A guide to the (many, similarly named) new efforts fighting for journalism
MediaShift, Fighting Fake News: Key Innovations in 2017 from Platforms, Universities and More
NiemanLab, With $4.5 million, Knight is launching a new commission — and funding more new projects — to address declining public trust in media
Poynter, Knight’s new initiative to counter misinformation includes more than $1.3 million for fact-checking projects
Axios, How pro-trust initiatives are taking over the internet
Recode, Why the Craig behind Craigslist gave big bucks to a journalism program
Digital News Report (with Reuters and Oxford), Understanding the Promise and Limits of Automated Fact-Checking
Democratic Minority Staff Report, U.S. House Committee on Science, Space & Technology, Old Tactics, New Tools: A Review of Russia’s Soft Cyber Influence Operations


Duke students tackle big challenges in automated fact-checking

Trio assembled promising building blocks needed for live fact-checking

By Catherine Clabby – October 8, 2018

This summer, three Duke computer science majors advanced the quest for what some computer scientists say is the Holy Grail of fact-checking.

Caroline Wang, Ethan Holland and Lucas Fagan tackled major challenges to creating an automated system that can both detect factual claims while politicians speak and instantly provide fact-checks.

That required finding and customizing state-of-the-art computing tools that most journalists would not recognize. A collective fondness for that sort of challenge helped, a lot.

Duke junior Caroline Wang

“We had a lot of fun discussing all the different algorithms out there, and just learning what machine learning techniques had been applied to natural language processing,” said Wang, a junior also majoring in math.

Wang and her partners took on the assignment for a Data+ research project. Part of the Information Initiative at Duke, Data+ invites students and faculty to find data-driven solutions to research challenges confronting scholars on campus.

The fact-checking team convened in a Gross Hall conference room from 9 am to 4 pm every weekday for 10 weeks, helping each other figure out how to achieve live fact-checking, a goal of Knight journalism professor Bill Adair and other practitioners of accountability journalism.

Their goal was to do something of a “rough cut” of end-to-end automated fact-checking: to convert a political speech to text, identify the most “checkable” sentences in the speech and then match them with previously published fact-checks.

The students concluded that Google Cloud Speech-to-Text API was the best available tool to automate audio transcriptions. They then submitted the sentences to ClaimBuster, a project at the University of Texas at Arlington that the Duke Tech & Check Cooperative uses to identify statements that merit fact-checking. ClaimBuster acted as a helpful filter that reduced the number of claims submitted to the database, which in turn reduced processing time.

They chose Google Cloud Speech-to-Text because it can infer where punctuation belongs, Holland said. That yields text divided into complete thoughts. Google Speech-to-Text also shares transcription results while it processes the audio, rather than waiting until transcription is done. That speeds up how quickly new text can move to the next steps along a fact-checking pipeline.

Duke junior Ethan Holland

“Google will say: This is my current take and this is my current confidence that take is right. That lets you cut down on the lag,” said Holland, a junior whose second major is statistics.
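The streaming pattern Holland describes — interim hypotheses arriving with confidence scores, with only finalized, punctuated sentences forwarded downstream — can be sketched like this. The result tuples below simulate what a streaming recognizer yields; this is not the real Google client:

```python
# Consume simulated streaming transcription results: skip interim "current
# takes," buffer finalized text, and emit complete sentences as soon as the
# inferred punctuation marks a full thought.

def sentences_from_stream(results):
    """Yield complete sentences as soon as the recognizer finalizes them."""
    buffer = ""
    for transcript, is_final, confidence in results:
        if not is_final:
            continue  # interim hypothesis; keep listening
        buffer += transcript
        # Punctuation inference lets us split on complete thoughts.
        while any(p in buffer for p in ".?!"):
            cut = min(i for i in (buffer.find(p) for p in ".?!") if i != -1)
            yield buffer[: cut + 1].strip()
            buffer = buffer[cut + 1 :]

# Hand-made (transcript, is_final, confidence) tuples for illustration.
simulated = [
    ("We cut taxes", False, 0.61),                 # interim take, ignored
    ("We cut taxes for families.", True, 0.93),
    (" Wages are rising", False, 0.58),
    (" Wages are rising fast. And", True, 0.90),   # trailing fragment buffered
]
```

Because finalized chunks arrive mid-speech, each sentence can be handed to the claim-matching step without waiting for the whole event to end.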

Their next step was finding ways to match the claims from that speech with the database of fact-checks that came from the Lab’s Share the Facts project. (The database contains thousands of articles published by the Washington Post, FactCheck.org and PolitiFact, each checking an individual claim.)

To do that, the students adapted an algorithm that the open-source research outfit OpenAI released in June, after the students started working together. The algorithm builds on The Transformer, a new neural network computing architecture that Google researchers published just six months prior.

Duke sophomore Lucas Fagan

The architecture changes how computers go about understanding written language. Instead of translating a sentence word by word, the Transformer weighs the importance of each word to the meaning of every other word. Over time, that system helps machines discern meaning in more and more sentences, more quickly.

“It’s a lot more like learning English. You grow up hearing it and you learn it,” said Fagan, a sophomore also majoring in math.

Work by Wang, Holland and Fagan is expected to help jumpstart a Bass Connections fact-checking team that started this fall. Students on that team will continue the hunt for better strategies to find statements that are good fact-check candidates, produce pop-up fact-checks and create apps to deliver this accountability journalism to more people.

Tech & Check has $1.2 million in funding from the John S. and James L. Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation to tackle that job.


FactStream app now shows latest fact-checks from Post, FactCheck.org and PolitiFact

New version features alerts for Pants on Fire and Four Pinocchio ratings

By Bill Adair – October 7, 2018

FactStream, our iPhone/iPad app, has a new feature that displays the latest fact-checks from FactCheck.org, PolitiFact and The Washington Post.

FactStream was conceived as an app for live fact-checking during debates and speeches. (We had a successful beta test during the State of the Union address in January.) But our new “daily stream” makes the app valuable every day. You can check it often to get summaries of the newest fact-checks and then click through to the full articles.

The new version of FactStream lets users get notifications of the latest fact-checks.

By viewing the work of the nation’s three largest fact-checkers in the same stream, you can spot trends, such as which statements and subjects are getting checked, or which politicians and organizations are getting their facts right or wrong.

The new version of the app includes custom notifications so users can get alerts for every new fact-check or every “worst” rating, such as Four Pinocchios from Washington Post Fact Checker Glenn Kessler, a False from FactCheck.org or a False or Pants on Fire from PolitiFact.

The daily stream shows the latest fact-checks.

The new daily stream was suggested by Eugene Kiely, the director of FactCheck.org. The app was built by our lead technologist Christopher Guess and the Durham, N.C., design firm Registered Creative. It gets the fact-check summaries from ClaimReview, our partnership with Google that has created a global tagging system for fact-checking. We plan to expand the daily stream to include other fact-checkers in the future.
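For readers curious what the daily stream consumes, a ClaimReview item has roughly the shape below, and the custom "worst rating" alerts amount to a simple filter on its rating field. Field names follow the public schema.org/ClaimReview vocabulary, but every value here is invented:

```python
# Illustrative ClaimReview item, the tagging format the daily stream reads.
# Field names follow the public schema.org vocabulary; values are made up.

WORST_RATINGS = {"Pants on Fire", "Four Pinocchios", "False"}

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "An invented claim used here only as an example.",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "datePublished": "2018-10-05",
    "reviewRating": {"@type": "Rating", "alternateName": "Pants on Fire"},
    "url": "https://example.org/fact-checks/example",
}

def triggers_worst_alert(review: dict) -> bool:
    """Would this fact-check fire a 'worst rating' notification?"""
    return review["reviewRating"]["alternateName"] in WORST_RATINGS
```

Because all three fact-checkers publish in this shared format, one stream (and one notification rule) works across the Post, FactCheck.org and PolitiFact.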

The app also allows users to search the latest fact-checks by the name of the person or group making the statement, by subject or keyword.

Users can get notifications on their phones and on their Apple Watch.

FactStream is part of the Duke Tech & Check Cooperative, a $1.2 million project to automate fact-checking supported by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

FactStream is available as a free download from the App Store.



At Global Fact V: A celebration of community

More than 200 people attended the fifth meeting of the world's fact-checkers in Rome, which was organized by the International Fact-Checking Network.

By Bill Adair – June 25, 2018

My opening remarks at Global Fact V, the fifth annual meeting of the world’s fact-checkers, organized by the International Fact-Checking Network, held June 20-22 in Rome.

A couple of weeks ago, a photo from our first Global Fact showed up in my Facebook feed. Many of you will remember it: we had been all crammed into a classroom at the London School of Economics. When we went outside for a group photo, there were about 50 of us.

To show how our conference has grown, I posted that photo on Twitter along with one from our 2016 conference that had almost twice as many people. I also posted a third photo that showed thousands of people gathered in front of the Vatican. I said that was our projected crowd for this conference.

I rate that photo Mostly True.

What all of our conferences have in common is that they are really about community. It all began in that tiny classroom at the London School of Economics when we realized that whether we were from Italy or the U.K. or Egypt, we were all in this together. We discovered that even though we hadn’t talked much before or in many cases even met, we were facing the same challenges — fundraising and finding an audience and overcoming partisanship.

It was also a really powerful experience because we got a sense of how some fact-checkers around the world were struggling under difficult circumstances — under governments that provide little transparency, or, much worse, governments that oppress journalists and are hostile toward fact-checkers.

Throughout that first London conference there was an incredible sense of community. We’d never met before, but in just a couple of days we formed strong bonds. We vowed to keep in touch and keep talking and help each other.

It was an incredibly powerful experience for me. I was at a point in my career where I was trying to sort out what I would do in my new position in academia. I came back inspired and decided to start an association of fact-checkers – and hold these meetings every year.

The next year we started the IFCN and Poynter generously agreed to be its home. And then we hired Alexios as the leader.

Since then, there have been two common themes. One you hear so often that it’s become my mantra: Fact-checking keeps growing. Our latest census of fact-checking in the Reporters’ Lab shows 149 active fact-checking projects, and I’m glad to see that number keep going up and up.

The other theme, as I noted earlier, is community. I thought I’d focus this morning on a few examples.

Let’s start with Mexico, where more than 60 publishers, universities and civil society organizations have started Verificado 2018, a remarkable collaboration. It was originally focused largely on false news, but they’ve put more emphasis on fact-checking because of public demand. Daniel Funke wrote a great piece last week about how they checked a presidential debate.

In Norway, an extraordinary team of rivals has come together to create Faktisk, which is Norwegian for “actually” and “factually.” It launched nearly a year ago with four of the country’s biggest news organizations — VG, Dagbladet, NRK and TV 2 – and it’s grown since then. My colleague Mark Stencel likened it to the New York Times, The Washington Post and PBS launching a fact-checking project together.


At Duke, both of our big projects are possible because of the fact-checkers’ commitment to help each other. The first, Share the Facts and the creation of the ClaimReview schema, grew out of an idea from Glenn Kessler, the Washington Post Fact Checker, who suggested that Google put “fact-check” tags on search results.

That idea became our Duke-Google-Schema.org collaboration that created what many of you now use so search engines can find your work. And one unintended consequence: it makes automated fact-checking more possible. It all started because of one fact-checker’s sense of community.

Also, FactStream, the new app of our Tech & Check Cooperative, has been a remarkable collaboration between the big US fact-checkers — the Post, FactCheck.org and PolitiFact. All three took part in the beta test of the first version, our live coverage of the State of the Union address back in January. Getting them together on the same app was pretty remarkable. But our new version of the app — which we’re releasing this week — is even cooler. It’s like collaboration squared, or collaboration to the second power!

It took Glenn’s idea, which created the Share the Facts widget, and combined it with an idea from Eugene Kiely, the head of FactCheck.org, who said we should create a new feature on FactStream that shows the latest U.S. widgets every day.

So that’s what we did. And you know what: it’s a great new feature that reveals new things about our political discourse. Every day, it shows the latest fact-checks in a constant stream and users can click through, driving new traffic to the fact-checking sites. I’ll talk more about it during the automated demo session on Friday. But it wouldn’t be possible if it weren’t for the commitment to collaboration and community by Glenn and Eugene.

We’ve got a busy few days ahead, so let’s get on with it. There sure are a lot of you!

As we know from the photographs: fact-checking keeps growing.




Tech & Check Alerts aim to ease the workload of fact-checkers

Student-created tool can peruse political transcripts and find claims most likely to contain falsehoods

By Sydney McKinney – April 6, 2018

Students in the Duke Reporters’ Lab have built a bot that is like an intern who watches TV around the clock.

Asa Royal, a junior at Duke University, and Lucas Fagan, a freshman, have created Tech & Check Alerts, a new tool in a series of innovations the Reporters’ Lab is creating to help simplify the fact-checking process.

Using Tech & Check Alerts, the Lab can identify check-worthy claims in television news transcripts and send them to fact-checkers in daily email alerts.

“We’re going to save fact-checkers a lot of time and help them find things that they would otherwise miss,” said Mark Stencel, co-director of the Reporters’ Lab.

Though the fact-checking industry is growing worldwide, the organizations doing that work are typically small, even one-person enterprises, and the workload can be burdensome. Fact-checkers often have to sift through pages of text to find claims to check. This time-consuming process can create a substantial time gap between when statements are made and when fact-checks are available to viewers or readers.

The Tech & Check Alerts automate that process. Royal and Fagan, who are both computer science majors, created a program that scans transcripts of TV news channels, such as CNN, for claims that fact-checkers may want to investigate. It then compiles the check-worthy claims and sends them in a daily email to fact-checkers at The Washington Post, PolitiFact, the Associated Press, FactCheck.org and The New York Times, among others. Thus far, there have been seven fact-checks performed based on these alerts.

“Journalists don’t have to watch 15 hours of CNN or read the entire congressional report,” Royal said. “We’ll do it for them.”

Royal and Fagan created Tech & Check Alerts using ClaimBuster, an algorithm created by computer scientist Chengkai Li from the University of Texas at Arlington. ClaimBuster scans blocks of text and identifies “check-worthy” claims, based on indicators such as past-tense verbs, numbers, dates or statistics. It ranks statements from 0 to 1.0 based on how likely they are to be checkable; any statements that score a 0.7 or higher are typically considered check-worthy.
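The threshold step can be sketched as follows. The sentences and scores are hand-picked stand-ins for what ClaimBuster would return, not real output:

```python
# Sketch of the alert filter: keep sentences whose claim-worthiness score
# clears the 0.7 bar, then batch the survivors into a daily digest.

THRESHOLD = 0.7

def check_worthy(scored_sentences, threshold=THRESHOLD):
    """Return (sentence, score) pairs, most check-worthy first."""
    keep = [(s, score) for s, score in scored_sentences if score >= threshold]
    return sorted(keep, key=lambda pair: pair[1], reverse=True)

def daily_digest(scored_sentences):
    """Format the surviving claims as a plain-text email body."""
    lines = ["Today's check-worthy claims:"]
    for sentence, score in check_worthy(scored_sentences):
        lines.append(f"  [{score:.2f}] {sentence}")
    return "\n".join(lines)

# Invented transcript sentences with stand-in ClaimBuster-style scores.
transcript = [
    ("Good evening and welcome back.", 0.08),
    ("Unemployment fell to 3.9 percent last month.", 0.91),
    ("We have the strongest economy in history.", 0.74),
]
```

Greetings and rhetoric fall below the bar, while statistical claims survive and are ranked for the fact-checkers' inboxes.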

According to Royal, Li’s technology had yet to be used much outside of academia, so leaders of the Tech & Check Cooperative decided to utilize it for daily alerts.

“There’s already software that can find factual claims, and there are already fact-checkers who can check them,” Royal said. “We’re just solving the last-mile problem.”

The creation of Tech & Check Alerts is an important step for the Duke Tech & Check Cooperative, a two-year research project funded by the John S. and James L. Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

The broader purpose of this initiative is to bring together journalists, academics and computer scientists from across the country to innovate and automate the fact-checking industry. Over the course of two years, the Reporters’ Lab will develop tools that ease the job of fact-checkers and make fact-checking more accessible to consumers. Another tool the Lab is currently working on is FactStream, an app that provides instant fact-checking during live events.

Alongside other student researchers, Fagan and Royal are working to improve Tech & Check Alerts to include additional sources such as daily floor speeches and debates from the Congressional Record, and social media feeds from endangered incumbents running in this year’s closest House and Senate races. Fact-checkers will have input on how these additional alerts will be deployed.

Fagan is also building a web interface that would give fact-checking partners a way to dig deeper into these feeds and perhaps even customize certain alerts. Freshman Helena Merk, another student researcher in the Lab, is building a tool that would deliver the daily alerts directly to a channel on Slack, a communication platform used in many newsrooms.
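Slack delivery of this kind typically goes through an incoming webhook, which accepts a JSON payload with a "text" field. The sketch below is an assumption about how such a tool might work, not Merk's implementation; the webhook URL would be a placeholder supplied by the newsroom's Slack workspace.

```python
# Sketch: deliver daily alerts to a Slack channel via an incoming webhook.
# Slack's incoming webhooks accept a JSON body with a "text" field.
# This is illustrative; the Lab's actual tool may work differently.
import json
from urllib import request

def slack_payload(claims):
    """Format a list of claim strings as a Slack message payload."""
    lines = ["*Daily Tech & Check Alerts*"]
    lines += [f"• {claim}" for claim in claims]
    return {"text": "\n".join(lines)}

def post_to_slack(webhook_url, claims):
    """POST the payload to the workspace's webhook URL."""
    data = json.dumps(slack_payload(claims)).encode("utf-8")
    req = request.Request(webhook_url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # Slack replies "ok" on success
        return resp.read().decode("utf-8")
```

Keeping payload formatting separate from the network call makes the message format easy to test without a live workspace.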

Once these improvements are complete and Tech & Check Alerts is deployed more widely, the service should help fact-checkers across the country.

“This project is a stepping stone in our process of using real-time claims and existing fact-checks to automate fact-checking in real time,” Stencel said.


Tech & Check Conference

Journalists, computer scientists gather for Tech & Check Conference at Duke

Members of the fact-checking community convened March 29-30 on Duke University's campus to tackle pressing issues

By Rebecca Iannucci – March 30, 2018

About 40 fact-checkers, journalists, computer scientists and academics gathered at Duke University March 29-30 for the Tech & Check Conference, a meeting hosted by the Reporters’ Lab.

As part of its Tech & Check Cooperative, the Reporters’ Lab is serving as a hub for automated fact-checking to connect journalists and technologists around the world. The conference gave them an opportunity to demonstrate current projects and discuss the big challenges of automation.

Some highlights of the conference:

* Eleven demos of past and current projects. Technologists and computer scientists showed off projects they’ve been developing to either automate fact-checking or improve the flow of accurate information on the internet.

Topics included new tools such as Chequeabot, an automated service that detects factual claims for the Argentinian fact-checker Chequeado; the Bad Idea Factory’s update of the Truth Goggles tool; and the perils of misinformation, including a real-life example from Penn State professor S. Shyam Sundar, whose research project about fake news was inaccurately described in widespread news coverage.


* Two Q&A panels. Alexios Mantzarlis, director of the International Fact-Checking Network, led a discussion with three fact-checkers about the potential tools and processes that could make fact-checking more efficient in the future.

Reporters’ Lab co-director Bill Adair moderated a conversation about challenges in automated fact-checking, including the pitfalls of voice-to-text technology and natural language processing.

Attendees also participated in breakout sessions to discuss ways to develop international standards and consistent terminology.

Photos by Colin Huth.


FactStream

New Tech & Check projects will provide pop-up fact-checking

With advances in artificial intelligence and the growing use of the ClaimReview schema, Reporters' Lab researchers are developing a new family of apps that will make pop-up fact-checking a reality

By Julianna Rennie – January 16, 2018

For years, fact-checkers have been working to develop automated “pop-up” fact-checking. The technology would enable users to watch a political speech or a campaign debate while fact-checks pop onto their screens in real time.

That has always seemed like a distant dream. A 2015 report on “The Quest to Automate Fact-Checking” called that innovation “the Holy Grail” but said it “may remain far beyond our reach for many, many years to come.”

Since then, computer scientists and journalists have made tremendous progress and are inching closer to the Holy Grail. Here in the Reporters’ Lab, we’ve received $1.2 million in grants to make automated fact-checking a reality.

The Duke Tech & Check Cooperative, funded by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation, is an effort to use automation to help fact-checkers research factual claims and broaden the audience for their work. The project will include about a half-dozen pop-up apps that will provide fact-checking on smartphones, tablets and televisions.

One key to the pop-up apps is a uniform format for fact-checks called the ClaimReview schema. Developed through a partnership of Schema.org, the Reporters’ Lab, Jigsaw and Google, it provides a standard tagging system for fact-checking articles that makes it easier for search engines and apps to identify the details of a fact-check. ClaimReview, which can be created using the Share the Facts widget developed by the Reporters’ Lab, will enable future apps to quickly find relevant fact-checking articles.
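Concretely, a ClaimReview record is structured data embedded in a fact-checking article. The example below follows the field names of the schema.org ClaimReview type; the article URL, date, names and rating are invented for illustration.

```python
# A minimal ClaimReview record as JSON-LD, the markup search engines and
# apps read. Field names follow schema.org's ClaimReview type; the URL,
# date, names and rating values here are invented examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/corporate-tax-rate",
    "datePublished": "2018-01-16",
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "claimReviewed": "The United States has the highest corporate tax rate.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "A politician"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",
    },
}
markup = json.dumps(claim_review, indent=2)
```

Because every fact-checker tags articles with the same fields, an app can query one growing database instead of scraping each publisher's site.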

“Now, I don’t need to scrape 10 different sources and try to wrangle permission because there’s this database that will be growing increasingly,” says Dan Schultz, senior creative technologist at the Internet Archive.

Matching live claims against published fact-checks works because politicians repeat themselves. For example, many politicians and analysts have claimed that the United States has the highest corporate tax rate.

The Reporters’ Lab is developing several pop-up apps that will deliver fact-checking in real time. The apps will include:

  • FactStream, which will display relevant fact-checks on mobile devices during a live event. The first version, to be tested this month during the State of the Union address Jan. 30, will be a “manual” version that will rely on fact-checkers. When they hear a claim that they’ve checked before, the fact-checkers will compose a message containing the URL of the fact-check or a brief note about the claim. That message will appear in the FactStream app on a phone or tablet.
  • FactStream TV, which will use platforms such as Chromecast or Apple TV for similar pop-up apps on television. The initial versions will also be manual, enabling fact-checkers to trigger the notifications.

Another project, Truth Goggles, will be a plug-in for a web browser that will automatically scan a page for content that users should think about more carefully. Schultz, who developed a prototype of Truth Goggles as a grad student at the MIT Media Lab, will use the app to experiment with different ways to present accurate information and help determine which methods are most valuable for readers.

The second phase of the pop-up apps will take the human fact-checker out of the equation. For live events, the apps will convert speech with voice-to-text software and then match the resulting claims against the database of articles marked with ClaimReview.
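The matching step can be approximated with plain string similarity, as in the sketch below. `difflib` is a stand-in here, and the stored claims are invented examples; as the article goes on to note, reliable matching needs real natural language processing, not just character overlap.

```python
# Sketch: match a transcribed sentence against previously checked claims
# using string similarity. difflib is a crude stand-in for real NLP;
# the stored claims below are invented examples.
from difflib import SequenceMatcher

def best_match(transcript_sentence, checked_claims, cutoff=0.6):
    """Return the most similar previously checked claim, or None."""
    best, best_ratio = None, cutoff
    for claim in checked_claims:
        ratio = SequenceMatcher(None, transcript_sentence.lower(),
                                claim.lower()).ratio()
        if ratio >= best_ratio:
            best, best_ratio = claim, ratio
    return best

checked = [
    "The United States has the highest corporate tax rate.",
    "Crime has fallen for three straight years.",
]
match = best_match("The U.S. has the highest corporate tax rate", checked)
```

The cutoff guards against false positives: a sentence that resembles no stored claim returns nothing rather than a bad match, which matters when most live statements have never been checked.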

The future apps will also need natural language processing (NLP) abilities. This is perhaps the biggest challenge, because the apps will need NLP to handle the complexities of the English language.

“Human brains are very good at [NLP], and we’re pretty much the only ones,” says Chris Guess, the Reporters’ Lab’s chief technologist for Share the Facts and the Tech & Check Co-op. Programming a computer to understand negation or doublespeak, for instance, is extremely difficult.

Another challenge comes from the fact that there are few published fact-checks relative to all of the claims made in conversation or articles. “The likelihood of getting a match to the 10,000 or so stored fact-checks will be low,” says Bill Adair, director of the Reporters’ Lab.

Ideally, computers will eventually research and write the fact checks, too. “The ultimate goal would be that it could pull various pieces of information out, use that context awareness to do its own research into various data pools across the world, and create unique and new fact-checks,” Guess says.

The Reporters’ Lab is also developing tools that can help human fact-checkers. The first such tool uses ClaimBuster, an algorithm that can find claims fact-checkers might want to examine, to scan transcripts of newscasts and public events and identify checkable claims.

“These are really hard challenges,” Schultz says. “But there are ways to come up with creative ways around them.”
