My opening remarks from Global Fact 6, Cape Town, South Africa, on June 19, 2019.
It’s wonderful to be here and see so many familiar faces. It’s particularly cool to see our new team from the IFCN, not just Baybars and Cris, but also Daniela Flamini, one of our journalism students from Duke who graduated last month and is now working for the IFCN.
And it warms my heart to see my old friend Stephen Buckley here. When Stephen was dean of the faculty at Poynter, the two of us organized the first Global Fact meeting in London in 2014. That wasn’t easy. We had difficulty raising enough money. But Stephen was determined to make it happen, so he found some money from a few different accounts at Poynter. Global Fact – and our important journalistic movement – would not have happened if it weren’t for him.
I’m impressed by this turnout – more than 250 attendees this year! I confess that when I saw the headline on Daniela’s story last week that said this was “the largest fact-checking event in history”… I wanted a fact-check. But I did one, and as PolitiFact would say, I rate that statement True!
I want to start today with a quick reminder of the importance of holding people accountable for what they say — in this case…me.
You will recall that last year at Global Fact, I promised that I would get a tattoo. And after some discussion, I decided it would be a tattoo of my beloved Truth-O-Meter. But a year went by and a funny thing happened: I decided I didn’t want a tattoo.
Now, as fact-checkers, we all know the importance of holding people accountable for what they say. We did that at PolitiFact with the Obameter and other campaign promise meters. PolitiFact has a special meter for a broken promise that usually features the politician with a big frown. We have fun choosing that photo, which has the person looking really miserable.
So I’ve created one to rate myself on the tattoo promise: The Bill-O-Meter. Promise broken!
My message today to open Global Fact is also about accountability. It’s about the need to make sure we fact-check all players in our political discourse.
Julianna Rennie and I recently wrote a piece for Poynter that looked at a new trend in the United States we call “embedded fact-checking.” It’s the growing practice of reporters including fact-checks in their news articles, when they drop in a paragraph or two that exposes a falsehood. For example, they may write that someone “falsely claimed that vaccines cause autism.”
We were glad to find a growing trend of embedded fact-checking in news and analysis articles in the New York Times, the Washington Post, and the AP over the past four years. But we also found the subject was nearly always the same: Donald Trump. It was wonderful to see the trend, but it was lopsided.
Trump is a prime target for fact-checking because his volume of falsehoods is unprecedented in American history — and probably in world history, too. Journalists rightly should question everything he says. And you may have similar figures in your own countries who deserve similar scrutiny.
But we shouldn’t focus so much on Trump that we neglect other politicians and other parties. That’s true not just in the United States but everywhere. Indeed, when we put together the IFCN code of principles three years ago, we said that fact-checkers “do not concentrate their fact-checking on any one side.”
In the United States and around the world, we need to make sure that we check all the important players in the political discourse, whether it is for news stories or our fact-checking sites.
So my message for you today is a simple one: check everybody. Hold everyone accountable.
The number of fact-checking outlets around the world has grown to 188 in more than 60 countries amid global concerns about the spread of misinformation, according to the latest tally by the Duke Reporters’ Lab.
Since the last annual fact-checking census in February 2018, we've added 39 more outlets that actively assess claims from politicians and social media, a 26% increase. The new total is also more than four times the 44 fact-checkers we counted when we launched our global database and map in 2014.
Globally, the largest growth came in Asia, which went from 22 to 35 outlets in the past year. Nine of the 27 fact-checking outlets that launched since the start of 2018 were in Asia, including six in India. Latin American fact-checking also saw a growth spurt in that same period, with two new outlets in Costa Rica, and others in Mexico, Panama and Venezuela.
The actual worldwide total is likely much higher than our current tally. That's because more than a half-dozen of the fact-checkers we've added to the database since the start of 2018 began as election-related partnerships that involved the collaboration of multiple organizations. And some of those election partners are discussing ways to continue or reactivate that work – either together or on their own.
Over the past 12 months, five separate multimedia partnerships enlisted more than 60 different fact-checking organizations and other news companies to help debunk claims and verify information for voters in Mexico, Brazil, Sweden, Nigeria and the Philippines. And the Poynter Institute's International Fact Checking Network assembled a separate team of 19 media outlets from 13 countries to consolidate and share their reporting during the run-up to last month's elections for the European Parliament. Our database includes each of these partnerships, along with several others – but not each of the individual partners. And because they were intentionally short-run projects, three of these big partnerships appear among the 74 inactive projects we also document in our database.
Politics isn’t the only driver for fact-checkers. Many outlets in our database are concentrating efforts on viral hoaxes and other forms of online misinformation – often in coordination with the big digital platforms on which that misinformation spreads.
We also continue to see new topic-specific fact-checkers such as Metafact in Australia and Health Feedback in France — both of which launched in 2018 to focus on claims about health and medicine for a worldwide audience.
(Here’s how we decide which fact-checkers to include in the Reporters’ Lab database.)
Fact-Checkers by Continent Since Feb. 2018
Africa: 4 to 9
Asia: 22 to 35
Australia: 3 to 5
Europe: 52 to 61
North America: 53 to 60
South America: 15 to 18
TRACKING THE GROWTH
As we've noted, elections are not the only draw for aspiring fact-checkers: many outlets concentrate on viral hoaxes and other forms of online misinformation, often in coordination with the big digital platforms on which it spreads. And those platforms are also providing incentives.
In one such effort, the Reporters' Lab worked with Google and Schema.org to develop ClaimReview, an open-source tagging system for fact-checks. Google, Microsoft's Bing, Facebook and YouTube use this system to help identify and showcase fact-checkers' work in their news feeds and search results – a process that generates traffic and attention for the fact-checkers. It also provides data that is powering experiments in live, real-time fact-checks that can be delivered to users automatically. (Disclosure: Google and Facebook are among the funders of the Reporters' Lab.)
Another driver: Facebook. It has recruited independent fact-checking partners around the world to help identify misinformation on its platforms. The social network began that effort in late 2016 with help from the Poynter Institute's IFCN. (Poynter is a journalism training and research center in St. Petersburg, Florida, that also is home to the U.S. fact-checking site PolitiFact.)
Meanwhile, YouTube gave fact-checking a boost in India when it started putting fact-checks at the top of YouTube search results, which helped contribute to a surge of new outlets in that country. Now India has 11 entries in our database, six of which launched since our February 2018 census. And it’s likely there are others to add in the next few weeks.
KINDS OF FACT-CHECKERS
A bit more than half of fact-checkers are part of a media company (106 of 188, or 56%). That percentage has been dropping over the past few years, mostly because of the changing business landscape for media companies in the United States. In our 2018 census, 87% of the U.S. fact-checkers were connected to a media company (41 out of 47). Now it's 65% (39 out of 60). In other words, as the number of fact-checkers in the U.S. has grown, fewer of them have ties to those companies.
Among fact-checkers in the rest of the world, the media mix remains about half and half (67 out of 128, or 52% — very close to the 54% we saw in 2018).
The fact-checkers that are not part of a larger media organization include independent, standalone organizations, both for-profit and non-profit (the definitions of these legal and economic entities vary greatly from country to country). Some of these fact-checkers are subsidiary projects of bigger organizations that focus on civil society and political accountability. Others are affiliated with think tanks and academic institutions.
Among the recent additions is the journalism department at the University of the Philippines’ College of Mass Communication, which was the coordinator of Tsek.ph, a political fact-checking partnership mentioned earlier that also involves two other academic partners.
About 70% of the fact-checkers (131 of 188) have well-defined rating systems for categorizing the claims they investigate — similar to what we’ve seen in past years.
As usual, we found many of the rating systems to be entertaining. One of our new favorites comes from Spondeo Media in Mexico, which launched in December. It supplements a basic, four-point, true-to-false scale with a mascot – NETO, a cartoon lie-detector who smiles and jumps for joy with true claims but gets steamed with false ones. Another, India Today Fact Check, rates claims using a scale of one-to-three animated crows, along with a slogan in Hindi: "When you lie, the crow bites" (also the title of a popular movie: "Jhooth bole kauva kaate").
We decided to time this year's fact-checking census to correspond with the sixth annual GlobalFact Summit, which begins next week in Cape Town, South Africa. About 250 attendees from nearly 60 countries are expected at this year's gathering — which is yet another measure of fact-checking's continued growth: That's five times the number from the first GlobalFact in London in 2014.
Joel Luther, Share the Facts Research and Outreach Coordinator at the Duke Reporters’ Lab, and former student researcher Daniela Flamini (now an intern at the Poynter Institute’s International Fact Checking Network) contributed to this report.
FOOTNOTE: ANOTHER WAY TO COUNT FACT-CHECKERS?
A challenge each time the Duke Reporters' Lab conducts its annual fact-checking census is that our final tally depends so much on when we happen to discover these outlets. Our counting also depends on when fact-checkers come and go — especially short-term, election-focused projects that last several months. If a fact-checker was hard at work most of the year covering a campaign, but then closed up shop before we did our census, it will still be counted — but in our list of inactive projects.
That inactive list is an interesting trove of good ideas for other fact-checkers to mine. It also provides an entirely different way for us to tally fact-checkers: by counting all the projects that were active at some point during the year — not just the ones that make it to winter.
This approach might better showcase the year in fact-checking. It also would show that fact-checking was growing even faster than we thought.
Here's a chart that compares the number of fact-checkers that we know were active in certain years — even the ones that ultimately closed down — with the subsequent census number for that year….
There are reasons why the Reporters’ Lab would still need to keep counting fact-checkers the way we have since 2014. For one, we need current lists and counts of serious fact-checking projects for all kinds of reasons, including academic research and the experiments that we and others want to try out with real-world fact-checkers.
And yet it’s still great to see how fast fact-checking is growing — even more than we sometimes thought.
(The small print for anyone who's fact-checking me: The adjusted numbers shown here combine any fact-checker in our database that was active at some point during that given year. Most of our census reports were meant to count the previous year's activity. For example, our February 2018 census appears in this chart as our count of 2017 fact-checkers, even if some of those 2017 fact-checkers were only counted in last year's census as inactive by the time the census was published. The number shown for 2018 is the 16-month 2018-19 number we are releasing in this report. You also might note that some other numbers here are slightly off from data we've previously shared. The main reason is that this proposed form of counting depends on having the dates that each project began and ended. In a handful of cases, we do not.)
It’s now much easier for fact-checkers to use ClaimReview, a tagging tool that logs fact-checks published around the world into one database. The tool helps search engines — and readers — find non-partisan fact-checks published globally. It also organizes fact-check content into structured data that automated fact-checking will require.
Currently, only half of the roughly 160 fact-checking organizations that the Duke Reporters' Lab tracks globally use ClaimReview. In response, Google and the Duke Reporters' Lab have developed an easier method of labeling the articles to help both recruit more users and expand a vital fact-check data set.
ClaimReview was created in 2015 after a conversation between staff at Google and Glenn Kessler, the Washington Post fact-checker. Kessler wanted Google to highlight fact-checks in its search results. Bill Adair, director of the Duke Reporters’ Lab, was soon brought in to help.
Dan Brickley from Schema.org, Justin Kosslyn from Google and Adair developed a tagging system based on the schemas maintained by Schema.org, an organization that develops structured ways of organizing information. They created a universal system for fact-checkers to label their articles to include the claim checked, who said it and a ruling on its accuracy. "It's the infrastructure that provides the atomic unit of fact-checking to search engines," Adair said.
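In practice, that tagging takes the form of schema.org ClaimReview structured data embedded in a fact-check article. Here's a minimal sketch of what the JSON-LD looks like; the URLs, names and values are hypothetical, and real markup may include additional fields:

```python
import json

# Minimal sketch of schema.org ClaimReview markup (all values hypothetical).
# A fact-checker embeds this JSON-LD in the article page so search engines
# can identify the claim checked, who said it and the ruling.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-checks/some-check",  # hypothetical URL
    "claimReviewed": "Statement the politician made",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Politician"},
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",  # the ruling label shown in search results
    },
}

# This string would go inside a <script type="application/ld+json"> tag.
markup = json.dumps(claim_review, indent=2)
print(markup)
```

Because the claim, speaker and ruling are machine-readable fields rather than free text, search engines can extract and display them without parsing the article itself.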
Initially, ClaimReview produced a piece of code that fact-checkers copied and pasted into their online content management systems. Google and other search engines look for the code when crawling content. Next, Chris Guess of Adair's team developed a ClaimReview widget called Share the Facts, a content box summarizing fact-checks that PolitiFact, FactCheck.org and the Washington Post can publish online and share on social media.
The latest version of ClaimReview no longer requires users to copy and paste the code, which can behave inconsistently on different content management systems. Instead, fact-checkers fill out fields in a Google form, similar to those they previously used to produce the code.
While the concept of ClaimReview is simple, it opens the door to more innovation in fact-checking. It organizes data in ways that can be reused. By "structuring journalism, we can present content in more valuable ways to people," said Adair.
By labeling fact-checks, the creators effectively created a searchable database of fact-checks, numbering about 24,000 today. The main products under development at the Reporters’ Lab, from FactStream to Squash, rely on fact-check databases. Automated fact-checking especially requires a robust database to quickly match untrue claims to previously published fact-checks.
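One way to see why a structured database matters for automation: matching a newly spoken claim against previously published fact-checks becomes a search problem. The sketch below uses simple string similarity from Python's standard library; this is an illustration only, not how Squash or any Reporters' Lab system actually works, and real matchers use far more sophisticated NLP. All claims and rulings here are invented:

```python
from difflib import SequenceMatcher

# Toy database of previously published fact-checks (invented examples).
fact_checks = [
    {"claim": "The economy added 500,000 jobs last month", "ruling": "Half True"},
    {"claim": "Crime has doubled in the last two years", "ruling": "False"},
]

def best_match(spoken_text, database, threshold=0.6):
    """Return the stored fact-check most similar to the spoken claim,
    or None if nothing clears the similarity threshold."""
    scored = [
        (SequenceMatcher(None, spoken_text.lower(), fc["claim"].lower()).ratio(), fc)
        for fc in database
    ]
    score, fc = max(scored, key=lambda pair: pair[0])
    return fc if score >= threshold else None

# A paraphrased repeat of a checked claim still scores high enough to match.
match = best_match("crime has doubled over the last two years", fact_checks)
print(match["ruling"])  # → False
```

The threshold is the crucial knob: too low and unrelated statements trigger matches, too high and paraphrased repeats slip through.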
The database ClaimReview builds offers even more possibilities. Adair hopes to tweak the fields fact-checkers fill in to provide better summaries of the fact-checks and provide more information to readers. In addition, Adair envisions ClaimReview being used to tag types of misinformation, as well as authors and publishers of false content. It could also tag websites that have a history of publishing false or misleading articles.
The tagging is already benefiting some fact-check publishers. "ClaimReview helps to highlight and surface our fact-checks on Google, more than the best SEO skills or organic search would be able to achieve," said Laura Kapelari, a journalist with Africa Check. ClaimReview has increased traffic on Africa Check's website and helped the smaller Africa Check compete with larger media houses, she said. It also helps fact-checkers know which facts have already been investigated, which reduces redundant checks.
Joel Luther, the ClaimReview project manager in the Reporters’ Lab, expects this new ClaimReview format will save fact-checkers time and decrease errors when labeling fact-checks. However, there is still room to grow. Kapelari wishes there was a way for the tool to automatically grab key fields such as names in order to save time.
The Reporters’ Lab has a plan to promote ClaimReview globally. Adair is already busy on that front. Early this month, a group of international fact-checkers and technologists met in Durham for Tech & Check 2019, an annual conference where people on this quest share progress on automated fact-checking projects intended to fight misinformation. Adair, an organizer of Tech & Check, emphasized new developments with ClaimReview, as well as its promise for automating fact-checking.
Not much would be possible without this tool, he stressed. “It’s the secret sauce.”
When fact-checking technologists and journalists gather in Durham for the 2019 Tech & Check Conference this month, they will share new tools intended to optimize and automate fact-checking.
For Dan Schultz, a founder of the Bad Idea Factory software development collective, this will be a chance to debut a "mannequin" version of the Talking Point Tracker. Created in collaboration with the Duke Tech & Check Cooperative, the tracker is intended to "capture the zeitgeist" of television news by identifying trending topics.
Duke journalism professor Bill Adair, who runs Tech & Check, launched the project by asking Schultz how fact-checkers could capture hot topics on TV news as quickly as possible. That is a simple but powerful idea. TV news is a place of vast discourse, where millions of viewers watch traditional, nonpartisan newscasts and partisan broadcasters such as Sean Hannity and Rachel Maddow. Listening in would give insight into what Schultz calls a “driver or predictor of collective consciousness.”
But executing even simple ideas can be difficult. In this case, TV news programs broadcast dense flows of media: audio, video, text and images that are not simple to track. Luckily, network and cable news outlets produce closed-caption subtitles for news shows. Talking Point Tracker scans those subtitles to identify keywords used most frequently within blocks of time. It also puts the keywords in context by showing sentences and longer passages where the keywords were found. To deepen the context, the tracker shows related keywords that often appear with the trending words.
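The core counting step can be sketched in a few lines. This is a simplified illustration, not the tracker's actual code: the real system extracts named entities with NLP rather than counting raw words, and the caption lines below are invented:

```python
from collections import Counter
import re

# Hypothetical closed-caption lines, each tagged with a timestamp in seconds.
# Real captions arrive in mixed or all-caps text, so everything is lowercased.
captions = [
    (0,  "THE SENATE VOTED ON THE BUDGET TODAY"),
    (30, "senators debated the budget for hours"),
    (65, "the weather tomorrow looks clear"),
]

STOPWORDS = {"the", "on", "for", "a", "of", "and", "to"}

def trending_keywords(captions, start, end, top_n=3):
    """Count the most frequent non-stopword terms within one block of time."""
    words = []
    for ts, line in captions:
        if start <= ts < end:
            words += [w for w in re.findall(r"[a-z]+", line.lower())
                      if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

# Only the first two caption lines fall in the 0-60 second block,
# so "budget" surfaces as the top term.
print(trending_keywords(captions, 0, 60))
```

Swapping the word counter for an entity extractor (so "Senate" and "senators" resolve to related entities rather than separate strings) is exactly the harder problem the tracker's NLP pipeline tackles.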
The eventual goal is to group keywords into clusters that better capture emerging conversations. “Our hope is that it will be a useful tool for journalists who want to write in the context of what’s being discussed,“ said Schultz, who is collaborating with Justin Reese, a front-end developer with the Bad Idea Factory, on the project.
More technically, Talking Point Tracker runs closed-caption transcripts through a natural language processing pipeline that cleans the text as well as it can. An application programming interface, or API, uses a separate language-processing algorithm to find the most common keywords. These are "named entities" — usually proper nouns that can be sorted into categories such as people, organizations and locations.
Talking Point Tracker’s prototype, to be unveiled at Tech & Check, is dense with information. But the design Reese created for viewing on a computer screen makes it readable. There’s enough white space to be easy on the eyes and a color scheme of red, blue, black and yellow that organizes text.
The most frequent keywords over a specified time period are listed in a column on the left. Next to that, a line graph highlights their frequency. Sentences containing the keywords are listed on the right; clicking there points you to longer passages of the transcripts. On the bottom are related keywords that often appear in the same sentences as a given word.
Moving from a mannequin stage to a living stage for this project will be challenging, Schultz said. As much as natural language processing has evolved over the past decade, algorithms still have trouble understanding aspects of human language. One free, open-source system the Tracker relies on is an NLP library called spaCy. But programs like spaCy don't always recognize that two different phrasings refer to the same thing — say, the "Virginia legislature" and the "Virginia General Assembly."
Another challenge is coping with the quality of news show transcripts, Schultz said. The transcripts can contain many typos, in addition to sometimes being either all caps or all lowercase, which the API can have trouble reading.
And the API doesn’t always know where sentences break. Too often, the system will return sentences that contain just “Mr.” because it concludes that a period signifies the end of the sentence. To get around this, Schultz is using another NLP technology to clean the transcripts he obtains.
To prepare for the Tech & Check Conference, Schultz is building better searching tools and further cleaning up the Tracker’s design. “It’s always good to have your feet close to the fire,” Schultz said.
The biggest question he hopes to get answered before leaving is whether Talking Point Tracker could be useful for journalists, he said.
“There’s a lot of things we can gain from feedback. If we have the capacity and interest from whoever, we will continue to iterate and build on top of that,” Schultz said.
We tested two fact-checking products during the State of the Union address. One failed, the other showed great promise.
The failure was FactStream, our iPhone app. It worked fine for the first 10 minutes of the speech. Users received two timely “quick takes” from Washington Post Fact Checker Glenn Kessler, but then the app crashed under an unusual surge of heavy traffic that we’re still investigating. We never recovered.
The other product is a previously secret project we’ve code-named Squash. It’s our first attempt at fully automated fact-checking. It converts speech to text and then searches our database of fact-checks from the Post, FactCheck.org and PolitiFact. When it finds a match, a summary of the fact-check pops onto the screen.
We’ve been testing Squash for the last few weeks with mixed results. Sometimes it finds exactly the right fact-checks. Other times the results are hilariously bad. But that’s what progress looks like.
We went into last night’s speech with very modest expectations. I said before the speech that I’d be happy if it simply triggered fact-checks to pop up, even if they were poor matches.
But Squash actually performed pretty well. It had 20 pop-ups and six of them were in the ballpark.
Overall, the results were stunning. It gave us a glimpse of how good automated fact-checking can be.
We’ll have more to share once we’ve reviewed the results, so stay tuned.
As for FactStream, it now has lots of timely fact-checks from the State of the Union on the main home screen, which continues to function well. We will fix any problems we identify with the live event feature and plan to be back in action for real-time coverage for campaign events later this year.
UPDATE, Feb. 5, 11 p.m.: Our FactStream app failed during the State of the Union address. We apologize for the problems. We are still sorting out what happened, but it appears we got hit with an unexpected surge of traffic that overwhelmed our servers and our architecture.
As we noted at the bottom of this post, this was a test – only our second of the app. We’ll fix the problems and be better next time.
The Reporters’ Lab is teaming up with the Washington Post, PolitiFact and FactCheck.org to offer live fact-checking of the State of the Union address on Tuesday night on our new FactStream app.
Journalists from the Post, PolitiFact, and FactCheck.org will provide real-time updates throughout the speech in two forms:
Ratings – Links to previously published fact-checks with ratings when the president repeats a claim that has been checked before.
Quick takes – Instant updates about a statement’s accuracy. They will be labeled red, yellow and green to indicate their truthfulness.
Tuesday’s speech will be the second test of FactStream. The first test, conducted during last year’s State of the Union address, provided users with 32 updates. We got valuable feedback and have made several improvements to the app.
FactStream is part of the Duke Tech & Check Cooperative, a project to automate fact-checking that is funded by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation. Additional support has been provided by Google.
FactStream is available for iPhone and iPad (sorry, no Android version yet!) and is a free download from the App Store.
The app has two streams. One, shown by the home symbol in the lower left of the screen, provides a constant stream of the latest fact-checks published every day throughout the year. The live event feature for the State of the Union address is marked by an icon of a calendar with a check mark.
Because this is a test, users could encounter a few glitches. We’d love to hear about any bugs you encounter and get your feedback at email@example.com.
Same ad, different name, over and over again. Cookie-cutter ads, generic political ads used to promote or criticize multiple campaigns and candidates, were widely deployed during the 2018 North Carolina midterm elections.
As student journalists working on the North Carolina Fact-Checking Project, we spent months sifting through thousands of campaign ads looking for political claims to fact-check. It didn’t take long to notice that many were nearly identical.
The copy-cat ads we encountered typically targeted groups of candidates, such as state House candidates from one party, and added their names to the same attack ad. That allowed the opposing political party and their boosters to widely circulate messages about topics important to their base.
One reason for this is state political campaigns have become increasingly centralized in recent years, often run by political caucuses rather than individual candidates, said Gary Pearce, co-publisher of Talking About Politics, a blog about North Carolina and national politics.
Congressional campaign committees in Washington, D.C. as well as North Carolina legislative caucus committees conduct voter research and use the data to pinpoint issues that matter most to target voters during election season, he said.
“Consistency amplifies the message,” Pearce said. “It makes sense for the caucuses to take on a specific set of issues that are important in this election and will rile the voters up.”
The North Carolina Democratic Party employed this technique often this year, producing ads that claimed Republicans would eliminate insurance coverage for pre-existing medical conditions, ignore polluted drinking water, even tolerate corruption within the state Republican Party.
Political Action Committees, such as the conservative North Carolina Values Coalition, employed a different strategy, also based on focused messaging. They published a series of same-design ads endorsing 13 North Carolina House and Senate candidates. They cited the same reasoning every time: the candidates supported “pro-life, pro-religious liberty, and pro-family public policy.”
“We aim to use a language that appeals to our coalition members, and creates brand familiarity,” said Jim Quick, the group’s media and communications director. “We want to show that we are laser focused on certain issues through repetition.”
Angie Holan, editor of the national fact-checking website PolitiFact, said such ads remain an inexpensive way to disseminate information. Despite this age of targeted marketing on the web and elsewhere, the persistence of this sort of marketing could be linked to U.S. voters’ increasing partisanship, she said.
“We’re not seeing a lot of crossover or, frankly, a lot of complexity or nuance in most of the public policy positions politicians are taking. So that makes it very easy to do cookie cutter ads,” Holan said.
For Democrats to win rural districts and Republicans to win urban districts, candidates need to switch their focus to local issues that people from all parties care about, Campbell argued. He pointed to State Rep. Ken Goodman, a Democrat who this fall won re-election in District 66, west of Fayetteville.
Goodman’s ads focused on increasing the amount of lottery money that goes towards public education, not an issue on the national or statewide Democratic agenda, Campbell noted. The moderate Democrat won re-election in a rural district, which required him to gain wide support.
Which way will political campaigns lean in the presidential election year 2020? Unknown. But student journalists in the Duke Reporters’ Lab will be watching.
Duke Reporters’ Lab students expanded vital political journalism during a historic midterm campaign season this fall with the North Carolina Fact-Checking Project.
Five student journalists reviewed thousands of statements that hundreds of North Carolina candidates vying for state and federal offices made online and during public appearances. They collected newsy and checkable claims from what amounted to a firehose of political claims presented as fact.
Duke computer science undergraduates with the Duke Tech & Check Cooperative applied custom-made bots and the ClaimBuster algorithm to scrape and sort checkable political claims from hundreds of political Twitter feeds.
Editors and reporters then selected claims the students had logged for most of the project’s 30-plus fact-checks and six summary articles that the News and Observer and PolitiFact North Carolina published between August and November.
Duke senior Bill McCarthy was part of the four-reporter team on the project, which the North Carolina Local News Lab Fund supported to expand local fact-checking during the 2018 midterms and beyond in a large, politically divided and politically active state.
“Publishing content in any which way is exciting when you know it has some value to voters, to democracy,” said McCarthy, who interned at PolitiFact in Washington, D.C. last summer. “It was especially exciting to get so many fact-checks published in so little time.”
“NC GOP falsely ties dozens of Democrats to single-payer health care plan,” read one project fact-check headline. “Democrat falsely links newly-appointed Republican to health care bill,” noted another. The fact-check “Ad misleads about NC governors opposing constitutional amendments” set the record straight about some Democratic-leaning claims about six proposed amendments to the state constitution.
Work in the lab was painstaking. Five sophomores filled weekday shifts to scour hundreds of campaign websites, social media feeds, Facebook and Google political ads, televised debates, campaign mailers and whatever else they could put their eyes on. Often they recorded one politician’s attacks on an opponent that might, or might not, be true.
Students scanned political chatter from all over the state, tracking competitive state and congressional races most closely. The resulting journalism was news that people could use as they were assessing candidates for the General Assembly and U.S. Congress as well as six proposed amendments to the state constitution.
The Reporters’ Lab launched a mini news service to share each fact-checking article with hundreds of newsrooms across the state for free.
The Charlotte Observer, a McClatchy newspaper like the N&O, published several checks. So did smaller publications such as Asheville's Citizen-Times and the Greensboro News & Record. Newsweek cited a fact-check report by the N&O's Rashaan Ayesh and Andy Specht about a fake photo of Justice Kavanaugh's accuser, Christine Blasey Ford, shared by the chairman of the Cabarrus County GOP, which WRAL referenced in a roundup.
Project fact-checks influenced political discourse directly, too. Candidates referred to them in campaign messaging on social media and even in campaign ads. Democrat Dan McCready, who lost a close race against Republican Mark Harris in District 9, cited project fact-checks in two campaign ads promoted on Facebook and in multiple posts on his campaign's Facebook page, for instance.
While N&O reporter Andy Specht was reporting on a deceptive ad from the Stop Deceptive Amendments political committee, the group announced plans to change it.
The fact-checking project will restart in January, when North Carolina’s reconfigured General Assembly opens its first 2019 session.
Five Duke undergraduates monitored thousands of political claims this semester during a heated midterm campaign season for the N.C. Fact-Checking Project.
That work helped expand nonpartisan political coverage in a politically divided state with lots of contested races for state and federal seats this fall. The effort resumes in January when the project turns its attention to a newly configured North Carolina General Assembly.
Three student journalists who tackled this work with fellow sophomores Alex Johnson and Sydney McKinney reflect on what they’ve learned so far.
Lizzie Bond: After spending the summer working in two congressional offices on Capitol Hill, I began my work in the Reporters' Lab and on the N.C. Fact-Checking Project with first-hand knowledge of how carefully elected officials and their staff craft statements in press releases and on social media. This practice derives from a fear of distorting the meaning or connotation of their words. And in this social media age, where so many outlets are available for sharing and consuming information, that fear runs deep.
Yet it took discovering one candidate for my perspective on the value of our work with the N.C. Fact-Checking Project to shift. That candidate, Peter Boykin, proved to be a far more complicated figure than any other politician whose social media we monitored. The Republican running to represent Greensboro's District 58 in the General Assembly, Boykin is the founder of "Gays for Trump," a former online pornography actor, a pro-Trump radio show host and an already controversial far-right online figure with tens of thousands of followers. Poring through nearly a dozen of Boykin's social media accounts, I came across everything from innocuous self-recorded music video covers to contentious content, like hostile characterizations of liberals and advocacy of conspiracy theories, including one about the Las Vegas mass shooting that he pushed with little to no corroborating evidence.
When contrasting Boykin's posts on both his personal and campaign social media accounts with the more cautious and mild statements from other North Carolina candidates, I realized that catching untruthful claims has a more ambitious goal than simply detecting and reporting falsehoods. By reminding politicians that they should be accountable to the facts in the first place, fact-checking strives to improve their commitment to truth-telling. The push away from truth and decency in our politics, and toward sharp antagonism and even alternate realities, becomes normalized when Republican leaders treat candidates like Boykin as simply another GOP candidate. The N.C. Fact-Checking Project is helping to revive truth and decency in North Carolina's politics and to challenge the conspiracy theories and pants-on-fire campaign claims that threaten the self-regulating, healthy political society we seek.
Ryan Williams: I came into the Reporters' Lab with relatively little journalism experience. I spent the past summer working on social media outreach and strategy at a nonprofit, where I drafted tweets and wrote the occasional blog post. But I had never tuned into the immense brevity of political messaging during an election season. The N.C. Fact-Checking Project showed me the importance of people who not only find the facts but also report them in a nonpartisan, objective manner that is accessible to the average person.
Following the 2016 election, some people blamed journalists and pollsters for creating false expectations about who would win the presidency. I was one of those critics. In the two and a half months I spent fact-checking North Carolina's midterm races, I learned how hard fact-checkers and reporters work. My fellow fact-checkers and I compiled a litany of checkable claims made by politicians this midterm cycle. Those claims, along with claims found by the automated claim-finding algorithm ClaimBuster, were raw material for many fact-checks of some of North Carolina's hottest races. Those checks were made available to voters ahead of polling.
Now that Election Day has come and gone, I am more than grateful for this experience in fact-finding and truth-reporting. Not only was I able to hone my research skills, but I also gained a deeper understanding of the intricacies of political journalism. I can't wait to see what claims come out of the next two years, leading up to what could be the presidential race of my lifetime.
Jake Sheridan: I'm a Carolina boy who has grown up on the state's politics. I've worked on campaigns, attended the 2012 Democratic National Convention in my hometown of Charlotte and am the son of a longtime news reporter. I thought I knew North Carolina politics before working in the Reporters' Lab. I was wrong.
While trying to wrap my head around the 300-plus N.C. races, I came to better understand the politics of this state. What matters in the foothills of the Piedmont, I found out, is different from what matters on the Outer Banks or in Asheville. I discovered that campaigns publicly release b-roll so that PACs can create ads for them, and I saw just how brutal attack ads can be. I got familiar with flooding and hog farms, strange politicians and bold campaign claims.
There was no shortage of checkable claims. That was good for me. But it’s bad for us. I trust politicians less now. The ease with which some N.C. politicians make up facts troubles me. Throughout this campaign season in North Carolina, many politicians lied, misled and told half truths. If we want democracy to work — if we want people to vote based on what is real so that they can pursue what is best for themselves and our country — we must give them truth. Fact-checking is essential to creating that truth. It has the potential to place an expectation of explanation upon politicians making claims. That’s critical for America if we want to live in a country in which our government represents our true best interests and not our best interests in an alternate reality.