Full Fact: Using GenAI to track government promises

Project: Full Fact’s Government Tracker

Newsroom size: 21–50

Solution: A generative AI–powered chat interface that automates the tracking and analysis of government promises.


Full Fact, the UK's leading independent fact-checking charity, faced a daunting challenge after the 2024 general election: how to sustainably track approximately 300 pledges made by the Labour government throughout an entire parliamentary term.

This fact-checking organisation, known for its rigorous approach to verifying claims in public debate, recognised that traditional manual tracking methods would prove unsustainable over the five-year parliamentary cycle.

The problem: Sustaining momentum beyond the election cycle

When Full Fact began planning their Government Tracker, they spoke to other organisations who had attempted similar projects in the UK. The feedback was consistent: tracking government pledges proved extremely resource-intensive.

"Speaking to other organisations who'd tried this before, they told us it simply took up too much time and resources to keep going," explains Thom Callan-Riley, Full Fact's Delivery Manager. "You'd see these snippets of tracking that would start strong but then drop off. Yet the most useful time for this information is right before the next election."

The scale of the challenge was immense. Full Fact's editorial team identified around 300 trackable pledges in the Labour manifesto. Writing up what each pledge means, providing context, and continuously monitoring progress would require enormous human resources.

Louisa Wania, Full Fact’s Fundraising Manager, frames the broader mission: "Our work aims to increase public trust in institutions and politics. We want to provide a trustworthy source people can turn to, allowing them to see in real time how promises turn into action – or don't."

Building the solution: The genesis

Full Fact wasn't starting from scratch. The organisation had been developing AI tools for fact-checking for nearly a decade, primarily to help fact-checkers monitor large volumes of media content and identify claims worth checking.

"We already use AI to ingest hundreds of thousands of sentences daily across newspapers, TV, radio, podcasts, and social media," says David Corney, Senior Data Scientist on Full Fact's AI team. "But we don't use AI to do the actual fact-checking – it’s not very good at that. Instead, it helps our experts work more efficiently."
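The triage step Corney describes — scanning huge volumes of sentences and surfacing only the ones worth a fact-checker's time — can be illustrated with a minimal sketch. This is not Full Fact's actual system; the keyword heuristics below are illustrative stand-ins for their claim-detection models.

```python
# Hypothetical sketch of claim triage: flag sentences that contain
# quantities or trend language, and pass only those to human reviewers.
import re

CLAIM_PATTERNS = [
    re.compile(r"\b\d+(\.\d+)?\s*(%|per cent|million|billion)\b", re.I),   # statistics
    re.compile(r"\b(rose|fell|increased|decreased|doubled|halved)\b", re.I),  # trends
]

def looks_checkable(sentence: str) -> bool:
    """Heuristic filter: does the sentence contain a checkable quantity or trend?"""
    return any(p.search(sentence) for p in CLAIM_PATTERNS)

sentences = [
    "Unemployment fell by 2% last quarter.",
    "The minister smiled at reporters.",
    "The government will spend 3 billion pounds on housing.",
]
flagged = [s for s in sentences if looks_checkable(s)]
# Only the first and third sentences are flagged for review.
```

In the real pipeline the filter is a trained model rather than regular expressions, but the shape is the same: machines narrow the stream, humans do the checking.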

Their existing AI capabilities, combined with their partnerships – particularly with a research team at Cambridge University that is world-renowned in AI for fact-checking – pointed towards a new approach.

The new workflow

Full Fact and Cambridge University developed a two-pronged AI solution. First, they use Google's Gemini to generate comprehensive questions that help fact-checkers explain each pledge in context. "The AI is quite good at capturing how most people would interpret certain terms and phrases," Corney explains. "It reminds our experts that while they understand the details, most people might interpret terms like ‘inflation’ differently."
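The first prong can be sketched as a prompt that asks a model to surface the questions an ordinary reader would have about a pledge. The prompt wording and the `generate` stub below are assumptions for illustration, not Full Fact's code; a real implementation would call a model such as Gemini in place of the stub.

```python
# Hypothetical sketch of pledge-context question generation.
def build_question_prompt(pledge: str) -> str:
    """Assemble a prompt asking the model for reader-facing questions."""
    return (
        "You are helping fact-checkers explain a government pledge.\n"
        f"Pledge: {pledge}\n"
        "List the questions an ordinary reader would need answered to judge "
        "progress, flagging ambiguous terms (e.g. what counts as 'inflation')."
    )

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned text here so the
    # sketch runs offline.
    return "- Which measure of inflation is meant (CPI or RPI)?\n- By what date?"

pledge = "Bring inflation back to target"
questions = generate(build_question_prompt(pledge))
```

The point of the step is the framing: the model enumerates lay interpretations, and the expert decides which ones the published explanation must address.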

Second, they created an automated monitoring system using ChatGPT to track pledge updates, searching trusted sources and building timelines of officials’ statements about specific pledges. The search process is iterative: the initial results are used to generate a series of more specific follow-up queries, surfacing sources and details that a single search would miss. This can uncover, for example, a ministerial announcement that affects the delivery of a pledge without mentioning the pledge directly.
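The iterative search loop can be sketched as follows. The `search` index and the `follow_up_queries` heuristic are stand-ins: in the real system an LLM proposes the narrower follow-up queries from the initial hits.

```python
# Hypothetical sketch of iterative query expansion over trusted sources.
def search(query: str, index: list[str]) -> list[str]:
    """Toy full-text search: return documents containing the query string."""
    return [doc for doc in index if query.lower() in doc.lower()]

def follow_up_queries(results: set[str]) -> list[str]:
    """Naive stand-in for LLM query generation: reuse capitalised terms
    from the initial hits as narrower follow-up queries."""
    terms = set()
    for doc in results:
        terms.update(w for w in doc.split() if w.istitle() and len(w) > 4)
    return sorted(terms)

index = [
    "Minister announces teacher recruitment funding",
    "Treasury confirms schools budget for recruitment drive",
]
found = set(search("recruitment", index))
for q in follow_up_queries(found):
    found.update(search(q, index))
```

The second pass is what catches the indirect stories: a document matching a follow-up term like a minister's name can land in the timeline even if it never uses the pledge's own wording.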

"The AI analyses articles and determines which are most relevant to each pledge," Corney notes. "It then summarises findings as bullet points that our fact-checkers can quickly review."
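The final step — scoring articles for relevance and condensing them into reviewable bullets — can be sketched like this. Keyword overlap stands in for the LLM relevance judgement Corney describes; the threshold and helper names are assumptions.

```python
# Hypothetical sketch of relevance filtering and bullet-point output.
def relevance(article: str, pledge: str) -> float:
    """Fraction of pledge words that also appear in the article (toy metric)."""
    a, p = set(article.lower().split()), set(pledge.lower().split())
    return len(a & p) / len(p)

def summarise(articles: list[str], pledge: str, threshold: float = 0.3) -> str:
    """Keep articles above the relevance threshold, one bullet each."""
    relevant = [a for a in articles if relevance(a, pledge) >= threshold]
    return "\n".join(f"- {a}" for a in relevant)

pledge = "recruit 6,500 new teachers"
articles = [
    "Government confirms funding to recruit new teachers this year",
    "Rail fares to rise in January",
]
bullets = summarise(articles, pledge)
```

The output is deliberately terse: a fact-checker skims the bullets, follows the source links, and decides what reaches the published tracker.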

Crucially, human oversight remains central. Fact-checkers review all AI outputs before publication, maintaining Full Fact's reputation for accuracy.

Progress and challenges

By July 2024, Full Fact and Cambridge University had improved their model's accuracy from 50% to 80%; however, the remaining 20% gap is significant.

"80% is incredible, but it's not 100%," Wania emphasises. "Unlike other organisations that might accept some level of AI hallucination, that's simply not a risk we're willing to take."

The team also discovered that updating existing pledge pages proved easier than creating new ones – a finding that shifted their development priorities. They adapted by working on multiple project phases simultaneously rather than sequentially.

The opportunities: Creating a model for trustworthy AI in journalism

Full Fact's approach exemplifies responsible AI deployment in journalism. They have clear principles: AI assists but never replaces human judgment, all outputs include source links for verification, and transparency about AI use is paramount.

"We want to teach healthy scepticism about public information," Callan-Riley explains. "Users shouldn't just trust our output – they should click through to see the actual sources and evidence."

This commitment extends to advancing AI literacy more broadly. Full Fact sees their tools as opportunities to demonstrate AI's genuine capabilities while dispelling hype about what it cannot do.

Lessons for newsrooms

Full Fact's experience offers valuable lessons for newsrooms considering AI adoption:

  • Start with existing workflows: Rather than creating new processes, Full Fact enhanced their established fact-checking methods with AI support.

  • Maintain rigorous standards: "We could have built a chatbot quickly if we weren't fussed about accuracy," Wania admits. "But we can't put anything on our website unless we're 100% sure users are accessing verifiable information."

  • Iterate based on feedback: The team discovered their AI initially captured irrelevant local news stories. "The AI would find a county council hiring 17 teachers and think it was relevant to national education pledges," Corney recalls. "Our fact-checkers immediately flagged this wasn't what we needed."

  • Bridge technical and editorial teams effectively: Callan-Riley's role as project manager proved crucial. "I ask clarifying questions that help open up understanding between technical and non-technical teams," he says. "It's about creating space where everyone can contribute meaningfully."

  • Partner strategically: Working with Cambridge University provided additional computational resources and academic expertise.

The Full Fact team emphasises focusing on real problems rather than chasing AI trends. "Assess the challenges your organisation already faces," Wania advises. "AI is great, but it's not magic. It requires study, dedication, and patience."

Explore previous grantees' journeys

Find our 2024 Innovation Challenge grantees, their journeys and their outcomes here. This grantmaking programme enabled 35 news organisations around the world to experiment with and implement AI solutions that enhance journalistic systems and processes.


The JournalismAI Innovation Challenge is organised by the JournalismAI team at Polis – the journalism think-tank at the London School of Economics and Political Science – and supported by the Google News Initiative.