Newtral: Using AI to tackle disinformation on social media

Project: FactFlow AI

Newsroom size: 51 - 100

Solution: An AI-powered tool that detects misinformation on Telegram, providing fact-checkers with real-time insights and customised alerts to address emerging disinformation risks.


Newtral is a journalism, fact-checking, and data verification organisation based in Spain. They have experimented with AI for many years, specifically to detect disinformation in political discourse. However, they soon noticed that the channels used for political disinformation were evolving: it was no longer coming from politicians’ interviews and other political spaces alone, but had migrated to social media. They found that a lot of disinformation went viral on social media before it was debunked, setting off a chain reaction of misinformation among the public.

The problem: Debunking disinformation on social media

One of the key issues they faced while verifying and fact-checking social media data was the sheer volume of disinformation on semi-private platforms like Telegram. The magnitude was so high that manual fact-checking would have been an endless task for their organisation.

“For example, here in Spain we had a natural catastrophe in Valencia in October 2024. There was a lot of disinformation around this specific event especially on Telegram. Our fact checkers were able to tackle some of the disinformation by following some channels. However, the [disinformation] wave was so big that many people started believing some information that was not real. So we wanted to help avoid such situations before it grew even greater,” said Sara Estevez, NLP and Prompt Engineer at Newtral.

To help their team in such situations, Newtral’s engineers decided to tap into AI technologies and create a tool they have named FactFlow AI.

Building the solution: How FactFlow AI works

FactFlow AI is designed for use by fact-checking organisations, including Newtral and others, to accelerate disinformation detection on Telegram. The FactFlow AI dashboard allows organisations to access and monitor potential disinformation, verify content, and see what has already been verified, with relevant narratives shown alongside. It also allows fact-checkers to customise which channels they monitor. Estevez explains that the point of making the tool customisable is to keep it “modular” and give fact-checkers “an environment they can actually use on a daily basis.”

FactFlow AI is currently in its internal testing phase at Newtral and will be ready for other fact-checkers in the coming months. During this internal testing and user feedback phase, the team received requests for several features.

“This was feedback given by the team, because initially we just showed them the messages from Telegram which were potential disinformation. While this was very useful to them, they also wanted to know if some of these messages were already fact-checked. The fact-checkers also annotated data for the AI models, so feedback from fact-checkers was always a part of the process. The fine-tuning we did on the different models was really scoped under fact-checkers’ criteria,” added Estevez.

The opportunities: Iterating on the technology

FactFlow applies AI for three main purposes: 1) automated identification of suspicious channels; 2) detection of potential disinformation messages; and 3) grouping of related content.
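
As a rough sketch, the three stages might chain together as below, with simple placeholder heuristics standing in for FactFlow’s trained models (every function name and keyword cue here is invented for illustration; this is not Newtral’s actual code):

```python
# Illustrative three-stage pipeline in the spirit of FactFlow AI.
# The keyword heuristics are stand-ins for the trained models.
from collections import defaultdict

def identify_suspicious_channels(channels: dict[str, list[str]]) -> dict[str, list[str]]:
    """Stage 1: keep only channels whose messages look suspicious.
    A real system would score channels with a trained classifier."""
    return {name: msgs for name, msgs in channels.items()
            if any("BREAKING" in m.upper() for m in msgs)}  # placeholder cue

def detect_disinformation(messages: list[str]) -> list[str]:
    """Stage 2: flag individual messages as potential disinformation.
    A real system would call the fine-tuned LLM here."""
    return [m for m in messages if "they don't want you to know" in m.lower()]

def group_by_narrative(flagged: list[str]) -> dict[str, list[str]]:
    """Stage 3: cluster related messages into narratives.
    A real system would cluster on sentence embeddings instead."""
    groups: dict[str, list[str]] = defaultdict(list)
    for m in flagged:
        groups[m.split()[0].lower()].append(m)  # crude grouping key
    return dict(groups)

channels = {"example_channel": ["BREAKING: they don't want you to know the truth"]}
for name, msgs in identify_suspicious_channels(channels).items():
    print(name, group_by_narrative(detect_disinformation(msgs)))
```

To power these stages, the team decided to fine-tune existing open-source large language models rather than train one from scratch.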

“The reason is that we wanted our models to have reasoning capabilities and we wanted them to be generative, and training that type of model from the beginning was not within our scope. We wanted to just fine-tune an already existing one in order to make it specific to our task,” said Estevez. 

They started with a Microsoft model, Phi-3.5-mini-instruct.

“It was working fine but in the last months a new generation of open-source Qwen models appeared and they appear to be the best on performance over the different dashboards,” explained Estevez.

This led them to adopt the Qwen model, since it gave them better results.

“We are not using any closed source model like OpenAI here,” she added.
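
As a loose illustration, a parameter-efficient fine-tune of such an open-source model could look like the sketch below, assuming a LoRA setup with the Hugging Face transformers, peft, and datasets libraries. The model variant, labels, and hyperparameters are assumptions for the example, not Newtral’s actual configuration:

```python
# Illustrative LoRA fine-tune of an open-source instruct model for
# disinformation detection. Model variant, training data, and
# hyperparameters are all assumed for this sketch.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "Qwen/Qwen2.5-7B-Instruct"  # assumed variant; the article only says "Qwen"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

# Attach small trainable LoRA adapters so only a fraction of the
# weights are updated, keeping the fine-tune cheap.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"],
))

# Hypothetical fact-checker-annotated examples (message plus label).
examples = [
    {"text": "Message: <telegram message>\nLabel: potential_disinformation"},
    {"text": "Message: <telegram message>\nLabel: not_disinformation"},
]
dataset = Dataset.from_list(examples).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="factflow-lora",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```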

The tech stack also relies on multiple structured databases, each holding data at a different stage of the pipeline.

“We have the Telegram data entered on one big database then we are cleaning that data and putting it in another one [database]. We are then generating embeddings of that data so that we can do the claim-matching. We have different databases with the different data that we need,” she explained. 
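
A minimal sketch of what that embedding and claim-matching step could look like, assuming the sentence-transformers library and cosine similarity; the encoder model, example claims, and threshold are assumptions for the sketch, not Newtral’s actual stack:

```python
# Illustrative claim-matching: embed an incoming Telegram message and
# compare it against embeddings of already fact-checked claims.
# Encoder choice and threshold are assumed for this sketch.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical claims the fact-checkers have already verified.
fact_checked_claims = [
    "Claim A that has already been debunked.",
    "Claim B that has already been verified as true.",
]
claim_embeddings = encoder.encode(fact_checked_claims, convert_to_tensor=True)

def match_claims(message: str, threshold: float = 0.75):
    """Return fact-checked claims semantically close to the message."""
    message_embedding = encoder.encode(message, convert_to_tensor=True)
    scores = util.cos_sim(message_embedding, claim_embeddings)[0]
    return [(fact_checked_claims[i], float(score))
            for i, score in enumerate(scores) if score >= threshold]

# A new message that matches is surfaced as "already fact-checked".
print(match_claims("A message repeating claim A almost word for word"))
```

In a setup like this, the dashboard could surface any match so that fact-checkers immediately see whether a viral message has already been debunked.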


The team and the challenges they faced

The journey was not without its challenges, including ensuring that the AI models performed “sufficiently well” without bias, annotating as much data as possible so the models would perform correctly, and defining exactly which Telegram channels to acquire data from.

“Selecting these channels at the beginning was a bit difficult and the hard work was done by fact checkers in selecting which channels have potential disinformation, that helped us,” she added.

The team working on FactFlow AI comprised three parts. The first was an AI team, which developed the AI models, fine-tuned them, and selected data for annotation. The second consisted of fact-checkers, who handled data annotation, testing, and feedback.

“They were the ones actually confirming that the process followed was done correctly and on fact-checking terms,” explained Estevez.

The third part was the software development team, along with a project manager, Diana Cid, who managed the entire FactFlow AI project. One of the ways they ensured the different departments coordinated and collaborated was by establishing strict deadlines and regular meetings to track progress.

“We had meetings in order to make sure that the timelines were followed as it was crucial for us. So I think that is where we put most of our efforts in trying to have meetings like every week or two weeks and putting big efforts to try to fit into the rest of our daily schedule,” added Estevez.

Another challenge they had to tackle was multilingual support. FactFlow detects potential disinformation based on the presence of “disinformation patterns”, in other words, linguistic cues commonly used when spreading disinformation. As the original LLM was trained on Spanish, its performance is better for similar languages like Italian or Portuguese than for, say, Russian or Armenian. They look forward to collaborating with fact-checkers from other countries to improve FactFlow’s capabilities, and they also want to add a debunking capability to FactFlow AI in the near future.

Lessons for newsrooms

Newtral’s successful implementation from data collection to disinformation detection offers several lessons for others looking to explore this space. 

  • Iterate and pivot quickly: Be prepared to explore different AI models and technology stacks and don't hesitate to pivot when you find a solution that yields better results. Fast iteration is key to successful AI implementation.

  • Capitalise on interdisciplinary collaboration: Foster strong collaboration within your organisation. This helped Newtral in two ways: it produced a tool that their users (fact-checkers) would actually engage with and use, and it made for a more ethical AI tool, since input and support from the fact-checkers themselves shaped how the tool functions.

Explore Previous Grantees’ Journeys

Find our 2024 Innovation Challenge grantees, their journeys, and the outcomes here. This grantmaking programme enabled 35 news organisations around the world to experiment with and implement AI solutions that enhance and improve journalistic systems and processes.


The JournalismAI Innovation Challenge is organised by the JournalismAI team at Polis – the journalism think-tank at the London School of Economics and Political Science – and is powered by the Google News Initiative.