Verify: Using AI to combat misinformation in Syria
Project: Syrian fact-checking platform
Newsroom size: 10–20
Solution: An AI-powered WhatsApp bot that instantly retrieves verified information, allowing human fact-checkers to focus on investigating and debunking false claims.
When the Assad regime collapsed in December 2024, Syria's information landscape transformed overnight. Suddenly, regions previously inaccessible to independent media opened up, but with this freedom came an unprecedented surge in misinformation. For Verify, a Syrian fact-checking platform, the challenge was clear: how to handle the flood of verification requests while expanding their coverage across newly accessible territories.
Their answer? An AI-powered WhatsApp bot that could instantly access their database of verified information, allowing their human fact-checkers to focus on what they do best – investigating and debunking false claims.
The problem: Information chaos in a transitional society
"Everyone is trying to use misinformation as a weapon to affect civil society during this transition," explains Ahmad Primo, founder and director of Verify. The platform, which had operated primarily in northern Syria before December 2024, suddenly faced requests from Damascus, southern regions, and areas previously under regime control.
The scale was overwhelming. Verification requests poured in through encrypted messaging platforms – spaces where misinformation spreads unchecked and fact-checkers have no visibility. "The majority of people use encrypted chat groups to share information. We cannot access these spaces to check whether they're sharing verified information or fake news," Primo notes.
This challenge reflects a broader global issue, but in Syria's context, it carries particular urgency. False information about everything from vaccines to political developments can directly impact vulnerable populations, including refugees and women. The team realised that traditional fact-checking methods were no longer sufficient.
Building the solution: From database to bot
Verify's approach centred on making their existing database of fact-checked information instantly accessible through AI. "The whole idea is to create a bot that gets information from our website specifically," Primo explains. "When we write articles, we ensure every single word is verified and debunked. The bot summarises this trusted information rather than pulling from Wikipedia or other sources."
The technical implementation uses retrieval-augmented generation (RAG), connecting their Arabic-language database to an AI model that can respond to queries on WhatsApp – the platform most Syrians rely on for news and information.
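The retrieval step of such a pipeline can be sketched in a few lines. The keyword-overlap ranking below is a deliberately simple stand-in for the embedding search a production RAG system would use, and the article data is invented for illustration – this is not Verify's code:

```python
# Minimal RAG sketch: retrieve the most relevant fact-checked articles,
# then compose a reply grounded only in that retrieved context.
from dataclasses import dataclass


@dataclass
class Article:
    title: str
    body: str


# Toy stand-in for a database of verified, fact-checked articles.
DATABASE = [
    Article("Vaccine claim debunked", "The circulated vaccine claim is false."),
    Article("Damascus checkpoint rumour", "The checkpoint rumour is unverified."),
]


def retrieve(query: str, db: list[Article], k: int = 1) -> list[Article]:
    """Rank articles by word overlap with the query (embedding search in production)."""
    words = set(query.lower().split())
    scored = sorted(
        db,
        key=lambda a: len(words & set((a.title + " " + a.body).lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer(query: str) -> str:
    """Build a reply from retrieved articles only, never from open web sources."""
    hits = retrieve(query, DATABASE)
    context = "\n".join(f"- {a.title}: {a.body}" for a in hits)
    # In production, this context would be passed to an LLM with an
    # instruction to summarise strictly from the supplied articles.
    return f"Based on our verified reporting:\n{context}"
```

The key design choice Primo describes – summarising only trusted articles rather than pulling from the open web – corresponds to restricting the generation step to the retrieved context.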
Creating this system required bridging the gap between journalism and technology. Verify's internal team consisted of fact-checkers already familiar with AI tools in their daily work. However, they needed external expertise for the technical implementation.
Their first attempt with a technology company failed. "We don't have technical experts in our team, and they didn't understand the fact-checking environment," recalls Rami Magharbeh, the project lead. "We're speaking different languages."
The breakthrough came when they partnered with a company that had previously developed Verify's website. This shared history provided crucial context. "They have the minimum understanding of fact-checking in conflict zones, which involves multiple layers of complexity," Magharbeh explains.
Beyond the core team, Verify expanded their network of "citizen fact-checkers" – volunteers including media students, activists, and journalists now stationed across Syria. These individuals receive training in fact-checking tools and methodologies, becoming guardians of truth in their communities.
Practical solutions for complex challenges
The project faced several significant hurdles, each requiring creative solutions:
Language complexity: With a database primarily in Arabic but users potentially querying in English, Verify developed a workaround. They prepared a "whitelist" of trusted English-language sources like Reuters and The Guardian. When the bot cannot find answers in their Arabic database, it can provide basic information from these pre-approved outlets. "We asked our developer to prepare this whitelist so the bot can help users get answers about general information from trusted sources," Primo explains.
Building user trust: To make the bot feel more human, Verify insisted on natural conversation flows with clear escalation paths. "We challenged ourselves and the technology company to let the bot react as a human, not just as technology," Magharbeh explains. When the bot reaches its limits, it seamlessly transfers users to human fact-checkers, ensuring no query goes unanswered while filtering the volume of direct requests to the team.
The opportunities: Scaling truth in the age of AI
Despite challenges, the team sees transformative potential in their AI implementation. "AI will help us summarise requests and make the process easier for both us and our audience," Primo explains. The system also generates new fact-checking assignments when it encounters questions without answers in the database.
Looking ahead, Verify plans to expand beyond WhatsApp to other platforms like X, creating an ecosystem where verified information is always accessible. They envision a future where AI tools are essential for combating AI-generated misinformation.
"When we use traditional methods to fact-check against AI-generated misinformation, we're already too late," Primo argues. "We need AI to fight disinformation and propaganda campaigns effectively."
Lessons for newsrooms
Verify's experience offers valuable insights for other fact-checking organisations in conflict or transitional contexts:
Local context matters: Generic AI solutions often fail in specific regional contexts. Success requires technology partners who understand both the language and the complexity of working in conflict zones.
Human-AI collaboration is key: Rather than replacing fact-checkers, AI should amplify their capabilities, handling routine queries whilst humans focus on complex investigations.
Community engagement strengthens impact: Training citizen fact-checkers creates a distributed network of truth guardians, essential in societies recovering from authoritarian control.
As Syria navigates its transition, Verify's AI bot represents more than a technological upgrade – it’s an investment in the country's information integrity. In a landscape where false narratives can derail democratic progress, the ability to deliver trusted information instantly may prove as vital as any political reform.
"We believe everyone in our society can affect their community by defending the truth," Primo concludes. "With AI, we're giving them the tools to do exactly that."
Explore Previous Grantees' Journeys
Find our 2024 Innovation Challenge grantees, their journeys and the outcomes here. This grantmaking programme enabled 35 news organisations around the world to experiment and implement solutions to enhance and improve journalistic systems and processes using AI technologies.
The JournalismAI Innovation Challenge is organised by the JournalismAI team at Polis – the journalism think-tank at the London School of Economics and Political Science – and is powered by the Google News Initiative.
