Getting Ahead of Disinformation

Author
The University of Latvia | Zane Eniņa

October 22, 2025

artificial intelligence research

To significantly strengthen the fight against disinformation and develop new, practical tools for debunking it, a four-year research initiative "Artificial Intelligence for Debunking Disinformation" (Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation), or AI4Debunk, was launched at the beginning of 2024. The project is carried out by an interdisciplinary consortium of 13 partners from nine countries, including the University of Latvia (UL). As the project approaches its midpoint, we review the progress made so far with the project’s coordinator, UL Professor Inna Šteinbuka, and the leading researcher from the UL Faculty of Economics and Social Sciences, Žaneta Ozoliņa.

The image is for illustrative purposes only. Photo: Unsplash (Igor Omilaev)

Disinformation has become almost a part of our daily lives, reaching society through various and sometimes unexpected channels, including comedy shows, religious rituals, dating or recipe websites, not to mention social media platforms. This flood of disinformation is often referred to as an infodemic. The European Union (EU) has been drawn into this global battle of narratives, where its primary opponents are Russia and China, countries with substantial experience and a rich arsenal of disinformation tools, including artificial intelligence (AI). However, AI can also serve as an antidote, a tool for exposing and countering disinformation. The goal of AI4Debunk is to make a substantial contribution to combating disinformation and supporting trustworthy online activity.

"The project is distinctly interdisciplinary, involving partners from various fields," says Inna Šteinbuka. "We have experts in political science, anthropology, media literacy, and sociology, while another part of the team consists of artificial intelligence specialists and IT experts." The project leader emphasises that collaboration between such diverse specialists is one of the project’s biggest challenges, while Žaneta Ozoliņa sees it as a valuable experience: "This is a project where each participant gains an important set of new, specific knowledge. When you work only within your field, you don’t exactly get tired, but it can become a bit monotonous."

So far, the research team has developed a methodological framework to help identify disinformation. To build it, the team had to agree on a common understanding of what disinformation is, how it differs from propaganda, where it originates, what context it appears in, which social groups it targets, what kind of information it contains, how intensely it spreads, how it gains traction in public perceptions, whether other information sources respond to it, and many other factors.

"We created a kind of matrix that can be applied to a disinformation case and viewed from different angles," explains Žaneta Ozoliņa, comparing the framework to a Rubik’s Cube, something that can be turned and twisted to see how the colours align, ultimately revealing a coherent picture.

The next major task was creating a database of disinformation cases so that IT experts could train the AI tools designed to recognise and counter disinformation. The cases were collected in two large thematic blocks: climate change and the war in Ukraine.

"This choice was deliberate," says Ozoliņa, "because we wanted to see whether disinformation campaigns in the field of climate change are identical or similar to those in the context of war."
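Training a recogniser on a labelled case database like this can be illustrated with a deliberately minimal sketch: a pure-Python naive Bayes text classifier fitted on a handful of toy examples. The model, the toy sentences, and the labels below are all illustrative assumptions, not AI4Debunk's actual tooling, which the article does not describe at this level of detail.

```python
import math
from collections import Counter

def tokenize(text):
    """Crude whitespace tokenizer; real systems would use far richer features."""
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial naive Bayes with add-one (Laplace) smoothing."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of token frequencies
        self.label_counts = Counter()
        self.vocab = set()

    def fit(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for tok in tokenize(text):
                counts[tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, n in self.label_counts.items():
            score = math.log(n / total)  # log prior
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for tok in tokenize(text):
                # Smoothed log likelihood of each token under this label
                score += math.log((counts[tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical labelled cases echoing the article's two thematic blocks
train = [
    ("wind turbines banned across the country secret decree", "disinfo"),
    ("farmers protest staged video fake logo", "disinfo"),
    ("ministry publishes updated noise measurement protocol", "credible"),
    ("researchers release annual climate report data", "credible"),
]
clf = NaiveBayes()
clf.fit(train)
print(clf.predict("fake video shows farmers protest"))  # prints "disinfo"
```

The point of the sketch is only the pipeline shape: a curated, labelled case database on one side, a trainable model on the other, and a shared label scheme connecting them.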

One example used in the methodological framework tests involved a manipulated video showing French farmers allegedly protesting against Ukrainian farmers. To build credibility, the video falsely displayed the logo of a well-known media outlet and attributed fake statements to a French trade union activist. The goal of such disinformation was to undermine public support for Ukraine in its defence against Russia’s invasion. More broadly, these messages seek to sow division and scepticism toward supporting war-weary Ukraine. Their authors hope to sway political decision-making in the West, weakening the EU’s unified stance on assisting Ukraine.

In the second block, researchers examined disinformation about environmental issues crafted to manipulate public opinion and stoke hostility toward the EU’s Green Deal, specifically toward wind energy. For instance, a report about France revising its wind turbine noise measurement protocols was distorted in Bulgaria and presented as a total ban on wind energy. Such disinformation aligns with the interests of Russia and fossil fuel industry supporters, who oppose renewable energy policies.

Another significant achievement has been the identification of key target groups of disinformation. Researchers conducted interviews with representatives of various groups to understand how people perceive disinformation, whether they can recognise it, if they have been exposed to it, and what strategies or tools they use to deal with it. As a result, six key groups were identified: politicians, entrepreneurs, researchers, journalists, NGO representatives, and the Russian-speaking diaspora.

"The war in Ukraine naturally affects those who are refugees or who live in Europe for other reasons," explains Ozoliņa.

These results will soon be published on the project’s website. Identifying target groups is crucial for calibrating AI-based disinformation recognition tools as precisely as possible. However, the tools are intended for the broader public, including young people who are highly active and digitally skilled online but often lack critical-thinking skills, making them an easy target for disinformation.

"Our task is to run faster than time," says Inna Šteinbuka, acknowledging that developing such tools is a huge challenge, an attempt to stay ahead of both the spread of disinformation and technological progress.

AI4Debunk aims to develop four AI-powered interfaces:

  • a web plug-in,
  • a collaboration platform,
  • a mobile app, and
  • an augmented/virtual reality (AR/VR) interface.

The plug-in will connect with web browsers and social media platforms, providing instant alerts when users encounter false content. The collaboration platform will allow users to report suspicious information, which experts will then verify. The mobile app will help detect disinformation on smartphones, while the AR/VR interface will demonstrate how to handle disinformation on social media.
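The collaboration platform's report-and-verify loop described above can be sketched as a small data model: citizens submit reports, experts confirm or dismiss them, and unreviewed items stay in a pending queue. Every name, status, and example value here is a hypothetical illustration of the workflow, not the project's actual design.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"       # reported by a user, awaiting expert review
    CONFIRMED = "confirmed"   # expert verified it as disinformation
    DISMISSED = "dismissed"   # expert found the report unfounded

@dataclass
class Report:
    url: str
    reason: str
    status: Status = Status.PENDING

class CollaborationPlatform:
    """Toy model of the report-and-verify workflow (illustrative only)."""

    def __init__(self):
        self.reports: list[Report] = []

    def submit(self, url: str, reason: str) -> Report:
        """A citizen flags suspicious content for review."""
        report = Report(url, reason)
        self.reports.append(report)
        return report

    def review(self, report: Report, is_disinfo: bool) -> None:
        """An expert verifies or dismisses a pending report."""
        report.status = Status.CONFIRMED if is_disinfo else Status.DISMISSED

    def pending(self) -> list[Report]:
        """Reports still waiting for an expert decision."""
        return [r for r in self.reports if r.status is Status.PENDING]

platform = CollaborationPlatform()
r = platform.submit("https://example.com/fake-farmers-video",
                    "forged media logo, fabricated quotes")
platform.review(r, is_disinfo=True)
print(r.status.value)  # prints "confirmed"
```

The design point being illustrated is the human-in-the-loop split: automated interfaces surface suspicious content, but a verdict is only recorded after expert review.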

Demo versions of these tools will first be tested by representatives of the target groups and partner organisations that advise the researchers, including entrepreneurs, politicians, and defence experts.

About AI4Debunk

Project Leader: Prof. Inna Šteinbuka

Project Partners:

  • University of Latvia (Latvia) – Lead Partner, Project Coordinator
  • Euractiv.bg (Free Media Bulgaria, Bulgaria)
  • Pilot4DEV (Belgium)
  • University of Mons (Belgium)
  • Internews Ukraine (Ukraine)
  • National Research Council of Italy (Italy)
  • University of Florence (Italy)
  • Barcelona Supercomputing Centre (Spain)
  • DOTSOFT (Greece)
  • University of Galway (Ireland)
  • F6S Innovation (Ireland)
  • Utrecht University of Applied Sciences (Netherlands)
  • INNoVaTiVe POWER (Netherlands)

Project duration: 01/2024–12/2027

Project number: Horizon 101135757

Funding: EU Horizon Europe Programme
