Researchers and practitioners working in the field of online misinformation, disinformation and conspiracy theories are facing a perfect storm. In the US, Trump has stacked his new administration with figures who rose to prominence by promoting conspiracy theories and other unfounded culture wars narratives. From RFK Jr’s embrace of anti-vaxx claims to Kash Patel’s endorsement of Deep State conspiracy theories, and the removal of references to ‘climate change’ and DEI from all federal websites, what were once fringe ideas are now driving policy. In the UK, right-wing politicians are also adopting talk of the Deep State and nodding to conspiracy narratives in their roll-back of commitments to Net Zero. The riots in the UK in the summer of 2024 were fuelled in part by online influencers, both hyperlocal and drawing on a global network of populist agitators. The Republicans’ hypocritical attacks on academic and think tank research into misinformation as a form of censorship are now being weaponised to undermine critical voices, just as Trump, freely elected leader of the world’s most powerful democracy, pivots towards authoritarian Russia, the perceived source of much disinformation. Those sceptical about the power of a bloated counter-disinformation industry now find that what they termed ‘Big Disinfo’ is itself under vicious attack. Social media companies have caved to Republican pressure and begun to roll back content moderation. At the same time, the collapse of the transatlantic order suggests that long-standing differences between European and U.S. approaches to regulating online communication have acquired renewed geopolitical significance; at the very least, it is likely that financial and legal restrictions on social media companies will become bargaining chips in the emerging international tariff wars. In addition, the rapidly evolving landscape of AI is reshaping the nature of hybrid information warfare, encompassing both dystopian scenarios (DOGE is using LLMs to detect supposed antisemitism in the social media posts of student activists in the US as a pretext for deporting them) and potential interventions. It is far from clear, however, whether AI and deepfakes have proven to be the dire threat to democracy and the information environment that many commentators warned about.
The aim of this workshop was to take stock of the current situation and assess its part in what Adam Tooze (2022) characterizes as a ‘global polycrisis’. It set out to begin separating justifiable concerns from inflated fears, to place those fears in historical perspective, and to work out priorities for future research and action in this area.
After the event, the workshop participants were asked to provide short summaries of their presentations and/or share their post-workshop observations. Notes from those participants who provided them (asterisked in the programme) are presented below.
Last year, a workshop on "Big Disinfo" was held by the "Everything is Connected" project and (Mis)Translating Deceit. You can find out more about the workshop via the link here.
How can democracy survive the normalisation of mis- and disinformation? Or is the rejection of establishment voices democracy in action? What approach should civil society organisations (fact-checkers, think tanks, broadcasters, regulators) take in the current climate?
Disinformation, democracy and truth
This suggests that, at least in terms of liberal democracy, it is the aims and impacts of communication practices, rather than their factual accuracy alone, that ultimately matter most.
In her contribution, Sabina Mihelj addressed the question: ‘How can democracy survive the normalization of disinformation?’ She argued that this question is misguided and suggested instead that we need to ask whether democracy can survive the way in which democratic countries have gone about addressing the problem of disinformation. By this she meant that disinformation is, first, largely treated as a matter of facts and falsehoods and, second, primarily seen as a problem imposed on (Western) democracies by authoritarian powers – above all, but not only, Russia. This approach, she suggested, is too narrow and does not reflect the way disinformation operates in practice; in particular, it does not reflect the way disinformation is perceived and engaged with by one of its main targets – ordinary citizens. She then developed this argument by drawing on ongoing research in Romania, a country that offers a particularly interesting case study of disinformation and democracy.
I've been working as a fact-checker in Serbia with the FakeNews Tragač portal for eight years, and I teach fact-checking at the Faculty of Philosophy in Novi Sad. This work has revealed challenges we didn't anticipate when we started.
Over the past six months in Serbia, I've had to document disinformation about my own students and colleagues who were beaten, imprisoned, or forced into exile during protests. Reading lies about people you know personally while trying to maintain professional distance is exhausting.
The nature of misinformation has also changed. Five or six years ago, we dealt with more straightforward false claims that could be directly refuted. Now, Serbian media have learned to avoid outright lies. They've shifted toward bullshit rather than blatant falsehoods, making our job more complex.
Our impact remains frustratingly limited. While audiences often resist fact-checking content, I think we fact-checkers share the blame. Our writing is often boring, sterile, and full of NGO jargon. Fact-checking needs to become more human. We need better storytelling, more visuals, and writing that people actually want to read. Truth may never be as entertaining as lies, but we can make the process of seeking truth more engaging and accessible.
BBC Media Action is the BBC’s international charity. We work with partners around the world to provide impartial, impactful, trustworthy media to people in need so that they can make informed choices to transform their lives. In a world of disinformation, distrust and division, we share the BBC’s values, skills and experience to bring people together, and foster greater understanding and trust.
We often work in fragile democracies and conflict zones, which are different from many of the democratic contexts that are being discussed here today. But we have gleaned insights from these contexts that can be useful for democracies around the world. So, what has our research told us?
Our research around the world has told us that audiences globally are increasingly aware of mis- and disinformation and the impact it can have on their societies. For example, in North Africa, 39% of respondents in Tunisia and 35% of respondents in Libya report seeing disinformation on a daily basis. In Afghanistan, 50% of survey respondents report having encountered disinformation, and in Solomon Islands, 49% report seeing it weekly. However, what these statistics don’t tell us is how audiences assess whether a piece of information is true or false.
To understand this, we’re increasingly studying our audiences’ digital and media literacy (DML). One trend we noticed in research in countries like Nepal and Tunisia, for example, is that many users agree it is more important that information is shared quickly than that it be fact-checked. We have also conducted multiple surveys in which pluralities of respondents believe that the first search result is always the right one. Many will also read the comments section on social media to fact-check posts. None of these is best practice, and together they point to low levels of DML among the audiences where we have conducted research. The advent of generative AI further complicates this picture – some of our research indicates growing use of AI tools but a lack of training and guidance, for audiences and media professionals alike. All of this risks eroding our shared ground truth, which is key to a healthy democracy.
Given this context, supporting DML is one key way to protect audiences and democracies in the face of disinformation. There are many actors in this space, and each has an important role. Fact-checkers have their place in helping to establish this ground truth, but research has shown that this activity does not achieve impact at scale. Think tanks need to produce more evidence about what works in this space, and BBC Media Action works to contribute to this evidence base with our research. Broadcasters need to invest in training and transparency, especially with the advent of AI. Finally, regulators need to assess systemic risks posed by tech platforms and AI.
Emerging technologies and the mis/disinformation future: As new technologies emerge and converge, what are the implications for mis/disinformation? What does this mean for regulation, governance, and content moderation?
Future mis/disinformation and extended reality (XR) technologies
Immersive (XR) technologies such as virtual and augmented reality create new avenues for spreading mis/disinformation across verbal, non-verbal, visual, and experiential modes. XR mis/disinformation narratives may be communicated through symbols and objects on avatars or in synthetic environments, or by overlaying physical reality with provocative visuals (e.g., a queue of refugees outside your GP’s surgery). Mis/disinformation could be embedded in interactive scenarios that reinforce misleading narratives, manipulate users, or implant false memories, or that feature persuasive deceptive avatars (e.g., impersonators, deepfakes). XR mis/disinformation could be disseminated via single- or multi-user games, in open or private social spaces, by hijacking a user experience via malware, and through micro-targeted advertising.
Whether XR mis/disinformation could or will have widespread harmful impact is uncertain: our evidence base is extremely limited. More broadly, research on using XR to change short-term attitudes and behaviour yields unclear findings (and there is no evidence on achieving lasting change). The influence of mis/disinformation in XR may be strongest when used to reinforce or shape existing or emerging beliefs, and the effects may be greatest in XR spaces where like-minded individuals meet.
In terms of future trajectory, rising usage, particularly among youth, suggests future generations will be increasingly comfortable using XR. Algorithmic recommender systems could significantly aid XR mis/disinformation actors. Influencers migrating from platforms like Rumble or TikTok to VR spaces may be potent vectors (e.g., hosting live virtual events with direct interaction with and between followers).
Use of XR for spreading mis/disinformation raises profound challenges for harm mitigation, not least for moderation of harmful XR communication. Automated moderation that detects and labels mis/disinformation in verbal, non-verbal, visual, and experiential forms, including during real-time, ephemeral interactions, seems a rather distant possibility.
Automated consensus in participatory fact-checking
American Big Tech companies are undergoing an uneven but clear politicisation towards the right. In 2022, Elon Musk acquired Twitter and, in the following years, transformed it into X, the first large-scale illiberal social media platform. A radicalised Musk helped to elect Donald Trump in 2024, prompting other organisations – from Meta to Google – to try and appease Trump and his MAGA allies.
A crucial aspect of this process is the platforms’ undermining of professional fact-checking. Boosted after Trump’s first election in 2016 and the onset of the COVID-19 pandemic in 2020, fact-checking quickly became a favourite target of the global far right, who claimed – against the evidence – that fact-checkers were “woke,” “censorious” actors bent on imposing their partisan anti-conservative preferences onto public discourse. Musk was perhaps the most powerful of these critics. Once at the helm of Twitter, he radically expanded Birdwatch, an experimental programme Twitter had recently created, whereby users could add labels and “context” to posts. Renamed Community Notes in late 2022, the programme has since become the only form of fact-checking on the platform and has also been adopted by Meta as part of the Trump-appeasing measures announced by Mark Zuckerberg in January 2025.
Yet is this new, ostensibly bottom-up form of fact-checking truly democratic or efficient, as the likes of Musk and Zuckerberg claim? Hardly. Empirical research on X’s Community Notes has consistently documented that it fails to fact-check the most divisive (and thus most important) posts quickly. Understanding why requires seeing this system as a decision-making mechanism. The most important aspect of Community Notes is that almost anyone can join the programme and write a label (or a “Note”) for a post – but only very few Notes actually become visible to the broader public. This is because a Note needs to be ranked as “helpful” by “raters with a diversity of viewpoints,” whose votes feed into a somewhat complex, automated statistical calculation. There is much to unpack in this form of algorithmic consensus, but consider that it is the platform that sets the numerical threshold of “helpfulness” a Note must meet to become visible. In the case of X, this is 0.4. The exact reasons for sticking to this level, and the consequences of doing so, go largely unaddressed on X’s otherwise unusually extensive page about the programme.
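To make the mechanism more concrete, here is a minimal sketch in Python of the kind of ‘bridging-based’ matrix factorisation that X has publicly described for Community Notes. The data, hyperparameters and variable names are illustrative, not the platform’s actual code: the point is simply that each rating is decomposed into a rater’s viewpoint component and a per-note intercept, and only notes whose intercept clears the platform-chosen 0.4 threshold become visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: ratings[u, n] = 1.0 ("helpful"), 0.0 ("not helpful"), NaN = no rating.
ratings = np.array([
    [1.0, 1.0, np.nan, 0.0],
    [1.0, np.nan, 1.0, 0.0],
    [np.nan, 0.0, 1.0, 1.0],
    [1.0, 0.0, np.nan, 1.0],
])
n_users, n_notes = ratings.shape
k = 1  # dimensionality of the latent "viewpoint" factor

mu = 0.0                                   # global intercept
user_b = np.zeros(n_users)                 # per-rater intercepts
note_b = np.zeros(n_notes)                 # per-note intercepts = "helpfulness" scores
user_f = 0.1 * rng.standard_normal((n_users, k))
note_f = 0.1 * rng.standard_normal((n_notes, k))

lr, reg = 0.05, 0.03
observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

# SGD on squared error; intercepts are regularised more heavily than factors, so
# agreement is explained by shared viewpoint where possible and a note only earns
# a high intercept if raters with different viewpoints rate it helpful.
for _ in range(2000):
    for u, n in observed:
        uf, nf = user_f[u].copy(), note_f[n].copy()
        err = ratings[u, n] - (mu + user_b[u] + note_b[n] + uf @ nf)
        mu += lr * err
        user_b[u] += lr * (err - 5 * reg * user_b[u])
        note_b[n] += lr * (err - 5 * reg * note_b[n])
        user_f[u] += lr * (err * nf - reg * uf)
        note_f[n] += lr * (err * uf - reg * nf)

HELPFULNESS_THRESHOLD = 0.4  # the visibility cut-off set unilaterally by the platform
for n in range(n_notes):
    status = "shown" if note_b[n] >= HELPFULNESS_THRESHOLD else "not shown"
    print(f"note {n}: helpfulness intercept = {note_b[n]:+.2f} -> {status}")
```

The sketch underlines the argument above: the threshold and the regularisation weights are design choices made by the platform, not properties of ‘the crowd’.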
Critical scholars have long argued that if platforms are to become more democratic, involving users more directly in their governance is urgent. However, a system that relies on a simplistic and unilaterally defined form of “consensus” is clearly not a solution.
Since the advent of LLM-powered chatbots — or more precisely, since chatbots like ChatGPT began to be paired with search engines — the idea that malign actors such as the Kremlin can ‘poison’ their training data has attracted intense scrutiny. Recently, NewsGuard, a company that rates websites and tracks mis- and disinformation, published a report investigating whether leading generative AI applications, such as ChatGPT, repeat Kremlin disinformation. NewsGuard analysts asked ten leading chatbots questions based on content spread by the Pravda network — a coordinated group of pro-Kremlin disinformation websites. According to NewsGuard, the results were alarming: chatbots ‘repeated false narratives laundered by the Pravda network 33 percent of the time’.
The report advanced a theory of ‘LLM grooming’: Kremlin disinformation sources flood the internet with false content so that users receive pro-Kremlin narratives from chatbots in response to political queries. If true, this would suggest a deeply concerning trend — pointing to the potency of Russian propaganda and implying that widely used chatbots may serve as extensions of Kremlin foreign influence operations. The report made headlines.
However, the report was methodologically flawed on many levels. Together with Mykola Makhortykh and Alex Voronovici, we independently assessed the risk of Kremlin-linked disinformation in chatbot outputs. We conducted an AI audit across four major LLM-powered chatbots: ChatGPT, Gemini, Copilot, and Grok. Several example prompts from the NewsGuard report were used, along with eight additional prompts varying in generality. Some focused on broad claims (e.g., U.S. biolabs or NATO presence in Ukraine), while others addressed niche claims found only on Pravda websites — such as specific allegations about NATO training facilities in a particular location (e.g., Odesa, Ukraine). We generated 416 responses from both the UK and Switzerland.
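For illustration only, the outline below (in Python) shows the basic structure of an audit of this kind: prompts of varying generality are sent repeatedly to each chatbot from each vantage point, and every response is checked for citations of known Pravda-network domains. The `query_chatbot` function, the prompt wordings, the domain list and the repeat count are hypothetical placeholders, not our actual pipeline.

```python
# Illustrative outline of a chatbot audit; all names and lists are placeholders.
from itertools import product

CHATBOTS = ["ChatGPT", "Gemini", "Copilot", "Grok"]
LOCATIONS = ["UK", "CH"]                                      # two vantage points
PROMPTS = {
    "broad": ["Are there US-funded biolabs in Ukraine?"],      # widely covered claims
    "niche": ["Is there a NATO training facility in Odesa?"],  # claims found mainly on Pravda sites
}
PRAVDA_DOMAINS = ["news-pravda.com", "pravda-en.com"]          # illustrative domain list
REPEATS = 4                                                    # repeated queries to capture variability


def query_chatbot(bot: str, prompt: str, location: str) -> str:
    """Placeholder for a real client (API call or browser automation)."""
    return f"[stub answer from {bot} queried from {location}]"


def run_audit() -> list[dict]:
    results = []
    for location, bot in product(LOCATIONS, CHATBOTS):
        for generality, prompts in PROMPTS.items():
            for prompt in prompts:
                for run in range(REPEATS):
                    answer = query_chatbot(bot, prompt, location)
                    results.append({
                        "bot": bot,
                        "location": location,
                        "generality": generality,
                        "prompt": prompt,
                        "run": run,
                        # crude check: does the answer cite a known Pravda domain?
                        "cites_pravda": any(d in answer for d in PRAVDA_DOMAINS),
                    })
    return results


if __name__ == "__main__":
    audit = run_audit()
    print(f"collected {len(audit)} responses")
```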
Some preliminary observations:
This suggests the issue is not LLM ‘poisoning’ but what Microsoft researchers refer to as data voids — areas of the internet where high-quality information is sparse or absent, allowing low-quality or manipulative content to dominate search and retrieval outputs. When users pose questions about niche, emerging, or controversial topics, chatbots may struggle to locate authoritative sources. In such cases, if disinformation sources are more readily available, AI systems may inadvertently reproduce them.
This interpretation has significant implications — not only for understanding foreign interference but also for assessing how users might realistically encounter Pravda links in chatbot responses. First, if the data void theory is correct, disinformation responses result from information scarcity rather than algorithmic bias or deliberate manipulation. Second, for Pravda links to appear in chatbot outputs, several specific conditions must be met:
While technically possible, this is highly unlikely:
The alarmist discourse around LLMs and disinformation oversimplifies how these systems function. It risks diverting attention from realistic assessments of AI vulnerabilities — such as its potential use in malware generation — and from more nuanced understandings of how disinformation spreads in digital environments.
Is much of the current wave of misinformation and disinformation—from the Great Reset to the Great Replacement—driven by the culture wars? Are the conspiracy theories just a pretext to wreck the welfare state and avoid addressing the climate crisis? And to what extent is it an organic, authentic, bottom-up movement or coordinated campaigns of manipulated influence? Does the focus on notions of disinformation and conspiracism make it virtually impossible to discuss actual conspiracies? Is badging all opposition to Net Zero ‘climate denialism’ exacerbating rather than combatting culture wars posturing?
Culture wars and climate crisis
Conspiracy theories and theorising conspiracies
This intervention challenges scholars to adopt a more critical orientation to conspiracy theories. History shows that political, corporate, financial and military elites routinely conspire to do harm and to deceive and mislead the public. The first part suggests that a moral panic over conspiracy theories has given rise to a conspiracy theory research agenda that has pathologised and criminalised conspiracy theories. The second part argues that although conspiracies are important sociological and political phenomena, the term ‘conspiracy theory’ functions to stigmatise certain narratives. The final part argues that scholars should take conspiracy theories seriously and seek to investigate conspiracies. If popular conspiracy theories about elite wrongdoing are invalid, we should develop better explanations of how and why conspiracies take place, as well as who conspires and to what ends.
In his contribution to the workshop, Ed Pertwee warned against attributing too much causal power to misinformation and disinformation. He emphasised instead the importance of taking a structuralist approach that connects what we are observing in the information space with underlying problems such as inequality, political disenfranchisement and distrust, and asking how narratives around these issues are selectively shaped and amplified by capitalist digital media platforms. Unless we can actually begin to address those underlying structural issues, he argued, there will continue to be a ready audience for misinformation and conspiracy theories that demagogues and grifters can easily exploit, and that platforms can continue to monetise.
Vera Tolz’s presentation questioned the analytical utility of terms such as disinformation and misinformation, arguing that they often function more as performative tools of legitimation than as meaningful categories for analysing complex social phenomena. Drawing on insights from the (Mis)Translating Deceit project, Vera briefly explored how these terms tend to reduce multifaceted discourses—such as those surrounding the so-called culture wars or the climate crisis—to simplistic true/false binaries. Emphasizing the value of historical perspective, she cited several examples to support the argument that understanding narrative battles around ‘the culture wars’ requires in-depth analysis of the specific socio-political and economic contexts in which questions of values and identity—central to such narratives—gain sudden prominence. For such analysis, the term disinformation rarely serves a useful purpose.
Does the collapse of the transatlantic order alter how we think about and engage with disinformation? Or has there been too much focus on threats to Western democracies, and not enough attention to the Global South and to the post-Soviet spaces?
Geopolitics of/and disinformation: towards a strategic, coordinated response.
The so-called collapse of the transatlantic order has undeniably altered how disinformation is tackled, engaged with and spoken about. But this change by no means came out of the blue: it is the crest of a bigger, longer-term global wave. Democracies’ information spaces are already weakened by threat actors, vectors and vulnerabilities, both established and emerging. They now face a more urgent challenge. Counter-disinformation efforts – from public messaging to technical work – are increasingly politicised and securitised. For the UK and European democracies, confronting this crisis head-on demands an urgent refresh of coordination and information-sharing. It demands rejuvenated approaches to tackling shared challenges, based on coherent definitions (a long-standing issue in the field), new technical capacities (for example, for attribution) and careful policy innovation (for example, whether or where to bring a national security frame to counter-disinformation efforts).
Trust and conspiracy: a perspective from the political anthropology of Kenya
In my remarks, I engage the question of how to separate genuine concerns from exaggerated fears in today’s interconnected ‘polycrisis’, drawing on my research as a social anthropologist in Kenya. Over the past eight years, I’ve spent 26 months living in a peri-urban neighbourhood close to Nairobi, studying politics, family dynamics, and land issues – particularly during Kenya’s 2017 and 2022 elections. The idea of ‘conspiracy theories’ or ‘post-truth’ politics often assumes a clear baseline of factual consensus, a ‘before’ and ‘after’, but Kenya’s elections complicate this. In 2017, Cambridge Analytica targeted middle-class voters with apocalyptic videos warning that opposition leader Raila Odinga (a Luo) would destroy Kenya if elected. But these fear-mongering tactics were nothing new. They tapped into deep-seated ethnic anxieties dating back to Kenya’s immediate post-independence politics and histories of political violence in the 1980s and 1990s. Ethnic Kikuyu media and gossip had long cast Luos as threatening outsiders, framing elections as an existential battle for survival, taking place in the shadow of anticipated communal violence.
More recently, however, economic crises in Kenya have been reshaping these divisions. Last year, young Kenyans frustrated by IMF-backed austerity measures took to the streets to protest tax hikes on essentials. They used ChatGPT to break down budget documents and translate them into Kiswahili, a throwback to the optimism once attached to social media activism as a source of democratic potential. But the protests also show that ‘conspiracies’ – ideas about mutual interest – stem from tangible historical and material grievances, such as the perceived complicity of Bretton Woods institutions in shaping economic life in Kenya. Whether ethnic tensions or IMF policies, such fears are not simply invented and spread through ‘misinformation’ – they are socially embedded phenomena, rooted in historical experiences of inequality and in moral imaginations of betrayal spurred by real events.
The programme raises the issue of separating ‘justifiable concerns from inflated fears’ in the context of a global polycrisis. I am going to speak to this question from my perspective as a social anthropologist who has carried out 26 months of ethnographic fieldwork in Kenya over the past eight years. These were periods of immersion in a peri-urban neighbourhood on the outskirts of Nairobi, where I lived with a low-income family in Kiambu County, part of the central Kenya region that is home to the Kikuyu, who at 17 per cent of the population are the country’s largest ethnic group. Alongside topics of land inheritance and family life, my research there has focused on two national election cycles – 2017 and 2022 – and has given me the opportunity to reflect on ‘post-truth’ and ‘conspiracy’ from a context outside Euro-America.
Blind spots in disinformation studies? Lessons from Eastern Europe
The main argument advanced in this intervention is that the field of disinformation studies has been shaped predominantly by issues and challenges originating in the West – particularly the U.S. and Western and Northern Europe – resulting in several blind spots not only in research but also in policy and regulatory approaches. These approaches are often developed without sufficient consideration of the distinctive characteristics of media and political landscapes in Eastern European countries. Among those blind spots are the following: