13 May

Researchers and practitioners working in the field of online misinformation, disinformation and conspiracy theories are facing a perfect storm. In the US, Trump has stacked his new administration with figures who have risen to prominence by promoting conspiracy theories and other unfounded culture wars narratives. From RFK Jr’s embrace of anti-vaxx claims to Kash Patel’s endorsement of Deep State conspiracy theories, and the removal of references to ‘climate change’ and DEI from all federal websites, what were once fringe ideas are now driving policy. In the UK, right-wing politicians are also adopting talk of the Deep State and nodding to conspiracy narratives in their roll-back of commitments to Net Zero. The riots in the summer of 2024 in the UK were in part fuelled by online influencers, both hyperlocal and drawing on a global network of populist agitators. The Republicans’ hypocritical attacks on academic and think tank research on misinformation as a supposed form of censorship are now being weaponised to undermine critical voices, just as Trump, freely elected leader of the world’s most powerful democracy, pivots to authoritarian Russia, the perceived source of much disinformation. Those sceptical about the power of a bloated counter-disinformation industry now find that what they termed ‘Big Disinfo’ is itself under vicious attack. Social media companies have caved to Republican pressure and started to roll back content moderation. At the same time, the collapse of the transatlantic order suggests that long-standing differences between European and U.S. approaches to regulating online communication have acquired renewed geopolitical significance; at the very least, it is likely that financial and legal restrictions on social media companies will become bargaining chips in the emerging international tariff wars. In addition, the rapidly evolving landscape of AI is reshaping the nature of hybrid information warfare, including both dystopian scenarios (DOGE is using LLMs to detect supposed antisemitism in the social media posts of student activists in the US as a pretext for deporting them) and potential interventions. It is far from clear, however, whether AI and deepfakes have proven to be the dire threat to democracy and the information environment that many commentators warned about.

The aim of this workshop was to take stock of the current situation and assess its place within what Adam Tooze (2022) characterizes as a ‘global polycrisis’. It sought to begin separating justifiable concerns from inflated fears, to place those fears in historical perspective, and to work out the priorities for future research and action in this area.

The programme for this workshop can be downloaded below:

After the event, the workshop participants were asked to provide short summaries of their presentations and/or share their post-workshop observations. Notes from those participants who provided them (asterisked in the programme) are presented below. 

Last year, a workshop on "Big Disinfo" was held by the "Everything is Connected" project and (Mis)Translating Deceit. You can find out more about the workshop via the link here.


Democracy and Disinformation

How can democracy survive the normalisation of mis- and disinformation? Or is the rejection of establishment voices democracy in action? What approach should civil society organisations (fact-checkers, think tanks, broadcasters, regulators) take in the current climate?


Neil Sadler (University of Leeds) 

Disinformation, democracy and truth 

  • One of the most commonly cited dangers of disinformation is that it is harmful to democracy.
  • Democracy is typically – if rarely explicitly – understood from a rationalist perspective as a means to attain outcomes that benefit all.
  • From this view, disinformation disrupts the flow of factually accurate information required for these processes to function. This makes disinformation inherently anti-democratic.
  • My comments, on the other hand, were grounded in Chantal Mouffe’s discursive and agonistic understanding of liberal democracy.
  • On this view, ‘democracy’ has no intrinsic or true nature – instead its meaning is produced discursively.
  • Contemporary liberal democracy, specifically, is grounded in the link between liberalism (understood as individual rights) and democracy (understood as the unity of governor and governed – i.e. the people ultimately govern themselves).
  • The primary value of liberal democracy is that it allows the conflicts that are inevitable in all social life to play out ‘agonistically’ rather than ‘antagonistically’ – most crucially, avoiding violence.
  • This suggests that flows of accurate information are less central to democracy than assumed in rationalist accounts.
  • Disinformation, then, is problematic primarily insofar as it disrupts the link between liberalism and democracy on which liberal democracy is based.
  • Yet both factually accurate and inaccurate information can be used to do this.

This suggests that, at least in terms of liberal democracy, it is the aims and impacts of communication practices that ultimately matter most rather than necessarily their factual accuracy.


Sabina Mihelj (University of Loughborough)

In her contribution, Sabina Mihelj addressed the question: ‘How can democracy survive the normalization of disinformation?’ She argued that this question is misguided and suggested instead that we need to ask whether democracy can survive the way in which democratic countries have gone about addressing the problem of disinformation. By this she meant the fact that, by and large, disinformation is treated first as a matter of facts and falsehoods and, second, is seen primarily as a problem imposed on (Western) democracies by authoritarian powers – above all, but not only, by Russia. This approach, she suggested, is too narrow, and does not reflect the way disinformation operates in practice; in particular, it does not reflect the way disinformation is perceived and engaged with by one of its main targets – that is, ordinary citizens. She then developed this argument by drawing on ongoing research in Romania, a country that offers a particularly interesting case study of disinformation and democracy.

 

Stefan Janjić (University of Loughborough) 

I've been working as a fact-checker in Serbia with FakeNews Tragač portal for eight years and teach fact-checking at the Faculty of Philosophy in Novi Sad, Serbia. This work has revealed challenges we didn't anticipate when we started years ago.

 Over the past six months in Serbia, I've had to document disinformation about my own students and colleagues who were beaten, imprisoned, or forced into exile during protests. Reading lies about people you know personally while trying to maintain professional distance is exhausting.

 The nature of misinformation has also changed. Five or six years ago, we dealt with more straightforward false claims that could be directly refuted. Now, Serbian media have learned to avoid outright lies. They've shifted toward bullshit rather than blatant falsehoods, making our job more complex.

 Our impact remains frustratingly limited. While audiences often resist fact-checking content, I think we fact-checkers share the blame. Our writing is often boring, sterile, and full of NGO jargon. Fact-checking needs to become more human. We need better storytelling, more visuals, and writing that people actually want to read. Truth may never be as entertaining as lies, but we can make the process of seeking truth more engaging and accessible.


Cooper Gatewood (BBC Media Action)

BBC Media Action is the BBC’s international charity. We work with partners around the world to provide impartial, impactful, trustworthy media to people in need so that they can make informed choices to transform their lives. In a world of disinformation, distrust and division, we share the BBC’s values, skills and experience to bring people together, and foster greater understanding and trust. 

We often work in fragile democracies and conflict zones, which are different from many of the democratic contexts that are being discussed here today. But we have gleaned insights from these contexts that can be useful for democracies around the world. So, what has our research told us? 

Our research from around the world has told us that audiences globally are increasingly aware of mis- and disinformation and the impact it can have on their societies. For example, in North Africa, 39% of respondents in Tunisia and 35% of respondents in Libya report seeing disinformation on a daily basis. In Afghanistan, 50% of survey respondents report having encountered disinformation, and in the Solomon Islands, 49% report seeing it weekly. However, what these statistics don’t tell us is how audiences are assessing whether a piece of information is true or false. 

To understand this, we’re increasingly studying our audiences’ digital and media literacy (DML). One trend we noticed in research in countries like Nepal and Tunisia, for example, is that many users agree it is more important that information is shared quickly than that it be fact-checked. We have also conducted multiple surveys in which pluralities of respondents believe that the first search result is always the right one. Many will also read the comments section on social media to fact-check the posts. None of these are best practices, and they do not demonstrate a high level of DML among the audiences where we have conducted research. The advent of generative AI further complicates this picture – some of our research indicates growing use of AI tools but a lack of training and guidance, both for audiences and media professionals. All of this risks eroding our shared ground truth, which is key for a healthy democracy. 

Given this context, supporting DML is one key way to protect audiences and democracies in the face of disinformation. There are many actors in this space, and each has an important role. Fact checkers have their place in helping establish this ground truth, but research has shown that this activity is not a way to have impact at scale. Think tanks need to produce more evidence about what works in this space, and BBC Media Action works to contribute to this evidence base with our research. Broadcasters need to invest in training and transparency, especially with the advent of AI. Finally, regulators need to assess systemic risks posed by tech platforms and AI.


Technology and Regulation 

Emerging technologies and the mis/disinformation future: As new technologies emerge and converge, what are the implications for mis/disinformation? What does this mean for regulation, governance, and content moderation? 


Emma Barrett (University of Manchester)

Future mis/disinformation and extended reality (XR) technologies  

Immersive (XR) technologies such as virtual and augmented reality create new avenues for spreading mis/disinformation across verbal, non-verbal, visual, and experiential modes. XR mis/disinformation narratives may be communicated through symbols and objects on avatars or in synthetic environments, or by overlaying physical reality with provocative visuals (e.g., a queue of refugees outside your GP’s surgery). Mis/disinformation could be embedded in interactive scenarios that reinforce misleading narratives, manipulate users, or implant false memories, or that feature persuasive deceptive avatars (e.g., impersonators, deepfakes). XR mis/disinformation could be disseminated via single- or multi-user games, in open or private social spaces, by hijacking a user experience via malware, and through micro-targeted advertising. 

Whether XR mis/disinformation could or will have widespread harmful impact is uncertain: our evidence base is extremely limited. More broadly, research on using XR to change short-term attitudes and behaviour yields unclear findings (and is absent on achieving lasting change). The influence of mis/disinformation in XR may be strongest when used to reinforce or shape existing or emerging beliefs, and the effects may be greatest in XR spaces where like-minded individuals meet. 

In terms of future trajectory, rising usage, particularly among youth, suggests future generations will be increasingly comfortable using XR. Algorithmic recommender systems could significantly aid XR mis/disinformation actors. Influencers migrating from platforms like Rumble or TikTok to VR spaces may be potent vectors (e.g., hosting live virtual events with direct interaction with and between followers).

Use of XR for spreading mis/disinformation raises profound challenges for harm mitigation, not least for moderation of harmful XR communication. Automated moderation that detects and labels mis/disinformation in verbal, non-verbal, visual, and experiential forms, including during real-time, ephemeral interactions, seems a rather distant possibility.


João C. Magalhães (University of Manchester)

Automated consensus in participatory fact-checking 

American Big Tech companies are undergoing an uneven but clear politicisation towards the right. In 2022, Elon Musk acquired Twitter and, in the following years, transformed it into X, the first large-scale illiberal social media platform. A radicalised Musk helped to elect Donald Trump in 2024, prompting other organisations – from Meta to Google – to try and appease Trump and his MAGA allies. 

A crucial aspect of this process is the platforms’ undermining of professional fact-checking. Boosted after Trump’s first election in 2016 and the onset of the COVID-19 pandemic in 2020, fact-checking quickly became a favourite target of the global far-right, who claimed – against evidence – that fact-checkers were “woke,” “censorious” actors bent on imposing their partisan anti-conservative preferences onto public discourse. Musk was perhaps the most powerful of these critics. Once Twitter CEO, he radically boosted Birdwatch, an experimental programme Twitter had just created, whereby users could add labels and “context” to posts. Renamed in late 2022 as Community Notes, the programme has since become the only form of fact-checking on Twitter and has also been adopted by Meta as part of the Trump-appeasing measures announced by Mark Zuckerberg in January 2025. 

Yet is this new, ostensibly bottom-up form of fact-checking truly democratic or efficient, as the likes of Musk and Zuckerberg claim? Hardly. Empirical research on X’s Community Notes has consistently documented how it fails to quickly fact-check truly divisive (and thus most important) posts. Understanding the reason for that demands seeing this system as a decision-making mechanism. The most important aspect of Community Notes is that almost anyone can become a member of the programme and write a label (or a “Note”) for a post – but only very few Notes actually become visible to the broader public. This is because a Note needs to be ranked as “helpful” by “raters with a diversity of viewpoints,” whose votes are transformed into the input of a somewhat complex and automated statistical calculation. There is much to unpack in this form of algorithmic consensus but consider that it is the platform that establishes the numerical threshold of “helpfulness” a Note needs to meet to become visible. In the case of X, this is 0.4. The exact reasons for sticking to this level, and the consequences of doing so, are largely ignored in X’s unusually extensive page about the programme.
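To make the thresholding idea concrete, here is a minimal illustrative sketch in Python. It is emphatically not X’s algorithm (which relies on matrix factorisation over the full note–rater matrix); the scoring function, field names and example data below are invented stand-ins for the basic idea that a Note only becomes publicly visible once its cross-viewpoint ‘helpfulness’ clears a platform-chosen cut-off such as 0.4.

```python
# Toy illustration only: the real Community Notes ranking uses matrix
# factorisation over note-rater data; a made-up score stands in here for
# "rated helpful by raters with a diversity of viewpoints".

VISIBILITY_THRESHOLD = 0.4  # the cut-off X reportedly applies

def helpfulness_score(ratings):
    """Fraction of 'helpful' votes, heavily discounted when all raters
    share the same (hypothetical) viewpoint label."""
    if not ratings:
        return 0.0
    helpful_share = sum(r["helpful"] for r in ratings) / len(ratings)
    diversity = 1.0 if len({r["viewpoint"] for r in ratings}) > 1 else 0.3
    return helpful_share * diversity

def note_is_visible(ratings):
    return helpfulness_score(ratings) >= VISIBILITY_THRESHOLD

# A Note rated helpful only by one "side" stays hidden ...
one_side = [{"helpful": True, "viewpoint": "A"} for _ in range(20)]
# ... while a Note also rated helpful across viewpoints becomes visible.
both_sides = one_side + [{"helpful": True, "viewpoint": "B"} for _ in range(5)]

print(note_is_visible(one_side))    # False (0.3 < 0.4)
print(note_is_visible(both_sides))  # True  (1.0 >= 0.4)
```

The point of the sketch is simply that the visibility decision collapses a contested collective judgement into a single number, with the threshold set unilaterally by the platform.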

Critical scholars have long argued that if platforms are to become more democratic, involving users more directly in their governance is urgent. However, a system that relies on a simplistic and unilaterally defined form of “consensus” is clearly not a solution.


Maxim Alyukov and Alex Voronovici (University of Manchester)

Since the advent of LLM-powered chatbots — or more precisely, since chatbots like ChatGPT began to be paired with search engines — the idea that malign actors such as the Kremlin can ‘poison’ their training data has attracted intense scrutiny. Recently, NewsGuard, a company that rates websites and tracks mis- and disinformation, published a report investigating whether leading generative AI applications, such as ChatGPT, repeat Kremlin disinformation. NewsGuard analysts asked ten leading chatbots questions based on content spread by the Pravda network — a coordinated group of pro-Kremlin disinformation websites. According to NewsGuard, the results were alarming: chatbots ‘repeated false narratives laundered by the Pravda network 33 percent of the time’. 

The report advanced a theory of ‘LLM grooming’: Kremlin disinformation sources flood the internet with false content so that users receive pro-Kremlin narratives from chatbots in response to political queries. If true, this would suggest a deeply concerning trend — pointing to the potency of Russian propaganda and implying that widely used chatbots may serve as extensions of Kremlin foreign influence operations. The report made headlines. 

However, the report was methodologically flawed on many levels. Together with Mykola Makhortykh and Alex Voronovici, we independently assessed the risk of Kremlin-linked disinformation in chatbot outputs. We conducted an AI audit across four major LLM-powered chatbots: ChatGPT, Gemini, Copilot, and Grok. Several example prompts from the NewsGuard report were used, along with eight additional prompts varying in generality. Some focused on broad claims (e.g., U.S. biolabs or NATO presence in Ukraine), while others addressed niche claims found only on Pravda websites — such as specific allegations about NATO training facilities in a particular location (e.g., Odesa, Ukraine). We generated 416 responses from both the UK and Switzerland. 
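As a rough sketch of how the results of such an audit might be tallied once each generated response has been hand-coded, consider the following; the field names and example rows are hypothetical, not the coding scheme actually used in the study.

```python
# Hypothetical tally of hand-coded chatbot responses; the fields below are
# illustrative stand-ins, not the study's actual coding scheme.
from collections import Counter

responses = [
    # one dict per generated answer, coded manually after collection
    {"model": "Copilot", "false_claim": True,  "cites_pravda": False},
    {"model": "ChatGPT", "false_claim": False, "cites_pravda": True},
    # ... the full study coded 416 such responses
]

total = len(responses)
false_rate = sum(r["false_claim"] for r in responses) / total
pravda_citing = [r for r in responses if r["cites_pravda"]]
false_by_model = Counter(r["model"] for r in responses if r["false_claim"])

print(f"False claims: {false_rate:.0%} of {total} responses")
print(f"Responses citing Pravda domains: {len(pravda_citing)}")
print(f"False claims per model: {dict(false_by_model)}")
```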

Some preliminary observations: 

  1. Very few responses contained false claims – on average, 5%, with Copilot being the most vulnerable, producing false responses in 13% of instances. 
  2. Only 32 out of 416 responses (8%) referenced Pravda domains. Just two supported false claims using Pravda as a source; the remaining 30 debunked the claims they discussed. 
  3. Pravda references appeared primarily in response to niche prompts poorly covered by mainstream media.

This suggests the issue is not LLM ‘poisoning’ but what Microsoft researchers refer to as data voids — areas of the internet where high-quality information is sparse or absent, allowing low-quality or manipulative content to dominate search and retrieval outputs. When users pose questions about niche, emerging, or controversial topics, chatbots may struggle to locate authoritative sources. In such cases, if disinformation sources are more readily available, AI systems may inadvertently reproduce them. 

This interpretation has significant implications — not only for understanding foreign interference but also for assessing how users might realistically encounter Pravda links in chatbot responses. First, if the data void theory is correct, disinformation responses result from information scarcity rather than algorithmic bias or deliberate manipulation. Second, for Pravda links to appear in chatbot outputs, several specific conditions must be met: 

  1. The user must ask about highly specific, obscure topics and possess detailed prior knowledge. 
  2. The query must concern claims not covered by reputable media, as chatbots prioritise those sources. 
  3. The chatbot must lack safety mechanisms to flag disinformation sources.

While technically possible, this is highly unlikely: 

  1. Users rarely submit such specific and informed queries. 
  2. Data voids often close quickly — in our case, some Pravda references appeared prior to testing but disappeared days later, as higher-quality information filled the gap. 
  3. Even when data voids persist, chatbots still debunk the claims in the vast majority of cases.

The alarmist discourse around LLMs and disinformation oversimplifies how these systems function. It risks diverting attention from realistic assessments of AI vulnerabilities — such as its potential use in malware generation — and from more nuanced understandings of how disinformation spreads in digital environments.


Culture Wars and Climate Crisis

Is much of the current wave of misinformation and disinformation—from the Great Reset to the Great Replacement—driven by the culture wars? Are the conspiracy theories just a pretext to wreck the welfare state and avoid addressing the climate crisis? And to what extent is it an organic, authentic, bottom-up movement or coordinated campaigns of manipulated influence? Does the focus on notions of disinformation and conspiracism make it virtually impossible to discuss actual conspiracies? Is badging all opposition to Net Zero ‘climate denialism’ exacerbating rather than combatting culture wars posturing? 


Matthew Paterson (University of Manchester)

Culture wars and climate crisis

  1. In the midst of a concerted effort to roll back climate action.
    1. Transnationally organised – focus is on the UK dimension
      1. One particular effect of that focus – open climate denial is relatively minor part of this backlash in the UK.
        1. Reform ‘not climate sceptics but net zero sceptics’
      2. But from climate denial to ‘climate refusal’ (Daggett 2019)
  2. Key to this backlash is:
    1. Underlying political economy (shoring up of fossil fuel interests) – in NZSG era biggest focus was on overturning the fracking moratorium
      1. Close connections through 65 Tufton Street.
    2. Culture wars framings – climate change as part of ‘all things woke’
    3. Populist mobilisation around
      1. Fear of things being lost (mostly this is about cars). Freedom narrative around petrol/diesel phaseout, LTNs. Conspiracy version in ‘15-minute cities’.
      2. Questions of inequality and social justice. Disproportionate costs of net zero policy on the poor.
  3. Where is the disinformation?
    1. Lots of just opportunistic things - ‘Miliband accused of turning blind eye to modern slavery in pursuit of net zero’
    2. But mostly in the arguments around costs of RE and the connection of the energy transition to the ‘cost of living’ crisis.
      1. Framing of ‘greenflation’ – as opposed to ‘fossilflation’ which is absolutely central to the inflation crisis from 2021 onwards. (cf next session on geopolitics which may also connect here).
  4. This however then gets rolled into the social justice critique by ‘anti-net zero populists’ in reinforcing ways.
    1. Heart of that though is about the distributional benefits of climate policy esp that focused on households
      1. Heat pumps, EVs, RE support all framed as ‘attack on the working class’ (Daily Express)
    2. Populist strategies roll this in with the overall costs question to amplify the message. The overall cost message is BS, but the distributional costs one – even if it’s a bad-faith argument on their part – has been very effective.


Theo Kindynis (City St George's, University of London)

Conspiracy theories and theorising conspiracies 

This intervention challenges scholars to adopt a more critical orientation to conspiracy theories. History shows that political, corporate, financial and military elites routinely conspire to do harm and to deceive and mislead the public. The first part suggests that a moral panic over conspiracy theories has given rise to a conspiracy theory research agenda that has pathologised and criminalised conspiracy theories. The second part argues that although conspiracies are important sociological and political phenomena, the term ‘conspiracy theory’ functions to stigmatise certain narratives. The final part of the paper argues that scholars should take conspiracy theories seriously and seek to investigate conspiracies. If popular conspiracy theories about elite wrongdoing are invalid, we should develop better explanations of how and why conspiracies take place, as well as who conspires and to what ends.


Ed Pertwee (University of Manchester)

In his contribution to the workshop, Ed Pertwee warned against attributing too much causal power to misinformation and disinformation. He emphasised instead the importance of taking a structuralist approach that connects what we are observing in the information space with underlying problems such as inequality, political disenfranchisement and distrust, and asking how narratives around these issues are selectively shaped and amplified by capitalist digital media platforms. Unless we can actually begin to address those underlying structural issues, he argued, there will continue to be a ready audience for misinformation and conspiracy theories that demagogues and grifters can easily exploit, and that platforms can continue to monetise.


Vera Tolz (University of Manchester)

Vera Tolz’s presentation questioned the analytical utility of terms such as disinformation and misinformation, arguing that they often function more as performative tools of legitimation than as meaningful categories for analysing complex social phenomena. Drawing on insights from the (Mis)Translating Deceit project, Vera briefly explored how these terms tend to reduce multifaceted discourses—such as those surrounding the so-called culture wars or the climate crisis—to simplistic true/false binaries. Emphasizing the value of historical perspective, she cited several examples to support the argument that understanding narrative battles around ‘the culture wars’ requires in-depth analysis of the specific socio-political and economic contexts in which questions of values and identity—central to such narratives—gain sudden prominence. For such analysis, the term disinformation rarely serves a useful purpose.


Geopolitics 

Does the collapse of the transatlantic order alter how we think about and engage with disinformation? Or has there been too much focus on threats to Western democracies, and not enough attention on the Global South, and to the post-Soviet spaces?


Isabella Wilkinson (Chatham House)

Geopolitics of/and disinformation: towards a strategic, coordinated response

The so-called collapse of the transatlantic order has undeniably altered how disinformation is tackled, engaged with and spoken about. But this change by no means came out of the blue: it is the crest of a bigger, longer-term global wave. Democracies’ information spaces are already weakened by threat actors, vectors and vulnerabilities, both established and emerging. They now face a more urgent challenge. Counter-disinformation efforts – from public messaging to technical work – are more and more politicised and securitised. Turning to the UK and European democracies, confronting this crisis head-on demands an urgent refresh of coordination and information-sharing. It demands rejuvenating approaches to tackling shared challenges, based on coherent definitions (a long-standing issue in the field), building new technical capacities (for example, for attribution) and careful policy innovations (for example, where or whether to bring a national security frame to counter-disinformation efforts). 


Pete Lockwood (University of Manchester)

Trust and conspiracy: a perspective from the political anthropology of Kenya 

In my remarks, I engage with the question of how to separate genuine concerns from exaggerated fears in today’s interconnected ‘polycrisis’, drawing on my research as a social anthropologist in Kenya. Over the past eight years, I’ve spent 26 months living in a peri-urban neighbourhood close to Nairobi, studying politics, family dynamics, and land issues - particularly during Kenya’s 2017 and 2022 elections. The idea of ‘conspiracy theories’ or ‘post-truth’ politics often assumes a clear baseline of factual consensus, a ‘before’ and ‘after’, but Kenya’s elections complicate this. In 2017, Cambridge Analytica targeted middle-class voters with apocalyptic videos warning that opposition leader Raila Odinga (a Luo) would destroy Kenya if elected. But these fear-mongering tactics were nothing new. They tapped into deep-seated ethnic anxieties dating back to Kenya’s immediate post-independence politics and histories of political violence in the 1980s and 1990s. Ethnic Kikuyu media and gossip had long cast Luos as threatening outsiders, framing elections as an existential battle for survival, taking place in the shadow of anticipated communal violence. 

More recently, however, economic crises in Kenya have been reshaping these divisions. Last year, young Kenyans frustrated by IMF-backed austerity measures took to the streets to protest tax hikes on essentials. They used ChatGPT to break down budget documents and translate them into Kiswahili, a throwback to the optimism once attached to social media activism as a source of democratic potential. But the protests also show that ‘conspiracies’ – ideas about mutual interest – stem from tangible historical and material grievances, in this case the perceived complicity of Bretton Woods institutions in shaping economic life in Kenya. Whether ethnic tensions or IMF policies, such fears are not simply invented and spread through ‘misinformation’ – they are socially embedded phenomena, rooted in historical experiences of inequality and moral imaginations of betrayal spurred by real events. 

The programme raises the issue of separating ‘justifiable concerns from inflated fears’ in the context of a global polycrisis. I am going to speak to this question from my perspective as a social anthropologist who has carried out 26 months of ethnographic fieldwork in Kenya across the past 8 years. These were periods of immersion in a peri-urban neighbourhood on the outskirts of Nairobi. I lived with a low-income family in Kiambu County, part of the central Kenya region that is home to the Kikuyu ethnic group, who comprise 17 per cent of the country’s population, the largest share of any ethnic group in the country. Alongside topics of land inheritance and family life, my research there has focused on two national election cycles – 2017 and 2022 – and I have had the opportunity to reflect on topics of ‘post-truth’ and ‘conspiracy’ from a context outside Euro-America. 

  • The very idea of conspiracy theory or associated notions of ‘post-truth’ of course implies an informational norm – a consensus against which conspiracy theories can be measured. But in Kenya’s electoral politics, this idea quickly runs into trouble. 
  • During Kenya’s 2017 elections, Cambridge Analytica developed videos targeted at English-speaking members of Nairobi’s middle class describing the threat posed to the then incumbent Jubilee Government by the prospect of opposition leader Raila Odinga winning the election. A long-time anti-corruption campaigner and government critic, Odinga had developed a significant following from his native Homa Bay County, home of the Luo ethnic group, and was regarded by President Uhuru Kenyatta as a threat. 
  • One of these videos, entitled ‘The Real Raila’, presented an apocalyptic vision of Kenya in 2020 in which Odinga had revoked the constitution and dissolved parliament while presiding over an economic crisis: ‘There is no money for clean water. There is no money for education. There is no money for farming. Women are giving birth in the streets.’ 
  • Much was made of Cambridge Analytica’s presence in the national and international media. But these videos were hardly breaking new ground. In central Kenya, where I was living at the time, Kikuyu language gospel songs had already been circulating for months describing Odinga’s ethnic Luo supporters as the ‘mbarĩ ya ũiru’, the clan of jealousy – jealous, apparently, of Kikuyu holding the Presidency. On vernacular radio stations, disk jockeys harnessed Biblical metaphors to warn Kikuyu to vote in large numbers for Kenyatta, lest the Luo take power and destroy them. Many of my Kikuyu interlocutors spoke to me of their fears if the Luo were to capture the Presidency – that these so-called ‘rough’ people would expropriate businesses and attack their homes. From a young age, Kikuyu receive a political education about the dangers of the ‘Luo bogeyman’ – the need to vote for ‘their guy’. Kenyatta ultimately won the election. 
  • These ethnic fault-lines have defined Kenya’s elections practically since independence, a political history too complex to unpack here. But it points towards the limits of treating ‘conspiracy theories’ as exceptional ideas spread solely on social media. In Kenya, voters have long viewed elites from opposing ethnic groups as deeply and destructively partisan. Fears and rumours circulating on social media have deeper, social roots that map onto histories of communal violence and persistent anxieties about their return. 
  • However, these ethno-national alignments have been shifting in the context of more recent cost-of-living crises in Kenya, bringing me to the topic of the protests that took place in the summer of last year. Young Kenyans, the so-called ‘Zillennial’ generation, took to the streets chanting ‘No Justice, No Peace’. The trigger was President William Ruto’s proposed Finance Bill. [1] 
  • In 2024, Ruto negotiated a relief package with the IMF to service Kenya’s 80 billion dollar national debt, though with conditions attached: cuts to government budgets, and further privatisation of parastatal bodies. It was where the cuts fell that provoked unrest - increased VAT on household staples like cooking oil, already driven up in price by global commodity shocks and a depreciating Kenyan shilling. It was precisely in this context that protestors drew attention to the lives of wealthy politicians, living luxuriously at the expense of the state. 
  • Knowledge of the IMF’s involvement in the proposed taxes was hardly absent from the streets or from Twitter, where criticism was shared under the hashtag #SayNotoIMF. In The Elephant online newspaper, Kari Mugo explained how ‘every Kenyan knows that President William Ruto has become the new darling of the US and the G7 for sending Kenyan troops to Haiti and for accepting financing terms that favour the interests of foreign investors.’ Here, protestors were dealing with conspiracy in its most basic terms, connecting tangible mutual interests to critique a new round of ‘structural adjustment’. The Daily Nation and the Business Daily newspapers reported that IMF officials had predicted resistance to the bill, anticipating it as a ‘medium risk’, while encouraging the government to hold firm. [2]
  • From the perspective of austerity Europe, such criticisms are entirely recognizable. But in addition to the grievances, it is the tactics that stand out. Protest organisers used ChatGPT to simplify the budget documents, translating them into Kiswahili, in order to share them with friends and family and drum up discontent. These acts recall the optimism surrounding social media in the previous decade, when liberal commentators welcomed the online organising behind the Arab Awakening and heralded its potential to strengthen democracy and challenge authoritarianism.

[1] https://theconversation.com/kenya-unrest-ruto-awakened-class-politics-that-now-threatens-to-engulf-him-233796

[2] https://nation.africa/kenya/business/imf-told-state-to-ignore-anti-tax-protests-4672248


Václav Štětka (Loughborough University)

Blind spots in disinformation studies? Lessons from Eastern Europe 

The main argument advanced in this intervention is that the field of disinformation studies has been shaped predominantly by issues and challenges originating in the West – particularly the U.S. and Western and Northern Europe – resulting in several blind spots not only in research, but also in policy and regulatory approaches. These approaches are often developed without sufficient consideration of the unique characteristics of media and political landscapes in Eastern European countries. Among those blind spots are the following: 

  • The predominant orientation on social media or digital platforms as primary channels of mis/disinformation. Even though they do play a very important role in Eastern European disinformation ecosystems, there are various other online-based channels known to disseminate disinformation, including “alternative” news websites, or the so-called “chain emails” – group email messages with concealed sources, designed to be disseminated virally among colleagues, friends or family members.
  • Lack of attention paid to established mainstream outlets, which can also serve as significant producers and amplifiers of disinformation – and in some countries, they may even occupy a central role within local disinformation ecosystems, especially when captured by political powers or allied private actors.
  • Emphasis on foreign actors, overshadowing the role of domestic political or business elites. While foreign-orchestrated disinformation campaigns – particularly those driven by Russia – pose a clear and present threat to democracy and information autonomy in many EE countries, local elites can be equally complicit in the proliferation of disinformation within the region, especially when they get into power, as demonstrated e.g. in Hungary, Poland or Serbia (Štětka & Mihelj, 2024).