07 Jul

by Emma Connolly. Emma is a Research Fellow at UCL's Digital Speech Lab. Her work explores the cross-platform movement of political content on social media, and methods for investigating it. She has experience working with X, Instagram, TikTok, and Reddit.


Harmful political content – such as mis- or disinformation – is posted on an online forum. It gains momentum and goes mainstream through amplification on YouTube, Instagram, and TikTok. It trends on X before official verification or labelling protocols can be applied.

This scenario is not merely hypothetical. It’s a common tactic used in information manipulation activities across the world, including Russia’s ‘Secondary Infektion’ operation, a campaign to discredit Europe and NATO with forged documents and fake online accounts. These operations rely not just on deception, but on speed and cross-platform spread. When harmful content goes viral, it is difficult to contain, often spilling over into real-life consequences. 

Authoritarian states are not the only culprits. In the ‘Pizzagate’ conspiracy, which emerged in the lead-up to the 2016 U.S. election, a false story originating on alt-right platforms claimed that Hillary Clinton was orchestrating a child sex-trafficking ring out of a D.C. pizzeria. The story repurposed content from Instagram, used multiple languages, and spread from 4chan and Reddit to YouTube and X. In December 2016 alone, X users shared the term ‘pizzagate’ over one million times. The story motivated a man to enter the pizzeria armed with a weapon, intent on rescuing children who were never there.

Welcome to the invisible war where the first casualty is truth. These cases are not outliers. They exemplify how disinformation mechanisms work: fast-moving, multi-platform, and hard to contain.


The speed of viral spread 

Immediacy and cross-platform spread are defining characteristics of disinformation in the digital age. Content does not stay on one platform. It travels between Reddit, X, Facebook, Instagram, YouTube, and TikTok, and is recontextualised and adapted by users as it moves. These users aren’t necessarily those with the most followers – although the role of influencers and ‘micro-influencers’ in politics is increasingly significant. Instead, these actors are contextually specific, and sometimes non-human, taking the form of bots. The fluidity and unpredictability with which harmful content migrates mean that it gains momentum faster than interventions like fact-checking can keep up. This also presents challenges for those looking to track – or stop – its spread.

Disinformation is particularly powerful when it is shared on multiple platforms. Exposure via social media increases its intensity – the quantity of content that individuals encounter over a period of time. When the same information is received from multiple sources, it reinforces the narrative and feels more credible, even if it’s false. And because it happens so quickly, cumulative or concurrent exposure reduces opportunities for intervention, whether from platforms or through digital education strategies.

The type of exposure also presents a problem. Deliberately harmful or misleading content increasingly blurs with entertainment, so users encounter it as they browse and scroll, whether they’re looking for it or not. TikTok in particular reaches very young users, meaning disinformation can increasingly influence vulnerable audiences. A recent study has termed this ‘incidental exposure’, and it’s particularly challenging to stop because content behaves differently depending on the platform it appears on.


Understanding platform architecture 

Disinformation, especially deliberately promulgated conspiracy theories, often gains traction on right-wing corners of sites such as 4chan and Reddit. Both are mostly anonymous, making them ripe for the creation of fake accounts and astroturfing (making heavily coordinated activity appear grassroots). Their decentralised moderation protocols also mean that fake stories can be amplified quickly by cross-posting on multiple threads or subreddits.

Even the most outlandish conspiracies can reach mainstream visibility through X – helped by its right-wing fringe. X is a conversational tool that connects topics as well as users. Its notable use of hashtags drives the resurgence and re-contextualisation of content long after its initial engagement.

Platforms that encourage the sharing of short-form content, such as TikTok and Instagram, have specific algorithmic architecture that prioritises content, not users. Instagram and Threads target political posts to users, even from people they do not follow. TikTok’s engagement algorithm uses previous interactions to push content based on what it predicts users will like, rather than who they know. This makes viral spread unpredictable and non-linear. On TikTok specifically, content is shared through ‘reactive imitation’, where users engage in lip-syncing and dance trends, incorporating sound effects or trending songs. The danger of this is that it makes harmful content appear innocuous.


How is ‘virality’ changing? 

Traditionally, viral content peaks rapidly before losing traction and vanishing. Virality is achieved within a few hours and measured through metrics such as the number of likes and shares. However, platform affordances and user behaviour are changing the way that content goes viral.

Even when a false claim appears to die down, it can resurface unexpectedly, giving viral content potential longevity. Disinformation often re-emerges out of context, reframed as trending content or breaking news. During the recent exchange of fire between Iran and Israel, pro-Israeli social media accounts shared what they called ‘mounting dissent’ against the Iranian government. In reality, these were recycled protest clips from previous unrest.

Platform design facilitates this. On TikTok’s For You page (FYP), videos are not timestamped. Anecdotally, users are seeing their content go viral years after it was initially posted. Recent changes to Instagram’s ‘Remix’ features have made it easier for users to reuse others’ content, so it is more likely to reappear in other forms. This is especially dangerous in moments of political instability or social unrest, when people are less likely to verify the information they encounter.

What all platforms have in common is that they monetise and reward engagement, not truthfulness. Any content generating likes, shares, or adaptation is likely to spread further, regardless of its veracity.


How can we track it? 

Our current tools are not well equipped to map the spread of content across platforms. Traditional methods – content or sentiment analysis, or single-platform network mapping – fall short. Studies are emerging that begin to account for the dynamic movement of content across platforms, but these are still few and far between.

The danger is that narratives, and the platforms they circulate on, evolve so quickly that methods developed for investigating them are always playing catch-up. It’s not enough to study what is false, nor to focus on individual users. We must emphasise the patterns of information spread, and the mechanisms behind it.

To really understand how harmful content spreads, researchers can use existing tools such as network mapping. But these should be updated to focus on connections between content – accounting for instances of repackaging and remixing – as well as connections between users. Methods for identifying coordinated networks are also emerging as effective alternatives. Another option is to employ flexible viral models that account for platform affordances and predict how and when disinformation will spread. It is vital that researchers continue to develop cross-platform detection tools that take each platform’s technical affordances – such as algorithm design – into account.
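To make content-level network mapping concrete, here is a minimal sketch in Python. Everything in it is illustrative: the posts, field names, similarity measure, and threshold are assumptions, and real investigations would draw on platform data and more robust matching (text embeddings, perceptual hashes for images) rather than crude lexical comparison.

```python
# Minimal sketch: a network whose nodes are pieces of *content*, not users,
# so repackaged or remixed versions of the same claim cluster together
# even when they appear on different platforms. All data is hypothetical.
from difflib import SequenceMatcher
from itertools import combinations

import networkx as nx

posts = [
    {"id": "r1", "platform": "reddit", "text": "Leaked memo proves the cover-up"},
    {"id": "x1", "platform": "x", "text": "LEAKED memo PROVES the cover-up #truth"},
    {"id": "t1", "platform": "tiktok", "text": "you won't believe this leaked memo"},
    {"id": "y1", "platform": "youtube", "text": "Top 10 cat videos of the week"},
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity; a stand-in for embedding-based matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

G = nx.Graph()
for post in posts:
    G.add_node(post["id"], platform=post["platform"])

THRESHOLD = 0.6  # assumed; would need tuning against labelled examples
for a, b in combinations(posts, 2):
    score = similarity(a["text"], b["text"])
    if score >= THRESHOLD:
        G.add_edge(a["id"], b["id"], weight=score)

# Each connected component approximates one narrative; components spanning
# more than one platform are candidates for cross-platform migration.
for component in nx.connected_components(G):
    platforms = {G.nodes[n]["platform"] for n in component}
    if len(platforms) > 1:
        print(f"Cross-platform cluster: {sorted(component)} on {sorted(platforms)}")
```

The design choice worth noting is that edges link content to content: a repackaged TikTok version of a Reddit post ends up in the same cluster, which a user-to-user follower graph would miss entirely.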

It’s impossible to fact-check every piece of content. But we can slow its spread and mitigate its impact. Based on what we know about cross-platform diffusion, here are five strategies:

1. Identify patterns

Common fact-checking tools such as Snopes are useful, but fact-checking a false claim once it has gone viral is already too late. Instead, we need to watch how it moves. Detecting patterns of coordinated behaviour – especially with the help of artificial intelligence – is likely to be more effective than focusing on individual posts. Tools such as Hoaxy and ClaimBuster aim to enable this in real time.
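As an illustration, here is a minimal sketch of one common coordination signal: many accounts pushing the same URL within seconds of each other. The records, window size, and minimum cluster size are invented for the example; real detectors combine many such signals (shared hashtags, synchronised account creation, near-identical text).

```python
# Minimal sketch: flag bursts of distinct accounts sharing the same URL
# within a narrow time window. All records and thresholds are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

shares = [  # (account, url, timestamp) - illustrative records only
    ("acct_a", "http://example.com/story", datetime(2024, 5, 1, 12, 0, 5)),
    ("acct_b", "http://example.com/story", datetime(2024, 5, 1, 12, 0, 9)),
    ("acct_c", "http://example.com/story", datetime(2024, 5, 1, 12, 0, 12)),
    ("acct_d", "http://example.com/other", datetime(2024, 5, 1, 15, 30, 0)),
]

WINDOW = timedelta(seconds=30)  # assumed; genuine users rarely co-post this fast
MIN_ACCOUNTS = 3                # assumed minimum size for a suspicious burst

by_url = defaultdict(list)
for account, url, ts in shares:
    by_url[url].append((ts, account))

for url, events in by_url.items():
    events.sort()  # time order, so each window scan is a forward pass
    for i, (start, _) in enumerate(events):
        burst = {acct for ts, acct in events[i:] if ts - start <= WINDOW}
        if len(burst) >= MIN_ACCOUNTS:
            print(f"Possible coordination on {url}: {sorted(burst)}")
            break  # one flag per URL is enough for this sketch
```

The point of the sketch is the shift in unit of analysis: no single post here is checked for truth; the suspicious object is the temporal pattern across accounts.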

2. Slow down virality

Platforms can slow disinformation by delaying the sharing process, even by a few seconds. Cumulatively, this adds up. This could involve prompts for accuracy, or asking users if they’ve read an article before reposting it. These can reduce the spread of harmful information without restricting free speech. 
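Purely as an illustration of what this friction might look like, here is a toy sketch of a sharing flow with two speed bumps. The function, prompt, and delay are hypothetical and do not reflect any platform's actual implementation.

```python
# Hypothetical share flow with two friction points: an accuracy prompt
# for unread articles, and a short delay before the share completes.
import time

def attempt_reshare(post_url: str, user_opened_article: bool, confirmed: bool) -> bool:
    if not user_opened_article and not confirmed:
        # Accuracy prompt: pause the share until the user actively confirms.
        print(f"You haven't opened {post_url}. Confirm to share anyway.")
        return False
    time.sleep(2)  # a few seconds of friction; cumulatively significant at scale
    print(f"Shared {post_url}")
    return True

# First attempt is paused by the prompt; a deliberate second attempt succeeds.
attempt_reshare("http://example.com/story", user_opened_article=False, confirmed=False)
attempt_reshare("http://example.com/story", user_opened_article=False, confirmed=True)
```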

3. Understand viral diffusion

Viral patterns are not equal across platforms. Disinformation often emerges on Reddit or 4chan but quickly goes mainstream. It diffuses rapidly on TikTok, more slowly but with greater persistence on Instagram, and gains longevity on YouTube and X.
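To illustrate what those different shapes look like in practice, here is a toy comparison using two crude summary statistics: time-to-peak and how long engagement stays above half its peak. The hourly share counts below are invented for the example.

```python
# Toy comparison of diffusion shapes per platform. The series are invented:
# a fast spike-and-decay pattern versus a slower, more sustained one.
hourly_shares = {
    "tiktok":    [5, 80, 400, 900, 300, 60, 10, 2, 1, 0],
    "instagram": [2, 10, 40, 90, 120, 130, 110, 95, 80, 70],
}

for platform, series in hourly_shares.items():
    peak = max(series)
    time_to_peak = series.index(peak)
    # Hours the series stays at or above half the peak: a rough persistence measure.
    persistence = sum(1 for count in series if count >= peak / 2)
    print(f"{platform}: peaks at hour {time_to_peak}, "
          f"stays above half-peak for {persistence} hours")
```

Even summaries this crude make the practical point: an intervention timed for TikTok's spike would arrive far too early, or far too late, for Instagram's plateau.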

4. Leverage counter content

If patterns of viral behaviour are better understood, they can be leveraged for good. Countering harmful content by circulating alternative narratives can help accurate stories spread through the same mechanisms.

5. Shape the conversation

Digital literacy and education programmes can empower users, especially younger ones, to think critically about the content they engage with. Social media representatives say that they want platforms to be more transparent – but there is a big difference between signalling intent and following through with improved algorithmic design and user experience.

The bottom line is that disinformation campaigns are rarely contained to one source. They can be hard to trace to a single origin; they migrate quickly, and gain momentum by moulding to the contours of platform affordances. To fight them, we must recognise disinformation campaigns as powerful, adaptive threats amplified by both human and algorithmic intervention. Until we do, we’ll always be playing catch-up.
