by Nicolas Hénin, Project Consultant, author, and journalist
Clicks are the cornerstone of influence. Content that seeks virality and engagement is referred to as clickbait for a good reason - translated into my mother tongue, French, it becomes putaclic, literally “click-whore”, a portmanteau of putassier, French for sleazy, and click.
Clicks have also been the cornerstone of the Internet economy ever since the Web became a ubiquitous commodity for the general public a quarter of a century ago, both as a commercial marketplace and as a source of information. They enable cookies to be placed, audiences to be tracked, browsing behaviour to be monitored, and advertising data to be sold.
Clicks are, last but not least, the cornerstone of online marketing. They drive traffic, increase visibility and search engine rankings, and, in the case of commercial websites such as most media sites, generate advertising revenue.
Are, or rather were. Because all this is changing.
Because online platforms are suppressing links. ‘Link below’, ‘link in the first comment’: these phrases are increasingly common on social networks, from X to LinkedIn. They are attempts to circumvent a strategy that significantly reduces the visibility of any post containing an external link. The reason is simple: as long as you keep scrolling on the platform itself, you increase its audience and expose yourself to more advertising. If you click on an external link, the platform lets you leave, without knowing when, or whether, you will come back. Its goal is therefore to actively ensure that you do not click.
However, links serve an important informational function. They are connectors pointing to sources. They are what enables information to be justified and verified, even though a critical approach to those sources remains imperative: evaluating a source’s perspective and its possible bias is essential. Researchers and journalists alike know that their work is worth little if detached from that of their peers and colleagues, and that its value lies largely in high-quality sources. The ultimate example is Wikipedia, the only website that forbids itself from being the primary source of information: nothing may be written there that is not supported by a reliable external source. Information without a source is not really information. If the source is poor, obscure, hidden or tampered with, the integrity of the information is compromised.
This move by a number of platforms to reduce clicks to external sites has an impact on their users: the proportion of “shares without clicks” (SwoCs) on social media is constantly increasing. A rising number of people share content they haven't read, based solely on its title or preview, and possibly on its origin (trusting its credibility because of the reputation of the website that published it or the account that shared it), with significant consequences for the quality of information in circulation. Content (clickbait) that provokes, farms or fuels outrage is more viral and overrepresented, favouring extremist and poor-quality material. This phenomenon is also accompanied by an acceleration in the number of posts. Opening and reading articles organically slows down the pace of posting and sharing, while improving readers’ knowledge and their full understanding of complex news that cannot be summarised in a headline. It thus runs contrary to the platforms’ clickbait strategy and is to be hindered at all costs.
This rise in zero-click behaviour is also having an impact on the press itself, prompting it to change its approach. Many media outlets, including traditional quality media, tend to adjust their headline strategies to prioritise engagement and boost online traffic, in an effort to maintain visibility at the expense of informational quality. This, however, is a losing battle. The acceleration of zero-click behaviour is locking the media into a decline in advertising revenue, further reducing their ability to produce quality information.
Online searching is also changing. By early 2025, 55% of Americans were already using chatbots rather than search engines to conduct online searches, even if most of these searches are not related to news and are therefore less problematic in terms of disinformation. While researchers studying disinformation have often complained about the lack of transparency in Google's algorithm, we can at least acknowledge a certain clarity: at the top of the search page (the SERP, for Search Engine Results Page), sponsored results are displayed and labelled as such; they are the product of SEA (Search Engine Advertising), meaning those sites have paid to have their links shown first to users performing certain searches. Below them appear the results considered organic, the product of SEO (Search Engine Optimisation): these sites have been configured, through a complex process involving a wide range of operations, from audience building to cross-referencing, to maximise their chances of appearing among the top results of a search.
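To make that two-layer structure concrete, here is a minimal toy sketch; it does not use any real search engine's API, and the sites and field names are invented for illustration only. It simply shows how a results page mixes labelled paid placements above organically ranked links:

```python
# Toy model of a search engine results page (SERP).
# All data and field names are invented; no real search API is involved.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    title: str
    sponsored: bool  # True = paid placement (SEA), False = organic ranking (SEO)

serp = [
    Result("https://example-advertiser.test", "Sponsored offer", sponsored=True),
    Result("https://example-news.test/article", "Organic news article", sponsored=False),
    Result("https://example-blog.test/post", "Organic blog post", sponsored=False),
]

# Paid results are displayed first and labelled as such;
# organic results follow, ordered by the engine's relevance signals.
paid = [r for r in serp if r.sponsored]
organic = [r for r in serp if not r.sponsored]

for r in paid + organic:
    label = "[Ad]" if r.sponsored else "    "
    print(label, r.title, "-", r.url)
```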
Nowadays, for a marketer (or an influencer, or a disinformation entrepreneur: these professions are different, but they share many techniques, as well as the objectives of visibility and attention coupled with retention), the goal is no longer so much to appear among the top Google results as to have their brand, or their narrative, cited in the responses of the main LLMs. Several companies boast of being able to alter these results. Press reports have disclosed several influence operations, such as the one carried out by the US company Clock Tower X LLC on behalf of the Israeli Foreign Ministry, which targeted Generation Z on chatbots and fed them narratives favourable to Israel.
A widely noticed study by NewsGuard, published in March 2025, warned about LLM grooming, a technique that would allow the pro-Kremlin network Pravda to influence major AI models. This strategy would explain the huge number of articles produced by this network (no fewer than 3.6 million in 2024, according to NewsGuard) despite laughable visibility indicators: these articles generate no engagement and are never shared. Evidently, no one seems to read them, except for LLM crawlers, which ingest them as training data. The main purpose of this profusion of articles would thus be to poison the well of online knowledge that generative AI draws from.
However, the results of this study have been questioned and qualified by subsequent research, carried out by researchers affiliated with the (Mis)translating Deceit project, which suggests data voids rather than intentional manipulation of training data. ‘For disinformation to appear in a response, several conditions need to align. Users must ask 1) highly specific questions on 2) poorly covered topics, and 3) chatbot guardrails must fail,’ note the authors, who conclude that ‘users are unlikely to encounter such content under normal conditions’ and urge caution against overstating the role of malicious actors in AI. We know how much Russian propaganda loves it when Westerners overestimate its capabilities or attribute to it actions it has not committed.
Nevertheless, the advent of GEO (Generative Engine Optimisation) further complicates the traceability of information. Techniques for increasing or reducing visibility are largely obscured by the LLMs’ ‘digestion’ mechanism, which provides a fully written, ready-to-consume response rather than a simple list of links to sources whose relevance the user can quickly assess. The algorithmic mechanics, the ‘reasoning’ pursued by the chatbot, are mostly hidden. Chatbots have even bigger problems with links: they rarely provide them unless explicitly instructed to, and they often invent them through hallucination. Much will depend on how willing AI companies are to address these issues and to introduce stronger guardrails and greater transparency around how responses are constructed and sourced. This will be a major issue for those involved in marketing, as chatbots are likely to offer two-tier pricing for advertisements, making their services cheaper when links are fully visible and more expensive when they are concealed.
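One simple check a reader or researcher can already perform is to verify that the links a chatbot cites actually resolve. The sketch below is only an illustration: the second URL is a placeholder standing in for an invented citation, and a link that resolves is of course a necessary rather than sufficient condition for a genuine, relevant source.

```python
# Minimal link checker: do the URLs cited by a chatbot actually resolve?
# Resolving proves the page exists, not that it says what the chatbot claims.

import requests

cited_urls = [
    "https://en.wikipedia.org/wiki/Clickbait",                 # real page, should resolve
    "https://example.com/article-that-was-never-written-123",  # placeholder for an invented link
]

for url in cited_urls:
    try:
        # HEAD keeps the request light; follow redirects to the final page.
        response = requests.head(url, allow_redirects=True, timeout=5)
        status = "OK" if response.status_code < 400 else f"error {response.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{url} -> {status}")
```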
This complicates the task not only for researchers investigating disinformation, but also for the public: examining and questioning sources is one of the fundamentals of media literacy, even if this principle can be undermined in contexts lacking access to politically diverse material. Moreover, there is an urgent need for media literacy programmes to address the public's poor appreciation of the way LLMs operate. As a recent survey shows, when asked what happens when you consult a tool like ChatGPT, few respondents properly understood the probabilistic nature of LLMs: 45% thought that the chatbot looks up an exact answer in a database, and 21% thought it followed a script of prewritten responses.
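To illustrate what ‘probabilistic’ means here, the toy sketch below generates text by repeatedly sampling the next word from a probability table. The table is invented and absurdly small, standing in for the distributions a real model learns from billions of documents, but the principle is the same: the answer is produced word by word from probabilities, not retrieved from a database or a script of prewritten responses.

```python
# Toy illustration of next-word sampling. The probability table is invented;
# a real LLM learns such distributions over a huge vocabulary during training.

import random

next_word_probs = {
    "clicks": {"are": 0.7, "drive": 0.3},
    "are": {"the": 0.8, "declining": 0.2},
    "the": {"cornerstone": 0.6, "currency": 0.4},
    "cornerstone": {"of": 1.0},
    "currency": {"of": 1.0},
    "of": {"influence.": 0.5, "the": 0.3, "marketing.": 0.2},
    "declining": {"fast.": 1.0},
    "drive": {"traffic.": 1.0},
}

def generate(start: str, max_words: int = 8) -> str:
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if options is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Two runs from the same prompt can produce different sentences,
# which is precisely why a chatbot is not consulting a fixed answer.
print(generate("clicks"))
print(generate("clicks"))
```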
While AI is profoundly changing the value chain on the internet (chatbots aim to sell privileged visibility in their responses, taking up an increasing share of advertising budgets), those at the bottom of the information ‘food chain’, the media and their journalists, risk being not only plundered but also distorted and impoverished. So, alongside improvements to media literacy, the other (admittedly challenging) solution to the zero-click internet is stronger regulation of LLMs, ensuring greater transparency and enabling users to understand how chatbots arrive at specific results for their searches, why those results have been pushed, and whether any payment was involved in the process.