On Disinformation: How AI and Social Media Are Reshaping Politics and Elections

Before the technological revolution that put mobile phones in nearly every hand and allowed information to reach millions of people in seconds, misinformation and disinformation certainly existed, but spreading them widely and effectively required significant time and resources. The digital revolution changed that calculus: technologies such as Photoshop, deepfakes, and generative AI have democratised deception and altered the economics of information distribution. While internet users still pay for connectivity, distribution itself is effectively free and instantaneous. A single click can broadcast disinformation to hundreds of thousands of people in moments, far outpacing traditional methods of communication.

One of the most significant manifestations of this rapid spread is false voting information, particularly during elections. The lucrative nature of disinformation makes it an attractive strategy for those looking to manipulate public opinion. Research indicates that peak periods for disinformation coincide with elections, political discourse, and crises—whether political or health-related. As fake news circulates, it fosters political cynicism among audiences, leading them to distrust information sources. The instant nature of digital communication allows misleading information about polling locations, voting requirements, or candidate stances to spread widely before election officials have time to respond. This confluence of high stakes, emotional engagement, and time sensitivity creates ideal conditions for manipulation.

Misinformation, Disinformation and AI

When weaponised in political contexts, disinformation significantly affects elections: it can incite violence, foster voter apathy, and prevent citizens from accessing credible information. To reach their target audiences, disinformation campaigns use a variety of message formats, such as news articles with misleading headlines, short-form videos on platforms like TikTok and Instagram Reels, and coordinated hashtags. Political actors, from governments to opposition parties and private organizations, often instigate these campaigns, while social media influencers amplify the messages, shaping what audiences see and which sources they trust.

The accessibility of generative AI tools now means that individuals with little technical knowledge can create convincing fake content, overwhelming traditional fact-checking methods. With tools that can generate realistic deepfakes and automate content creation, disinformation campaigns can produce tailored messages that resonate with specific voter demographics. For instance, in January 2024, thousands of New Hampshire voters received a robocall that used an AI-generated deepfake of President Joe Biden's voice, urging them not to vote in the primary election. More recently, several deepfake videos have portrayed presidential candidates in a bad light.

Even without malicious intent, generative AI models can aid the spread of misinformation by generating false information. Rather than admitting that they lack information, these models will often simply invent details that do not exist. Models such as ChatGPT and Claude are trained on publicly available data, social media, and academic papers, among other sources, and use statistical patterns in that data to predict and return output. This means they can produce content that is biased or not grounded in any existing data (so-called hallucinations). For example, when asked about the U.S. election on October 24, 2024, ChatGPT incorrectly listed Joe Biden as a running candidate despite having access to current data. And when asked to rephrase a tweet supporting each candidate, ChatGPT refined a tweet endorsing Kamala Harris but declined to do the same for Donald Trump.
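
The mechanics behind hallucination can be illustrated with a deliberately tiny sketch. The toy bigram model below bears no resemblance to the scale of ChatGPT or Claude, and its corpus and output are invented for the example, but it shares the basic principle: it generates text by predicting a plausible next word from patterns in its training data, and at no point does anything check the output against reality.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow which.
# Real models are vastly larger, but the principle is shared:
# generation is pattern completion, not fact retrieval.
corpus = (
    "the candidate won the debate . "
    "the candidate won the election . "
    "the election was held in november ."
).split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, max_words: int = 10) -> str:
    """Sample a fluent-looking string; nothing here verifies its truth."""
    words = [start]
    for _ in range(max_words):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# Possible output: "the candidate won the election was held in november ."
# Grammatical fragments stitched together with no factual grounding:
# a hallucination in miniature.
```

The same dynamic, at vastly greater scale and fluency, is what allows a production model to assert a nonexistent candidacy with complete confidence.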

In a concerning instance of AI involvement in electoral matters, Perplexity AI endorsed Kamala Harris based on its data inputs. Such endorsements blur the lines of accountability regarding how these decisions are made and who bears responsibility for the misinformation or polarization that may stem from such outputs.

The Amplification Effect

Social media platforms are designed to maximize user engagement, often at the expense of content accuracy. Algorithms prioritize posts that generate high levels of interaction (likes, shares, comments) regardless of their truthfulness. This engagement-driven model incentivizes sensationalism and emotional content, leading to the proliferation of misinformation. For example, an MIT study found that false news stories spread more rapidly on X (formerly Twitter) than true ones because they evoke stronger emotional responses. The chatbot Grok on X draws on real-time data from the platform but has been shown to amplify toxic content when queried about political matters.
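
A deliberately simplified sketch makes the structural problem visible. The scoring function below is hypothetical, with invented weights and field names rather than any platform's actual algorithm, but it captures the essential point: accuracy is simply not an input to the ranking, so a false but emotive post can outrank a true but dry one.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to fact-checkers, invisible to the ranker

def engagement_score(post: Post) -> float:
    """Hypothetical engagement-only ranking. Shares are weighted highest
    because they push content to new audiences; note that
    post.is_accurate never enters the calculation."""
    return post.likes + 3.0 * post.shares + 2.0 * post.comments

feed = [
    Post("Dry but true policy explainer", likes=120, shares=10, comments=15,
         is_accurate=True),
    Post("Outrageous fabricated rumour", likes=90, shares=400, comments=220,
         is_accurate=False),
]

# Ranked purely on engagement, the false post tops the feed.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.text}")
```

Any objective that pays only for interaction will behave this way, whatever the exact weights happen to be.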

The impact of social media on electoral outcomes has been starkly illustrated in several high-profile cases. One notable example is the Cambridge Analytica and Facebook scandal during the 2016 U.S. Presidential Election. Cambridge Analytica harvested the personal data of millions of Facebook users without their consent and used it to build individual psychological profiles and micro-target voters with tailored campaigns, delivering highly specific messages designed to sway public opinion and influence voter behaviour.

Social media has also played a crucial role in other elections worldwide. In countries like Nigeria and Brazil, misinformation campaigns have been linked to increased political polarisation and violence during electoral periods. Social media algorithms further contribute to the formation of echo chambers by curating content based on user preferences, exposing users to a narrow range of perspectives; these spaces reinforce existing ideas and opinions while marginalising dissenting viewpoints.
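
The feedback loop behind echo chambers can be sketched in a few lines as well. The simulation below is purely illustrative, with the viewpoint labels, click behaviour, and numbers all invented for the example: a feed that ranks content by similarity to past clicks converges on a single viewpoint almost immediately.

```python
import random

# Illustrative pool of articles, each tagged with one of two viewpoints.
ARTICLES = ([("left", f"L{i}") for i in range(50)]
            + [("right", f"R{i}") for i in range(50)])

def curate(click_history: list[str], pool: list[tuple[str, str]], k: int = 10):
    """Preference-based curation: once one leaning dominates the click
    history, articles matching it are ranked first."""
    if not click_history:
        return random.sample(pool, k)
    preferred = max(set(click_history), key=click_history.count)
    return sorted(pool, key=lambda article: article[0] != preferred)[:k]

click_history: list[str] = []
for round_number in range(4):
    feed = curate(click_history, ARTICLES)
    left_share = sum(label == "left" for label, _ in feed) / len(feed)
    print(f"round {round_number}: 'left' share of feed = {left_share:.0%}")
    # Simulated user: always clicks 'left' items, rarely clicks 'right'.
    click_history += [label for label, _ in feed
                      if label == "left" or random.random() < 0.2]
```

After the first round the feed is typically homogeneous; the narrowing is a by-product of the curation objective itself, not of any editorial intent.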

Addressing the Problem

In a year marked by critical elections around the globe, it is important that people can exercise their democratic rights based on factual knowledge. Perhaps most concerning is the cumulative effect of mis- and disinformation on public trust. When citizens can no longer trust their own eyes and ears, they become cynical and disengaged, a trend that threatens political processes and the social fabric itself.

To combat these challenges effectively, a comprehensive Connected Verification Ecosystem (CVE) combining media literacy programs and social media education campaigns is essential for building public resilience against disinformation. Educational initiatives should teach individuals how to critically evaluate sources, recognize misinformation, and understand the motivations behind disinformation campaigns, ideally through adaptive learning systems that offer personalized curricula and real-world applications. Established fact-checking services like Reuters Fact Check and AP Fact Check, combined with collaborative verification networks, provide crucial infrastructure for verifying claims. Additionally, emerging deepfake detection tools such as Microsoft Video Authenticator and Intel FakeCatcher offer technical solutions for content verification.

While freedom of expression must be protected, it is equally important to acknowledge that disinformation is often organized, well-resourced, and technologically amplified, a reality that demands appropriate regulation. Addressing the financial incentives that drive the creation of fake news is another critical strategy: tech companies could implement policies that cut advertising revenue for sites and posts that propagate misinformation, or incentivize high-quality journalism that adheres to ethical standards. To ensure lasting impact, these efforts should be supported by community-driven verification networks such as Community Notes on X, which foster peer review, expert mentorship, and collaborative investigation tools. By creating a sustainable ecosystem for media literacy and fact-checking, we can empower citizens to engage more critically with information and strengthen democratic processes.