By Michael Olaogun
As Nigeria moves toward the 2027 general elections, the unregulated force of artificial intelligence is quietly reshaping the political landscape. While AI promises efficiency, innovation and economic growth, its darker applications, particularly in the political arena, pose a serious threat to the credibility and stability of Nigeria's fragile democracy. The concern is no longer hypothetical: evidence from Nigeria's 2023 elections and from global electoral cycles shows that AI-driven disinformation is evolving fast and dramatically lowering the barrier to producing convincing false content.
Across social media platforms, fake videos, cloned voices and AI-generated images are being created cheaply and circulated widely. AI-generated media content is becoming so realistic that distinguishing truth from fabrication often requires expert analysis rather than common sense. In Nigeria, where social media plays a central role in political discourse, this is particularly dangerous. During the 2023 elections, coordinated disinformation campaigns and manipulated media flooded online spaces, fueling polarization and eroding trust in democratic institutions. Sadly, as we approach the 2027 elections, AI tools are becoming more sophisticated and are likely to be deployed by political gladiators and their supporters and opponents alike.
AI does not just make fake content possible; it makes it scalable. Political actors, their supporters and even foreign interests can deploy strategies, including targeted messaging systems, to influence millions of voters simultaneously. Various reports indicate that a significant portion of disinformation campaigns targeting Africa are foreign-sponsored, with West Africa often at the center of such operations. This raises troubling questions about sovereignty: who really controls the narrative during Nigerian general elections?
More importantly, AI allows disinformation to be personalized. Voters can be targeted with tailored falsehoods designed to exploit their fears, biases or ethnic and religious identities, deepening existing societal divides. Imagine a fabricated video of a presidential candidate or another prominent political figure making inflammatory statements, released just days before an election. Even if debunked later, the damage may already be done.
Globally, there have been cases of AI-generated audio impersonating political figures to mislead voters, demonstrating how easily trust can be manipulated. In Nigeria, where political tensions run high, such tactics could incite unrest, suppress voter turnout or delegitimize election outcomes. Ironically, the greatest danger of AI may not be that people believe fake content, but that they stop believing anything at all. When citizens can no longer distinguish between real and fabricated information, a "liar's dividend" emerges: politicians can dismiss genuine evidence as fake, while fake evidence circulates as truth. This erosion of shared reality threatens the very foundation of democratic participation and will further undermine trust in the Election Management Body and its sister institutions.
Despite the growing threat, Nigeria's regulatory and institutional frameworks remain insufficient. Existing laws address traditional misinformation, but they are not equipped to handle AI-generated manipulation at scale. Nigeria's Election Management Body and civil society organizations have begun exploring AI tools to counter disinformation, including fact-checking technologies. But these measures may not match the speed and sophistication of AI-driven propaganda as the elections approach.
If Nigeria is to jealously protect the 2027 elections, laws governing the use of AI in political campaigns must be enacted. Advanced systems are needed to identify deepfakes, and citizens must be equipped to critically evaluate online content. Partnerships between government at all levels and tech companies, the media and civil society must be encouraged to counter disinformation.
AI is not inherently a threat; it is a tool. But like any tool, its impact depends on how it is used. Nigeria's 2027 elections may become a defining moment: either a story of how technology can strengthen democracy or a tale of how it can undermine it. The window for preparation is narrow. If Nigeria fails to make intentional efforts, the battle for the ballot may be fought not just at polling stations, but in algorithms, data centers and the invisible architectures of artificial intelligence.
Michael Olaogun, a policy-oriented researcher and democracy observer, wrote from Abuja.
michaelolaogun2014@gmail.com

