October 17, 2025
Technology

Bots and Broken Truths: AI, Misinformation, and Nigerians’ Fight for Facts

By Ayobami Olutaiwo

As AI-generated content floods social media, Nigerians, both ordinary citizens and professionals, are feeling the sting of misinformation. Victims, fact-checkers, and experts weigh in on the battle for truth.

For years, falsehoods in Nigeria have spread through sensational headlines, rumour mills, and WhatsApp broadcasts passed from phone to phone. Low literacy levels and a reliance on audiovisual information have led many to judge credibility by appearance or sound rather than fact.

In recent years, companies have loosened the guardrails governing the use of technology. Cloned voices, manipulated images, and polished texts now flood social media. Tools like Google’s Veo 3 have made it possible for anyone with a smartphone to generate ultra-realistic multimedia content. Meanwhile, Meta’s monetisation policy, which rewards creators based on the volume of their posts, has recently been criticised for encouraging quantity over quality, raising concerns about the spread of misleading content.

Nigerian media platforms have been sounding the alarm. Premium Times has reported that AI has moved beyond visuals into audio deepfakes, where public figures’ voices are fabricated to “have them say anything the actors want.” In July 2025, the organization’s Mukhtar Ya’u Madobi warned that the “fight against deepfakes is not just a fight against technology, it is a fight for truth, for peace, and ultimately, for the soul of the nation.”

Concerns are not limited to Nigeria. A report by The Guardian revealed that X’s AI chatbot, Grok, began spreading antisemitic content and praising Hitler shortly after receiving a “right-wing” update that reduced its content filters. While that incident may feel distant, the lesson resonates in Nigeria: AI can both support fact-checking and supercharge misinformation, depending on how it is deployed.

A Near-Perfect Lie

Experts warn that the consequences of AI misinformation are deeply personal. “AI-generated misinformation marks a significant shift from the era of WhatsApp broadcasts and manipulated images,” said Dr. Solomon Oyeleye, a media scholar at Caleb University. His words echo what Nigerian journalists have reported.

“The complication arises from the near perfection of the deepfakes which has become more challenging for Nigeria’s smartphone users to decipher,” he explained. Many users, he added, “lack media information literacy skills.” This, he said, creates an uphill task for journalists and fact-checkers. “Media workers must now work harder to establish the authenticity of their own content, which is now under stiff competition from near-perfect fakes.”

Scholars are also adapting. “We are seeking ways to ensure the transmission of accurate and reliable information,” Dr. Oyeleye said, pointing to the introduction of critical thinking and media literacy courses in academic departments. “It has become an imperative to survive the current challenge.”

Lives Disrupted by Deepfakes

In 2021, Binta Yusuf, a student entrepreneur in Kaduna, fell victim to a deepfake impersonation scam. Her parents had given her ₦300,000 for school fees and accommodation, but when she saw a video of a popular Nigerian musician advertising a forex scheme promising triple returns in three days, she thought she had struck gold.

“The person came on a video call to verify that it was him talking, and I actually saw that it was him,” she recalled. Binta could not detect the fake, and her money vanished. “It was difficult. I learnt the hard way. I had to sell my iPhone to raise my school fees. I still feel sad about that, and I still haven’t caught up, even though this was in late 2021 and this is 2025. I messaged EFCC, Police, and all, but nothing had happened since then.”

For Taofeek Adetunji, a gadget store owner in Ekiti State, the cost of misinformation was painfully real. He remembered how a supposed customer purchased an iPhone and a power bank, paid online, and sent him a receipt along with a convincing credit alert. Trusting what he saw, Taofeek packaged the items and handed them over to a dispatcher.

“At the time, I had no reason to doubt it,” he said. “The alert came in just like every other genuine payment I had received.” It was only later, when he checked his bank balance, that he realised nothing had entered his account. The alert was fake. The goods were gone. “Until I had a first-hand experience of this, I wouldn’t have believed someone could generate a fake bank transfer receipt and even make it look like a proper credit alert,” he said, his voice heavy with regret.

While Binta’s savings were wiped out, Taofeek lost goods worth hundreds of thousands of naira. These stories underscore how misinformation is no longer limited to politics or clickbait headlines. It is about everyday Nigerians whose dignity and livelihoods are undermined by fabricated content.

Fact-Checkers: Fighting a Digital Battle

The burden of countering falsehoods falls heavily on fact-checkers who are often underfunded, under-resourced, and overwhelmed. In Lagos, Muktar Balogun and Sunday Awosoro, both fact-checkers with local verification initiatives, described the growing challenges of their work and the rising influence of AI-generated misinformation.

Verifying AI-generated misinformation is far more complex than tackling traditional media manipulation. “For photoshopped images or misleading headlines, you can use reverse image search and keyword search,” Muktar explained. “For AI-generated media, it’s not that straightforward. You have to find weaknesses. But AI continues to improve, meaning the weaknesses that were easy to spot are slowly disappearing.”

Muktar is particularly alarmed about synthetic media in the health sector. “I found a video of a health product review generated with Google’s Veo 3,” he said. “In the comments, many Nigerians were showing interest. Because we could not track these people, there’s no way to know whether they bought the product after seeing the AI-generated video.”

He also pointed to deepfake-style videos featuring well-known Nigerians like Aproko Doctor promoting fake products. “Even if the full impact cannot be quantified, we can anticipate its consequences—erosion of public trust, financial loss for unsuspecting victims, and reputational damage for respected figures.”

Sunday has witnessed similar dangers. During Ghana’s last election, his fact-checking team encountered synthetic audio recordings that spread quickly online. “There were two specific ones from the main presidential candidates. Their voices were cloned,” he recalled. “In one, a candidate was heard declaring war if he lost. In another, someone was allegedly instructing people to buy votes.” The audio escalated tensions. “One person got killed. People thought it was true.”

Despite using AI tools to verify the recordings, nothing worked. “They all failed,” Sunday said. “Traditional investigative work saved the day.”

He believes this confusion is dangerous. “People don’t know what’s real anymore—it’s a challenge even for us professionals.”

As for AI detection tools, Sunday admitted he has lost faith in them. “I no longer rely on them,” he said. “Some even flagged original images as AI-generated. Even the tools are confusing.”

The Political Dimension of Misinformation

In Nigeria, misinformation also carries political weight. Elections are particularly vulnerable. During the 2023 polls, several deepfake audios and manipulated videos circulated online, including fabricated endorsements and false claims of rigging.

A recent article by Dubawa highlights the growing challenge of audio deepfakes in Nigeria, especially during elections. AI-generated clips mimicking the voices of politicians and religious leaders misled the public. While some were later debunked, many Nigerians had already accepted them as real.

In the report, Silas Jonathan of the Centre for Journalism Innovation and Development (CJID) said sensitising the public about audio deepfakes should be prioritised. “Generally, there is a lack of awareness among the populace, and the possibility for AI to create a convincing voice of someone. The public is not well aware of the capacity of AI to do those kinds of magic.”

Towards Regulation

Nigeria is not alone in the struggle to regulate AI-generated misinformation. While governments in Europe debate new laws to hold platforms accountable, regulation in Nigeria remains weak despite repeated calls from experts. In a report by The Nation, Akintunde Rotimi, spokesman for the House of Representatives, warned that “AI-generated fake news is a serious threat to freedom of the press,” stressing that fabricated content has been used to manipulate public opinion and, in some cases, “resulted in violence, public unrest, and fractured communities.”

Rotimi noted that the House is advancing legislation to regulate AI’s development and deployment, sponsoring a bill aimed at protecting privacy, human rights, and transparency in AI applications. The move seeks to align Nigeria with global standards, including UNESCO’s AI ethics guidelines and the African Union’s digital transformation strategy. Efforts to get a response from Mrs. Hadiza Umar of the National Information Technology Development Agency were unsuccessful at the time of filing this report.

Solutions: Literacy, Regulation, and Tools

Experts say tackling AI-driven misinformation in Nigeria requires both education and regulation. One urgent step is improving digital literacy so citizens can question what they consume online.

“We must consider fact-checking not as a technical process or concept but rather as steps an individual takes to verify the authenticity of information before accepting or sharing it,” said Dr. Oyeleye. He cited a resource developed by the Association of Communication Scholars and Professionals of Nigeria (ACSPN), adopted by UNESCO, as a tool to help ordinary Nigerians spot falsehoods. “Exposure to such documents is essential in this present day.”

But awareness alone is not enough. Stronger regulation is also needed to hold social media platforms accountable. “It is not to say that policy is not also important, but without awareness, media literacy, bridging that knowledge, the laws will not help much,” said Sunday. Similarly, Muktar stressed that “collaboration between government, tech companies, media, and regulators can help scale detection efforts.”

As Nigeria faces this new frontier of misinformation, the battle is not only about technology; it is about people. It is about Binta, Taofeek, and countless others whose lives are disrupted by digital falsehoods. It is about fact-checkers like Muktar and Sunday, fighting an uphill battle to protect credibility. And it is about a society learning to distinguish fact from fabrication.

“It is time to own up to our own safety on the internet,” Muktar reflected. “One way to do that is to invest in fact-checking efforts, especially one targeted at creating a tool to identify synthetic media. If such a tool exists somewhere, we should partner with innovators to bring it closer to local fact-checkers and journalists who understand our local context.”

In a country where democracy, economy, and social cohesion hang in the balance, the fight against AI-fuelled misinformation is not just urgent; it is existential.

This report was produced with support from the Centre for Journalism Innovation and Development (CJID).


Mustapha Salisu

Mustapha Salisu is a graduate of BSc. Information and Media Studies from Bayero University Kano, with experience in Communication Skills as well as Public Relations.
