INEC Warns: AI and Screenshots Are Unreliable Evidence in X Post Controversy

The Independent National Electoral Commission (INEC) has dismissed artificial intelligence (AI) outputs and social media screenshots as unreliable evidence in the ongoing controversy surrounding alleged X (formerly Twitter) posts linked to its chairman. The statement was made by INEC's Director of Information and Communication Technology (ICT), Lawrence Bayode, during an interview on Channels Television on Monday, where he addressed claims tied to a social media account purportedly associated with the Commission's chairman.

Emphasis on Verifiable and Forensic Evidence

Bayode stressed that INEC will rely strictly on verifiable and forensic evidence to determine the authenticity of the account and the content circulating online. He explicitly stated, "We rely on evidence. I will not base my judgments on screenshots. I will not allow that to guide my conclusion." To ensure a thorough investigation, INEC has engaged security agencies and is bringing in independent forensic experts. Bayode added, "We are taking this further. Beyond referring the issue to security agencies, we are also engaging third-party forensic experts to examine the situation."

Background of the Controversy

The controversy stems from the resurfacing of a 2023 post by APC National Youth Leader, Dayo Israel, which celebrated electoral success in an Igbo-dominated community. Critics later linked this to an alleged response from an account believed to belong to the INEC chairman, sparking claims of partisanship. However, INEC has repeatedly denied any connection, maintaining that its chairman does not operate a personal X account and has never engaged in partisan political commentary on social media.


Digital Impersonation and Broader Threats

Bayode described the development as part of a broader pattern of digital impersonation, warning that the issue extends beyond a single account. He noted that publicly available information, such as phone numbers and email addresses, could be exploited by malicious actors to create misleading digital identities. "What we are seeing is something bigger. This is digital impersonation," he said. "Anyone who wants to create havoc can use information in the public domain and manipulate it for this kind of activity."

Caution Against AI Hallucinations

Addressing claims generated by the AI tool Grok, which some commentators cited as evidence, Bayode cautioned against relying on artificial intelligence outputs without proper verification. He explained, "Grok, like any modern AI system, can hallucinate. Its outputs must be verified before conclusions are drawn." INEC is already conducting internal technical reviews as part of its investigation while continuing preparations for future elections.

Implications for Electoral Integrity

Bayode warned that this incident highlights emerging digital threats that could undermine electoral integrity, particularly as INEC plans to expand the deployment of technology in the 2027 general elections. He emphasized, "We are already looking at it in-house, even as we prepare for upcoming elections. If this is already happening now, then we need to ensure we take the necessary steps to address it before then." INEC reiterated that the allegations are false and part of a coordinated misinformation campaign, adding that it is working with relevant authorities to identify and prosecute those behind the impersonation.
