SMI Trains Lagos Journalists on Responsible AI Use in Modern Journalism
The Safer Media Initiative (SMI) recently organized a comprehensive training session for journalists in Lagos, aimed at bridging knowledge gaps and promoting the effective, responsible, and safe use of Artificial Intelligence (AI) in the media industry. The event, themed 'AI Tools for Journalists: Effective, Responsible and Safe Use', highlighted the transformative impact of technology on journalism and the urgent need for professionals to adapt.
Embracing Technological Disruption
Peter Iorter, Executive Director and Chief Executive Officer of SMI, addressed the participants, noting that journalism has undergone significant changes driven by technology. "While some aspects remain the same, many changes are driven by technology. Even traditional media has no option but to adapt, otherwise it risks being left out of the media business," he stated. Iorter emphasized that AI is at the center of this disruption, and journalists who fail to keep pace with such innovations risk becoming irrelevant in the evolving industry.
He allayed fears that AI would replace journalists outright, while insisting that professionals who refuse to embrace the technology are the ones most likely to lose their relevance. "AI will not take your job, but it will take the job of journalists who refuse to embrace it and give it to those who have adopted it," Iorter explained. He disclosed that recent survey data collected by SMI revealed a significant knowledge and skills gap in the industry, which the training aims to address.
Responsible AI Practices in Journalism
In her presentation, Titilope Fadare Oparinde, Founder of Generative AI Journalism with Titi, reinforced the message that AI will not replace journalists who know how to use it, but will replace those who refuse to learn. She emphasized that the choice to adapt has always been open to journalists, and responsible AI requires human editorial control over AI outputs.
Oparinde provided practical advice for journalists using AI tools:
- Always verify sources: AI research tools can cite outdated or nonexistent sources. For example, NotebookLM only works with the information provided to it.
- Maintain transparency: Label AI-generated images, disclose AI assistance, and uphold editorial accountability.
- Double-check data accuracy: Ensure the accuracy of data before publishing visualizations or reports.
She also issued a cautionary note: "Never upload sensitive materials into public AI tools. Examples include confidential interview transcripts, unpublished investigations, private source communications, and leaked documents. You use AI to work faster without cutting corners on verification. You protect your sources even from tools that promise to be helpful. You stay accountable because your name is on the story, not the AI's," Oparinde stated.
The Future of Journalism with AI
The training underscored the importance of integrating AI into journalistic practices while maintaining ethical standards. Participants were encouraged to view AI as a tool to enhance efficiency and accuracy, rather than a threat to their profession. By adopting responsible AI practices, journalists can leverage technology to produce high-quality content and stay competitive in the rapidly changing media landscape.
This initiative by SMI reflects a growing recognition within the industry that technological literacy is essential for modern journalism. As AI continues to evolve, ongoing education and training will be crucial for journalists to harness its potential responsibly and effectively.