As the global media landscape evolves, Artificial Intelligence (AI) is emerging as a central force in reshaping journalism.
From automating tasks like content generation to powering complex data analysis, AI is rapidly transforming how stories are told, distributed, and consumed.
The implications for press freedom and ethics are significant, so much so that this year’s World Press Freedom Day is themed: “Reporting in the Brave New World: the Impact of Artificial Intelligence on Press Freedom and the Media.”
The shift is no longer speculative. Newsrooms are actively incorporating AI into their operations.
Some fear it could displace journalists, while others believe it enhances newsroom productivity by taking over repetitive work and enabling more personalized news experiences.
A bold example of AI integration comes from Nigeria’s TVC News, which recently introduced AI news anchors in five languages.
“TVC News, a satellite digital television platform in Nigeria has made history as the first broadcaster in Nigeria to introduce Artificial Intelligence news anchors,” the company announced.
“The AI anchors will deliver news in English, Yoruba, Hausa, Igbo, and Pidgin, reaching audiences across the country in the languages they understand.”
Edward Akintara, Corporate Communications Manager, stated that the move is part of the station’s broader technological advancement.
Meanwhile, CEO Victoria Ajayi reassured the public that “AI news anchors would not replace human broadcasters… they would be used to showcase the dexterity and expertise of the company’s human talent.”
Yet alongside such innovations come deep concerns. AI’s role in shaping narratives has raised alarms over bias, misinformation, surveillance, and the erosion of journalistic values.
The United Nations notes that AI holds both promise and peril for media freedom.
“While the principles of free, independent, and pluralistic media remain crucial, AI’s impact on information gathering, processing, and dissemination is profound, presenting both innovative opportunities and serious challenges,” a UN note read.
“AI can help support freedom of expression by making information easier to access… At the same time, AI brings new risks. It can be used to spread false or misleading information, increase online hate speech and support new types of censorship.”
AI’s dual role was explored in a webinar hosted by the Media Foundation for West Africa (MFWA), held on April 30, 2025.
Titled “AI, Press Freedom and the Future of Journalism,” the session gathered journalists, technologists, academics, and civil society experts.
Topics included the ethical dilemmas, editorial implications, and sustainability challenges posed by AI in the news industry.
Ajibola Amzat, Africa Editor at the Centre for Collaborative Investigative Journalism, emphasized how AI is altering traditional editorial roles:
“Not again! AI such as ChatGPT, Deepseek, Co-Pilot now play the same role as human editors… AI has also made the production of journalism faster and with less error.”
However, he added:
“AI has been found to reproduce ideologies in news that reinforces social inequality, racism, sexism and colonialism… The data largely used to power AI technology are mostly collected from the Global North which predisposes audience from the Global South to consuming western ideologies at a faster rate.”
Amzat also underscored the need for media investment in verification tools:
“Newsrooms should train and retrain reporters more frequently to develop verifying skill, invest in fact-checking tools and regularly track influence operations online and social media.”
He called on governments to contribute by being transparent:
“Government should start by being more transparent and accountable to the citizens… Government should promptly provide accurate information and be responsive to media enquiry as a way of fighting misinformation and disinformation.”
In Nigeria, events marking the day include a symposium by SERAP and the Nigerian Guild of Editors, addressing how the Cybercrime Act is allegedly being used to suppress dissent and restrict media freedom.
This aligns with growing concerns over state and corporate control of information using AI.
Ayode Longe, Deputy Executive Director of Media Rights Agenda, also weighed in on AI’s rapid adoption:
“AI can do a lot of things today, including research, curating information, analysing data, writing and editing stories… The Il Foglio Newspaper… built its own AI chatbot with which it wrote whole editions of its newspaper, this is phenomenal.”
Yet, he warned about AI’s limitations:
“We have been told that AI sometimes hallucinates… some are made up… There is also the consideration of compensation for those whose works are being used… AI can be used to produce deep fakes that are difficult to spot as AI-generated.”
He emphasized the need for regular training on fact-checking to combat disinformation:
“All journalists should be trained and retrained on fact-checking… The journalist also needs to be widely read and discerning.”
On regulation, Longe argued for a collective approach:
“AI regulation is not something that should be left to government alone but must be stakeholders driven… The government, the media, the technology companies, the academia, the legal profession and even the consumers all have to come together to formulate policies.”
Despite the threats, Longe acknowledged AI’s potential to help sustain the media industry:
“AI can be used to generate news, do fact-checking, proofread and even edit making the working of journalism faster and better… It can also be used to target and reach more audiences faster… which inadvertently means more advertising revenue for the media.”
As the industry adapts to this digital shift, the consensus remains clear: AI must complement, not compromise, journalism’s core mission of truth, accountability, and freedom of expression.