
Balancing AI Efficiency through Ethical Journalism Standards

Supreme Desk
18 Dec 2025 7:06 PM IST

Mohammed Ibrahim, a Kaduna-based journalist, has just returned from an assignment where he interviewed key stakeholders to develop his story.

As he settled down to work, Ibrahim opened an application on his phone and searched for the audio recordings he had captured during the interviews.

Within minutes, the application had converted the recorded voices into written text.

The tool he relied on was powered by Artificial Intelligence (AI) and designed to transcribe audio recordings into text, with processing time depending on their length.

Ibrahim, who was introduced to the application by a friend, said his work had become easier and faster since adopting the technology.

He recalled the days when he transcribed interviews manually by repeatedly listening to audio recordings and writing word for word, a process that often took several hours and delayed report filing.
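For readers curious how such a step works in practice, the sketch below shows a minimal, illustrative transcription script. It relies on the open-source Whisper speech-to-text model rather than the unnamed app Ibrahim uses, and the audio file name is an assumption.

    # Illustrative sketch only: the report does not name the app Ibrahim uses;
    # this uses the open-source Whisper speech-to-text model instead.
    # Assumes: pip install openai-whisper, and a recording saved as "interview.mp3".
    import whisper

    model = whisper.load_model("small")          # smaller models run on an ordinary laptop
    result = model.transcribe("interview.mp3")   # the spoken language is auto-detected
    print(result["text"])                        # the full transcript as plain text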

Beyond transcription, Ibrahim also uses AI-powered tools for translation, converting Hausa to English and vice versa within seconds or minutes.

“It has made the job easier for people like me who write in both Hausa and English for a newspaper organisation.

“Since I started using it, I hardly miss my deadlines because I know I have a dependable tool to make my job easier”, he said.
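A comparable translation step can be sketched with an openly available multilingual model that covers Hausa. The model choice and the sample sentence below are illustrative assumptions, not the specific tool Ibrahim describes.

    # Illustrative sketch only: Hausa-to-English translation with the open
    # NLLB-200 model via Hugging Face transformers; the model choice and the
    # sample sentence are assumptions, not the tool Ibrahim describes.
    # Assumes: pip install transformers torch
    from transformers import pipeline

    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",
        src_lang="hau_Latn",   # Hausa in Latin script
        tgt_lang="eng_Latn",   # English
    )
    print(translator("Ina son wannan labarin.")[0]["translation_text"])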

Across the world, experiences like Ibrahim’s are becoming more common as Artificial Intelligence tools gain a foothold in newsrooms.

Increasingly, AI is no longer viewed as experimental but as a routine part of journalistic practice.

Globally, studies by media development organisations indicate that more than two-thirds of large newsrooms in North America and Europe now use some form of AI in editorial processes.

These tools are applied for transcription, audience analytics, automated alerts, and content personalisation.

In addition, surveys by international journalism bodies suggest that AI adoption is growing fastest in areas that reduce repetitive tasks, allowing journalists to devote more time to investigative and field reporting.

Furthermore, industry data show that AI-assisted workflows can reduce time spent on routine newsroom tasks by between 30 and 50 per cent, especially in breaking news situations.

As a result, editors often encourage reporters to integrate AI tools into their daily routines.

Artificial Intelligence is therefore transforming journalism and reshaping how news is gathered, processed and distributed.

From transcription and translation to data analysis, image verification and headline optimisation, AI tools are rapidly becoming embedded in modern media ecosystems.

However, experts caution that while generative AI tools are ushering in a new era of efficiency, creativity and innovation, the technology also raises major concerns around ethics, accuracy, equity and access, especially in developing countries.

For them, the rise of AI-generated news has raised concerns about accuracy, as the number of working journalists declines.

Yet, AI also presents reporters with new opportunities to pursue high-impact stories, according to Sotiris Sideris, data editor at the Centre for Collaborative Investigative Journalism and Reporters United in Greece.

Speaking at the Centre for European Studies on Oct. 14, Sideris said AI and data-driven tools enable reporters to swiftly analyse large volumes of government and business data, uncover patterns, and expose questionable or illegal activities.

“The question today isn’t whether we are using AI in journalism, because we do it already.

“But whether we can do journalism without outsourcing our scepticism, our ethics, and our sense of accountability, both as journalists ourselves and the accountability we are asking people and organisations that hold power to provide,” he said.

To better understand these trends, the Thomson Reuters Foundation (TRF) conducted a survey examining the adoption and usage of AI tools among journalists in the Global South and emerging economies, including Nigeria.

The survey, conducted in the fourth quarter of 2024, covered more than 200 journalists across over 70 countries.

Findings revealed that more than 80 per cent of respondents use AI tools in their journalistic work, with nearly half integrating them into their daily workflows and about one-third using them weekly.

In addition, the survey found that journalists primarily rely on AI for writing and editing support, background research, transcription, translation, fact-checking and idea generation, with ChatGPT identified as the most commonly used tool.

Similarly, global newsroom assessments indicate that automated transcription is now the most widely adopted AI application in journalism worldwide, followed closely by translation and data analysis tools.

This trend is particularly evident in multilingual societies, where AI helps bridge language gaps and expand audience reach.

Despite the efficiency and speed offered by AI, experts argue that transcription and translation tools still face significant challenges in terms of accuracy and performance in low-resourced languages.

Research has shown that AI transcription tools can insert words that were never spoken and often struggle with non-standard accents, including World Englishes and African American Vernacular English.

Consequently, journalists working in diverse linguistic environments face a higher risk of errors.

These limitations are even more pronounced in African contexts, where hundreds of indigenous languages remain underrepresented in AI training datasets.

Zainab Idris, an AI educator and founder of the Creativity Enthusiast Network, said most artificial intelligence tools still perform poorly when applied to Nigerian languages and cultural contexts.

According to her, many AI systems were not originally designed with Nigeria’s linguistic realities in mind, adding that when tools are used with Hausa, Yoruba, Igbo or Pidgin, errors often occur in tone, meaning and cultural interpretation.

She added that speech recognition remains weak, especially with accents and background noise, although gradual improvements are emerging through local projects collecting Nigerian speech data.

In spite of these gains, she stressed that AI's understanding of Nigerian languages and culture remains far from accurate and requires more local data and expertise.

Against this backdrop, media professionals stress that verification remains central to journalism, regardless of technological advancements.

In light of these challenges, journalists who use AI in storytelling and reporting are advised to rigorously verify and fact-check information before publication.

Ibrahim advised journalists not to rely 100 per cent on AI-generated output, stressing that cross-checking was necessary to avoid misrepresentation of facts.

Another journalist who uses AI, Juliet Ekwenugo, said AI can help journalists analyse large datasets, speed up background research, summarise documents, and suggest headlines or story structures.

She warned against publishing AI-generated content without verification, noting that the tools may sometimes produce incorrect or misleading information.
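As a rough illustration of the kind of workflow Ekwenugo describes, the sketch below asks a general-purpose model to summarise a document and propose headlines, treating the output strictly as a draft to be verified. It uses the OpenAI Python SDK, since ChatGPT is cited above as the most commonly used tool; the model name, file name and prompt are assumptions.

    # Illustrative sketch only: summarising a document and suggesting headlines
    # with a general-purpose model via the OpenAI Python SDK. The model name,
    # file name and prompt are assumptions; the output is a draft, not copy.
    # Assumes: pip install openai, and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    document = open("report.txt", encoding="utf-8").read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarise this document in three sentences, then suggest "
                        "two possible headlines. Flag any claims that need checking."},
            {"role": "user", "content": document},
        ],
    )
    print(response.choices[0].message.content)  # a starting point only, to be fact-checked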

While many journalists embrace AI tools to ease their workload and meet deadlines, others remain cautious, arguing that overdependence on technology could erode creativity and professional judgement.

Sabiu Muhammed, a veteran journalist with 32 years of experience in the print media, said he still prefers to transcribe, translate and edit his work manually without using AI tools.

Beyond individual concerns, institutional readiness remains a major challenge.

Notwithstanding widespread adoption, the TRF report revealed major policy gaps within media organisations.

Only 13 per cent of respondents work in organisations with a formal AI policy, while nearly 80 per cent reported the absence of clear guidelines governing AI use in their newsrooms.

This lack of structure, experts warn, increases the risk of ethical breaches and reputational damage.

Muhammad Auwal-Ibrahim, founder of Halal Reporters, said AI is not a threat but a tool that can be leveraged to enhance efficiency and expand into multimedia content creation.

“We are not part of the people running away from AI as if it’s a disaster. Rather, we are the early adopters of AI”, he said.

According to him, AI is best utilised by creative professionals who understand how to write effective prompts, adding that his organisation credits AI for any AI-generated content it publishes.

As more newsrooms integrate Artificial Intelligence into their operations, scholars and practitioners emphasise that technology must not replace core journalistic values.

One of them is Dr Murjanatu Abba, a Senior Lecturer in the Department of Mass Communication at Ahmadu Bello University (ABU), who emphasised that journalism is grounded in reality and factual reporting, not fiction.

According to her, some Nigerian universities have begun incorporating AI-related topics into their journalism curricula in recognition of the growing influence of AI in media practice.

However, she observed that education on AI ethics and responsible use remains inadequate and requires greater attention.

She identified key ethical areas such as bias and fairness, transparency and explainability, as well as authenticity and trust, as critical topics journalism students must understand to ensure accountability when using AI-generated content.

“Skills like investigative reporting, ethics, and collaboration will remain essential.

“The ability to adapt to changing media landscapes and work under pressure will also be crucial. By combining these skills, future journalists can thrive in a rapidly evolving industry,” she said.

Ultimately, as artificial intelligence continues to reshape journalism globally, experts are calling for stronger collaboration among media organisations, technologists, linguists, universities, and government institutions.

They also emphasised the need for sustained investment in infrastructure and capacity building.

Furthermore, they stressed that transparency, ethics, and inclusiveness are essential to ensure that AI tools genuinely reflect and serve the communities they are designed to support.


Source: Aisha Gambo, NAN.
