Ever since a test version of ChatGPT was released in November last year, GenAI tools have mushroomed, prompting discussions about their potential to improve newsrooms and content, revolutionize marketing and advertising strategies, and much more.
We previously wrote about how newsrooms and journalists can benefit from using generative AI (GenAI), but we've also stressed some of the issues surrounding GenAI in journalism.
In the article “How Must Journalists and Journalism View Generative AI?”, the author Subramaniam Vincent notes: “There is too much ethical debt these systems are creating upstream before the tools even reach journalists. You may have editors review the generated text for facts and accuracy, but regressive biases in categorization and characterizations will be harder to catch. If you are summarizing historical contexts into a piece, and want to try machine summarizers, run them on selected articles and documents you yourself have vetted first. Proceed with caution.”
In late April and early May, the World Association of Newspapers and News Publishers (WAN-IFRA), in collaboration with Germany-based Schickler Consulting, surveyed the global community of journalists, editorial managers and other news professionals about their newsrooms’ use of Generative AI tools.
WAN-IFRA highlights a few takeaways from the survey, which was taken by 101 participants from all over the world: “Half of newsrooms already work with GenAI tools.”
Given that most Generative AI tools became available for the public only a few months ago – at most – it is quite remarkable that “almost half (49 percent) of our survey respondents said that their newsrooms are using tools like ChatGPT. On the other hand, as the technology is still evolving quickly and in possibly unpredictable ways, it is understandable that many newsrooms feel cautious about it. This might be the case for the respondents whose companies haven’t adopted these tools (yet).
“Overall, the attitude about Generative AI in the industry is overwhelmingly positive: 70 percent of survey participants said they expect Generative AI tools to be helpful for their journalists and newsrooms. Only 2 percent said they see no value in the short term, while another 10 percent are not sure. 18 percent think the technology needs more development to be really helpful.”
Few newsrooms have guidelines for their use of GenAI
Practices vary widely when it comes to how the use of GenAI tools is controlled in newsrooms. For now, the majority of publishers have a relaxed approach: almost half of survey participants (49 percent) said that their journalists have the freedom to use the technology as they see fit. An additional 29 percent said that they are not using GenAI at all.
Only a fifth of respondents (20 percent) said that they have guidelines from management on when and how to use GenAI tools, while 3 percent said that the use of the technology is not allowed at their publications. As newsrooms grapple with the many complex questions related to GenAI, it seems safe to assume that more and more publishers will establish specific AI policies on how to use the technology (or perhaps forbid its use entirely).
Back in May, the Financial Times editor shared her thinking on GenAI and the FT, stressing that “FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields and who are dedicated to reporting on and analysing the world as it is, accurately and fairly.”
She also notes that the FT will be transparent “within the FT and with our readers. All newsroom experimentation will be recorded in an internal register, including, to the extent possible, the use of third-party providers who may be using the tool. Training for our journalists on the use of generative AI for story discovery will be provided through a series of masterclasses.”
In their updated Terms and Conditions, the FT also includes: “To the fullest extent permitted by law, we expressly prohibit any use of our content or data (including any associated metadata) in any manner for any machine learning and/or artificial intelligence purposes, including without limitation for the purposes of training or development of artificial intelligence technologies or tools or machine learning language models, or otherwise for the purposes of using or in connection with the use of such technologies, tools or models to generate any data or content and/or to synthesise or combine with any other data or content. We reserve all rights to license any use of our content and data for any such purposes.”
Thomson Reuters adopted a set of Data and AI Ethics Principles to promote trustworthiness in their continuous design, development, and deployment of artificial intelligence and their use of data.
Newsrooms and media in the Western Balkans seem to be catching up with new AI and GenAI developments, introducing new tools and experimenting with their workflows. However, it is important that they also start a discussion about the ethics of using these tools, about disclaimers, and about journalistic independence and integrity.
When thinking about introducing new GenAI principles, a good starting point is to review existing procedures, processes, and codes of principles and conduct, and to build on them in light of the changes brought about by GenAI.