Fanning the flames of artificial intelligence in the media: beyond efficiency and productivity gains


Introduction

The use of artificial intelligence (AI) is by no means new in the media. Indeed, it penetrated the sector some time ago in the guise of what was known as “computational journalism” (Vállez & Codina, 2018; Calvo-Rubio & Ufarte-Ruiz, 2021), and some years later the “robot journalist” was to pull up a chair in the newsroom (Moran & Shaikh, 2022; Tejedor, 2023). However, it was the emergence of ChatGPT at the end of 2022 that ushered in a new era (Gutiérrez-Caneda et al., 2023): the era of generative artificial intelligence, or GenAI, to use the acronym popularized in the academic and technical literature.




This series of events is marked by a certain paradox: Turns of such significance do not usually occur without years of prior research; yet it seems that GenAI caught everyone off guard, judging by the confusion that it has caused among governments, businesses and the general public alike (Porlezza, 2023; Ciobanu, 2024). But, be that as it may, it is now commonplace to equate the GenAI era with that of a new industrial revolution, which might not be that far off the mark. There are even those who compare it to the discovery of fire[1], which might be even more apt: First, because of the genuine change of era that this metaphor captures and, second, because the image can be extrapolated – on the one hand, it points to the obvious utility of the discovery of fire for our very survival and, on the other, to the devastation a fire can wreak when it gets out of control.

However, we should perhaps add a note of realism at this point and reflect on the lesson taught by “Amara’s Law”[2] (formulated by the futurist Roy Amara), according to which there is a tendency to overestimate the impact of new technologies in the short run and to underestimate their true effects in the long run. Additionally or alternatively, we might cite the unattributed law that holds that early observations of new technologies almost always end up appearing naïve. We are, therefore, duty-bound to be realistic, and it is our task to encourage the analysis of the potential impact of AI, in general, and of GenAI, in particular, for media companies (Gutiérrez-Caneda et al., 2023).

This task becomes all the more pressing as it is widely assumed that AI is here to stay because of the promise that “it can deliver meaningful gains in efficiency and productivity” (Simon, 2024, p. 12). In this regard, most reports highlight at least three key areas in which the impact of AI will be felt most keenly: (1) the identification of trends and patterns, (2) content creation and (3) content distribution. To these, media outlets with paywalls usually add a fourth: (4) the tailoring of content to users and the reduction of subscriber churn rates thanks to the use of big data – AI’s chief ally – to detect patterns.

Ethics and transparency

Yet, it is becoming increasingly clear that there is one dimension of the emergence of AI and GenAI in the media to which we cannot turn a blind eye – namely, the ethical questions to which it gives rise. We return to this issue in greater detail below, but suffice it to say at this juncture that the AI revolution is not simply a matter of reaping meaningful gains in efficiency and productivity.

The inescapable fact is that AI has enormous disruptive power, to the extent that all areas of media companies will feel its impact (Ventura, 2021; Tejedor, 2023; Simon, 2024). Simply put, AI is about to have a significant effect on both the business side and the content creation side of every media outlet, whatever form that content might take. But there remains a third dimension, a highly important intangible factor that cuts across these other two dimensions of a media company.

We refer to those questions of ethics that manifest themselves in a number of different areas. First, the media are bound by a contract of service to society, whether real (in the case of publicly owned media) or virtual (in the case of privately owned media) (Kovach & Rosenstiel, 2003). What this means is that, for better or worse, a media company stands apart from ordinary companies: while all companies are bound by a social commitment, media companies are fundamental to guaranteeing a much more inclusive society. In addition to the essential service that the media provide to democracy (in their role as the Fourth Estate), they have played a key role to date in what we might call the construction of the “consensus on reality”[3] within society.

This role is something that we only discovered once it started to break down. Two examples should serve to illustrate the point. The first is the anti-vaccine movement, where we have witnessed the breakdown of a basic consensus on reality regarding something as fundamental as the fact that vaccination campaigns prevent deaths. The second is the percentage of citizens, in a country as important as the US when it comes to setting benchmarks, who believe that Trump won the 2020 election. As in the first case, a parallel yarn has been spun that has prevailed against all evidence. In both cases we can conclude that the news media have failed in their function of providing a basic consensus on reality.

The reasons for the breakdown in this function are open to debate and clearly varied, but it is common to attribute it to the polarization of society and to social networks, a vicious circle in which it is difficult to differentiate between cause and effect. Thus, significantly, while AI is a source of great hope for those who fight against disinformation (Simon, 2024; Beckett & Yaseen, 2023), GenAI, with its ability to produce synthetic media, including fake images and videos, constitutes a new threat. As such, the continuing impact of AI is more than guaranteed over the coming years, and academia is called upon to pull out all the stops in an effort to better understand this new scenario.

The media unquestionably face major challenges in seeking to apply AI to their work, almost all of them of enormous ambivalence in terms of their potential to cause problems but also to improve communication and to put AI at the service of a better society. But it is in the generation of multimodal content (text, image, sound and video) at the hands of GenAI where the most formidable challenges – of a technical, organizational and ethical nature – emerge (Simon, 2024; Beckett & Yaseen, 2023). The technical challenges concern the way in which media outlets might integrate GenAI into their production processes so that it actually delivers meaningful gains in efficiency and productivity. The organizational challenges concern the way in which the media integrate AI, in general, and GenAI, in particular, into their overall business culture and, more specifically, into the culture of their newsrooms (Lopezosa et al., 2023; Lopezosa et al., 2024). Finally, the ethical challenges offer the greatest test, as it is here that the media are confronted by a whole battery of problems. There is the clear need to combat the biases and prejudices that contradict democratic values or the values of a particular media outlet. But there is also the question of how the content generated by GenAI is used and how this can be done in as transparent a fashion as possible for media audiences (Ventura, 2021; Ventura, 2023).

The fundamental conjecture of AI in communication

Here, it is worth recalling the key elements of journalism as identified in the seminal work of Kovach and Rosenstiel (2003). The authors, both academics and practising journalists, characterize the essence of journalism in terms of nine elements (later expanded to ten in subsequent editions). Of these, and on the occasion of the presentation of this special issue, we are interested in highlighting the first three, namely: (1) journalism’s first obligation is to the truth, (2) its first loyalty is to the citizens, and (3) its essence is a discipline of verification. On the back of these three elements we can develop a theoretical “Turing test” of AI applied to the media, and to do so we begin with a fundamental conjecture: If AI fails to improve these elements, then AI will not have fulfilled its ethical mission in the media.

In other words, as we know, AI will impact, and is already impacting, the media in many ways. However, the truth must remain journalism’s first obligation, and it should be even better served by AI. The use of AI ought to maintain and, if possible, increase the loyalty of the media to the citizens. And, finally, processes of verification should be even more effective thanks to AI. Naturally, this same exercise could be applied to each of the elements that Kovach and Rosenstiel (2003) established in their day, and we refer the interested reader to their work.

For all these reasons, we are interested in promoting a meticulous analysis of the adoption of AI by the news media and in scrutinizing each and every one of its implications, a task to which the three issues of this journal in 2024 will be dedicated. In this, the April issue, we present four valuable contributions that do just that. Thus, the critical question of the factors that determine the adoption and use of artificial intelligence and big data by citizens is addressed in the study undertaken by Sánchez-Holgado and Arcila-Calderón.

In the second article, Calvo and Rojas address the decisive issue of whether the characteristics of journalistic quality are maintained in the digital media, especially following the adoption of AI, which the authors estimate has occurred in no fewer than 75% of news outlets. Previously, we noted the importance of the role to be played by AI as an ally of the media in verifying and identifying synthetic content (or deepfakes). Addressing this very issue, the study reported by Sánchez Esparza and Palella Stracuzzi examines the way in which the Spanish public broadcaster RTVE uses AI to detect fake videos while, at the same time, the authors make interesting contributions in relation to new AI content and new media profiles. Last but not least, Alcaraz-Martínez, Vállez and Lopezosa analyse the visibility of the AI content published by a total of 69 media outlets in 12 countries of the European Union, plus England and the United States.

It is our hope that the four studies in this first issue will help scholars and practitioners alike to fulfil the fundamental conjecture stated above, namely that AI should serve to help the media create more (and not less) inclusive societies, as well as more (and not less) democratic societies. And what is certain is that, without the research and dedication of everyone in the professional and academic sectors, this cannot be guaranteed.

References

Beckett, C. & Yaseen, M. (2023). Generating Change: A global survey of what news organisations are doing with AI. JournalismAI. Department of Media and Communications. The London School of Economics and Political Science. https://www.journalismai.info/s/Generating-Change-_-The-Journalism-AI-report-_-English.pdf

Calvo-Rubio, L.-M., & Ufarte-Ruiz, M.-J. (2021). Artificial intelligence and journalism: Systematic review of scientific production in Web of Science and Scopus (2008-2019). Communication & Society, 34(2), 159-176. https://doi.org/10.15581/003.34.2.159-176

Ciobanu, M. (2024). Artificial Intelligence for Independent News Publishers – a guide. Public Interest News Foundation. https://www.publicinterestnews.org.uk/_files/ugd/cde0e9_3113a392e3cc4fde83b629a5515a366e.pdf

Gutiérrez-Caneda, B., Vázquez-Herrero, J., & López-García, X. (2023). AI application in journalism: ChatGPT and the uses and risks of an emergent technology. Profesional de la información/Information Professional, 32(5). https://doi.org/10.3145/epi.2023.sep.14

Kovach, B. & Rosenstiel, T. (2003). The elements of journalism. Three Rivers Press.

Lopezosa, C., Codina, L., Pont-Sorribes, C., & Vállez, M. (2023). Use of generative artificial intelligence in the training of journalists: challenges, uses and training proposals. Profesional de la información/Information Professional, 32(4).

Lopezosa, C., Pérez-Montoro, M., & Martín, C. R. (2024). El uso de la inteligencia artificial en las redacciones: propuestas y limitaciones. Revista de Comunicación, 23(1), 279-293.

Moran, E. M. & Shaikh, S. J. (2022). Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digital Journalism, 10(10), 1756-1774. https://doi.org/10.1080/21670811.2022.2085129

Porlezza, C. (2023). Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism. Communications, 48(3), 370-394. https://doi.org/10.1515/commun-2022-0091

Simon, F. M. (2024). Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena. Tow Center for Digital Journalism. https://towcenter.columbia.edu/sites/default/files/content/Tow%20Report_Felix-Simon-AI-in-the-News.pdf

Tejedor, S. (2023). La inteligencia artificial en el periodismo: mapping de conceptos, casos y recomendaciones. Editorial UOC.

Vállez, M. & Codina, L. (2018). Periodismo computacional: Evolución, casos y herramientas. Profesional de la información, 27(4), 759-768. https://doi.org/10.3145/epi.2018.jul.05

Ventura, P. (2021). Algorithms in the newsrooms: Challenges and recommendations for artificial intelligence with the ethical values of journalism. Consell de la Informació de Catalunya. https://www.patriciaventura.me/single-post/presentaci%C3%B3n-del-informe-sobre-inteligencia-artificial-%C3%A9tica-y-periodismo

Ventura, P. (2023). Guías éticas para el uso de la inteligencia artificial en el periodismo [website]. https://www.patriciaventura.me/single-post/gu%C3%ADas-%C3%A9ticas-para-el-uso-de-la-inteligencia-artificial-en-el-periodismo


[1] As is the case of the anthropologist Eudald Carbonell, interviewed by the Spanish literary magazine Librújula in March 2024 (https://librujula.publico.es/eudald-carbonell-la-inteligencia-artificial-es-el-descubrimiento-mas-importante-de-la-humanidad-despues-del-fuego/).

[2] See https://en.wikipedia.org/wiki/Roy_Amara

[3] Pepa Bueno, editor-in-chief of El País, in a statement made at the congress on ‘The Future of the Media’ (UNIR, September 2023). https://www.unir.net/actualidad/vida-academica/pepa-bueno-directora-de-el-pais-en-unir-se-ha-roto-el-consenso-sobre-la-realidad-y-eso-mata-la-democracia/


NOTE

Article originally published in Communication & Society: https://revistas.unav.edu/index.php/communication-and-society/article/view/50674

Please cite the journal article instead of this entry:

Codina, L., Ufarte-Ruiz, M.-J. & Borden, S. L. (2024). Fanning the flames of artificial intelligence in the media: beyond efficiency and productivity gains. Communication & Society, 37(2), 221-225. https://doi.org/10.15581/003.37.2.221-225

Thank you very much