People in Switzerland are very skeptical about the use of artificial intelligence (AI) in journalism, according to a new study conducted by the Research Center for the Public Sphere and Society (fög). Only 23.8% of respondents say they would read articles created entirely by AI, while 53.6% would read media reports written by media professionals with the help of AI.
“Acceptance is greatest when AI is used for research, data analysis or translation,” says study lead Daniel Vogler, Head of Research and Deputy Director at fög. “But people take a critical view when headlines or entire texts are written by AI.” He adds that survey respondents believe it is important that humans remain responsible for the content at all times. “Many people probably think of text robots when they hear ‘AI in journalism’, and they find this idea off-putting,” Vogler says. This notion is also supported by the fact that a mere 21.2% of respondents believe that their media of choice handle AI responsibly. This stands in contrast to the reality that journalists in Switzerland currently use AI almost exclusively as a support tool – i.e. not for producing whole texts – and that the majority of media outlets now have guidelines governing the use of AI.
Most respondents expect media outlets to be transparent about their use of AI. This call for transparency not only applies to cases in which AI (co-)writes a text, but extends to all areas in which AI is used.
“The media have to explain where and how they use AI,” says Vogler. “One sentence in the publishing information isn’t enough. The media should address how they use AI, give examples and provide behind-the-scenes insights.”
Swiss media consumers believe that media outlets aren’t yet doing enough in this regard. Only 12.1% of respondents feel that Swiss media disclose their specific use of AI and the guidelines they follow. For example, outlets could tell their readers that using AI enables them to increase efficiency in routine tasks, which frees up time that can be devoted to conducting analyses and putting news into context.
The crux of the matter, however, is that Swiss people aren’t very willing to pay for AI-generated content. Only 6.2% of respondents are generally willing to shell out for news that’s generated entirely by AI. This figure dropped by 3.2 percentage points compared to a survey conducted by fög last year. Around one-quarter of respondents (25.2%) say they are open to paying for media reports that have been produced with the help of AI – i.e. the exact type of content offered by many media outlets in Switzerland. This percentage too has decreased since last year. People’s willingness to pay is highest for content that was created without the use of AI.
The fög study also examined the areas of society in which the Swiss are most critical about the use of AI. “I was surprised that the respondents rated journalism as the area with the highest risks, even ahead of the military and healthcare,” says Vogler.
How does media researcher Vogler explain this pronounced skepticism towards AI? “We’re currently still in a period of uncertainty, as has been the case with new technologies in the past,” says the fög researcher. “Apparently, this uncertainty hasn’t decreased over the past year.” Another likely factor, he adds, is that the media often report on AI in a negative context and emphasize its risks.
Besides transparency, Vogler also believes media outlets should embrace a culture of learning from mistakes. “If a mistake happens because someone used AI, this should be communicated openly. This strengthens people’s trust that the media companies are using the new technology in a responsible way.”