The arrival of the Internet has undoubtedly enabled countless transformations across different areas of society. It has democratized many aspects of our lives, and access to information is one that has notably benefited.
Although many media companies (print, television, radio and even marketing) made the great 'quantum leap' to the digital era, the model itself has come into question: information for everyone, generating endless content, verified or not.
Not everything is wonderful. Cyberjournalism, or 'Internet journalism' as some theorists call it, faces serious challenges (the use of information sources, the use of digital tools, ethical standards, verification, etc.) that affect both journalists and society.
The 'modus operandi' of media outlets and journalists has changed with the arrival of this new technological paradigm, shifting toward an unsustainable model of excessive attention to clicks, which poses a challenge in itself.
Given the constant and rapid changes the world is experiencing as Artificial Intelligence enters our lives, are we, as citizens, ready to respond to situations where information hangs on a fine line between truth and deception?
Let's see.
"Fake" news anchors
At the beginning of February 2023, The New York Times published a report explaining that the purported news anchors of the Wolf News channel were fakes.
According to Graphika, an American research company that studies disinformation, poor lip synchronization, irregularities in the presenters' voices, pixelated faces, and anomalies in some parts of the body were evidence that the 2022 newscast was anchored using 'deepfake' technology.
Graphika noted that these computer-generated avatars created by artificial intelligence programs are the "first known case of using deepfake video technology - talking digital puppets - to produce fictitious people as part of a state-aligned information campaign (China)."
"This is the first time we've seen a state-aligned operation use AI-generated video footage of a fictional person to create misleading political content," Jack Stubbs, vice president of intelligence at Graphika, told AFP.
Given these situations, it is evident that a new trend is emerging: the creation of parallel, fabricated identities made possible by advancing technology.
Although it may seem "fun", and may even lighten the workload, deepfakes can actually harm people, companies, or society as a whole by deepening distrust and uncertainty toward the media.
Video: "Face-ing Our Realistic Digital Self" with Koki Nagano, TEDx Charlottesville, North Carolina, USA
Challenges in journalism
Apparently, journalism is now far more attentive to what the reader demands: more multimedia content and shorter texts, so the news consumer can grasp what is happening in a brief reading time.
However, this should not make it harder for society to distinguish the real from the fictitious. It should instead drive the continued strengthening of verification methods.
Since the Internet arrived in our lives, we have clearly been able to access information from almost anywhere, learning of events not within 24 hours but instantly.
This has drawn cyberjournalism into a frantic race for exclusivity, for being the first to publish.
But that race collides with the accuracy of information: mistakes are often made, which leaves the reader far more distrustful of the media and of journalistic practice.
Next step…?
It is time to reflect and take action on what is happening with the use of this technology in journalism.
On one hand, it can be educational: an opportunity to see Artificial Intelligence for what it is, a learning tool that brings the population closer to knowledge and to a digital reality.
On the other hand, it can be very dangerous, because there are no laws or guidelines to stop or regulate this type of audiovisual material, which transmits not only unreal, misleading, fictitious information but also hate speech, slander, and explicit words and images that are difficult to detect, making it complicated to determine what constitutes disinformation.
A study published in 2017 by the Tow Center for Digital Journalism argues that Artificial Intelligence technologies must integrate journalistic values from their conception, so that the ethical standards of the media, and of journalists in particular, are present in every piece of information the consumer receives.
Audiences deserve access to a transparent account of how AI tools were used to perform an analysis, identify a pattern, or report a finding. But that account should be translated into non-technical terms and explained concisely.