
OPINION

Taming the digital genie

Syed Fattahul Alim | Tuesday, 25 April 2023


Misinformation, or false information, may be either intended or unintended. Either way, it can mislead people, causing harm to an individual, a group, or a community. But disinformation is worse still, for it is misinformation deliberately produced and spread to mislead. Both kinds of false information have been in use since the dawn of civilisation.
And until recently, it was humans who manipulated information to create misinformation and thereby cheat, misguide, and dupe its recipients. It usually took the form of rumours during times of social unrest or war. But nowadays, with the advancement of information technology, both texts and images can be created without human involvement, i.e., autogenerated by chatbots (computer programs that simulate conversation) and spread on the internet, especially social media. It is to be noted here that a chatbot uses artificial intelligence (AI) and natural language processing (NLP) to understand questions asked by humans (generally, customers in a business environment) and automatically generate answers that look and sound like human conversation. ChatGPT is one such autonomous digital tool, launched on November 30, 2022 by OpenAI, an AI research company based in San Francisco, USA.
It is undoubtedly something to be celebrated, as it marks the height of innovation that humans are capable of. And the innovation's potential for further advancing the progress of both science and society is immeasurable. But there is also danger lurking in it: malicious content in the shape of disinformation can now be automatically created by this AI-powered computer program at unimaginable speed and in unheard-of quantities across cyberspace. The power of AI has thus made the generation and broadcasting of disinformation inconceivably easier and cheaper than before. The disinformation so generated may simply be used to dupe a section of society into believing something that is not true. Or it may serve as false propaganda material to influence public opinion and create social disharmony.
During the coronavirus pandemic, for instance, conspiracy theories about vaccines were spread on social media to dissuade people from getting inoculated. The former US president Donald Trump and the former Brazilian president Jair Bolsonaro were among those who subscribed to such wild theories. The problem with manipulated information in digital format, including images, audio, videos, and texts, is that it becomes very difficult for fact-checkers to distinguish fact from fiction. It can create an 'alternative reality', a 'make-believe world', where one can see a person in a certain type of dress or in a particular physical condition, though that was never the case. It may also show them saying or doing something that they never did. Here lies the real danger of this modern generative technology: it can churn out narratives that, though false, sound very credible, in enormous quantities and in no time. A research report says that a piece of fake news spreads six times faster than a true one, and that fake information is 70 per cent more likely than true information to be retweeted on the social media platform Twitter.
So, the initial euphoria surrounding generative technology as a digital-intellectual aid to researchers, academics, and professionals whose work involves a lot of writing is perhaps over. For its potential as a powerful tool to generate disinformation is proving more attractive to criminals and agents of disruption in society.
So, the tech giants behind the creation of ChatGPT-like autonomous digital tools should now concentrate on controlling the genie that has already been let out of the bottle.
[email protected]