
OPINION

Is AI a modern Frankenstein?

Syed Fattahul Alim | October 15, 2024 00:00:00


Geoffrey E. Hinton, the British-Canadian cognitive psychologist and computer scientist who, along with the American John J. Hopfield, was awarded the Nobel Prize in Physics by the Royal Swedish Academy of Sciences on October 8, is himself fearful of the invention that brought him the honour. Upon leaving Google in May 2023 after working there for a decade, he admitted that he had quit the job to speak freely about the dangers of AI. To him, AI is outpacing humanity's ability to control it. Consider the frustration of the AI buffs, not least Google, who held him in high regard for his pioneering work on deep learning!

The Nobel Committee considered them for the prize (in Physics) because of their use of concepts from statistical physics in the development of artificial intelligence. John J. Hopfield, a physicist-turned-chemist-turned-biologist, proposed in 1982, while at the California Institute of Technology (Caltech), a simple neural network modelling how memories are stored in the brain. He later returned to Princeton as a molecular biologist. That means neither scientist was a practising physicist when they got the Nobel Prize in Physics. Interestingly, though these two Nobel laureates in Physics got the prize for their seminal work on the advancement of AI, both of them have expressed concerns about further development of the field to which they dedicated their careers. However, unlike Geoffrey Hinton, John Hopfield was less dramatic, though no less apprehensive, in expressing his fears about the neural networks he worked on, which mimic the function of the human brain. Maybe AI does it better than the human brain and, what is alarming, even faster! He also warned of potential catastrophes if advances in AI research are not properly managed. So he emphasised the need for a deeper understanding of deep learning systems so that technological development in the field does not go out of control.

The concerns raised by these two leading researchers in AI's advancement call to mind the 1975 Asilomar conference on recombinant DNA molecules (organised at Asilomar State Beach in California, USA), where biotechnologists discussed the potential hazards of the technology and the need for its regulation. Some 140 biologists, lawyers and physicians participated in the conference and drew up a set of voluntary guidelines to ensure the safety of recombinant DNA technology, a genetic engineering technique that involves combining DNA from different species or creating new genes to alter an organism's genetic makeup.

In his interview with the Nobel Prize website, Geoffrey Hinton stressed that AI is indeed an existential threat, but we still do not know how to tackle it. There are other existential threats, such as climate change. But in that case not only scientists but also the general public know that the danger can be averted by not burning fossil fuels and not cutting down trees. That means humanity knows the answer to the threat posed by climate change; it is greedy businesses and politicians lacking the will who are standing in the way of addressing it.

To avert the threat to humanity from unregulated AI, tech companies need to mobilise resources for research on safety measures.

Hinton notes that the linguistics school of Noam Chomsky, for instance, is quite dismissive of AI's capacity to understand things the way humans do. Neural networks, the Chomsky school holds, cannot process language the way humans can.

But Geoffrey Hinton thinks this notion is wrong, since, in his view, neural nets do the job of processing language better than the Chomsky school of linguistics might imagine.

The harm AI can do is already there for all to see. AI-generated photos, videos and texts are flooding the internet, and it is hard to tell real content from fake. AI can also replace jobs, power lethal autonomous weapons that act on their own, and so on. Here lies the existential threat.

[email protected]

