
Everyone has a take on AI

Hasnat Abdul Hye | September 04, 2024 00:00:00


SAG-AFTRA union President Fran Drescher and Duncan Crabtree-Ireland, SAG-AFTRA National Executive Director and Chief Negotiator, demonstrate as SAG-AFTRA actors join the Writers Guild of America (WGA) in a strike against the Hollywood studios, on the picket line outside the Netflix offices in Los Angeles, California, USA on July 14, 2023 —Reuters Photo

Everybody who is somebody in the world of technology, economy, politics and academia has a take on Artificial Intelligence (AI), particularly in the West where it is undergoing rapid transformation, paving the way for the most important technological advance since the internet. Understandably, researchers working in tech giants like Microsoft and Amazon and in start-ups like OpenAI are waxing eloquent about the rapid breakthroughs in generative AI and what these promise for mankind’s progress. Captains of industry and business tycoons are gleefully eyeing a bottom line that will not be the same once the new technology is available for their use. Politicians and academics look upon this development with cautious optimism, welcoming the obvious benefits but wary of its adverse impact on employment, privacy and ethical conduct in social transactions. Below is a random review of recent developments in AI and reactions from different quarters about their consequences for nations and mankind.

Competition: The first question that governments and academics ask is whether AI will be competitive and open to all like the internet, without creating monopolies or an oligopoly. According to Amazon, which is competing with Google and Microsoft, there will be choice for users of AI and competition among the tech companies engaged in developing this cutting-edge technology. It has been pointed out that Large Language Models (LLMs) and image models, two among other components of AI, and the cloud computing on which these models depend, can be developed by any tech company. But because of the dependence on computing, the cost of building these models is high. On cost considerations, some tech companies have outsourced models to start-ups like OpenAI, Anthropic, AI21 and Stability AI. Amazon is building its own model but does not think that this restricts users’ choice. In a recent interview with the Financial Times, the head of Amazon Web Services (AWS), Adam Selipsky, declared that ‘there will not be one generative AI model to rule them all’. According to him, there will over time be large and small models from different companies, as well as open-source ones, providing choice to users. Diversity in databases will also ensure that a natural monopoly does not occur and that there is robust competition, it has been pointed out. But the scope and extent of the cloud platforms a company owns may give it an edge if there are not many competing cloud platforms. Given the huge cost of creating cloud platforms, the market for AI may end up in the hands of three or four big tech companies. So government oversight to ensure competition will be important. The need for maintaining privacy and security in a post-AI world will make this all the more necessary.

Impact on employment: Academics’ main apprehension has been that AI would leave many workers behind and entrench existing inequality long before it pays off in higher productivity. Holding a contrarian view, MIT economist Prof David Autor believes generative AI could help redress the balance, giving people without a college education the tools they need to do more expert work, win higher wages and close the gap with top earners. Autor, who has spent his career exploring how technological change affects jobs, wages and inequality, argues that the latest advances in AI come at a time when workers are in short supply. He thinks that, with the right design, AI can be used to make people’s skills more valuable, rather than to replace them. Giving the example of nurses in hospitals, Prof Autor says, ‘We are taking the most elite tasks and allowing someone with somewhat less elite skills to perform them. I think this is something we could use AI for in many other settings. To put it in simple economic terms, the question to ask is: for whom is AI a substitute and for whom is it a complement?’ He concedes that there will be job and wage losses: ‘If people are doing expert work that pays well and now they have to do generic work that pays poorly, that is a concern. The problem is technology can make some expertise much, much more valuable, but in other cases, it directly replaces expertise we already have.’ Asked about the impact of AI on the job market in developing countries, Prof Autor said tasks that are outsourced to India and the Philippines could be affected by the use of AI in finance, business firms, research bodies and industries. Though he does not have much concern about AI’s impact on jobs in developed countries like America, a study by investment bank Goldman Sachs projects that two-thirds of US jobs will be at risk from AI over the next 10 years, even as the global economy grows by an additional $7 trillion during that period. Consulting firm McKinsey anticipates that up to 30 per cent of working hours in America will be affected by automation in the next six years.

Advertisement: The multiple and frequent uses of the internet have been made possible by the growing volume of cash earned through advertisements. According to insider accounts, AI is going to magnify this business manifold, and in some cases this may have occurred already. On an investor call in October 2023, Coca-Cola chief executive James Quincey pointed out that Gen Z consumers, born between roughly 1997 and 2012, spend seven to nine hours a day on a screen but very little time watching traditional TV. Unsurprisingly, the company’s media expenditure on advertisements now skews heavily towards digital. ‘In 2019 digital was less than 30 per cent of our total media expenditure and now it is over 60 per cent,’ Quincey said during the call, largely focussed on digital campaigns that allow the company to segment the population and reach consumers where it earns a higher return on investment. AI will add a further layer to digital advertising because the technology has the potential to replace many of a company’s traditional functions, from creating ads to placing them in front of consumers. When paired with the vast amount of customer data that is already enabling targeted advertising, executives say, what has so far been a blunt tool can become a precision industry. AI will have the power to create bespoke marketing for individuals at a global scale. This is the ‘age of hyper-targeting’, says Coca-Cola’s CEO Quincey. According to him, in this era an individual’s day-to-day experiences, their search and shopping, entertainment and even their news digest will become increasingly customised and algorithmically driven. In keeping with the temper of the time, Coca-Cola experimented with an AI platform, using GPT-4 and DALL-E, that allowed people to generate artwork featured on digital billboards in New York’s Times Square and London’s Piccadilly Circus.

But the Holy Grail, says Tamara Rogers, chief marketing officer of consumer health company Haleon, is ‘right person, right time, right message, right context.’ This is a marketer’s dream, she said. Haleon is already dipping its toe into AI – to create a campaign that could replicate the consumer’s mouth, for example – but Rogers says the industry is still a long way from making a hyper-targeted approach a reality. Finally, AI has the ability to automate some of the basic functions of advertising, or what Rogers calls the ‘boring stuff’, such as sifting through vast amounts of documents and product information to support advertising claims. AI can also be used to automate aspects of media buying and planning on digital platforms. The problem for advertising agencies is that a greater reliance on digital media makes those tasks easier and cheaper to complete, potentially lowering the fees they are able to charge.

Banking and AI: Pablo Hernandez de Cos, chair of the Basel Committee on Banking Supervision, the apex body of global banking, urged finance leaders gathered at the World Economic Forum (WEF) in Davos last year (2023) to use financial regulation as a blueprint for tackling issues such as AI. He said financial stability was only one dimension of the challenges they faced; there were many other, potentially more important, consequences related to AI that they had to address proactively. Delegates attending the WEF discussed the governance of AI, including the longstanding debate on whether the technology should be open-sourced or kept in the hands of a few tech companies such as Microsoft, Amazon and OpenAI. At the meeting in Davos, corporate AI leaders rubbed shoulders with regulators from the EU, the US and China, who spent 2023 proposing a range of policy solutions via the EU AI Act, the White House Executive Order and the UK’s Bletchley Park agreement. The delegates agreed that a co-ordinated approach is needed, as there is no clear pathway for regulating this new technology globally because of the geopolitical implications involved.

Fake news and deepfakes: Gary Marcus, an AI expert and professor at New York University, attending the WEF in Davos, touched on a separate issue, saying that a crucial question for delegates was what could be done to prevent and limit AI-mediated misinformation in democracies, with as many as 70 elections slated for 2024 around the world. He warned that deepfakes are getting better and that AI models can be used to create misinformation.

According to the Oxford Internet Institute, by 2020 social media disinformation campaigns had operated in more than 80 countries, orchestrated variously by political parties, shadowy public relations and private sector intelligence groups, or governments themselves. In response, Google, Meta, TikTok and X introduced rules prohibiting co-ordinated covert influence operations and misinformation about voting and voter suppression. But the advent of generative AI – powerful multimodal models that can blend text, image, audio and video – has radically changed the potential for deepfakes, putting the ability to create convincing media at scale within the reach of almost everyone. The consequences have varied. Just as some politicians, such as Trump, have weaponised the concept of ‘fake news’ by levelling the term at narratives they disagree with, so too can growing public awareness of deepfakes (fabricated images of real people) be wielded to discredit truths and deny reality. Research also shows that the very existence of deepfakes deepens distrust in everything online, even if it is real. The future appears bleak if we cannot believe what our eyes and ears are telling us. Nothing will be taken at face value; everything will be interrogated.

Open vs closed AI: Manipulation of AI to distort truth and reality has concentrated minds on one issue: should generative artificial intelligence systems be open or closed? Supporters of open-source models argue that they broaden access to the technology, stimulate innovation and improve reliability by encouraging outside scrutiny. Far cheaper to develop and deploy, smaller models also ensure competition. But critics argue that open models risk lifting the lid on a Pandora’s box of troubles. Bad actors can exploit them to disseminate personalised disinformation on a global scale, while terrorists might use them to manufacture cyber or bio weapons.

There is an ideological dimension to this debate, too. Yann LeCun, chief scientist of Meta, which has broken ranks with other Silicon Valley tech giants by championing open models, has likened rival companies’ arguments for controlling the technology to mediaeval obscurantism: the belief that only a self-selecting priesthood of experts is wise enough to handle knowledge. In the future, all our interactions with the vast digital repository of human knowledge will be mediated through AI systems, and LeCun believes a handful of Silicon Valley companies should not be allowed to control that access. Just as the internet flourished by resisting attempts to enclose it, so will AI thrive by remaining open, he argues.

Wendy Hall, a professor of computer science at the University of Southampton, recently said, ‘We do not want to live in a world where only the big companies run generative AI. Nor do we want to allow users to do anything they like with open models.’

Human rights: Generative AI is a broad term describing creative algorithms that can themselves generate new content, including images, text, audio and even computer code. These algorithms are trained on massive datasets and then use that training to create outputs that are often indistinguishable from ‘real’ data, making it difficult to tell whether content was generated by a person or by an algorithm. AI, built on algorithms trained on datasets that may include the inputs of many individuals, thus has the potential to infringe human rights, including copyright. Over the past year, lawsuits have been filed by news providers as well as by actors, comedians, authors, scriptwriters and other creative professionals alleging that their works have been unfairly used to create artificial intelligence that infringes their rights. The strike by Hollywood actors and scriptwriters against the use of AI-based materials that reduce the scope of, or altogether abolish, their employment was the most dramatic demonstration of AI’s impact on human rights. Another high-profile demonstration was the lawsuit filed by the New York Times, in which the newspaper accused both Microsoft and OpenAI of unlawfully using millions of pieces of journalistic content to train large language models. The newspaper alleged that, apart from copyright infringement, the unlawful use of its content would ultimately displace the search traffic that is monetised by tech platforms and publishers.

It is not only traditional content creators who are worried. Brands are now creating their own virtual social media influencers with AI so that they don’t have to pay influencers in the real world. The Hollywood actors’ and writers’ strike last year was about this race to the bottom. Whether appropriating the work of content creators or substituting for real-life influencers, the tech companies that dominate AI technology (Microsoft, OpenAI) are bent on selling it in ways that carry lower and lower input costs and higher and higher profit margins for them. An increase in the number of tech companies with AI products is not going to change this business model.

After years of wrangling, a global minimum corporate tax of 15 per cent is now finally in effect. This groundbreaking agreement was driven by the desire to prevent big companies, often in the tech sector, from flocking to tax havens or jurisdiction shopping. Rana Foroohar, a journalist who regularly writes for the FT, wrote recently: ‘Even though the ink on the global agreement is barely dry, it is time to start talking about a new one, targeted at artificial intelligence companies.’

Regulating use of AI: The hype around AI technology has also led to an increased awareness of its dangers: the potential to create and spread misinformation, particularly during elections; its ability to replace or transform jobs, especially in the creative sector; and the less immediate risk of it becoming more intelligent than humans and superseding them.

Considering the seriousness of the situation that has emerged since the appearance of AI (OpenAI’s ChatGPT) in the public domain, and the race among tech companies in America, Europe and China to outdo one another in the sector, the European Union passed the EU AI Act in March 2024. It is the first legal framework on AI to be passed anywhere. It addresses the risks of AI and positions Europe to play a leading role globally. The AI Act lays down harmonised rules on AI, providing AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises. The Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Co-ordinated Plan on AI. Together, these measures are meant to guarantee the safety and fundamental rights of people and businesses when it comes to AI, and to strengthen the uptake of, investment in, and innovation in AI across the EU. As the first comprehensive legal framework on AI worldwide, the Act aims at fostering trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety and ethical principles and by addressing the risks of very powerful and impactful AI models, so that Europeans can trust what AI has to offer. The Act takes a risk-based approach: while most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that have to be addressed to avoid undesirable outcomes. All AI systems considered a clear threat to the safety, livelihoods and rights of people have been banned under the Act, from social scoring by governments to toys using voice assistance that encourage dangerous behaviour. But it will be some time before the AI industry is subject to significant levels of scrutiny: the Act includes a grace period of two years for companies to comply. Many companies, from carmaker Renault to brewer Heineken, argued that the rules under the Act created disproportionate compliance costs for companies developing and implementing the technology. ‘We will try to comply, but if we can’t comply, we will cease operating,’ said Sam Altman, chief executive of OpenAI. He later backtracked, tweeting that the company had no plans to leave Europe. Microsoft and Google would not speculate on whether they would change their models because of the requirements under the law, but said they would endeavour to comply with local laws.

In November 2023, the UK government convened an AI Safety Summit, which came to be known as the Bletchley Park Summit after the venue where it was held. Leaders of 28 countries, including China, and executives of tech companies attended the event. The Bletchley Declaration established a shared understanding of the opportunities and risks posed by so-called ‘frontier artificial intelligence’. The Declaration signifies a collective commitment to proactively manage the potential risks associated with frontier AI (highly capable general-purpose AI models) to ensure such models are developed and deployed in a safe and responsible way.

In the US, the Executive Order issued in October 2023 outlined bold steps to mitigate risks from AI, including risks to workers, consumers and Americans’ civil rights, and to ensure that AI’s development and deployment benefit all Americans. It established a government-wide initiative to guide responsible artificial intelligence development and deployment through federal agency leadership, regulation of industry and engagement with international partners. The Order states that the federal government will seek to promote responsible AI safety and security principles and actions with other nations, including competitors.

The above review of developments in AI around the world, and the reactions from a cross-section of stakeholders, indicates that the next phase of the new technology’s development is going to be very exciting and consequential. If experience is any guide, it can safely be concluded that government regulators are going to be blindsided by the clever manoeuvres of tech companies. The most optimistic scenario is a re-run of the endless chase between Tom and Jerry.

[email protected]
