After decades of developments in computing laboratories, artificial intelligence has broken through into the mainstream. Is this good news or bad?
Wherever you turn, people are talking about “artificial intelligence” (AI), and whether it is a boon for society or a danger to democracy. Some deny that there is such a thing, and say that only humans can be truly intelligent.
But call it what you like, the ability of software programs to collect and analyse data, and to generate new insight and new data, has never been greater.
Ask any secondary school teacher and they will tell you how many times they have caught children using the generative AI program ChatGPT to write essays. At the other end of the spectrum, look at DeepMind, the Google-owned company based in London, which used AI to determine the three-dimensional structure of proteins simply from the sequence of their amino acids.
DeepMind has partnered with the Cambridge-based European Bioinformatics Institute to make all its predictions – covering most of the roughly 200 million known protein sequences across all life forms – freely available to scientists looking for new drug candidates. That’s the kind of fact beloved of the AI optimists. AI, they say, will bring huge benefits to society.
Dangers
But will it, overall? What are the dangers to privacy and democracy? And how many jobs will be lost in the process?
According to US investment bank Goldman Sachs, ChatGPT and related AI could threaten as many as 300 million jobs worldwide. With such AI, up to two-thirds of occupations in the US and Europe could be at least partially automated, it says.
‘ChatGPT and related AI could threaten as many as 300 million jobs worldwide…’
In the same report Goldman Sachs says that AI could raise global GDP by 7 per cent over a decade, despite “significant uncertainty” and some strange assumptions about the future. It also claims that jobs displaced by previous automation “have historically been offset by the creation of new jobs” – without real evidence that this will continue to be the case.
The idea of artificial intelligence was first put forward in the late 1940s by the British mathematician Alan Turing. But it could not become a reality without massive developments in computing power and, as importantly, computer storage. By 1997, AI had advanced to the point where an IBM computer, “Deep Blue”, defeated the reigning world chess champion Garry Kasparov.
Now there seems no limit to AI. Employers are using it to analyse the speech and non-verbal behaviour of job applicants, and to decide who to sack. Biomedical scientists used it during the Covid-19 pandemic to analyse patient data and point the way towards effective treatments. BT has announced that it is to cut between 40,000 and 55,000 jobs, with around 10,000 of those roles to be replaced by AI.
Detecting cancer
In September, news emerged that AI could detect more breast cancers than traditional examination of X-rays by two radiologists. Perhaps – the study itself has several caveats. But the question of whether machines can truly be more intelligent than humans – whether “digital intelligence” can surpass “biological intelligence” – may soon be answered definitively.
Yet AI is not infallible. Users of ChatGPT found that it can generate misinformation, incorrectly answer coding problems, and produce errors in basic maths. It makes up references that don’t exist. It once even insisted a living person had died and made up a reference to an obituary.
And recently a lawyer had to apologise to a US judge after using ChatGPT to identify case law in a personal injury case. The problem? That case law does not exist.
Against this backdrop, prime minister Rishi Sunak and US president Joe Biden discussed AI at their meeting in June, including the idea of a global regulation framework. (Naturally, this would be “light touch” regulation, that is to say, only the appearance of regulation.)
The world’s great scientific advances have been made by workers, not governments, and often by workers keen to see the benefits flow freely throughout society. Witness the invention of the World Wide Web at CERN, the international particle physics laboratory in Geneva.
Similarly, ChatGPT was developed by OpenAI, founded in 2015 as a non-profit company. Aware of the implications for society, the founders were keen on the concept of ethical AI, with inbuilt safeguards for privacy, for example.
Then Microsoft took a stake. It now owns 49 per cent of OpenAI, and when OpenAI’s board tried to hold back the tide in November and enforce its ethical vision, sacking its CEO, Sam Altman, capitalism struck back. Altman was reinstated and his sackers were sacked.
“AI belongs to the capitalists now.” – New York Times headline
“AI belongs to the capitalists now,” read the article in The New York Times reporting the reinstatement. And it’s quite an asset: estimates of its market value now run to $80 billion.
There’s a lesson here for the utopians, the wishful thinkers, among us. You cannot be sure of controlling the forces of production unless you have control of the means of production. As with the fight for wages, any victory in the war to fight the adverse effects of AI will only be temporary.
The TUC has warned that many workers are being kept in the dark about how AI is being used to make decisions that directly affect them. It also said that the government is failing to protect workers from being “exploited” by new AI technologies.
In its March 2023 manifesto, Dignity at work and the AI revolution, the TUC said it believes that AI-driven technological change in the workplace can boost productivity and “offers an opportunity to improve working lives”. But it also identifies risks of inequality and discrimination, as well as unsafe working conditions.
Calls for laws
Like the TUC, some individual trade unions are concerned about how AI is used – though the GMB, for one, has said it is not anti-technology. And like the TUC, many unions are calling for more legislation to control “the worst cost-cutting impulses of bosses”.
The call for a “statutory framework and industrial relations mechanisms” suggests unions have forgotten the history of such legislation: statutory interference in industrial relations either limits workers’ advances, or rolls them back. So TUC and union calls for legislation on AI to protect workers’ rights need to be carefully framed if they are not to make a bad situation worse.
The strongest protection will always be workers’ willingness to organise and fight. And here the best lesson comes from across the Atlantic, with the victory at the end of September of the Writers Guild of America over the Hollywood studios and the TV moguls.
After a 146-day strike, the workers gained not only pay increases but also assurances about the use of AI in the writing of film scripts. It is widely seen as the first industrial battle over AI in the workplace, humans versus machines. The problem, as it has been since the start of the industrial revolution, is that the machines are controlled by capitalists.
As an article in the Los Angeles Times makes clear, the concerns over the use of AI – while laid out in the writers’ claim – were not seen as central. Until, that is, the employers refused to agree to a clause banning the use of AI in the writing of original scripts.
As in everything that workers and their organisations do, a single victory is worth infinitely more than a thousand statements of concern, or motions at a conference. AI may be a new technology, but the truths of class struggle are tried and tested.