Amid the carnage on London’s stock markets, two companies stand out for bucking the trend. Why? The short answer is artificial intelligence (AI).
Online data pioneer and publisher Relx has quietly climbed into the FTSE top ten and is now valued at a stonking £54 billion following a 33 per cent jump in its shares this year.
And Gateshead-based small business software outfit Sage is up 52 per cent and worth £12 billion as it surges up the FTSE 100.
Both companies are early adopters of artificial intelligence, a technology first nurtured in Britain at DeepMind and later swallowed by Google owner Alphabet, where it is now a growth engine.
Generative AI, championed by OpenAI's founding genius, the now famous 38-year-old Sam Altman, is technology that can create high-quality images, text and code to rival human endeavour.
OpenAI staff had sent a letter to their company’s board, warning of the discovery of a potentially dangerous, powerful new algorithm. This contributed to Altman’s rancorous departure, now reversed.
Generative AI, which includes ChatGPT created by OpenAI, is far from infallible. This is why the UK’s Relx tested it to death before applying it to its legal, scientific and medical data repositories.
In the United States, the world’s most litigious society, the minor accidents of everyday life can yield big money. Last May, airline passenger Robert Mata called his lawyers after being hit by a serving trolley aboard an Avianca flight from Colombia to New York.
Avianca asked a Manhattan judge to throw out the lawsuit. Mata’s legal advisers then cited half a dozen cases – involving airlines including Delta and China Southern – where damages were said to have been paid. But neither Avianca’s counsel nor the judge could verify any of them.
When challenged, the claimant’s lawyer Steven A Schwartz admitted that he had been in a hurry and, instead of using the legal bible Lexis, run by UK data powerhouse Relx, had turned to ChatGPT, developed by Altman and his colleagues. The generative AI app hallucinated, spewing out invented cases with no legal merit. The case was thrown out. Reliability is still some way off.
The row, which saw Altman ousted from his job, hired by major investor Microsoft and then reinstated by OpenAI after a rebellion by almost all of its workforce, has been depicted as a failure of governance. But it is more complex than that.
Much of the dispute, which provoked a revolt by 743 of OpenAI’s 750 techies and coders, was about whether a non-profit organisation was allowing commercialisation to become the driving force. The reality is that the pass had already been sold when Seattle interloper Microsoft, an outsider amid the San Jose elite, ploughed an estimated £10 billion into the enterprise. Small change to £2.2 trillion Microsoft, but a downpayment on the next big thing.
AI uses advanced microchips, developed by Nvidia and others, to mine data at amazing speeds, process information and turn it into intelligible text. It is the breakthrough technology of our time.
A fight for AI hegemony has been sparked among the Silicon Valley giants and the older established behemoth Microsoft, with billions if not trillions of dollars at stake.
As was the case when search engine Google first crashed the commercial scene two decades ago, it raises profound issues about intellectual property and copyright. The legal status of generative AI creations is being fought out in courtrooms across the world. Top music artists and production companies are having hysterics about song rights after generative AI mined the internet to produce mimicry of CDs, vinyl and videos indistinguishable from the originals.
AI’s outsize brainpower, and its ability to search private and security-sensitive information – such as health records and nuclear designs – make it a potent safety threat. That’s before one even considers the possibility that it will out-think humankind and take charge of us, as some of Altman’s colleagues feared.
Governments around the world are struggling to corral its potential capacity to control our lives. The Americans, in thrall to the commercial success of big tech and its ability to generate political donations and win election campaigns, are inclined to trust the likes of Facebook (now Meta) founder Mark Zuckerberg and Microsoft CEO Satya Nadella to police the industry.
These are elephantine corporations which hate regulation. They are monopolists using their market power to gobble up any technology which threatens their dominance. And they don’t like paying taxes.
The United States government’s subservience in the face of big tech’s efforts to police AI has outraged a community which believes that the internet is for everyone, not just Silicon Valley.
Emad Mostaque, CEO of British AI unicorn Stability AI, told The Mail on Sunday: ‘There needs to be more checks and balances, particularly given how opaque some of these companies are.
‘Open technology is transparent and more robust so it’s much safer. We saw what happened with social media and the lack of accountability. Humanity should not put its trust in an unelected group to lead the development of AI tech without proper scrutiny.’
The EU is striving to introduce labyrinthine rules requiring enablers and users to conduct extensive risk assessments and make all of the data available.
Rishi Sunak’s attempt to establish global safety monitoring principles may already have been overtaken by the speed of events.
The Altman affair has inserted an old-fashioned human drama and power struggle into the goings on in a secretive corner of commerce. The idea of a non-profit model and open AI – good for all humankind – is away with the fairies.
At its core is a battle for bigger bucks and domination. Stopping arrogant tech giants from taking control will be a nightmare.