Farewell crypto, hello generative AI. With the selective amnesia that is one of the defining characteristics of their trade, venture capital investors have already moved on from their unfortunate dalliance with the imploding FTX crypto exchange and fallen in love with the next big thing. This year, they say, will be the breakout year for artificial intelligence. Although that statement might have been made in any of the past few years, this time they really mean it.
There are some good reasons to believe this assertion may be true. The launch in November of OpenAI’s ChatGPT language-generation model, with its astonishing ability to generate paragraphs of convincing text at remarkable speed, has opened users’ eyes to the power of generative AI. Large language models, such as ChatGPT, have been trained on vast amounts of data ingested from the internet and are almost instantaneously able to recognise and replicate patterns of text, images, computer code, audio and video. No one is quite sure yet what exactly their killer application will be. But more than 160 start-ups have already been launched to explore the answer.
The promise of generative AI is that it can boost the productivity of workers in creative industries, if not replace them altogether. Just as machines augmented muscle in the industrial revolution, so AI can augment brainpower in the cognitive revolution. This may be particularly good news for jaded copywriters, computer coders, TV scriptwriters and desperate school children late with their homework. But it may also have a big impact on areas as diverse as the automation of customer services, marketing material, scientific research and digital assistants. One intriguing open question is whether it will reinforce the dominance of existing search engines, such as Google’s, or usurp them.
Generative AI is a good example of a broader trend that is taking powerful technologies out of the hands of experts and putting them in those of everyday users. This democratisation of access may have huge implications, and create extraordinary opportunities, for many businesses. The increasing popularity of “low code/no code” software platforms, for example, will enable increasing numbers of non-expert users to create their own powerful mobile and web apps. No longer will product managers be so beholden to tech teams that set their own agenda.
This obviously carries risks, as well as opportunities. One of the biggest is that the output of generative AI is often wrong, or hallucinatory. Such models can give different answers to the same question, depending on their prompts and training data. Deterministic technologies, such as a pocket calculator, will always give you the same answer when you tap in 19 x 37. Probabilistic technologies, such as generative AI, will only give you a statistically probable approximation of an answer. They are “stochastic parrots”, as the former Google researcher Timnit Gebru described them. For that reason, Stack Overflow, a Q&A website for computer programmers, has already banned ChatGPT-generated responses because they cannot be trusted.
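The contrast between the two kinds of technology can be sketched in a few lines of Python. The `toy_generative_model` below is a purely hypothetical stand-in for a language model: it samples one answer from a fixed, weighted list of plausible-sounding responses rather than doing any real language modelling, but it captures the essential point that repeated calls with the same prompt need not agree, and need not be correct.

```python
import random

def calculator(a: int, b: int) -> int:
    """Deterministic: the same inputs always produce the same result."""
    return a * b

def toy_generative_model(prompt: str) -> str:
    """Probabilistic: a toy stand-in for an LLM (not a real model).

    Samples an answer from a weighted distribution over canned
    responses, so repeated calls with the same prompt can differ,
    and only some of the answers are actually right (19 x 37 = 703).
    """
    continuations = ["703", "around 700", "703, I believe", "approximately 705"]
    weights = [0.6, 0.2, 0.15, 0.05]
    return random.choices(continuations, weights=weights, k=1)[0]

# The calculator gives the same correct answer every time.
assert calculator(19, 37) == calculator(19, 37) == 703

# The toy model, asked the same question 50 times, gives a spread of
# answers: a statistically probable approximation, not a guarantee.
answers = {toy_generative_model("What is 19 x 37?") for _ in range(50)}
```

Running the sketch, `answers` almost always contains several distinct strings, which is exactly why a Q&A site that depends on verifiably correct answers would refuse such output.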
The clear imperfections of generative AI put a particular responsibility on those who are developing these models to consider how they may be abused, before releasing them into the wild. But that is becoming increasingly difficult given the speed at which these models are developing. Users may well enjoy and profit from their use but they should always treat them with caution. While generative AI can help inspire the first thought, it should never be relied upon for the last word.