ChatGPT: from Farce to Tragedy?

15 May 2023

When Karl Marx wrote that history repeats “the first time as tragedy, the second time as farce”, might he also have considered the possibility of dynamics moving in the opposite direction, from farce to tragedy? Current trends revolving around the latest artificial intelligence (AI) models could be a good case for testing the hypothesis.

In recent months, the world has been taken by storm by the latest iteration of “generative” AI, most notably OpenAI’s chatbot ChatGPT. ChatGPT has reached record levels of diffusion in a very short time, leaving its competitors trailing in its wake.

Rivers of real and virtual ink have already been spilled dissecting its nature, use cases, and impact on industry, organisations and society at large – and, of course, feeding (almost farcical levels of) speculation, hype and prophecies about the always-imminent next revolution. At the same time, the success of the technology has kick-started a new wave of interest in AI, and questions about how tools like ChatGPT might reduce the role of the ‘human’ in digital work.

My take on the matter is that generative AI is a very powerful new tool, and yet it is just a tool. In this blog, I explain why we don’t need to be panicking about impending tragedy quite yet…

The rise of ChatGPT: more than a tool?

We use and shape our work tools, but they also shape and “use” us. This has been true of the advent of new tools through the ages: from forks and knives (linked to the emergence of “good manners” in the Middle Ages) to the ballpoint pen; from spreadsheet software to tunnelling microscopes.

But what kind of tool is ChatGPT? ChatGPT is a specific type of AI – or, better, an AI “solution” – which, in turn, builds on a specific class of AI models: algorithms that process text/language data, called foundation or large language models (LLMs). The -GPT part of ChatGPT stands for Generative Pre-trained Transformer, meaning that the “engine” that processes the data and powers the AI solution is a so-called Transformer architecture. There are other architectures available in the AI world (e.g. so-called “diffusion” models), but the Transformer has quickly risen to be quasi-dominant in the field.

The Transformer architecture, like all current AI models, is based on probability. The algorithm (in a nutshell, a complex type of neural network) is pre-trained on large corpora of textual data so that it “learns” (a better term would be “identifies”) the probability distribution of the co-occurrence of words in human language and, as a consequence, can replicate or imitate language. The Chat- part of ChatGPT refers to the fact that the AI solution has a user-friendly interface that responds to all sorts of text-based prompts. The interface allows for easy interaction with the AI in an app or through websites.
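To make the probabilistic point concrete, here is a minimal sketch using an openly available model (GPT-2, via the Hugging Face transformers library). ChatGPT’s own model and code are proprietary, so this illustrates the general mechanism, not OpenAI’s implementation:

```python
# A minimal sketch of next-token prediction with an open model (GPT-2).
# This illustrates the general Transformer/LLM mechanism discussed above;
# it is NOT ChatGPT's actual (proprietary) model or code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "History repeats itself, first as tragedy, second as"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's "answer" is a probability distribution over its whole
# vocabulary for the next token: it predicts, it does not "know".
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([i.item()])!r}: {p.item():.3f}")
```

Everything the chatbot “says” is generated, token by token, by sampling from distributions like this one.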

The success of ChatGPT lies precisely in the combination of the two components making up its name: a powerful predictive model of language, and an easy-to-use interface. In sum, ChatGPT is an identifiable, well-bounded product (service), a recognisable tool that can be packaged, wrapped, embedded in other products and services, and sold. From an economic and strategic viewpoint, this is the key feature of interest, because it tells us the modalities through which ChatGPT and similar solutions can be commercialised and generate profit for their creators/vendors (when they are not distributed as open source).
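The “embeddable tool” point is visible in practice: any third-party product can wrap the model behind its own feature via OpenAI’s public API. The sketch below uses the early-2023 interface of the openai Python package; the wrapper function and prompts are hypothetical, purely for illustration:

```python
# A sketch of embedding a ChatGPT-style model in a third-party product,
# using the early-2023 interface of the openai Python package.
# The feature (a support-ticket summariser) is hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

def summarise_ticket(ticket_text: str) -> str:
    """Hypothetical product feature wrapped around the model."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarise customer support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(summarise_ticket(
    "My order arrived damaged and nobody has replied to my three emails."
))
```

A few lines like these are all it takes to turn the model into a sellable feature of someone else’s product.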

All this, by itself, does not make the case for a possible tragedy: AI + interface just equals a new tool, albeit a transformational one, for humans to play with. ChatGPT’s probabilistic nature (and that of AI language models in general) is not an issue per se; rather, it becomes an issue when the model is used to process and manipulate information that needs to be correct. In other words, it is problematic in activities with high-stakes loss functions, where we cannot tolerate statistical error.
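One way to see the point about loss functions: the expected cost of using the tool is roughly the error probability times the cost of an error, so the same error rate can be negligible or catastrophic depending on the stakes. A toy calculation (all numbers are made up for illustration):

```python
# A toy illustration (all numbers are made up) of why the same error
# rate can be tolerable or intolerable depending on the loss function.
error_rate = 0.02          # assume the model is wrong 2% of the time

queries_per_day = 10_000
cost_low_stakes = 0.10     # e.g. a mildly wrong restaurant suggestion
cost_high_stakes = 10_000  # e.g. a wrong answer about a medication dose

print("Expected daily loss, low stakes :",
      queries_per_day * error_rate * cost_low_stakes)
print("Expected daily loss, high stakes:",
      queries_per_day * error_rate * cost_high_stakes)
```

The error rate is identical in both cases; only the loss function turns a curiosity into a liability.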

What risks will users accept for a seamless experience?

The issues above arguably refer to the supply side of the technology. Whether a tragedy unfolds also depends on the demand side of the story: do users prefer an ok-ish search experience (i.e. one that might require additional follow-on searches, back and forth to the search bar, link clicking, opening new tabs, etc.) with a low risk of mistakes, or a seamless experience with a high(er) risk of mistakes? I dare to make a prediction: since humans are very often attracted to more natural, efficient and integrated forms of communication (think of emojis), the more seamless but higher-risk AI interface is more likely to be adopted than clunkier, low-risk options. ChatGPT and its future iterations thus have a pretty good chance of thriving.

ChatGPT offers a novel way for users to play around with content on the Web. It is not only a tool, it is an enabling tool: it lowers the cost of engaging in interactions with information. Use cases have bloomed, including some creative and some malicious ones. However, due to its probabilistic nature, it is prone to mistakes, and so is currently unfit to replace more structured ways of searching for information.

But this does not mean it won’t. We know from the history of technology that the existence of superior options often does not stop inferior alternatives from thriving. If ChatGPT and similar solutions do succeed in becoming a new de facto standard for interacting with the information space we call the Internet, it will mean that usability and good interfaces have lured us into a lower-level equilibrium.

As it has been very aptly described online, what ChatGPT outputs is essentially automated mansplaining, or “mansplaining-as-a-service”: generic and lengthy “lectures” on any topic, produced with the utmost confidence and ignoring the level of expertise of the prompter.

This does not seem to be a problem – rather an issue of damage control – for the companies that integrate the solution into their existing services (read: Bing). However, damage that is only collateral for AI companies can be severe for society. Just think of the exploitation of precarious digital labour contracted to label and moderate the text corpora needed to keep the AI models running, or of the spread of fake news, now with a more subtle, tricky twist: linear combinations of correct and incorrect pieces of information (also labelled hallucinations) offered by AI chatbots in natural-feeling, interaction-inducing, human-like dialogues.

Is it time to hit pause?

A possible way to address the looming tragedy is to hold back distribution and commercialisation while research and scientific advances proceed. However, this won’t change the probabilistic nature of AI language models. Another alternative is to put emphasis on LLMs that are open source and produced bottom-up by communities of developers (who are more diverse than tech giants).

For these types of alternative “business” models of AI production to succeed, the magical mist around AI solutions needs to dissipate quickly, in order to give all of us a clear view of the incentives and constraints, as well as the market and non-market forces, at work. If there is a domain where we should welcome probabilistic outcomes, it is the unfolding of the future: the tragedy of AI systems following the farce may well be realised, but ultimately it is up to us.
