Artificial intelligence (AI) has been surrounded by hype ever since important advances were made in machine learning, huge data sets became available (‘big data’), and computing power increased sharply. Machine-learning AI systems are designed for tasks that involve pattern recognition, optimization, or trend extrapolation. Well-known applications include image recognition, language translation, and behavioral prediction. They have been used in application domains such as medical diagnosis (e.g., in radiology), financial transactions (e.g., ‘fintech’, assessment of creditworthiness), law (e.g., prediction of recidivism), meteorology (e.g., weather forecasting), public services (e.g., detection of possible fraud, eligibility for support by social welfare schemes), and security (e.g., surveillance, with the Chinese social credit system perhaps being the most widely implemented example).
In November 2022, when the first version of ChatGPT was publicly released, AI entered a novel phase: the broad adoption of large language models (LLMs). LLMs differ from machine-learning AI in their use of even larger data sets, their reliance on even faster computing power, and their integration of pattern recognition, optimization, and trend extrapolation. In terms of functionality, the most important difference is the ability of LLMs to produce novel output; hence the term ‘generative AI’.
AI systems, whether of the machine-learning or the generative kind, have found many useful applications, but their use is also associated with numerous scandals and undesired outcomes. Anybody who has kept themselves informed will be able to list several examples.
My fascination with AI stems from the question of why so many people are so uncritically enthusiastic about it – viewing it with rose-tinted glasses, emphasizing all the benefits and future possibilities – despite the numerous scandals and undesired outcomes. How come? Some would say that undesired outcomes are largely due to the technology still suffering from ‘teething problems’ that will be cured as the technology is further developed (better data, fine-tuning of the algorithms). Others claim that part of the answer lies in attributing scandals and undesired outcomes to (malicious) intent – think of the Cambridge Analytica scandal – or to excessive commercial interest (‘greed’) – think of the endless scrolling on timelines and newsfeeds in social media.
I believe that these accounts are only part of an explanation, and that they are perhaps not even the most important part. A major factor is that many people attribute sublime qualities to the technology: they are amazed and fascinated by its incredible and unimaginable technical prowess – and perhaps they also find its human-like-but-not-really-human abilities a bit uncanny. These sentiments are fed, I believe, by the language we use when talking about AI. Because language frames understanding, it matters in how people relate to AI: more often than not, they trust the relevance, validity, and reliability of the output of AI systems. Let me specify.
We speak of artificial ⌈intelligence⌉, machine ⌈learning⌉, image ⌈recognition⌉, algorithmic ⌈decision making⌉, Google’s proficiency in ⌈translating⌉ from one language into another, Amazon ⌈recommending⌉ what you might like next, et cetera, et cetera. [I borrow the use of corner quotes from Brian Cantwell Smith in The Promise of Artificial Intelligence.] In doing so, we anthropomorphize AI systems by projecting our own understandings of intelligence, recognition, decision making, and so on onto the actual abilities and workings of these systems. For example, we implicitly equate the ⌈recognition⌉ of an image by an AI system with how we recognize an image, but the system doesn’t ⌈see⌉ a cat (or a dog); it processes a pattern of pixel values of varying intensity and color. We tend not to recognize that the semantic meanings of ⌈recognition⌉ and recognition are different. But they are different. Full stop. I believe that the tendency to interpret ⌈learning⌉ as learning, for example, is a major factor in the many undesired outcomes when we rely on AI systems to inform our doings and dealings.
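To make the contrast concrete, here is a deliberately toy sketch – not the code of any real system – of what ⌈recognition⌉ amounts to computationally: an array of pixel values is pushed through arithmetic with made-up ‘learned’ weights, and the largest resulting number selects a label. Nowhere in this process does a cat appear.

```python
import numpy as np

# A toy 'image recognizer': to the machine, the image is nothing but numbers.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))                       # 32x32 pixels, three color channels

labels = ["cat", "dog"]
weights = rng.normal(size=(image.size, len(labels)))  # stand-in for learned parameters

scores = image.reshape(-1) @ weights                  # pixel pattern -> two scores
prediction = labels[int(np.argmax(scores))]

# The output is the word 'cat' or 'dog', but all that has happened is arithmetic
# on pixel values; the system has not <seen> anything.
print(prediction)
```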
We seem to forget that AI systems run on algorithms. They rely on mathematics, calculus, statistics, and Boolean logic in their processing of data. But “every schoolboy knows,” to quote Gregory Bateson in Mind and Nature, “logic is a poor model of cause and effect.” Likewise, statistical correlation, or co-occurrence, of variables is not a sufficient condition for causality. Many decisions are ‘tough calls’ in unique situations; they require judgment in weighing multiple, often conflicting values and interests. AI systems are good at processing data, yet incapable of judgment. Mistaking the former for the latter can be a source of serious problems because it confuses ⌈intelligence⌉ and intelligence.
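A small, made-up illustration of why co-occurrence falls short of causality: two invented series that merely share an upward trend over time come out almost perfectly correlated, even though neither has anything to do with the other.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2020)

# Two invented series that both happen to grow over time, for unrelated reasons.
ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 2, years.size)
reported_bugs   =  40 + 3 * (years - 2000) + rng.normal(0, 2, years.size)

r = np.corrcoef(ice_cream_sales, reported_bugs)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1.0, yet neither causes the other
```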
Things get potentially even worse when we take data as unproblematic, direct representations of phenomena in the world in which we live. For one, data are always from the past, and therefore the practical value of extrapolating trends, patterns, and optimizations depends on a belief in continuity between the domain of (historical) data and the domain of (actual) concern. Furthermore, data are not ‘given’, as the etymological root of the word in the Latin dare (to give) might suggest; they are the result of choice, selection, and construction. We need to question, therefore, not only whether data are ‘complete’, ‘accurate’, and ‘representative’ in a statistical sense, but also how they are infused with values and whether they are reasonable proxies of what is of interest or concern to us. Vilém Flusser (in Towards a Philosophy of Photography) speaks of ‘technical images’ to point out how computers work with data constructions whose infusion with values goes unacknowledged and whose representativeness as proxies is simply assumed. With this image, we can say that AI systems process technical images into technical images, and that their relationships with the world in which we live are uncertain. For that reason, these relationships must be examined and questioned instead of being taken for granted.

For example, ChatGPT does not, and cannot, ⌈hallucinate⌉. Or perhaps it does, but then all it does is ⌈hallucinate⌉. Relying on highly sophisticated algorithms and incredible amounts of data, ChatGPT just generates strings of words based on statistical patterns in word sequences – it is a “stochastic parrot”, in the words of Emily Bender and colleagues – but it does not have any sense of the meaning of the word strings. We attribute meaning. We are responsible for the attribution of meaning.
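The ‘stochastic parrot’ point can be made tangible with an extremely simplified sketch of next-word generation – a toy bigram model, orders of magnitude cruder than what ChatGPT relies on, but illustrating the same principle: words follow words because they co-occurred in the training text, and meaning never enters into it.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the 'training' text.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly sampling a statistically plausible next word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    candidates = follows[word] or corpus   # fall back if the word never had a successor
    word = random.choice(candidates)
    output.append(word)

# The result can read fluently, yet nothing here 'knows' what a cat or a mat is.
print(" ".join(output))
```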
Like any other computing system, AI systems “don’t give a damn” about the world in which we live (the quote is attributed to John Haugeland). And when we forget this basic fact, we can no longer even contemplate the question “what humans lose when we let AI decide.”
Frank den Hond
Ehrnrooth Professor of Management and Organization
This text is based on Frank den Hond’s previous work published in MIT Sloan Management Review (reprint 63307), Academy of Management Learning and Education (doi: 10.5465/amle.2020.0287) and Academy of Management Review (doi: 10.5465/amr.2018.0181).