AI is not a single issue or a single topic. The term refers both to the modern fantasy of 'replacing humans with machines' and to the scientific and technological status quo: statistics and categorisation.
These statistical systems are also called classifiers. What they do is distribute selected input information into pre-set 'output' categories. For example, suppose we are trying to determine whether a shape in an image (a mass of pixels) is a person, a dog, a laptop, a truck, etc. The classifier attaches a level of probability to each category: an 88% probability it's a dog, 22% that it's a truck, and so on. In short, classifiers sort pre-selected data into pre-determined categories on the basis of probability, and humans remain fully involved at every stage.
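As a minimal sketch of this idea (the categories, scores and softmax normalisation here are illustrative assumptions, not any particular product's implementation), a classifier can turn raw model scores into probabilities over a fixed list of categories:

```python
import math

# Pre-determined output categories, fixed in advance by humans
CATEGORIES = ["person", "dog", "laptop", "truck"]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    # Subtracting the max score keeps the exponentials numerically stable
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might produce for one image
raw_scores = [0.5, 2.0, -1.0, 0.1]

probs = softmax(raw_scores)
for label, p in zip(CATEGORIES, probs):
    print(f"{label}: {p:.0%}")
```

The point of the sketch is that nothing here "thinks": the categories are chosen beforehand, and the output is simply a probability distribution over them.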
Its recent implementation across industry and supply chain sectors, at varying degrees of complexity, has been massive, with a potential $1.3 trillion in annual value creation. With AI, though, you never really know whether you're talking about robotics, virtual reality, optimised trajectories, automation, algorithms for games (Go, chess), etc. The term itself remains ambiguous.
But AI is not a recent phenomenon. The Markov chains we often rely on appeared in 1906; 'neural networks' (a dreadful misnomer: they have nothing to do with physiological neurons, which were still so imperfectly understood 55 years ago) experienced peaks of scientific interest in the 1960s and again in the 1980s, and have since resurfaced, driven by the business world. Marvin Minsky, founder of the Artificial Intelligence Laboratory at MIT, eventually declared that 'AI has been brain dead since the 70s', a verdict later echoed by Luc Julia [who created Siri on iOS/Mac OS], who stated in 2017: 'AI does not exist'.
Clarification is essential in the face of both a casual (but enthusiastic and positive) flurry of interest and intransigent mistrust, and a useful first step is to distinguish the engineering sciences from the cognitive sciences. The former attempt to formulate and carry out, using computational and/or mechanical tools, actions traditionally performed by humans (known as 'weak AI'). The latter examine the nature of human cognition, focusing in particular on knowledge and memory.
In the history of scientific ideas, the engineering sciences and the cognitive sciences quickly became separated. The first school, or wave, of cognitive science was mistaken about the 'computational' nature of cognition, and a number of alternative models have emerged since the 1960s. The most recent of these, the embodied, distributed, situated and externalist models, acknowledge the interplay between body, environment and surrounding technical objects, and in general reject the central importance of the brain.
In this context, it is more accurate to recognise both the operational power of the tool (erroneously) called 'AI' and the legitimacy of cognitive science (which is not entirely naturalistic) in describing human thought and behaviour. In short, we should continue to capitalise on the power of statistics for industrial optimisation, while separating it from the ethical debates that often stem from a poor understanding of its nature. There is no credible form of AI at the moment, and no increase in (computing) power will change this; a change of kind, however, might. In the meantime, let us read AI not as Artificial Intelligence (a machine capable of thought, like a brain) but as Augmented (human) Intelligence, i.e. human intelligence augmented by statistical tools.
Read all “Expert Opinion” articles on the SprintProject blog