Groucho Marxism

Questions and answers on socialism, Marxism, and related topics

Last summer I entered a competition run by the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The competition involved forecasting the incidence of Lyme disease, a bacterial infection transmitted to humans via tick bites. The Alan Turing Institute provided data on Lyme disease incidence for a number of years and then asked participants to forecast the incidence for the following two years. These forecasts would be judged on how close they came to the true figures for those two years, which only the institute had access to. I knew nothing about Lyme disease, but I had some spare time on my hands, so I thought I would give it a go.

In the end, I submitted a forecast using a technique called linear regression, an extremely simple method that I first learned about when I was doing A-level statistics. I didn’t hear anything back for six months and had all but forgotten about the competition when, a couple of weeks ago, I received a phone call from the competition organizer informing me that I had won! I was quite surprised by this news, particularly given the simplicity of my approach. My initial thought was that I must have been the only entrant and so had won by default; but apparently there were around 20 other participants. The competition organizer invited me to visit the Alan Turing Institute two days later to receive my prize (a bottle of champagne and a couple of books).
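
For anyone curious, here is a rough sketch in Python of what such a forecast might look like. To be clear, this is not my actual entry: the incidence figures below are invented for illustration, and scikit-learn’s LinearRegression simply stands in for the basic idea of fitting a straight line through the historical data and extending it forward two years.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical annual incidence figures (cases per 100,000);
    # the competition's actual data is not reproduced here.
    years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021]).reshape(-1, 1)
    incidence = np.array([4.1, 4.6, 5.0, 5.3, 5.9, 6.2, 6.8])

    # Fit a straight line (incidence = a * year + b) to the historical data.
    model = LinearRegression().fit(years, incidence)

    # Extrapolate the fitted trend to the two years being forecast.
    future_years = np.array([[2022], [2023]])
    forecast = model.predict(future_years)
    print(dict(zip(future_years.ravel(), forecast.round(2))))

That really is all there is to it: estimate the trend, then read off where the line lands two years later.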

On the train home I found myself wondering how I had managed to win this competition using such a simple method. I must have been up against people using very advanced data science and artificial intelligence techniques, yet I had somehow beaten them with a technique I learned in school! One explanation is that I just got lucky. That is of course possible, but there is a better explanation. I think what happened is that I chose the most appropriate technique for the task at hand, which was itself actually quite simple. There is no need to use an advanced approach to solve a simple problem; doing so is a classic case of using a sledgehammer to crack a nut.

This got me wondering about artificial intelligence more generally, and whether we as a society might be using it to solve problems that don’t actually require it. Is it possible that we don’t really need it at all? To answer this question, we must first define what we mean by ‘artificial intelligence’. Broadly speaking, artificial intelligence is the simulation of human intelligence processes by computer systems, enabling machines to learn, reason, solve problems, perceive environments, and understand language. It can be divided into three types: ‘narrow AI’ (also known as ‘weak AI’); ‘generative AI’; and ‘general AI’ (also referred to as ‘artificial general intelligence’, or AGI). Let us take these in turn.

Narrow AI is designed to excel at specific tasks, and powers things like voice assistants (e.g. Siri, Alexa), recommendation engines (e.g. Netflix), and image recognition. Such artificial intelligence has been around for some time now and is undoubtedly useful in certain contexts. Generative AI is a subset of artificial intelligence that creates new content, including text (e.g. ChatGPT), images, and code. This is relatively new technology, and we as a society are still in the process of figuring out how to make the best use of it. General AI, on the other hand, is a theoretical, future form of artificial intelligence that would possess human-level intelligence across all domains. As I understand it, we are still some way off developing such technology.

There is reason to be skeptical about the usefulness of all three types of AI. Although (as already noted) narrow AI is undoubtedly useful in certain contexts, these contexts usually involve trivial tasks that could easily be performed by humans, such as deciding what to have for dinner or what to watch on TV. As the forecasting competition demonstrated, when performing a more complicated analytical task, you are better off sticking to a tried-and-trusted analytical technique. Generative AI is also useful in certain contexts, but again these usually involve fairly trivial tasks. There is a widespread misconception that generative AI creates genuinely new content, when all it is really doing is summarizing and collating information originally generated by human beings.

That leaves general AI, which obviously isn’t very useful as it hasn’t been invented yet. If such an AI could be developed then it would potentially be able to solve problems that humans can’t, which would set it apart from narrow and generative AI. Personally, though, I am skeptical that such technology will be developed any time soon. I think the gap between an AI which can summarize information generated by humans and an AI which can understand the world on a level similar to humans is a lot larger than many people realize. Generative AI doesn’t understand the world any more than a pocket calculator does. The idea that such AI can eventually be scaled up to general AI seems to me to be at best hopelessly naive and at worst deliberately misleading.
