
There is a debate amongst those in the AI field about intelligence. The discussion often focuses on how well an AI performs on a benchmark rather than whether it can do something useful. Whilst there may be financial incentives for these sorts of pronouncements, they have little importance to the average person. An LLM that can pass a medical examination is only of academic interest if it cannot interact with patients.
A common fallacy is to view intelligence along a single dimension: a calculator or chess computer is far better at arithmetic or chess than any human. This is the equivalent of the human savant who can perform a single task to an extraordinary level but lacks general intelligence. Most tasks require a combination of abilities to be performed to a high standard, so this approach has inherent limitations.
Another fallacy is to see a group of abilities, such as those shown by LLMs, as demonstrating general intelligence. To understand why this is a fallacy, we must consider Howard Gardner’s Theory of Multiple Intelligences. Gardner described human intelligence as seven areas of capability. For an AI to have human-level general intelligence, it is not enough to have just one or two of these capabilities.
Gardner’s insight was that intelligence is not a single, fixed ability but a collection of distinct, varied capabilities. Initially, Gardner identified seven key areas: linguistic, logical-mathematical, spatial, bodily-kinaesthetic, musical, interpersonal and intrapersonal. Complete intelligence is likely to be much broader than this list of seven, but it provides a useful comparison for AI.
Descriptions such as Artificial Narrow Intelligence (ANI), Artificial Superintelligence (ASI) and Artificial General Intelligence (AGI) have been used to explore this area. These terms describe the shape of an AI’s abilities but not their extent. For instance, the singularity has been described as the moment when AI’s ability to improve AI exceeds that of humans, leading to exponential improvement. Plausibly this will occur when AI can code better than humans, which appears imminent.
AI has already exceeded human abilities in specific areas (chess, Go, protein folding). It is making progress in mastering whole categories of intelligence (Verbal-Linguistic and Logical-Mathematical). To me, however, superintelligence is when an AI has a capability that can effect meaningful change through thought. An LLM that can change political thinking or code a better AI meets my definition of superintelligence.
Some thinkers will argue that I have set a very low bar: humans can already persuade and code. My argument is that whilst many people could add to civilisation’s influence, few actually do. Even politicians and scientists can work all their lives without causing change; in fact, the majority do not make any change. This is not a criticism of the extraordinary work that these people do; it is a recognition of the immense challenge of finding something new.
Other thinkers would argue that superintelligence does not have to lead to any real change. There is logic to this argument: it is likely that Einstein was not the first to discover relativity. He is likely to have built on others’ ideas, and his discoveries might not have arisen from any superintelligence he possessed. My answer is that a critical part of intelligence is the capacity to affect the environment by the power of thought.
Superintelligence is arguably the most concerning type of intelligence because it can give one side an advantage. A company or country with access to the ability to solve real-world problems will have the opportunity to exploit those advantages. Whether through a new type of weapon system, a new product or a new way of influencing people, they may be able to move civilisation in directions that are challenging.
The concept of General Intelligence appears to me to be a confusion of two separate ideas. The first is the ability to generalise from one experience to another (transfer learning); for instance, an AI that plays Go better because it has learned chess. I call this Generalisable Intelligence, as each instance of learning allows the AI to become better at the next task. This is part of how humans use their intelligence and may be essential for progress in AI.
The other concept is an AI that has abilities in more than one narrow area. I call this Substantial Intelligence because, within limits, it behaves like an intelligent being. An LLM that had mastered Verbal-Linguistic intelligence would have substantial but not true general intelligence. Thinkers describe this type of AI as trapped in a box because it responds to prompts rather than interacting with the world.
LLMs can sense the world via prompts and interact with the world through their responses. Leaving aside the limitations of the current models, this means they have the capability to cause change. Many thinkers would argue that LLMs are already causing change, from AI slop to solving mathematical problems. However, even if these models were perfect, they would have intrinsic limitations due to their inputs and outputs.
As discussed above, AI does not require general intelligence to cause significant change to civilisation. To compete with humans for the title of most intelligent entity, AI needs to master Gardner’s seven intelligences. The following descriptions are from Gemini, as the area is controversial and lacks empirical evidence. I would add spiritual (moral) intelligence, rather than naturalist or existential, as the eighth intelligence.
This description is intended to help AI experts understand the breadth and depth of general intelligence. To drive a car or work as a plumber, an AI must master both Visual-Spatial and Bodily-Kinaesthetic intelligence. Effective teachers and doctors require Intrapersonal as well as Interpersonal intelligence. The current gap between AI capabilities and those required to replace most professionals is large.
Human intelligence is one description of General Intelligence but is likely to be an incomplete description of Complete Intelligence. As technology masters each area of Substantial Intelligence, it will approach and overtake our intelligence. Complete Intelligence involves finding new areas of intelligence beyond Gardner’s description. Many of these new areas will feel alien and will cause dramatic changes to our world.
The current focus on general intelligence may distract from the potential of superintelligence to change the world. The ability of diffusion-based AI weather models to predict the future is likely to expand to other areas. The long-term impact of solving protein folding on medicine is likely to be substantial. AI vision has the potential to create autonomous weapon systems. Each superintelligent AI will require a tailored approach to reduce the risks and increase the benefits.
Many workers need generalisable intelligence to deal with novel situations, which will limit the ability of AI to replace jobs in the near future. Many workplaces, such as legal and managerial settings, are likely to change substantially. Productivity gains will depend on the ability to collaborate safely with imperfect AI systems. This means that traditional skills may carry less weight when hiring the next generation of apprentices.
Substantial Intelligence will require further discoveries but has the potential to replace coders in the near future. At present, AI is good at checking a human’s work and suggesting improvements but struggles to produce good-quality work from a prompt. Solutions to these limitations are likely to come from other capabilities but will rely heavily on the ability of humans to collaborate.
I have defined intelligence as the ability to change the world and have equated General Intelligence with human intelligence. Intelligence is a social construct, and the approach in this article provides a robust basis for analysing and understanding it. Current AI can approach human levels of intelligence in certain areas and exceed them in a few, but it is far behind in most.
I recommend that those in AI focus their research on these other capabilities, as they will be required if AI systems are to take on more complex tasks such as teaching. We need to understand how to break the capabilities down into parts that machines can use to calculate. Complete Intelligence, rather than General Intelligence, is the eventual aim for AI. Finding other capabilities arguably has the greatest potential for new discoveries.
Dr Mark Burgin graduated from Oxford University in 1987 and studied with the Open University on two occasions in the 1990s. He has also studied for the CPE (law) and Medical Ethics, and learned Portuguese while living in Brazil. He has written many articles and books on Personal Injury and the LLMS (your PGCME), and has published Disability Analysis: A Practical Guide and Psychological Keys: Unlocking the Mind’s Mechanisms. His next book is on psychological techniques.
March 2026
