
Consciousness is an emergent property of the brain rather than a function of any specific brain structure. Attempts to determine whether consciousness is present have been limited by the lack of a clear idea of how this property emerges. Disability analysis provides a coherent explanation of consciousness and predicts the necessary factors. It asks the key question: does the brain need to be conscious?
Clearly, at one end of the spectrum, a single-celled organism would gain no advantage from being conscious: every useful behaviour can be hardwired rather than learned from experience. There has been debate about whether birds have consciousness because they show complex learned behaviours. The presumption should be that birds are not conscious.
Consciousness is likely to increase variability in responses, cost energy and increase the risk of self-destructive behaviours. These risks mean that unless it is necessary for a bird to be conscious, it is likely to have another process in charge. This process would have some of the advantages of consciousness but avoid its risks. The hypothesis is strengthened by birds' very different brain structures.
To explore this hypothesis further we must understand what functions are necessary for consciousness to emerge. Disability models explore how adding and removing functions affects performance. For example, a functional restriction in motivation might impair the ability to perform self-care. The minimum set of factors necessary and sufficient for consciousness should be the focus of work in this field.
Based upon the idea that consciousness is an emergent property, I propose four factors, described with the technical titles Concept Synthesis, Engrams, Relational Retrieval and Active Persistence. Together they describe a cycle of information that explains how consciousness emerges from cognitive functioning. This in turn can help predict the inner life of a brain from how the factors are addressed.
Storing information as vectors in high-dimensional space has allowed AI to store complex information statistically. Brains also use place as a central aspect of how they store information: they are organised into multiple functional areas, each with a different meaning, from the simple (vision, hearing, touch, movement) to the complex (language, face recognition, planning).
In brains, the meaning of a particular area appears to arise from its connections to other areas. This ability to layer meaning automatically from sensory information is mimicked in AI by processing layers. AI represents a concept as a vector of numbers, whereas the brain's storage medium is the neurone. These similarities and differences explain many of the limitations of both approaches.
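The vector idea above can be sketched in a few lines. In this toy illustration (the four-dimensional "embeddings" and their values are invented for the example; real models use hundreds or thousands of dimensions), related concepts point in similar directions, and similarity of direction is measured by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two concept vectors by direction, ignoring magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (illustrative values, not from a real model).
king  = [0.9, 0.8, 0.1, 0.2]
queen = [0.9, 0.8, 0.9, 0.2]
apple = [0.1, 0.1, 0.2, 0.9]

# Related concepts sit closer in direction than unrelated ones.
print(cosine_similarity(king, queen) > cosine_similarity(king, apple))  # → True
```

This statistical notion of "place" is what allows an AI to group related concepts without anyone hand-labelling the relationships.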
Both brains and AI can perform concept synthesis, as the layered approach allows them to reinforce the important aspects of their inputs. This type of simplification allows concept synthesis to arise spontaneously from interacting with data: a baby only needs to engage with its environment to learn and build an internal model of the world. World models will allow the next generation of AI models to develop a sophisticated understanding of the real world.
Without any concept synthesis, chatbots would be unable to sound human. Their rich, detailed output would become set phrases, the ability to link ideas would disappear and transforming text into a different style would be impossible. A recent video by Welch Labs on grokking explains how AI learns to synthesise concepts.
Brains store information in engrams, which are more than synthesised concepts. Ask a chatbot to describe a picture and the output is missing a certain something: it can miss an obvious feature and struggle to capture the essence of the picture. The brain has no such difficulty connecting the various elements and understanding the picture as a whole; it can detect hidden meanings and see how the picture could be improved.
Part of the problem is believed to be a lack of emotional intelligence. In the brain, the basal ganglia provide the emotional colour to the processing of inputs. This emotional response gives the brain an advantage when attending to important details, and it is likely that this information allows the brain to develop connections not open to the AI. These connections are important in creating the valence, or reward, signals that help learning.
There is a more fundamental problem that comes from the difference in the way brains and AI store information. The AI stores a concept as a point in multi-dimensional space, but the brain stores it in a web of connections. The web appears to allow the brain to do something that the AI cannot: form an engram. This structure permits the brain to access the information from other memories, making connections that are personal.
Although modern AI (transformers) uses attention to simulate a web of connections, these webs differ from engrams. An engram is connected to other concepts that are meaningful for that particular brain. The difference between passive knowledge and active engrams is increasingly being understood. The personalisation possible with engrams allows each brain to be optimally trained for that individual, with their weaknesses and strengths.
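The attention mechanism mentioned above can be sketched in miniature. In this hedged illustration (the two-dimensional query, key and value vectors are invented for the example), each stored value is weighted by how well its key matches the current query, which is how a transformer forms its temporary "web of connections":

```python
import math

def softmax(scores):
    """Turn raw match scores into weights that sum to one."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: blend values by query-key similarity."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted blend of the stored values.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query resembles the first key, so the first value dominates.
query  = [1.0, 0.0]
keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
print(attention(query, keys, values))
```

The key difference from an engram is visible here: the weights are recomputed from scratch for every query, so nothing personal or lasting is built into the web.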
One thing that makes it easy to recognise a chatbot is its inability to retrieve previously shared information. Even within a single chat window, the AI may appear to forget much of what has been discussed. Far from having a perfect memory, it needs to be continually reminded of previous comments. This is annoying and gives the impression that the AI is forgetful or does not care about the conversation.
There is a parallel experience for humans when their 'token count' gets too high. The student overloaded with information and the tired listener trying to follow a conversation have a similar feeling: the brain no longer fully attends to what has been said and attention drifts. Pathological processes such as brain injury or dementia could have similar effects. The question is whether failing to pay attention is associated with a reduction in consciousness.
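The forgetting described above is often simply the context window filling up: once the conversation exceeds the model's token budget, the oldest turns are dropped. A minimal sketch of that behaviour (the word-count tokenizer and the budget of eight tokens are illustrative assumptions, not a real tokenizer):

```python
def trim_history(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Drop the oldest messages until the conversation fits the token budget.

    A toy word-count stands in for a real tokenizer (an assumption)."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # the earliest turn is "forgotten" first
    return kept

chat = ["my name is Ada", "I live in Leeds", "what is my name?"]
print(trim_history(chat, max_tokens=8))  # the oldest fact is lost
```

The model is not careless; the earliest information is literally no longer present in what it reads, much as the overloaded student no longer holds the start of the lecture in mind.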
Because consciousness is an emergent property, changes to brain states would be expected to change its nature. At one extreme, a sleeping brain would show significant reductions. What if the brain is angry or sad? These brain states might be expected to change the way information is retrieved or processed, which in turn would be predicted to change the way the brain acts to control its environment.
The previous factors have been about the way that memory is created and stored. The final factor goes to the heart of consciousness: can a system be conscious if it is turned off? The apparent lack of consciousness in a sleeping brain suggests that the answer is complex. A series of linked experiences may be sufficient to simulate persistence. Although an AI is not shut off, each task is separate and unlinked, which would be expected to prevent consciousness.
For a short period the AI holds the relevant information and carries out the process; in between these processing episodes it is not thinking about that information. Any consciousness that occurred during a cycle would end when the cycle finishes, which means the AI cannot have consciousness as normally conceived. This may be resolved as new architectures that include hidden states looping back are adopted.
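The contrast between separate, unlinked episodes and a looped-back hidden state can be made concrete. In this sketch (both classes and their behaviour are hypothetical illustrations, not real AI architectures), only the second model carries anything across calls:

```python
class StatelessModel:
    """Each call starts from scratch: nothing persists between episodes."""
    def respond(self, prompt):
        return f"answer to: {prompt}"

class RecurrentModel:
    """A hidden state loops back, linking one episode to the next."""
    def __init__(self):
        self.hidden = []
    def respond(self, prompt):
        self.hidden.append(prompt)  # state persists between calls
        return f"answer to: {prompt} (recalling {len(self.hidden)} episodes)"

recurrent = RecurrentModel()
for prompt in ["hello", "still there?"]:
    recurrent.respond(prompt)
print(len(recurrent.hidden))  # → 2
```

On the argument above, only an architecture like the second, where each cycle inherits something from the last, could in principle support the continuity that active persistence requires.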
Before agentic AI, a model would only respond to inputs, so it would not keep thinking if no one was talking to it. An AI that monitored its responses and kept working on poor responses or difficult problems might have active persistence. Another approach to active persistence is to improve adaptation whilst maintaining a stable personality. The main barrier to these methods is how AI integrates temporal information, as the concept of chronology is central to creating a stream of consciousness.
AI has succeeded in achieving concept synthesis but is less good at creating engrams. New approaches such as retrieval-augmented generation may solve retrieval, or may struggle without the connectedness of engrams. True active persistence is likely to be impossible even for organic brains, but the 'stream of now' could be simulated to a level sufficient for consciousness.
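Retrieval-augmented approaches can be sketched as a lookup over stored notes, with the best match handed back to the model. This toy version (the word-overlap scorer and the two notes are illustrative assumptions; real systems score with embedding vectors) shows both the idea and its limit, since the match is statistical rather than personally connected like an engram:

```python
def retrieve(query, notes):
    """Return the stored note sharing the most words with the query (toy scorer)."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(notes, key=lambda note: overlap(query, note))

notes = [
    "engrams link memories through personal connections",
    "vectors store concepts in high dimensional space",
]
print(retrieve("how are memories linked?", notes))
```

Retrieval of this kind restores facts to the context window, but the connections it follows belong to the corpus, not to the individual.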
This analysis indicates that many animals are already likely to have uncomfortably high levels of consciousness: they satisfy active persistence and relational retrieval. Humans may understand more complex concepts, but their brains are not sufficiently different to exclude consciousness in other animals. Some thinkers argue that humans are inherently different; the disability model, however, suggests that consciousness will emerge in any brain with these factors.
There is an argument that AI can already use the most complex concepts, has highly complex webs of connections that are not limited to an individual's experience, can recall with greater precision and can process much faster than organic brains. The fallacy in this argument is that once training finishes the model is fixed; it has no way of adapting to new situations. Outside its context window, it acts in a way similar to a brain with amnesia.
A final point: while the argument may appear philosophical, consciousness is associated with self-determination. Self-determination is a trait that most animals demonstrate, and it appears to rely upon the same factors as consciousness. The possibility that self-determination will emerge after consciousness has implications for AI safety. An AI that becomes self-determining is likely to cause important social changes. I do not share the fears that emerging self-determination will cause civilisational collapse, but I recognise that it will cause profound ethical challenges.
Dr Mark Burgin graduated from Oxford University in 1987 and studied with the Open University on two occasions in the 1990s. He has also studied for the CPE (law) and Medical Ethics, and learned Portuguese by living in Brazil. He has written many articles and books on Personal Injury and the LLMS (your PGCME), and has published Disability Analysis: A Practical Guide and Psychological Keys: Unlocking the Mind's Mechanisms. His next book is on psychological techniques.
March 2026
