How does truth emerge?

A common fallacy is to see the world as binary: true or false, rather than shades of grey. To understand why the concept of truth is important despite its limitations, we need to consider how it emerges. Data, whether in numbers, words or pictures, needs meaning to be useful. The emergence of meaning leads to other phenomena such as purpose and effectiveness. Truth emerges from our need to interact with the environment. Truth as a functional construct aligns with my perspective as a doctor and disability analyst.

At the most basic level we have learned to test sensory information for danger. The need to identify such dangers from minimal information explains some of the features of ‘truth’. Because the risk profile of danger is asymmetric, the startle response may be appropriate even when the probability is low, for instance in life-threatening situations. These responses can be hard-wired, such as the response to foul smells. AI struggles to reproduce this startle response, making it poorly suited to tasks such as medical diagnosis where an asymmetric risk profile is necessary.
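The asymmetry can be made concrete with a little decision theory. The sketch below (not from the article; the 50x cost ratio is an arbitrary assumption for illustration) computes the probability above which raising the alarm has the lower expected cost. With a miss fifty times worse than a false alarm, startling is rational at around a 2% probability of danger.

```python
def alarm_threshold(cost_miss, cost_false_alarm):
    """Probability of danger above which alarming has lower expected cost.

    Alarming is worthwhile when p * cost_miss > (1 - p) * cost_false_alarm,
    which rearranges to p > cost_false_alarm / (cost_false_alarm + cost_miss).
    """
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Assume missing a life-threatening danger is 50x worse than a false startle.
t = alarm_threshold(cost_miss=50.0, cost_false_alarm=1.0)
print(f"startle whenever p(danger) > {t:.3f}")  # roughly 0.020, i.e. about 2%
```

The same calculation explains why a diagnostic system should tolerate many false positives rather than miss one serious disease.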

There are cultural heuristics, such as the prohibition on eating pork, where the truth (that a pig’s pathogens are resistant to cooking) is unlikely to be the reason for the rule. In this group also belong instincts, or gut feelings about a situation. Cultural experience guides a person’s emotional understanding and can be more effective than guessing. AI hallucinations may arise from the tension between social alignment and scientific understanding.

Scientific truth can be counter-intuitive, leading to conflicts between instinct and prediction. The difference between proving a hypothesis and providing an explanation limits the ability of science to determine truth. An economist may explain why the stock market changed and may even be able to predict the changes. But the stock market is not mechanistic, so the changes cannot be calculated scientifically; instead they contain the truth of the traders’ consensus.

Truths emerge from the meaning of patterns

All three types of truth mentioned above (instinctive, cultural and scientific) emerge in a similar way. At first there is only data, but gradually, with experience, patterns emerge. These patterns are proto-truths: they appear to have meaning, but their significance is unclear. Experience and understanding come from interacting with the pattern, and this interaction gives rise to meaning.

AI currently cannot test meaning as a human can, except in specific domains such as code and mathematics. To an AI, any statistical pattern could have meaning, so this type of hallucination cannot be resolved simply by training on more data. AIs rely on being trained on data that includes both the pattern and its meaning. Without this information they would remain good at predicting but would fail to understand truth.
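The point that statistical patterns can look meaningful without being so is easy to demonstrate. The following minimal sketch (my illustration, using synthetic noise with fixed sizes chosen arbitrarily) generates 200 purely random series and finds the strongest pairwise correlation among them; with this many comparisons, a strikingly strong "pattern" almost always appears by chance.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)  # fixed seed for reproducibility
# 200 independent series of pure Gaussian noise, 20 points each
series = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(200)]

# Strongest correlation found across all ~20,000 pairs of noise series
best = max(abs(pearson(series[i], series[j]))
           for i in range(200) for j in range(i + 1, 200))
print(f"strongest correlation among pure noise: r = {best:.2f}")
```

A statistical learner that only sees the numbers cannot tell this "discovery" from a real one; the meaning has to come from outside the data.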

This hypothesis makes a prediction: meaning comes from the observer viewing and interacting with the world. The meaning that a rational agent derives from a pattern depends on its viewpoint. The meaning that an AI derives will be interwoven with its particular experiences and the situation it is in. This divergence between humans and AI may have implications for issues such as intent.

LLMs are based in the world of language, and their meaning rests on the relationships between words. LLMs contain a model of the world created from these word connections, which explains many of their strengths and weaknesses. The Symbol Grounding Problem is the lack of a direct relationship between words and the objects they refer to. This deficiency will need to be addressed if alignment of aims is desired.

The importance of origin, intent and provenance

In Greek tragedy, Cassandra was given the power to know the future by the god Apollo, who then cursed her never to be believed. Humans put more weight on information from an authority, and that authority will normally explain how it came to its conclusions. AI may show us patterns that we cannot believe or act upon because we do not understand where they came from.

Scientists have a good reputation for truth, but if they stray from facts to opinions they are less likely to be believed. If the explanation of the science is unclear, then reliance on the finding will fall. Unless we understand the process, we cannot weigh up the reliability of the discovery or whether it is likely to be refuted in further experiments. Blind faith in science is as likely to lead to fake news as blind faith in any other source of information.

AI struggles to be convincing for many reasons, not least its propensity to hallucinate. Its statements are often judged to lack ‘humanness’ or emotional intelligence. The people whose voices are believed are often engaging and persuasive; truth may be as much about how something is said as what is said. Someone who is passionate about a topic is more likely to have considered it deeply than someone without emotion.

People are suspicious of statements from sources, such as politicians, who may benefit from being believed. Where the source could suffer reputational damage as a result of making a false statement, people are more likely to believe what is said. Conspiracy theories work by identifying an inconsistency and offering an explanation involving mal-intent. Provenance is then weaponised to create an alternative belief by arguing that the origin of the statement should not be believed.

Other views of truth

Provenance does not guarantee truth, and often one’s internal moral compass or cultural norms are better indicators. If we were to create an AI to assess confidence, it would need to use a variety of techniques to get a holistic view. The AI would have to embrace ambiguity rather than try to resolve it. In science, two competing hypotheses may both represent accepted consensus, or the current theory may be known to be wrong.

Some aspects of human experience do not have a correct answer, not because different people believe different things, but because they are unknowable. The idea that you cannot know both the exact position and the momentum of a particle is described as uncertainty. Different sorts of uncertainty exist in human behaviour, and whilst they can be described using a probability distribution, they represent a fundamental limit on truth as meaning.

Truth can also be described in terms of its purpose: how are you going to use the pattern? The same pattern will have different meanings depending on how it is used. For example, you can claim that AI-generated art has already replaced the artist or that it will never replace the artist. You can argue that there is a problem of unemployment among artists or that there are success stories.

Karl Friston’s theory of active inference sees truth as a mechanism to reduce surprise. This is linked to an understanding of the brain as a prediction machine: the better the brain can predict the future, the easier it is to focus on inconsistencies, so a person startles only when necessary. Over-reliance on predictions, however, has implications for our perception of truth. Ignoring the places where prediction and reality conflict can lead to cognitive distortions.
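In this literature, "surprise" has a precise meaning: the negative log-probability of an observation under the predictor's model. The sketch below (my illustration; the events, probabilities and threshold are invented for the example) shows why a good prediction machine startles rarely: only observations its model considers very unlikely carry enough surprisal to cross the threshold.

```python
import math

def surprise(p_observation):
    """Shannon surprisal in bits: -log2(p). Rarer observations surprise more."""
    return -math.log2(p_observation)

# A predictive model's beliefs about what the next sound will be
# (hypothetical numbers, chosen for illustration).
predicted = {"footsteps": 0.70, "wind": 0.29, "growl": 0.01}
threshold = 4.0  # bits; only highly unexpected events trigger a startle

for event, p in predicted.items():
    s = surprise(p)
    flag = "-> startle!" if s > threshold else ""
    print(f"{event}: {s:.2f} bits {flag}")
```

Expected sounds carry under two bits of surprisal, while the 1%-probability growl carries over six and triggers the startle; refining the model lowers average surprisal, which is the sense in which prediction and truth-seeking coincide.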

Conclusions

Simone de Beauvoir argues that we are situated beings who project meaning from our viewpoint, so truth is ambiguous. Humans navigate this ambiguity by finding meaning in the pattern and considering its implications for them. Humans disagree not because they are irrational but because they have different viewpoints: they see different meanings and different outcomes.

To have insight into this process, AI must learn how to make sense of patterns from human perspectives. The AI would need feedback from many humans on how they see meanings in a pattern and the relevance of those meanings to their lives. Cultural understanding emerges from debate and individual opinions. Truth would emerge from public discussion of the new patterns discovered by AI.

The importance of these discussions may not be clear at first, because it takes time to realise their implications. An obscure pattern could reveal a novel truth that changes medicine, war or the political balance. Humans remain substantially better than AI at asking insightful questions and having new insights. Only a collaboration between humans and AI can identify risks and advances reliably.

Truth is not a test of certainty or reliability but an emergent property that comes from a process. The meaning of any pattern is a collective project by beings with different viewpoints, and AI can be part of that project. The AI can assist by identifying the probability distribution for the public to collapse into meaning. LLMs are already trained on much of human knowledge; in the future they will also hold the truth.


By Dr Mark Burgin BM BCh MA (oxon) MRCGP

Dr Mark Burgin graduated from Oxford University in 1987 and studied with the Open University on two occasions in the 1990s. He has also studied for the CPE (law) and Medical Ethics, and learned Portuguese while living in Brazil. He has written many articles and books on Personal Injury and the LLMS (your PGCME), and has published Disability Analysis: A Practical Guide and Psychological Keys: Unlocking the Mind’s Mechanisms.

May 2026
