The top ten questions for AI

1. Is "Explainable AI" (XAI) actually possible?

AI operates in high-dimensional mathematics that often defies human language. Asking an AI to explain itself in conversation has been shown to be unreliable. Scanning a model's "neural circuits" can reveal patterns in that high-dimensional mathematics, and as we develop the maths to understand these patterns we will be able to explain AI.

2. How do you create synthetic data?

We have hit the data wall of human text, and attempts to create synthetic data have led to model collapse. The answer appears to be direct interaction, both physical and communicative, so that the AI can learn in other domains. Moving through a world or speaking to people provides direct feedback and a different type of experience.

3. What is truth?

The idea of a single objective truth is a fallacy; there are likely to be many truths. A more important aspect of information is provenance, so that it is clear where information comes from. This allows information to be weighed based on its source rather than labelled as 'true' or 'fake'.

4. What is "safe use" of AI?

Unsupervised AI analysis is unsafe, as models often omit or misunderstand critical nuance. "Safe use" now means anchoring: ensuring prompts contain enough user-generated information to ground the answer. Professional workflows now require AI to look for its own omissions and errors as a standard safety check.

5. The "Productivity Paradox": Where is the GDP?

While AI is everywhere, global GDP has not yet spiked. We are in the "AI Adoption J-Curve", where the high cost of redesigning workflows slows adoption. The answer lies in moving from training models to pass tests to training them to perform real-world tasks with sufficient reliability to make financial sense.

6. How to create AI with a fraction of the energy?

The current "gigafactory" approach to data centres is creating unsustainable pressure on energy systems. The solution is analogue and neuromorphic systems, which can replicate AI solutions at a fraction of the energy cost. By building the solution into the hardware we can avoid the massive power drain of traditional GPUs.

7. Will "AI slop" take over and destroy human intelligence?

With the internet flooded with low-quality content, there is a need for real-time quality filtering. Social media and search engines cannot provide high-quality, human-verified content. A compromise is automatic filtering based on user preferences and user feedback. Legal requirements for "quality signalling" are now a major policy debate.

8. How to control AI when it becomes self-directive?

As AI agents begin to set their own goals, the current process of post-training is insufficient. Anthropic's "Constitutional AI" approach is becoming the industry standard, embedding a "bigger picture" understanding of human values so that agents do not pursue goals at the cost of ethics.

9. What are the dangers of persuasive AI?

AI is currently poor at changing human opinion because it lacks emotional intelligence, but even small improvements may create a massive power imbalance. The failure to effectively address fake news, particularly when social media is overloaded with content, suggests that filtering is not enough. Unless more resources are put into developing automated counter-persuasion systems that explain how a message is trying to manipulate, bad actors will continue to have an unfair advantage.

10. Can humans find new roles?

In most traditional jobs AI cannot replace humans, but it is still likely to be disruptive. Using AI will make many professionals more effective, which means that people in most professional roles will need to learn a new skill: collaboration with AI. They must work with the AI towards an optimal solution rather than taking over or allowing the AI to take over.

By Doctor Mark Burgin, BM BCh (Oxon) MRCGP

Dr Mark Burgin graduated from Oxford University in 1987 and studied with the Open University on two occasions in the 1990s. He has also studied for the CPE (law) and Medical Ethics, and learned Portuguese by living in Brazil. He has written many articles and books on Personal Injury and the LLMS (your PGCME), and has published Disability Analysis: A Practical Guide and Psychological Keys: Unlocking the Mind's Mechanisms.

May 2026
