Despite humanity’s efforts to tame it, uncertainty remains an unavoidable part of daily life. While many place their hopes in artificial intelligence, two new books suggest that instead of expecting technology to bring order to chaos, we may have to resign ourselves to navigating an increasingly uncertain world.
CAMBRIDGE – “Ah,” the English poet George Meredith lamented more than 150 years ago, “what a dusty answer gets the soul when hot for certainties in this our life!” It’s a sentiment that lies at the heart of two recent books that offer unique insights into the existential challenge of living in an age of heightened uncertainty.
In grappling with the complexities of navigating an increasingly uncertain world, David Spiegelhalter and Neil D. Lawrence, both of the University of Cambridge, draw heavily on their extensive professional experience within and beyond academia. Spiegelhalter, an emeritus professor of statistics, spent years with the UK Medical Research Council’s Biostatistics Unit, playing prominent roles in several high-profile public inquiries. Lawrence, a professor of machine learning, worked as a well-logging engineer on a North Sea drilling platform before completing his PhD, joining Amazon as director of machine learning, and ultimately returning to academia.
The authors’ backgrounds enrich their analyses of the myriad ways humanity has sought to measure and manage uncertainty, from frequentist approaches – most effective when risk can be physically defined – to Bayesian analysis, which incorporates subjective risk estimates. Despite differing in structure, style, and emphasis, their books converge on several key themes.
One common theme is the uniquely human capacity for trust and the pivotal role of reciprocal relationships. Spiegelhalter, for example, relies on philosopher Onora O’Neill’s concept of “intelligent transparency” to illustrate how policymakers can foster trust in the face of uncertainty. Similarly, Lawrence cites O’Neill’s 2002 BBC Reith Lectures, in which she argued that trust is not intrinsic to systems – whether legal, political, or social – but must be earned by the people operating within them.
Another major theme is the rise of generative artificial intelligence, especially large language models (LLMs), which have become the subject of intense and often hyperbolic debate since the launch of ChatGPT in late 2022. By processing vast reservoirs of human-created content to generate textual and visual responses, these systems are seemingly designed to inspire trust. But if, as O’Neill contends, processes divorced from human oversight are not inherently trustworthy, how can we trust machine-operated algorithms? This question, central to Lawrence’s book, also emerges in the final pages of Spiegelhalter’s.
Lastly, Spiegelhalter and Lawrence both invoke the famous thought experiment known as “Laplace’s demon,” which they view as a mirror image of the unpredictability that defines our world. In his 1814 book A Philosophical Essay on Probabilities, the French mathematician Pierre-Simon Laplace wrote:
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”
In many ways, Spiegelhalter and Lawrence’s books serve as a counterpoint to the deterministic universe envisioned by Laplace. While Laplace’s demon represents omniscience and perfect predictability, our reality is shaped by unavoidable uncertainties, aptly described by Lawrence as “Laplace’s gremlin.” Despite our best efforts to develop tools to mitigate the effects of blind chance, luck, and ignorance, these forces remain an inescapable part of everyday life.
Taming Uncertainty
Spiegelhalter’s The Art of Uncertainty offers a masterful account of humanity’s efforts to apply probability theory to prediction. Probabilities, he argues, are not objective, independent entities waiting to be discovered. Instead, our relationship with uncertainty is deeply personal, shaped by experience, resources, and other factors that influence how we perceive and approach a given problem. As he puts it, uncertainty is the “conscious awareness of ignorance.”
Consider, for example, the simple act of flipping a coin and covering it with your hand. This scenario, Spiegelhalter explains, involves two distinct types of uncertainty: aleatory, which reflects the inherent randomness of an event (in this case, a coin toss), and epistemic, which stems from a lack of knowledge about something that has already occurred (whether the coin has come up heads or tails).
Spiegelhalter uses the coin-toss example to illustrate how statistical analysis of past experiences can narrow the range of possible outcomes in less structured situations. While this process can be straightforward using frequentist methods, such as calculating the probability of a six-sided die landing on a specific number, it becomes much harder when outcomes are not clearly defined by physical constraints.
Exploring the concept of model uncertainty, Spiegelhalter points out that our models of the world, like maps, are useful abstractions but never complete representations of reality. While some may be more accurate than others, no model can ever be “true” in a metaphysical sense, especially when it comes to models that try to account for human behavior.
Game theory has added rigor to this analysis, recognizing that humans respond not only to each other’s actions but also to their expectations of those actions. But, as the financier George Soros theorized and demonstrated, consciously reflexive behavior creates a recursive loop that strains the limits of our predictive capacity.
Spiegelhalter rightly emphasizes the pivotal role of Bayes’ Theorem in the development of probability theory. Formulated by the English minister Thomas Bayes and published posthumously in 1763, it gained widespread recognition only after Alan Turing and his team relied on it to break the German Enigma code during World War II.
Bayes’ Theorem formalizes the analysis of uncertainty by combining the prior probability – the likelihood assigned to Outcome A before the evidence is considered – with the likelihood of observing Evidence B if Outcome A were true, scaled by the overall probability of observing that evidence. The output of the exercise is the posterior probability – the likelihood of Outcome A given Evidence B – which summarizes the analysis and is updated as new evidence is found.
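In standard notation (not Spiegelhalter’s own phrasing), the relationship reads

\[ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \]

where P(A) is the prior, P(B | A) is the likelihood of the evidence given the outcome, P(B) is the overall probability of the evidence, and P(A | B) is the posterior.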
To bring Bayes’ Theorem to life, Spiegelhalter presents readers with a series of thought-provoking questions. For example, why would more vaccinated people die of COVID-19 than unvaccinated people? And what are the chances that someone flagged by less-than-perfect police imaging software is actually a threat?
By guiding readers through the mechanics of Bayesian analysis, Spiegelhalter not only demystifies it but also underscores the role of subjective expectations in assessing evidence, particularly when probabilities are not a function of physical properties (like those of a coin or a die).
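A minimal numerical sketch of the second question shows why the answer tends to surprise people. The figures below are illustrative assumptions, not numbers taken from the book:

```python
# Illustrative Bayes calculation for an imperfect screening system.
# All numbers are assumptions chosen for the example, not taken from the book.
prior = 0.001          # P(threat): assume 1 in 1,000 people screened is a genuine threat
sensitivity = 0.99     # P(flagged | threat): the system catches 99% of real threats
false_positive = 0.05  # P(flagged | no threat): it also flags 5% of innocent people

# Overall probability of being flagged (law of total probability)
p_flagged = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of a genuine threat, given a flag (Bayes' Theorem)
posterior = sensitivity * prior / p_flagged
print(f"P(threat | flagged) = {posterior:.3f}")  # ~0.019, i.e. roughly 2%
```

When genuine threats are rare, even a highly accurate system produces mostly false alarms – precisely the kind of counterintuitive result that Bayesian reasoning makes visible.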
Ultimately, as Spiegelhalter acknowledges, our ability to tame uncertainty is limited. This insight also underpins Cromwell’s Rule, which warns against assigning a probability of zero or one unless something can be logically shown to be false or true. Named by statistician Dennis Lindley, the rule was inspired by Oliver Cromwell’s 1650 plea to the General Assembly of the Church of Scotland: “I beseech you, in the bowels of Christ, consider that you may be mistaken.” It serves as a reminder that outside the “small world” of formal logic, there is always room for doubt and reassessment.
The Trusting Animal
Even as he recognizes the limitations of human understanding, Spiegelhalter firmly rejects the notion of radical uncertainty advanced by Frank Knight and John Maynard Keynes. The idea that “we just don’t know,” as Keynes succinctly put it in 1937, rose to prominence before subjective probability estimates won widespread acceptance.
But having rejected Knight and Keynes, Spiegelhalter offers little reassurance. His “personal conclusion” underscores the limits of formal analysis:
“[A]s we increasingly acknowledge deeper, ontological uncertainty, where we don’t even feel confident in listing what could happen, we move away from attempts at formal analysis and towards a strategy that should perform reasonably well both under situations we have imagined, and those we haven’t.”
Such ontological uncertainty is inextricably tied to the fundamental nature of the world and universe we inhabit. As the second law of thermodynamics dictates, entropy in a closed system tends to increase: order inevitably gives way to randomness.
The recognition that even our most carefully constructed systems and institutions remain vulnerable to unpredictable shocks connects The Art of Uncertainty to Lawrence’s The Atomic Human. As Lawrence observes, humans’ “natural intelligence emerged in a world where it was constantly being tested against the unexpected.”
Our adaptability and capacity for reciprocal trust are integral to what Lawrence terms the “atomic human.” This concept is Lawrence’s answer to the core question driving his insightful history of AI: Is there a human essence that machines can never replicate?
A master storyteller, Lawrence uses the example of General Dwight D. Eisenhower, the supreme commander of the Allied Expeditionary Force and future US president, on the day before D-Day. Eisenhower had to synthesize all the intelligence available to him, then rely on his own judgment – or, as Spiegelhalter might put it, his personal relationship with uncertainty – to decide whether to launch the invasion of Nazi-occupied Europe. Having given the order, Eisenhower wrote a memorandum accepting full responsibility should Operation Overlord fail. In Lawrence’s account, this moment exemplifies the atomic human’s ability to reflect on a future he cannot foresee.
Among the resources at Eisenhower’s disposal were the decrypts of German ciphers, cracked by Turing and his team of codebreakers. Lawrence uses their efforts to reverse engineer Nazi Germany’s increasingly complex encryption machines as a starting point for exploring the history of computing and, more specifically, the quest to develop computers capable of genuine intelligence.
Tracing the evolution of computing from cybernetics and “expert systems” to neural nets, machine learning, and generative AI, Lawrence focuses on the scientific work that shaped these developments to show how advances in computational concepts depended on corresponding technological breakthroughs. Notably, it took two generations of innovation to move from the perceptron of the late 1950s – the first system capable of interpreting a digitized image – to today’s LLMs, which use a similar architecture but rely on capabilities made possible by vastly more powerful systems.
The Great AI Fallacy
The Atomic Human’s greatest strength lies in Lawrence’s ability to weave together the history of technology with a profound exploration of human intelligence. Our intelligence, he explains, evolved through natural selection, embodying the persistence and adaptability inherent to organisms shaped by evolutionary processes. By contrast, artificial selection – whether of crops, animals, or computer systems – produces species tailored to specific purposes that are prone to failure when confronted with unexpected conditions.
Lawrence contrasts humans’ “immense cognitive power” with the remarkably slow pace at which we communicate knowledge. Our cognitive ability evolved to help us survive in the unpredictable world of “Laplace’s gremlin,” and we share narratives to make what we know – or what we believe to be true – meaningful to others. Recognizing that our understanding may be flawed, we second-guess ourselves and develop “theories of mind,” modeling other people’s thoughts to compensate for the inherent limitations of slow communication.
But today’s AI models lack these essential qualities of human intelligence. When faced with conditions outside their training data, they falter. Nevertheless, these models perpetuate what Lawrence calls “the great AI fallacy”: the belief that we have created a form of algorithmic intelligence that understands us as deeply as we understand one another.
In reality, LLMs are probabilistic prediction machines. As computer scientist Judea Pearl, a leading expert on causality, explains, “Machine learning models provide us with an efficient way of going from finite sample estimates to probability distributions, and we still need to get to cause-effect relations.”
Trained on vast troves of human-generated content available online, LLMs process expressions of human attempts to navigate an uncertain world. But unlike humans, these systems lack any awareness of their own deficiencies. Consequently, their remarkable ability to draw on training data to predict the next word in a text or pixel in an image is subject to errors they cannot detect or correct.
Lawrence envisions a hypothetical hybrid intelligence arising from the interaction between a human and generative AI – a “human-analogue machine” (HAM), which he describes as a “control stick for the digital machine.” Such a system, he suggests, could augment and extend human capabilities in ways that LLMs cannot. But the risk of reinforcing the “great AI fallacy” remains ever-present:
“The danger we face is believing that the machine will allow us to transcend our humanity. … The atomic human is defined by vulnerabilities, not capabilities. Through those vulnerabilities we have evolved cultures that allow us to communicate and collaborate despite those limitations. Across our history we have developed new tools to assist us in our endeavours, and the computer is just the most recent. But that’s all the computer should ever be – a tool.”
Data or Drivel?
While Lawrence and Spiegelhalter celebrate the human capacity to process data to make informed decisions, they also highlight a fundamental challenge: data alone cannot convey meaning – context is crucial.
The growing use of AI to recommend criminal sentences and evaluate parole applications in the United States is a case in point. When digital prediction systems are introduced into messy social environments, they inevitably mirror the biases and prejudices embedded in their training data.
An even deeper challenge lies in the ontological uncertainty that Spiegelhalter identifies as a driver of the unexpected developments that have shaped human intelligence over millions of years. Simply put, can we trust the processes that generate the data we observe to remain consistent over time? If not, can we trust them at all?
Economist Paul Davidson highlighted this issue in his 2015 book Post Keynesian Theory and Policy, pinpointing a critical flaw in mainstream economics: the assumption that past data can be used to generate probability distributions that remain stable over time, allowing for statistically sound forecasts. “Since drawing a sample from events occurring in the future is impossible,” Davidson observed, “the assumption that the economy is governed by an ergodic stochastic process permits the analyst to assert that samples drawn from the past or current market data are equivalent to drawing samples from future market data.”
To understand the problem with this assumption, consider a young financial analyst at a French bank in 1913, tasked with producing a five-year forecast of Russian bond prices. For decades, France had been a major source of capital for Czarist Russia, providing our hypothetical analyst with ample data on Russian bond prices. But while these data may have captured the impact of Russia’s defeat in the 1904-05 war against Japan, the subsequent popular uprising, and gradual industrialization, could any forecast have anticipated that by 1918, all Russian bonds would become worthless?
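The point can be made concrete with a toy simulation – entirely synthetic numbers, not data from Davidson or from the 1913 episode – in which a forecast calibrated on the past breaks down when the data-generating process itself changes:

```python
# Toy illustration of the ergodicity problem: a forecast calibrated on past data
# fails when the process generating the data changes. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# "Old regime": prices drift gently around 100
past = 100 + np.cumsum(rng.normal(0, 0.5, 250))

# The analyst assumes the future is drawn from the same distribution as the past
lower = past.mean() - 2 * past.std()
upper = past.mean() + 2 * past.std()

# "New regime": the process changes and prices collapse toward zero
future = np.maximum(past[-1] + np.cumsum(rng.normal(-2.0, 1.0, 60)), 0)

print(f"Forecast interval: {lower:.0f} to {upper:.0f}")
print(f"Actual final price: {future[-1]:.0f}")  # far outside any 'statistically sound' range
```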
Likewise, the 2008 global financial crisis shattered the long-held belief that uncertainty was under control. Strategies for hedging against ignorance, such as increasing banks’ capital requirements, quickly became a top priority. But these institutional responses were designed to address past crises, and they have done little to prepare us for future ones.
Lawrence identifies another challenge: while “our imagination operates in tandem with the world around it and relies on that world to provide the consistency it needs,” history is anything but consistent. Instead, it is marked by disruptions, regime changes, and revolutions.
With this in mind, economist Richard Zeckhauser developed a useful model illustrating how varying levels of knowledge about the state of the world correspond to different investment environments. His model categorizes decision-making scenarios into three distinct domains: risk, uncertainty, and ignorance.
In this framework, “risk” refers to situations where both the possible states of the world and their probabilities are known, along with the distribution of investment returns. By contrast, “uncertainty” describes scenarios where the possible states of the world are known, but their probabilities are not. The third domain, “ignorance,” applies to situations where even the possible states of the world are unknown and “the distributions of returns [are] conjectured, often from deductions about others’ behavior.”
Zeckhauser’s concept of ignorance echoes Keynes’s notion of uncertainty. Recognizing that our ignorance often prevails, we understand that unknowable outcomes can become self-fulfilling prophecies driven by mass herding. And so, as in Keynes’s famous beauty contest metaphor, we observe others closely, hoping not to be left behind or trampled.
Spiegelhalter himself concedes this point: “sometimes we cannot conceptualize all the possibilities.” Sometimes, “we may just have to admit we don’t know.”
David Spiegelhalter, The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck (UK: Pelican, 2024; US: W. W. Norton & Company, 2025).