Daniel Kahneman, 87, was awarded the Nobel prize in economics in 2002 for his work on the psychology of judgment and decision-making. His 2011 book, Thinking, Fast and Slow, a worldwide bestseller, set out his revolutionary ideas about human error and bias and how those traits might be recognised and mitigated. A new book, Noise: A Flaw in Human Judgment, written with Olivier Sibony and Cass R Sunstein, applies those ideas to organisations. This interview took place last week by Zoom with Kahneman at his home in New York.
I guess the pandemic is quite a good place to start. In one way it has been the biggest ever hour-by-hour experiment in global political decision-making. Do you think it’s a watershed moment in the understanding that we need to “listen to science”?
Yes and no, because clearly, not listening to science is bad. On the other hand, it took science quite a while to get its act together.
One of the key problems seems to have been the widespread inability to grasp the basic idea of exponential growth. Does that surprise you?
Exponential phenomena are almost impossible for us to grasp. We are very experienced in a more or less linear world. And if things are accelerating, they’re usually accelerating within reason. Exponential change [as with the spread of the virus] is really something else.
We’re not equipped for it. It takes a long time to educate intuition.
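[To make the gap concrete: a minimal Python sketch, using assumed numbers rather than real case data, comparing a linear process that adds 100 cases a day with an exponential one that doubles every three days.]

```python
# Illustrative only: assumed numbers, not epidemiological data.

def linear(day, new_per_day=100):
    """Cumulative cases if a fixed 100 new cases arrive each day."""
    return new_per_day * day

def exponential(day, base=100, doubling_days=3):
    """Cumulative cases if the count doubles every three days."""
    return base * 2 ** (day / doubling_days)

for day in (10, 20, 30):
    print(f"day {day}: linear {linear(day):,.0f}, "
          f"exponential {exponential(day):,.0f}")

# day 10: linear 1,000, exponential 1,008
# day 20: linear 2,000, exponential 10,159
# day 30: linear 3,000, exponential 102,400
```

For the first ten days the two curves look almost identical, which is exactly why linear intuition fails: by day 30 the linear count has tripled while the exponential one has grown a hundredfold.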
Do you think the cacophony of opinion on social media exacerbates that?
I know too little about social media; there’s just too large a generational gap. But clearly the potential for misinformation to spread has grown. It’s a new kind of media that has essentially no responsibility for accuracy and not even reputational controls.
Could you define what you mean by “noise” in the book, in layman’s terms – how does it differ from things like subjectivity or error?
Our main subject is really system noise. System noise is not a phenomenon within the individual, it’s a phenomenon within an organisation or within a system that is supposed to come to decisions that are uniform. It’s really a very different thing from subjectivity or bias. You have to look statistically at a great number of cases. And then you see noise.
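[A minimal sketch of what that statistical view looks like, with invented numbers in the spirit of the book’s “noise audits”: several professionals judge the same case, and the system noise is the spread of their judgments.]

```python
import statistics

# Invented quotes (in dollars) from five underwriters for the same case.
quotes = {"A": 9_500, "B": 16_700, "C": 12_000, "D": 8_200, "E": 13_600}

mean = statistics.mean(quotes.values())     # 12,000
spread = statistics.stdev(quotes.values())  # ~3,367

print(f"mean quote:   ${mean:,.0f}")
print(f"system noise: ${spread:,.0f} ({spread / mean:.0%} of the mean)")

# Each quote looks reasonable on its own; the noise only becomes
# visible when the whole set of judgments on the same case is compared.
```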
I suppose people are instinctively or emotionally still more inclined to trust human systems than more abstract processes?
That is certainly the case. We see that, for example, in terms of the attitude to vaccination. People are willing to take far, far fewer risks when they face vaccination than when they face the disease. So this gap between the natural and the artificial is found everywhere.
In part that is because when artificial intelligence makes a mistake, that mistake looks completely foolish to humans, or almost evil.
You end your book with some ideas for eliminating noise, creating checklists for decision-making, having “designated decision observers” and so on. I was reminded of those studies that show how corporate efforts to reduce unconscious racial and gender bias through compulsory training have been either ineffective or counter-productive. How do you take account of such unforeseen consequences?
There is always a risk of that. And those ideas you mention are largely untested but, we think, worth considering. Others in the book are founded on more experience and are more solid.
Do you feel that there are wider dangers in using data and AI to augment or replace human judgment?
There are going to be massive consequences of that change that are already beginning to happen. Some medical specialties are clearly in danger of being replaced, certainly in terms of diagnosis. And there are rather frightening scenarios when you’re talking about leadership. Once it’s demonstrably true that you can have an AI that has far better business judgment, say, what will that do to human leadership?