Posted by Jim Guszcza on October 31, 2016
The relationship between human and artificial intelligence (AI) is becoming one of the major issues of the day. A recent World Economic Forum report predicted that more than five million jobs will be lost to AI-fueled automation and robotics over the next four years. It’s interesting to consider the relative abilities of human and machine intelligence in a specific arena: making predictions and forecasts. When is AI better at predicting outcomes, and when are humans? What happens when you combine forces? And what role, if any, will human judgment play as algorithms continue to evolve? It turns out that algorithmic forecasting has limits that machine learning-based AI methods cannot surpass; human judgment will not be automated away anytime soon.
When algorithms outperform experts
While computer algorithms cannot replace human judgment, they can improve it. Human-computer collaboration is a major avenue for improving our ability to make forecasts and judgments under uncertainty. Hundreds of academic studies and business initiatives conducted over many decades have compared expert and algorithmic prediction, and the results have been decisive: statistical algorithms nearly always outperform unaided human judgment in a wide variety of domains. This is because, paraphrasing Daniel Kahneman, the human mind is a “machine for jumping to conclusions.” Our biases are numerous: We overgeneralize from personal experience, act as if the evidence before us is the only information relevant to the decision at hand, base probability estimates on how easily the relevant scenarios leap to mind, downplay the risks of options to which we are emotionally predisposed, and generally overestimate our abilities and the accuracy of our judgments. So we need algorithms. But that’s not the whole story: algorithms tend to do poorly in domains that require conceptual or causal understanding, commonsense reasoning, creativity, or the ability to extrapolate into new situations. Figuratively speaking, the equation should be not “algorithms > experts” but instead “experts + algorithms > experts.”
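One simple way to picture the “experts + algorithms” idea is to blend the two probability estimates rather than choose between them. The sketch below is illustrative only (it is not from the article, and the function name and weights are hypothetical); it assumes both the model and the expert express their forecast as a probability, and weights the model more heavily, reflecting the finding that algorithms usually outperform unaided judgment.

```python
def combined_forecast(model_prob, expert_prob, model_weight=0.7):
    """Blend an algorithm's probability estimate with an expert's judgment.

    model_weight controls how much the algorithmic estimate dominates;
    the 0.7 default is an arbitrary illustration, not an empirical value.
    """
    if not (0.0 <= model_prob <= 1.0 and 0.0 <= expert_prob <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    # Weighted average of the two probability estimates
    return model_weight * model_prob + (1.0 - model_weight) * expert_prob
```

For example, if a model puts the chance of loan default at 0.8 while the underwriter, knowing context the model lacks, judges it closer to 0.6, the blended forecast lands between the two, closer to the model's.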
What computers still can’t do
Although algorithms can augment human judgment, they cannot replace it altogether. At the same time, training people to be better forecasters and pooling the judgments and fragments of partial information of smartly assembled teams of experts can yield still-better accuracy. The domain experts for whom predictive models are designed (hiring managers, bank loan or insurance underwriters, physicians, fraud investigators, public-sector case workers, and so on) are the best source of information on what factors should be included in the models. Even after the model has been built and deployed, human judgment is typically required to assess the applicability of a model’s prediction in any particular case. Models can guide—but typically cannot replace—human experts.
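The claim that pooling the judgments of a team of experts yields better accuracy has a simple mathematical illustration. The sketch below (an assumption-laden example, not the article's method) averages several experts' probability forecasts for the same event and scores forecasts with the Brier score, where lower is better. Because the Brier score is convex, the pooled forecast can never score worse than the experts' average individual score.

```python
def pool(forecasts):
    """Pool several probability forecasts for one event by simple averaging."""
    return sum(forecasts) / len(forecasts)

def brier(prob, outcome):
    """Brier score: squared error between a probability forecast and the
    realized outcome (0 or 1). Lower is better."""
    return (prob - outcome) ** 2

# Hypothetical example: three experts forecast an event that then occurs.
experts = [0.9, 0.6, 0.7]
outcome = 1
pooled_score = brier(pool(experts), outcome)
avg_individual_score = sum(brier(p, outcome) for p in experts) / len(experts)
# The pooled forecast scores at least as well as the average expert.
```

With these numbers the pooled forecast (0.733…) scores about 0.071 against an average individual score of about 0.087, so the team beats its typical member.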
What’s feeding the movement now?
Continually streaming data from Internet of Things sensors, cloud computing, and advances in machine learning techniques are giving rise to a renaissance in AI that will likely reshape people’s relationship with computers. As data volumes grow and machine learning methods continue to improve, pattern recognition applications will likely better mimic human reasoning, but reflecting on and originating meaning remain the province of human judgment. It is true that computers can automate certain tasks traditionally performed only by humans. But more generally, they can only assist—not supplant—the characteristically human ability to make judgments under uncertainty.
Although predictive models and other AI applications can automate certain routine tasks, it is highly unlikely that human judgment will be outsourced to algorithms any time soon. Human judgment will continue to be realigned, augmented, and amplified by methods of psychology and the products of data science and artificial intelligence. Humans will remain “in the loop” for the foreseeable future.
Excerpts taken from Minds & Machines: The Art of Forecasting on Deloitte University Press. Read the full article.