“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
Sherlock Holmes, in Arthur Conan Doyle’s The Sign of Four
A question I get very frequently (including just a few days ago at a meeting in Lahore, Pakistan) is: “Can artificial intelligence reason?” First, some foundational discussion of reasoning as a concept:
How do we define reasoning?
Reasoning is the cognitive process of logically connecting elements of information to reach a goal such as solving a problem, making a decision, or generating an insight. Several related words can lead to confusion. Thinking is a very broad mental activity that includes reasoning but also a myriad of other mental activities such as imagining and remembering. “Critical” thinking is even more often confused with reasoning; the key difference is that critical thinking includes not only reasoning but also other elements such as judgment and skepticism. Cognition, like thinking, describes a broader mental process that includes mental processes other than reasoning, such as perception and memory. Inference, the act of reaching a conclusion from available evidence, is different from reasoning, as reasoning involves a broader thought process than inference alone. Logic is the systematic study and structural framework of valid reasoning and argumentation; it establishes the rules and principles for differentiating correct from incorrect reasoning, and is therefore foundational to reasoning. To further this convoluted conundrum, “logical” thinking is the informal application of logic in everyday decision-making.
Types of reasoning
Popular discussions of reasoning hold that there are two distinct types (a minimal sketch of both follows the list):
1) deductive reasoning, drawing specific conclusions from general principles (“All humans are mortal. Sam Altman is human, therefore Sam Altman is mortal.”)
2) inductive reasoning, formulating general conclusions from specific observations (“The moon has appeared every evening, so it will also appear tonight.”)
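As a rough illustration (the names and observations here are invented for the sketch), a few lines of Python contrast the two: deduction is certain given its premises, while induction is only probable:

```python
# Toy sketch of deductive vs. inductive reasoning (illustrative only).

# Deduction: applying a general rule to a specific case yields a certain conclusion.
def is_mortal(name, humans):
    # Premise 1: all humans are mortal. Premise 2: `name` is human.
    return name in humans

humans = {"Sam Altman"}
print(is_mortal("Sam Altman", humans))  # True, guaranteed by the premises

# Induction: generalizing from repeated observations yields a probable conclusion.
evening_observations = ["moon appeared"] * 1000  # every evening so far
if all(obs == "moon appeared" for obs in evening_observations):
    print("Prediction: the moon will appear tonight")  # probable, not certain
```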
There is, however, a lesser-known type of reasoning called abductive reasoning. This type of reasoning infers the most likely explanation from incomplete or uncertain information (“Mr. Smith has all the risk factors for coronary artery disease and now has chest pain radiating down the left arm with ST changes on his electrocardiogram. He probably has a myocardial infarction.”). Abductive reasoning, therefore, derives a hypothetical conclusion based on plausibility, and is often used in medical diagnosis (the “best guess”). Of the three types of reasoning, abductive reasoning carries the highest degree of uncertainty. Although it is commonly written that the well-known fictional detective Sherlock Holmes utilizes deductive reasoning in solving crimes, he often deploys abductive reasoning. An interesting aside: Sir Arthur Conan Doyle, the renowned author of the Sherlock Holmes stories, used his surgeon mentor and role model Dr. Joseph Bell as an inspiration for the astute detective.
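Abduction can be sketched as “inference to the best explanation.” The toy Python example below picks the hypothesis with the highest posterior plausibility for the chest-pain scenario; the priors and likelihoods are invented for illustration and are not clinical data:

```python
# Toy abductive reasoning: pick the hypothesis that best explains the evidence.
# All numbers below are invented for illustration, not clinical data.

priors = {"myocardial infarction": 0.3, "anxiety": 0.5, "muscle strain": 0.2}

# P(evidence | hypothesis): chest pain radiating to the left arm with ST changes
likelihoods = {"myocardial infarction": 0.9, "anxiety": 0.1, "muscle strain": 0.05}

# Unnormalized posterior: proportional to likelihood * prior
posteriors = {h: likelihoods[h] * priors[h] for h in priors}
best_guess = max(posteriors, key=posteriors.get)
print(best_guess)  # "myocardial infarction": the most plausible, not certain, explanation
```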
In addition to the aforementioned types, there are other lesser-known types of reasoning: analogical reasoning draws on analogy between different situations or domains; causal reasoning identifies relationships between cause and effect; probabilistic reasoning uses probability theory to manage uncertainty; temporal reasoning draws inferences from time and the ordering of events; and counterfactual reasoning considers hypothetical scenarios and outcomes contrary to known facts.
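To make the causal and counterfactual varieties concrete, here is a minimal sketch using an invented structural model (the mechanism and all numbers are assumptions for illustration, not real data):

```python
# Toy structural causal model: dose -> response, with an invented mechanism.
def response(dose, noise=0.0):
    return 2.0 * dose + noise  # assumed causal mechanism, for illustration only

# Causal reasoning: intervening on the cause changes the effect.
observed = response(dose=5.0, noise=1.0)     # what actually happened: 11.0
intervened = response(dose=10.0, noise=1.0)  # what happens if we set dose = 10

# Counterfactual reasoning: holding the same background noise fixed, what
# *would* have happened under a different dose in that same situation?
counterfactual = response(dose=0.0, noise=1.0)  # "had the dose been zero..."
print(observed, intervened, counterfactual)  # 11.0 21.0 1.0
```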
Can AI reason?
Reasoning is becoming an increasingly common part of the discussion about advanced AI models such as OpenAI’s o1 pro and o3 mini or DeepSeek’s R1. While these models are purportedly able to “reason”, perhaps this is a premature declaration of an AI cognitive feat. These models purportedly “think” prior to responding by deconstructing big problems into smaller components in a “step-by-step” process called chain-of-thought reasoning. AI skeptics maintain that these models are not genuinely reasoning but merely mimicking the process that humans engage in, while others feel that these models are genuinely performing some lower form of reasoning. Melanie Mitchell, professor at the Santa Fe Institute, stated that while o3 performed well on abstract reasoning tests, it used a very large amount of computation and “a bag of heuristics” to achieve this. Other skeptics would term this “meta-mimicry” or argue that these AI models are pairing a lot of memorization with a little bit of reasoning; a term some use to describe this part-memorization, part-reasoning behavior is “jagged intelligence”.
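To make chain-of-thought concrete, here is a minimal prompting sketch; call_model is a hypothetical placeholder rather than a real API, since the exact provider call will vary:

```python
# Sketch of chain-of-thought prompting. `call_model` is a hypothetical
# placeholder for an actual LLM API call; it is not a real library function.

def call_model(prompt: str) -> str:
    raise NotImplementedError("swap in your LLM provider's API call here")

question = (
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more than "
    "the ball. What does the ball cost?"
)

# Asking the model to write out intermediate steps before its final answer
# is the essence of chain-of-thought prompting.
prompt = (
    "Solve the problem step by step, showing each intermediate deduction, "
    "then state the final answer on its own line.\n\n" + question
)
# answer = call_model(prompt)
```

The point is not the specific wording but that eliciting intermediate steps, rather than a bare answer, is what the newer “reasoning” models automate internally.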
As artificial intelligence demonstrates more capabilities, perhaps words like reasoning need more clarity and may even need to be redefined. AI deploys a myriad of computational processes to simulate human reasoning, but it ultimately lacks several key dimensions compared to humans:
1) contextual understanding: AI often struggles to grasp the context and nuances of situations (it still lacks “common sense”)
2) explainability: AI output is not always transparent, although humans cannot always explain their reasoning steps either
3) creativity: AI lacks not only creativity and insight, but also intuition and abstraction.
In other words, AI seems to struggle to come up with “new” ideas that absolutely no one has thought of before (so-called “innovation”). One notable example of innovation by AI, however, is the now-famous move 37 in the second game of the match between Lee Sedol and DeepMind’s AlphaGo, when the AI made a move that initially seemed illogical but turned out to be brilliant and pivotal. This demonstration of AI innovation was accomplished through deep reinforcement learning.
In short, AI can mimic reasoning, but this reasoning is fundamentally different from human reasoning and is mainly confined to structured, data-driven tasks. Based on our prior definitions (which perhaps need to be modified or even refined with the emergence of AI), AI seems to have elements of reasoning, but not the more involved reasoning or critical thinking that humans can perform. The future of AI reasoning should perhaps combine today’s models with the explicit logic of old symbolic AI to reach a higher plane of synergy with humans.
There will be many discussions about AI and its ability to reason at the exciting annual AIMed meeting (AIMed25) at the Manchester Grand Hyatt in San Diego on November 10-12, 2025. See you there!