Reasoning in the Age of Artificial Intelligence

Simultaneous Visions by Umberto Boccioni

Lately, I often hear people asking: “Will Artificial Intelligence replace my job?” Perhaps you’ve had this thought too. More than just a matter of the job market or salary expectations, this question challenges our role in society and our ability to remain relevant over time.

It’s worth addressing this doubt once and for all, especially since it is a shared concern. This is why I’ve written this short essay—I hope my reflections can help people recognize the many opportunities the future holds.

Together, through four sections, we will explore the following:

  1. A theoretical overview of the capabilities and limitations of Artificial Intelligence.
  2. A practical analysis of its real-world applications and boundaries.
  3. Conclusions and personal reflections we can draw from these scenarios.
  4. Key suggestions on how to make the best use of Artificial Intelligence (AI).

Part 1 – A Theoretical Premise

Back in 2019, some already recognized the existence of a “long history of failures at achieving General Intelligence in Artificial Intelligence” [1], a challenge that remains unsolved. This is mainly due to the elusive nature of the definition of intelligence itself, as well as that of reasoning.

On this topic, Yu and others observed: “while reasoning has attracted increasing attention […], there is still lacking a distinct definition of reasoning” [3].

After 2023, we entered the “Post-ChatGPT” era [2]. But has this really given us a clearer idea of what intelligence is?

One could define intelligence as “fulfillment of goals” [1], while reasoning might be described as “using evidence and logic to arrive at conclusions” [3].

While these definitions are not perfect, they allow us to encompass a broad range of our cognitive processes, offering a starting point for a better understanding of Artificial Intelligence and its limits.

Let’s examine three types of reasoning that fit within the given definition:

  • Inductive reasoning: derives a general rule from specific cases. Its reliability depends on the strength and representativeness of those cases.
  • Deductive reasoning: reaches a specific conclusion from a general rule. It is always valid, provided the premises are true.
  • Abductive reasoning: infers the most plausible explanation for an observation. Its outcome is always uncertain, since other explanations may exist.
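The three patterns can be sketched as tiny Python functions. This is my own toy illustration, not taken from the essay's sources; the swan and rain examples are the classic textbook ones:

```python
# A toy sketch of the three reasoning patterns described above.

def inductive(observations):
    """Derive a general rule from specific cases; only as strong as the sample."""
    colors = {color for _, color in observations}
    # If every observed swan shares one color, conjecture a general rule.
    return f"all swans are {colors.pop()}" if len(colors) == 1 else "no single rule"

def deductive(rule, case):
    """Apply a general rule to a specific case; sound if the premises are true."""
    return rule["conclusion"] if case in rule["covers"] else None

def abductive(observation, explanations):
    """Guess the most plausible explanation for an observation; always uncertain."""
    return max(explanations, key=lambda e: e["plausibility"])["cause"]

# Inductive: two white swans -> "all swans are white" (a fragile generalization).
print(inductive([("swan-1", "white"), ("swan-2", "white")]))

# Deductive: "all swans are white" + "this bird is a swan" -> "this bird is white".
print(deductive({"covers": {"swan"}, "conclusion": "this bird is white"}, "swan"))

# Abductive: "the streets are wet" -> the best available explanation, "it rained".
print(abductive("the streets are wet",
                [{"cause": "it rained", "plausibility": 0.8},
                 {"cause": "a pipe burst", "plausibility": 0.1}]))
```

Note how only the deductive function is guaranteed to be correct, and only on the condition that its rule is true; the other two produce conjectures.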

While humans are capable of using all three types of reasoning, attempts to train large language models and pre-trained language models show that these systems “still heavily rely on human effort” [4] to reach conclusions similar to human reasoning.

For machines to follow deductive reasoning, they must be enabled to “generalize from what they know to make predictions in new contexts” [5]. This represents one of the most complex challenges in the evolution of Artificial Intelligence.

We might hypothesize that, in the future, significant results could be achieved by combining Artificial Intelligence with quantum computing [6].

This combination, known as “hybrid quantum computing,” which integrates quantum processors with classical algorithms [7], could shape the computational architectures of the future.

However, experts remain cautious about the timeline for quantum computing development, especially in critical sectors such as cybersecurity [9]. In general, we can state that “Despite the progress, several key challenges continue to impede the widespread adoption of quantum computing.”

Research and expert opinions suggest that merely building a quantum computer will not be enough to overcome all the limitations of current processors. In fact, “Quantum computers can easily perform even the most complex simulations, but they may not provide any speed-up in simple tasks like day-to-day internet browsing” [10].

Currently, the use of language models like BERT has demonstrated that we can imitate inductive reasoning only in certain empirical circumstances [5]. However, the inability to make significant progress in the other logical domains is not just a matter of technical development.

Studies indicate that to develop an Artificial Intelligence capable of more faithfully imitating human reasoning, it will be necessary to integrate elements of psychological theories [5] and philosophy [4] into technological research.

These elements, along with the growing discussion on whether intelligence can be reduced to a single capability, challenge traditional definitions of the concept itself [11].

We can, therefore, conclude that to create a machine capable of thinking like a human being, we must first overcome a series of technical and cultural obstacles. The real challenge does not lie solely in the ability to translate human reasoning into a repeatable process but, above all, in our own understanding of how we reason.


Part 2 – A Practical Explanation

In the previous section, we described some types of reasoning, but others could be added to the list, such as lateral thinking [11] and other forms of cognitive processing [12].

Referring back to the definitions of intelligence as “fulfillment of goals” [1] and reasoning as “the use of evidence and logic to arrive at conclusions” [3], two uncovered areas immediately emerge:

  • Intelligent activities that are not goal-oriented, such as pure creativity or reflection without a specific purpose.
  • Reasoning activities based on common sense, which does not rely solely on formal evidence but on intuition and “implicit commonsense knowledge” [3].

These aspects present additional challenges for building an Artificial Intelligence that is truly close to human thinking.

Let’s consider a simple empirical example of how three of the reasoning styles mentioned so far would be applied without common sense:

“I love Milan when it rains and the streets are wet. Even now, in summer, I’d like to experience that feeling. What should I do?”

  • Lateral thinking → Check the weather and come back when it rains. (A practical, direct solution based on observing reality.)
  • Inductive logic → You cannot know exactly when it will rain, but sooner or later it will, since rain is a statistically likely event. If you accept that uncertainty, you should check the weather every hour or every day until it rains. (A solution based on probability and past observations.)
  • Deductive logic → If Milan is not wet the way it is when it rains, then you cannot love it. To solve the problem, Milan’s streets must be made wet. (A rigorous but nonsensical solution, as it rigidly applies a premise without considering feasibility.)

This example demonstrates how, without common sense, reasoning can produce solutions that, while logical, are not necessarily practical or realistic.
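The contrast can be made concrete in code. This is a toy sketch of my own (the 0.1 daily rain probability and the 90% threshold are invented figures for illustration): a rigid deductive rule produces the absurd plan, while an inductive approach accepts uncertainty and waits.

```python
# Toy sketch: the Milan wish handled by rigid deduction vs. inductive waiting.

def deductive_plan(raining: bool) -> str:
    # Premise: "I love Milan when it rains and the streets are wet."
    # Rigid conclusion: if it is not raining, the streets must be made wet.
    return "enjoy the wet streets" if raining else "wet the streets of Milan"

def inductive_plan(p_rain_per_day: float, horizon_days: int) -> str:
    # Accept uncertainty: rain becomes likely over a long enough window.
    p_at_least_once = 1 - (1 - p_rain_per_day) ** horizon_days
    if p_at_least_once > 0.9:
        return (f"check the forecast daily; rain is {p_at_least_once:.0%} "
                f"likely within {horizon_days} days")
    return "extend the horizon and keep checking"

print(deductive_plan(raining=False))  # "logical" but impractical
print(inductive_plan(0.1, 30))        # patient and probability-based
```

The deductive function cannot escape its premise, so with no rain it can only demand that the streets be made wet; the inductive one trades certainty for a realistic waiting strategy.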

If we were to ask AI for a solution to the same problem, we would likely get something like this: “If you want to recreate the feeling of Milan’s wet streets, here are some ideas that might help you evoke the atmosphere you love:”

  • Visiting areas with water features.
  • Cleaning the streets early in the morning to simulate humidity.
  • Creating a personal atmosphere, perhaps with appropriate lighting and sounds.
  • Going to an area with high humidity.
  • Listening to rain sounds through recordings.

AI would thus suggest creative solutions that, while not strictly based on traditional logic, adopt a more flexible and practical approach to satisfy the desire for a sensory experience.

However, doesn’t the AI’s solution resemble the deductive one? It cannot escape the premise that it is not raining, so it must find another way to make Milan wet—perhaps the entire city—to maximize the chances of achieving the desired outcome.

Even if we trained our AI to provide a more realistic response by shaping the question differently, we would essentially be pushing the model to imitate an inductive or natural logic.

This happens because deductive reasoning is only valid based on the strength of its premises. If an AI lacks sufficient informational context, it will not be bound by the limits of what is probable or reasonable, potentially leading to conclusions that, while absurd, are consistent with the available data.

Currently, large language models (LLMs) do not interact directly with the world [4], which limits the effectiveness of their deductive logic and makes them incapable of consistently and autonomously applying inductive logic—let alone lateral thinking.

Moreover, none of these three logics would truly be able to imitate common sense, which would probably suggest the only sensible answer to the question: “Maybe you should move to London” (with a touch of irony).

While AI is training to better understand our needs, we, too, could train ourselves to recognize the detachment from reality that often characterizes AI-generated responses, learning to find something authentic within that detachment.


Part 3 – My Conclusion

In the previous section, we analyzed the challenges AI will need to overcome in the coming years to significantly improve its reasoning capabilities, with the goal of becoming a useful alternative in scientific fields […, 8, 9, 10]. We also examined a concrete example highlighting the limitations of current language models (LLMs) in handling complex reasoning tasks and the consequences of their inability to recognize “absurd” situations or a lack of common sense.

Additionally, we noted that there are different types of intelligence. Arjonilla and Kobayashi state that evolution is a process of randomness and selection, both intelligent and simple: “[t]his simplicity is much explained by the lack of models, which does not impede evolution to reach complex goals” [1].

We also know that the absence of strict models, and even the inefficiency of the process, have not prevented life from evolving and coping with resource scarcity; on the contrary, these factors have contributed to its development.

We can, therefore, hypothesize that the development of artificial intelligence and that of our species follow different paths: on the one hand, we strive to define and deepen our understanding of intelligence by exploring its various aspects; on the other, AI development, as we have structured it, tends to optimize only the subsets of intelligence that we have been able to define and comprehend.

There will always be a gap to bridge between our intelligence and a more efficient version of it. To narrow this gap, we will have to evolve further, increasing the amount of intelligence and reasoning we can make efficient—entering a cycle that will likely be infinite and paradoxical.

Perhaps one day, AI will assist or even replace us in all those tasks that are currently repetitive and predictable. The goal will be to give us more time to enhance the less obvious aspects of the work we do.

In the future, our jobs will become so complex that AI, as we know it today, will be just a tool within a broader and more intricate system.

So, back to the initial question: Will AI replace our jobs?

We cannot know if, in the future, we will use various algorithms together, just as we use different tools for different tasks today. However, in my view, seeing this process of delegating repetitive activities to machines as a threat to our place in the world often stems from our tendency to imagine dramatic and apocalyptic scenarios where ultimate justice will be meted out on the world [13].

Just as AI makes mistakes by applying overly simplistic reasoning strategies to complex realities, we too often fall victim to our own logical fallacies.

For example:

  • “If we develop AI, it will automate more and more jobs; if it automates more jobs, it will control every sector. If it controls every sector, humans will become dependent on it. Therefore, AI will replace us.” → This is a slippery slope fallacy, where the initial premise assumes that each step inevitably leads to the next without considering any other possibilities.
  • “If AI will replace humanity, we will see it taking more and more jobs. AI is already taking more jobs. Therefore, AI will replace humanity.” → This is the fallacy of affirming the consequent, where observing the consequent of a conditional is taken as proof of its antecedent, leading to an invalid conclusion.

Our tendency to accept these and other logical fallacies not only distances us from a true understanding of technological phenomena but also slows down technological and societal progress.

Perhaps this is the most profound aspect to try and grasp, closely tied to what we call common sense: our ability to live with contradictions—an intelligence of its own.

Not taking this ability for granted makes us more empathetic, fostering a healthier environment for ourselves and others.

Often, what we consider a flaw in small things—such as the impulse to act against common sense to prove that we can challenge an established truth—is actually a demonstration of how we can push beyond the limits of computational thinking, applying intelligence and reasoning in complex, nonlinear ways.


This kind of thinking encourages us to reflect on those around us in a more positive light, allowing us to appreciate aspects of their personalities that, from a purely social efficiency perspective, might otherwise be discouraged.

In a way, we might say that we are now more purpose-driven than ever before. Now that we have begun to map out parts of our intelligence, we will have more time to question what else makes us human—and that is an encouraging thought.


Part 4 – Tips for Making the Most of AI

Now that we’ve explored the relationship between artificial intelligence and our humanity, why not practice integrating it into our daily lives?

For users of language models (such as ChatGPT, Google Bard, Copilot, and others), there are four simple Cyber Hygiene practices that I find particularly useful for maximizing AI’s potential while minimizing risks:

  1. Seek insights, not solutions – A blank canvas with some guidance doesn’t necessarily limit an artist’s creativity; rather, it helps build a solid foundation for design. Similarly, AI can provide useful prompts and ideas that you can further develop and refine through your own critical thinking.
  2. Provide context and segment conversations effectively – You don’t have to reinvent the wheel every time—neither for yourself nor for the AI. Some solutions and approaches are already known to be more effective than others. By providing the right context and directing the conversation toward specific topics, you can obtain more precise and useful responses.
  3. Tell AI which decision-making framework or reference it should follow – There are infinite lines that pass through a single point, but only one that connects two. If you already understand the problem, why not clarify the goal as well? By defining both the starting and ending points, you help AI draw the straight line between them, making it easier to find the best solution.
  4. Ask AI to explain and justify its reasoning – The previous suggestions can help you get a theoretically optimal response, but in practice there may still be gaps. It’s useful to ask AI to justify its choices so you can identify any blind spots and determine whether the proposed solution meets your criteria for acceptability.
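The four tips can be folded into a single reusable prompt template. A minimal sketch, assuming nothing about any particular assistant's interface; the field names and the example values are my own invention, not an official prompt format:

```python
# Minimal sketch: the four practices above combined into one prompt template.

def build_prompt(context: str, goal: str, framework: str) -> str:
    return "\n".join([
        f"Context: {context}",                                       # tip 2: provide context
        f"Goal: {goal}",                                             # tip 3: state the end point
        f"Approach: reason within {framework}.",                     # tip 3: name the framework
        "Offer insights and options rather than one final answer.",  # tip 1: insights, not solutions
        "Explain and justify the reasoning behind each option.",     # tip 4: ask for justification
    ])

print(build_prompt(
    context="10 workstations, one file server, limited budget",
    goal="choose a backup strategy for a small office",
    framework="the 3-2-1 backup rule",
))
```

Defining both the starting point (context) and the end point (goal) is what lets the model draw the straight line between them, in the spirit of tip 3.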

By applying these strategies, we can approach AI as a tool that enriches our thinking and helps us achieve better results—all while keeping its limitations and biases in check.

Author: Alessandro Mirani

Sources: