Learn Computing from the Experts | The Rheinwerk Computing Blog

What Is the Difference Between Narrow AI and General AI?

Written by Rheinwerk Computing | Jun 27, 2025 1:00:00 PM

AI systems have become more and more powerful, largely out of the public eye. But every now and then, their capabilities capture the public’s attention.

 

The figure below shows some of the narrow achievements of early AI systems.

 

 

In 1997, IBM’s Deep Blue, an extremely sophisticated chess-playing machine, beat world chess champion Garry Kasparov, and the world was stunned that a machine had won. For centuries, chess had been associated with outstanding cognitive capabilities. Furthermore, good chess players rely on intuition to anticipate their opponent’s moves. Yet the machine won and showed superhuman capabilities at playing chess. In hindsight, we must recognize that this was merely the result of a comparatively simple search algorithm combined with the simulation of millions of moves. Deep Blue did not apply any advanced artificial intelligence. Nevertheless, AI scored its first big win in the battles of wit between human and machine.
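The kind of search Deep Blue relied on can be illustrated with the classic minimax algorithm plus alpha-beta pruning. The sketch below is a toy illustration of that idea, not Deep Blue’s actual code: leaves of a tiny hand-built game tree hold position scores, and pruning skips branches that cannot change the result.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    # A node is either a leaf score (a number) or a list of child nodes.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizing opponent will avoid this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # prune: the maximizing player will avoid this branch
    return value

# Tiny hand-built tree: the maximizer picks among three minimizing nodes.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, -math.inf, math.inf, True))  # -> 3
```

Deep Blue applied this principle at enormous scale, searching millions of chess positions per second with handcrafted evaluation functions and custom hardware.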

 

Much more impressive was when Google DeepMind’s system AlphaGo beat Lee Sedol, one of the best Go players in the world, in a 2016 match. AlphaGo won four of the five games and received $1 million in prize money. Go experts did not expect AlphaGo to win at all because of the game’s extraordinarily high game tree complexity. Sometimes called the “Shannon number” (after Claude Shannon), game tree complexity refers to the number of possible games that could theoretically be played. The Shannon number for chess is estimated to be around 10^123. Although this is an insanely large number, many combinations have been simulated for chess.

 

In Go, a simple “brute force” approach to simulating all game combinations is impossible: The Shannon number is estimated to be around 10^360. (We’ll save you the trouble of reading a number with 360 zeros.) The main reason for this large difference in complexity is the board’s size. Chess is played on an 8 × 8 board, while Go is played on a 19 × 19 grid. The game dynamics also differ: A typical chess game involves about 40 to 60 moves per player, whereas a Go game can last around 200 to 300 moves per player.
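These estimates can be reproduced with a back-of-the-envelope calculation: A Shannon-style estimate raises the typical number of legal moves per position (the branching factor) to the power of the game’s length in plies. The branching factors and game lengths below are rough textbook figures (about 35 moves over 80 plies for chess, about 250 moves over 150 plies for Go), not exact values.

```python
import math

def shannon_estimate(branching_factor, plies):
    # Number of possible games ~ branching_factor ** plies; return the
    # base-10 exponent so the result stays readable.
    return plies * math.log10(branching_factor)

print(f"chess: ~10^{shannon_estimate(35, 80):.1f}")    # on the order of 10^123
print(f"Go:    ~10^{shannon_estimate(250, 150):.1f}")  # on the order of 10^360
```

The exponents alone make the point: multiplying Go’s branching factor a few more times per game already dwarfs the chess number by more than 200 orders of magnitude.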

 

The significance of a machine winning at Go can hardly be overstated: Humans now lose to machines at virtually every classic one-versus-one board game. And intuition, which we thought only humans possessed, is an essential part of winning at Go.

 

In the second game of the five-game match, the AI made a move that was completely incomprehensible to the experts. “Move 37” later became famous as one of the most significant moves in the history of the game of Go. AlphaGo placed a stone in a position on the board that no human would ever have chosen at that point in the game. The experts were unanimous: The AI had made a major mistake that would come back to haunt it. In fact, much later in the game, it turned out to be a brilliant move, one that experts would analyze for months to come. What’s more, this move proved that AI has intuition in its own way. After humans had played Go for thousands of years, an AI was able to present them with abilities far beyond their own.

 

At this point, we have an AI system able to beat the best Go players in the world. AlphaGo was at the top of the Go world. But just two years later, in 2017, AlphaGo lost 100 out of 100 games to a new opponent. What happened? Did humankind strike back? Not really. No human will ever hold the crown in Go again. What happened was evolution, more specifically AI evolution. AlphaGo lost to its offspring, a more advanced AI system called AlphaGo Zero.

 

AlphaGo Zero did not rely on studying gameplay from human games. It learned by playing against itself, and in a mere 21 days it reached the level of AlphaGo. After 40 days, AlphaGo Zero had exceeded all previous versions.
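The core self-play idea, start from random play, play against yourself, and reinforce the moves that led to wins, can be demonstrated on a much smaller game. The sketch below is our own toy illustration, not DeepMind’s method: It learns single-pile Nim (players alternate taking 1 to 3 stones; whoever takes the last stone wins) through self-play with simple Monte Carlo value updates.

```python
import random

def train_nim_selfplay(pile=10, episodes=20000, alpha=0.5, eps=0.1):
    # Q[s][a] estimates the outcome for the player to move in state s
    # (s stones left) after taking a stones.
    Q = {s: {a: 0.0 for a in (1, 2, 3) if a <= s} for s in range(1, pile + 1)}
    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            actions = list(Q[stones])
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=Q[stones].get)
            history.append((stones, a))
            stones -= a
        # The player who made the last move wins (+1). Rewards alternate
        # in sign going backwards, because the two players take turns.
        reward = 1.0
        for state, action in reversed(history):
            Q[state][action] += alpha * (reward - Q[state][action])
            reward = -reward
    return Q

Q = train_nim_selfplay()
best_move = max(Q[10], key=Q[10].get)
print(f"From a pile of 10, the self-taught policy takes {best_move} stone(s)")
# (mathematically optimal play takes 2, leaving the opponent a multiple of 4)
```

Both “players” here are the same value table, so every game improves the policy from both sides at once. AlphaGo Zero applied the same principle with deep neural networks and tree search instead of a lookup table.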

 

OK, you might think, at this point, AI systems can reach expert-level capabilities in very narrow fields, and you’re right! But could AI systems succeed in real-time, multiplayer games? Surely they are not capable of teamwork and setting dynamic objectives, right? Well, in 2019, a system from OpenAI called “Five” played the video game Dota 2. This system controlled five AI agents, one for each hero on a team. These agents had to coordinate seamlessly to execute quite complex tasks and strategies. The AI system had to make decisions 20 times per second, carefully balancing short-term and long-term tactics.

 

At a live event in 2019, “Five” played against OG, the international champions who had won “The International” a year earlier. The event took place over four days, during which the AI agents played close to 43,000 games and won a staggering 99.4% of them.

 

Even more impressive AI systems were developed that could play and win games like Diplomacy, a complex strategy board game in which negotiation, collaboration, and strategic planning are crucial to success. Meta developed a system called Cicero that excels at human-like communication and social reasoning.

 

AI systems started out in rather narrow domains, like chess and Go, and have since achieved mastery in complex environments that require far more skills than just being a solitary genius.

 

Where are we today, in 2025? AI systems can solve more and more cognitive tasks.

 

Editor’s note: This post has been adapted from a section of the book Generative AI with Python: The Developer’s Guide to Pretrained LLMs, Vector Databases, Retrieval-Augmented Generation, and Agentic Systems by Bert Gollnick. Bert is a senior data scientist who specializes in renewable energies. For many years, he has taught courses about data science and machine learning, and more recently, about generative AI and natural language processing. Bert studied aeronautics at the Technical University of Berlin and economics at the University of Hagen. His main areas of interest are machine learning and data science.

 
