The 15th annual AGI Conference was held from August 19 to 22 ( http://agi-conf.org/2022/ ). Yet among its reports you will not find an account of a system that can credibly claim to be AGI.
Naturally, the question arises: why?
As usually happens when expectations are not met, there are several reasons at once. Below are those that seem most significant.
It is hard to find an application that obviously requires AGI.
The complexity of the actions performed by automated systems keeps growing: they assemble cars and electronics, sort garbage, control spacecraft, power grids, and factories, recognize faces and cats, suggest the correct spelling of words and phrases, and search for unknown chemical compounds. Yet in every case a system that does not change its behavior based on experience is sufficient. This is precisely what ultimately distinguishes AGI from the systems now classified as narrow AI; the remaining differences come down, one way or another, to the complexity of pre-programmed behavior. In predetermined situations, narrow systems do what they were created for, but they cannot respond sensibly to unforeseen ones.
The sufficiency (or illusion of sufficiency) of predefined behavior in practically useful systems means, first, that AGI developers find it difficult to obtain funding, and second, that it is hard to find test or demonstration tasks that showcase AGI's usefulness. As a result, few attempts to develop a true AGI are made, and the ongoing projects prioritize practical usefulness over meeting the requirements of AGI, with the corresponding consequences.
Misconceptions about how intelligence works.
When the term AGI was coined, many (if not most) AI developers believed that neuroscience, psychology, statistics, and logic together gave a clear picture of how human intelligence works, and that it remained only to implement this understanding in program code and run it on sufficiently powerful computers. It turned out, however, that the common understanding of intelligence failed the Richard Feynman test ("What I cannot create, I do not understand").
The models proposed by both neuroscience and psychology turned out to be descriptive, superficial, and inapplicable in practice. As a result, AI development came to resemble the fairy-tale quest to "fetch that, I know not what." To this day we have no constructive definition of "AI system" that answers whether a particular system is a kind of AI or not. There is not even the weaker "negative" definition that would list the requirements disqualifying a system as AI. Consequently, there is no generally accepted way to decide when a system should be classified as narrow AI and when as AGI / strong AI / human-level AI, nor even to answer whether a given system is a variant of narrow AI or not AI at all. No clear understanding, no precise terminology.
Artificial neural networks that implement the models proposed by neuroscientists do not work as expected. At the level of individual elements the resemblance is more or less there, but the system as a whole turns out to be radically different from its natural prototype. Moreover, each new and more practical modification has drifted further from what neuroscientists actually observed and from what AGI requires. Systems based on neural networks, helpful as they are, are unsuitable for continual learning, cannot modify behavior without damaging existing skills (catastrophic forgetting, illustrated below), and offer no way to analyze how a result was obtained. They are therefore unsuitable as the foundation of AGI, although they can serve as components of a much more complex system.
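To make the forgetting point concrete, here is a minimal sketch (my own illustration, assuming PyTorch; the two toy tasks are invented for the demo): a small network is trained on one task, then on a second, and its accuracy on the first collapses because nothing protects the weights that encoded the earlier skill.

```python
# Minimal catastrophic-forgetting demo (illustrative, not from the article).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two toy binary-classification tasks over 2-D points.
# Task A: label = 1 if x0 > 0; Task B: label = 1 if x1 > 0.
def make_task(axis, n=512):
    x = torch.randn(n, 2)
    y = (x[:, axis] > 0).float().unsqueeze(1)
    return x, y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return ((model(x) > 0).float() == y).float().mean().item()

xa, ya = make_task(0)
xb, yb = make_task(1)

train(xa, ya)
print("Task A accuracy after training on A:", accuracy(xa, ya))  # ~1.0

train(xb, yb)  # sequential training on B, with no rehearsal of A
print("Task A accuracy after training on B:", accuracy(xa, ya))  # drops toward chance
print("Task B accuracy:", accuracy(xb, yb))
```

Nothing in plain gradient descent preserves task A; the same updates that learn B freely overwrite it, which is exactly the obstacle to permanent learning mentioned above.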
The models proposed by psychologists were used not to implement what is under the hood but to imitate what is observable from the outside.
Statistical methods of analysis proved unsuitable for many of the functions AGI requires, including finding causal relationships and discovering previously unknown objects.
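A one-cell illustration of the causal limitation (my construction, using NumPy): two opposite causal stories can be tuned to produce identical correlations, so correlation analysis alone cannot distinguish them.

```python
# Correlation is symmetric: it cannot tell X -> Y from Y -> X,
# let alone from a hidden common cause. (Illustrative example.)
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# World 1: X causes Y.
x1 = rng.normal(size=n)
y1 = 2 * x1 + rng.normal(size=n)

# World 2: Y causes X, tuned to match the same second moments.
y2 = rng.normal(scale=np.sqrt(5), size=n)
x2 = 0.4 * y2 + rng.normal(scale=np.sqrt(0.2), size=n)

# Both worlds yield (essentially) the same correlation, ~0.894.
print(np.corrcoef(x1, y1)[0, 1])
print(np.corrcoef(x2, y2)[0, 1])
```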
To apply logic - which is obviously necessary - one must formalize what it operates on (objects, tasks, goals, and so on). When a system is hard-coded before deployment, a human does this formalization. In AGI, however, formalization is needed for concepts and objects that the system itself creates, and this remains an unsolved problem.
The complexity of intelligence.
This obvious circumstance means that developers' attempts to implement their plans keep revealing previously unknown problems, or the inapplicability of the methods they intended to use. For example, until recently it was generally accepted that causal relationships could be discovered with classical statistics. Today it is gradually becoming understood that detecting unknown objects/structures/concepts requires searching for invariant functions and, like the search for cause-and-effect relationships, is inherently a combinatorial problem - the space of candidate causal structures alone grows super-exponentially, as the sketch below shows.
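The combinatorial nature of the structure search is easy to quantify: the number of directed acyclic graphs (candidate causal structures) over n variables follows Robinson's classical recurrence and explodes almost immediately. A short sketch:

```python
# Count labeled DAGs on n nodes via Robinson's recurrence:
#   a(n) = sum_{k=1..n} (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a(n-k)
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

for n in range(1, 9):
    print(n, num_dags(n))
# 1 -> 1, 2 -> 3, 3 -> 25, 4 -> 543, 5 -> 29281, ...
# already ~7.8e11 candidate structures at n = 8
```

Exhaustively testing every structure is hopeless even for a dozen variables, which is why naive statistical search cannot solve the problem.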
On the practical side, the importance of a universal knowledge representation - one suitable for different types of knowledge and for various kinds of information sources (physical and virtual sensors) - was long underestimated.
The complexity of intellectual processes also requires developers to be familiar with many loosely related fields, such as statistics, control theory, programming, computer vision, and linguistics. Gaps in this breadth keep developers from achieving what they set out to do.
Developers' orientation toward natural language as an essential element of the AI system.
Demand for systems that communicate with users in natural language has steered many AGI developers toward the corresponding application areas. This turned out to be a trap. The complexity of natural language makes such systems expensive to develop, diverting effort and resources away from the rest of the system's components. It is also often ignored, explicitly or implicitly, that language is a tool for exchanging information, and that interpreting a text requires background knowledge lying outside the text itself. As a result, instead of an intelligent system that uses language only to exchange information, what gets built is a system that uses language as its knowledge store, retrieving information on request without regard to the meaning of the text - a failure mode caricatured in the sketch below.
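As a deliberately crude caricature of this failure mode (an invented toy example, not any particular system): retrieval by word overlap matches the surface of a text while ignoring what it asserts.

```python
# Keyword retrieval treats language as storage, not as meaning.
documents = [
    "A whale is not a fish; it is a mammal.",
    "A shark is a fish.",
]

def keyword_score(query: str, doc: str) -> int:
    # Count shared words: no syntax, no negation, no background knowledge.
    tokens = doc.lower().replace(";", " ").replace(".", " ").split()
    return len(set(query.lower().split()) & set(tokens))

query = "is a whale a fish"
for doc in documents:
    print(keyword_score(query, doc), doc)
# The sentence that *denies* the whale is a fish scores highest (4 vs 3):
# the overlap matches the text, not what the text means.
```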
Results of two decades.
Although the goal of creating AGI has not been reached in two decades, progress is being made. First, it has become clear which approaches cannot serve as the core of AGI. Second, there is now an understanding of which subsystems a fully functional AGI needs and which techniques suit them. None of this, of course, rules out the discovery of new problems along the way.