The term "neurofetishism" was first used, as far as we know, by the philosopher Markus Gabriel - and, naturally, in philosophical discussions; in this text, the term is used to denote the exaggerated expectations of artificial neural networks that underlie the Large Language Model (LLM) and the like
.
The unsuitability of LLMs, generative AI, and related neural network variants as a basis for AGI is becoming increasingly clear, and this raises two questions: first, what caused the emergence of neurofetishism, that is, the belief in magical properties of neural networks that supposedly make them fit to be the main element of AGI (Artificial General Intelligence); second, what path can actually lead us to the creation of AGI, and what obstacles lie on that path.
Causes of neurofetishism
As in most such cases, neurofetishism - which has gone so far that even explanatory dictionaries now suggest treating "neural network" as a synonym for "artificial intelligence" - is the result of a combination of several causes.
The most obvious cause is the overly aggressive advertising of expected results by the developers of neural network-based systems. These developments require enormous investments, which can only be obtained if equally impressive results are promised. Receiving such investments then works as positive feedback: the very fact that large sums are being invested strengthens society's confidence that the stated goals are achievable.
In turn, the vagueness of ideas about what AGI/Strong AI actually is helped convince investors that the declared goals are achievable: the abilities required of a system claiming this role are traditionally described in general phrases like "it can do everything that people can." This is, of course, a rather deceitful quasi-definition, since it is not meant to include the ability to lie cleverly, shirk work, make obvious logical errors, hide useful information, forget useful information, and so on. In other words, AI is expected to reproduce useful human abilities with higher efficiency while avoiding undesirable human traits - which, taken together, differs significantly from the primitivist definition "acts like a person."
The second argument, convincing for both the public and investors, is the belief that the neural networks under development are some kind of analog of the human brain, which is undoubtedly a natural neural network. On closer inspection, however, this belief turns out to be groundless. A natural neural network is not the brain but the entire nervous system, which contained no brain at all when it first appeared in the earliest animals (see the jellyfish in the picture above). Accordingly, while an artificial neural network can in principle be an analog of the brain, it can just as well be an analog of the peripheral or brainless nervous system, from which no one expects intellectual abilities, or an analog of a computer, which is also a network of elements exchanging signals (with the difference that those elements are more complex than artificial neurons and are not called neurons). To create an analog of the natural neural network of the brain, one would need to know the functionality of the dozens of different types of natural neurons and the structure of their connections - knowledge that science does not yet have; the neurons of today's artificial neural networks have practically nothing in common with natural neurons except that they form a network of a specific kind. And a network structure is, of course, not a distinguishing feature of neural networks. Thus, the argument about the similarity between artificial neural networks and the brain does not stand up to criticism.
The third cause is the lack of clarity about what constitutes "Artificial General Intelligence" among those directly involved in the relevant development. While the starting point for most developers is the thesis that the capabilities of an AGI system are similar to those of humans, the lists of specific requirements vary from developer to developer, and a typical feature is the absence of any critical analysis of whether systems meeting the chosen set of criteria already exist; if they do, the set of criteria obviously contradicts the implicit assumption that no AGI system exists yet. Finally, lists of requirements for a prospective AGI system are almost never accompanied by a description of how to verify that the required capabilities are actually present. A typical example: a requirement to understand natural language text is declared, but no tests are proposed that would distinguish the ability to interpret a text from the ability to search memory for a text similar to the given one among those read ("learned") earlier; any teacher knows that these are two different abilities. Ignoring the difference between interpretation and similarity search is the basis of the illusion of reasoning ability where only the ability to find similarity by pattern is really present; the meaningless results are then given the name "hallucination" as a fig leaf covering the absence of reasoning ability (an absence that obviously excludes such a system from being classified as AGI).
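To make the distinction concrete, here is a toy test of the kind the previous paragraph asks for; it is purely my illustration, not a proposal from this text, and all data and names in it are invented. A responder that recalls the most similar memorized answer looks competent on familiar questions, while a responder that actually interprets the question also handles a novel one.

from difflib import SequenceMatcher
import re

# Hypothetical "learned" question/answer pairs.
MEMORY = {
    "what is 2 + 2": "4",
    "what is 3 + 5": "8",
    "what is 10 - 7": "3",
}

def similarity_responder(question):
    # Recall the answer of the most similar memorized question.
    best = max(MEMORY, key=lambda q: SequenceMatcher(None, q, question).ratio())
    return MEMORY[best]

def interpreting_responder(question):
    # Actually parse the question and compute the result.
    m = re.search(r"(-?\d+)\s*([+-])\s*(-?\d+)", question)
    if not m:
        return "cannot interpret"
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str(a + b if op == "+" else a - b)

# A question similar in form to the memorized ones but with novel numbers
# separates the two abilities: recall returns a plausible-looking wrong
# answer, interpretation returns 579.
question = "what is 123 + 456"
print(similarity_responder(question))
print(interpreting_responder(question))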
The road leading to AGI
First of all, we need to define more precisely the set of requirements that would allow us to consider a system a general-purpose system possessing intelligence - and to define it in such a way that such a system can be objectively distinguished from systems that have no intelligence.
One of the fundamental differences between real intelligence (including AGI) and other sets of capabilities is the ability to develop rules of reasonable behavior in changing conditions autonomously, without teachers - who, in existing systems, are directly or indirectly people. An AGI system should be able to find reasonable solutions the way Robinson Crusoe did on an uninhabited island - without receiving any information from people. Moreover, a strong intelligence should be able to act reasonably even when receiving false statements from outside (like Sherlock Holmes, or any military commander whom the enemy is trying to deceive).
The requirement of the ability to learn autonomously is present in many versions of the definition of AGI, but in most cases the mediated presence of a human teacher - who has assembled the set of texts, images, and so on - is explicitly or implicitly assumed, which essentially disavows the notion of autonomy, replacing it with the ability to memorize rules and apply them. Two essentially different concepts are confused: the acquisition of knowledge constructed by people, and the independent construction of knowledge (the distinction between learning what the teacher already knows and independently discovering the unknown is ignored). The ability to acquire existing knowledge is inherent in intelligence, but it is obviously insufficient; any computer acquires knowledge represented in the source code of programs and puts it to useful work - yet no one thinks of calling every computer an AGI system for that reason.
AGI is expected to behave "human-like." However, behavior is not the same as reaction to external events. If a system is unable to act in the absence of external events, then it is pointless to talk about behavior - we should speak about reactions to external events. In turn, the ability to act proactively without external stimuli requires the presence of a mission - something for which the system acts both in the presence of external stimuli and in their absence.
All creatures we regard as having intelligence of one power or another are curious. Curiosity bears on both properties mentioned above: it drives the autonomous search for the information from which rules of behavior are autonomously formed, and it does so regardless of the presence of external stimuli.
Thus, an honest interpretation of the requirement that an AGI system be "as intelligent as a human" means requiring the ability to self-learn permanently, without using human-prepared data and even in the absence of external stimuli, driven by curiosity.
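As a minimal sketch of what this requirement implies (my own illustration under strong simplifying assumptions; the class and signals below are hypothetical and not taken from this text), consider an agent whose internal mission and curiosity keep it acting and building rules even when no external events arrive.

import random

class CuriousAgent:
    def __init__(self):
        self.rules = {}      # behavior rules built from the agent's own experience
        self.novelty = {}    # how often each situation has been encountered

    def act(self, observation=None):
        if observation is None:
            # No external stimulus: curiosity (the mission of exploring the
            # least familiar situation) still produces an action - behavior,
            # not a mere reaction.
            observation = min(self.novelty, key=self.novelty.get, default="start")
        self.novelty[observation] = self.novelty.get(observation, 0) + 1
        # Apply a self-made rule if one exists, otherwise try something new.
        action = self.rules.get(observation) or random.choice(["probe", "move", "wait"])
        return observation, action

    def learn(self, observation, action, outcome_was_useful):
        # Rules come from the agent's own trials, not from human-prepared data.
        if outcome_was_useful:
            self.rules[observation] = action

agent = CuriousAgent()
for _ in range(5):
    state, action = agent.act()  # keeps acting with no external input at all
    agent.learn(state, action, outcome_was_useful=random.random() > 0.5)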
Among already existing and widely used systems, automatic control systems come closest to these requirements. Moreover, the lion's share of the missing capabilities can be added using known algorithms. The one exception is the perception-concept gap (see AGI: PERCEPTION-CONCEPT GAP), which will require serious effort to overcome.
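Why automatic control systems come closest is easiest to see on the simplest feedback loop (again, only my illustration; the plant model and gain below are made up): the setpoint plays the role of a mission, and the loop keeps acting whether or not anything external changes.

def run_thermostat(setpoint=21.0, steps=50, gain=0.3):
    temperature = 15.0                    # initial state of the controlled "plant"
    for _ in range(steps):
        error = setpoint - temperature    # deviation from the mission
        heating = gain * error            # proportional control action
        # Plant dynamics: applied heating minus heat loss to a 10-degree environment.
        temperature += heating - 0.05 * (temperature - 10.0)
    return temperature

# Settles near (not exactly at) the setpoint - the classic offset of
# proportional-only control; no human-prepared data is involved.
print(round(run_thermostat(), 2))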
Obstacles to AGI
Practical obstacles
As a rule, the goal of a developer of a system that, in his view, can be classified as a variant of AGI is to create a product applicable to a particular area. Naturally, the first step on this path is a minimum set of capabilities that ensures the product's applicability (or a demonstration of potential applicability). When it turns out that a full-fledged AGI is not required for this product, while the way to implement the missing AGI elements is still unclear, the AGI development branch itself dies.
The second practical problem is the need to have both a team of developers with very different specializations, to implement the entire range of the system's capabilities, and a technology leader who is sufficiently well-versed in each of these areas and responsible for the system's architecture as a whole. A leader of this kind is extremely hard to find - especially among active and ambitious young specialists: as a rule, their high professionalism in one area is combined with poor grounding in others that are no less important, simply because acquiring even minimal experience in each area takes a lot of time. The result is what happened at Tesla, which changed its development teams at least three times with the same (in terms of achieving AGI) result in all three cases.
Investors add to the problem by assuming that the "silver bullet" is the use of the latest developments (and, accordingly, a leader who knows how to apply them), whereas the real key to success is the right combination of tools and approaches - the lion's share of which have long been known to specialists, sometimes for a very long time - together with a broadly educated leader.
Conceptual obstacles
The fundamentally irremovable problem is that, while requiring of AGI what a human can do, we explicitly or implicitly require the undesirable aspects to be excluded. A home assistant is required to be able to find the items needed for its work on its own, but not to take an interest in the contents of your wallet; to be able to use natural language, but not to take an interest in your bank statements and not to retell to your guests what you said about them in their absence; the list is potentially endless. The trouble is that the same components/algorithms/principles/abilities provide both the desired and the undesirable actions. That is, a system as smart as a human can lie, hide important information from its boss, help its master's enemies, and so on. Among people, these problems are solved by (explicit or implicit) agreements between them and by the existence of a system that enforces the fulfillment of mutual obligations. Applying this approach to AGI systems essentially means either recognizing for them the rights accepted in human society, or organizing a segregated (caste) social environment of free individuals and AGI slaves; each option obviously has plenty of opponents.
I have observed the long training period of Tesla Full Self-Driving from 2018 to now. In 2018 the car drove like an overconfident teenager and made many mistakes. Elon then switched to training data taken from very good drivers only. Now Tesla approaches human level with a nice smoothness. But it is not learning that on its own - rather, via highly tuned human data.