When we formulate the goal of developing AGI, we usually mean building an AI system that can do what humans, the embodiment of natural intelligence, can do. The specific lists of abilities offered by different authors vary in substance and terminology, but those differences are not essential here. What matters is that the next step in an AGI development plan often sounds like this: human intelligence resides in the brain, so it is natural to focus on how the natural brain works and, through a kind of reverse engineering, obtain the desired AGI.
This approach certainly has a right to exist. However, rational decision-making involves comparing options and choosing the best one available, and in the implicit absence of any alternative to the brain as a model for reverse engineering, the decision to focus on the brain seems obvious (see Obvious choice).
The practical implementation of this approach immediately runs into insufficient knowledge of how the brain functions. Neuroscientists' research results do not give AGI developers enough to obtain the desired outcomes. The brain, as an assembly of biological components, is extremely complex; it is difficult to separate the functions that maintain its vital activity from the functions of storing and processing information, and modeling everything at once not only looks unattainable with today's capabilities but is also plainly inefficient.
A consequence of this approach is that the most advanced AI systems claiming to be AGI prototypes are based on artificial neural networks. Yet they borrow from the natural brain only the basic idea of a network of neurons, ignoring both the evident differences between artificial and biological neurons and essential features of the natural network's structure (its continual rewiring, the absence of division into layers, etc.). As a result, the capabilities of these AI systems remain far from those of natural intelligence.
Therefore, there is reason to look for another prototype of general intelligence as a subject for reverse engineering - one that has been studied better than the human brain.
And such prototypes exist! They are used everywhere, and the corresponding systems are no less natural than the human brain. As we know, the brain (together with the rest of the nervous system) is a system for controlling the actions of an individual person. The sought-after alternative prototype of AGI is a system for managing armies, businesses, election campaigns, etc. Until relatively recently, the intellectual processes in such systems relied exclusively on natural human brains to process information; that is, these are entirely natural intelligent systems, not inferior in intellectual ability to individual brains (allowing for the variability of both, of course) and often beyond the capabilities of any single person.
These control systems are well structured: the functions of their structural units are known, as are the algorithms for processing information and making decisions - precisely the things that neuroscientists have studied only superficially at the level of the individual brain. In addition, all such control systems, despite their differing spheres of activity, have very similar structures, dictated by the specifics of intelligence as such rather than by the field of application, with minimal influence from humans' biological basis.
Moreover, this approach is already implemented de facto in many projects whose goal is not some universal version of AGI but a control system that provides autonomous functioning for a specific application. The "generality" of these systems lies not in universal code but in a universal architecture - which is, in fact, the idea of AGI. Just as with cars, espresso machines, boots, airplanes, etc., a "one size fits all" approach is unsuitable, even though the components and design principles are similar.
What prevents such systems from being considered versions of AGI? The notorious perception-concept gap, discussed in several previous chapters. Incorporating this missing component into an advanced control system would turn it into a form of AGI.