Eighty years ago, the first electronic computers were designed, mainly for military use. At that time, computers built from the same electronic components fell into two significantly different classes: analog and digital. Each type had its advantages and disadvantages, and for some time they "coexisted peacefully", each with its own application areas.
What was the design difference? An analog computer was a network of elements, each of which received several input signals (voltages) and, according to a specific "rule", generated an output signal (voltage) that in turn served as an input signal for other network elements. The "rules" of generation were determined by the parameters of the element's electronic components. By choosing the network structure and the parameters of the network elements, one could ensure that, given a specific signal at the input, the required signal appeared at the output. That is, from a mathematician's point of view, an analog computer is a physical implementation of a particular mathematical function.
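To make this concrete, here is a minimal sketch (component values are purely illustrative) of a single analog element - an RC integrator - simulated digitally. Its behavior is a mathematical function fixed entirely by its physical parameters R and C:

```python
# Digital simulation of one analog element: an RC low-pass filter.
# The element's "rule" is hard-wired by its physical parameters R and C:
#   dV_out/dt = (V_in - V_out) / (R * C)
# Values below are illustrative; any analog element follows the same pattern.

def rc_element(v_in, r=1_000.0, c=1e-6, dt=1e-5, v0=0.0):
    """Return the output-voltage trace produced by an input-voltage trace."""
    tau = r * c
    v = v0
    out = []
    for x in v_in:
        v += (x - v) * dt / tau   # the element's fixed "rule" of generation
        out.append(v)
    return out

# Step input: the output exponentially approaches the input level - behavior
# dictated by the hardware parameters, not by any stored program.
trace = rc_element([1.0] * 5000)
print(round(trace[-1], 3))  # → 1.0 (after ~50 time constants)
```

Changing what such an element computes means changing R or C - that is, reworking the hardware, exactly as described for analog computers below.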
Analog computers were created to control missiles, aircraft, engines, and so on. The life cycle of an analog computer looked like this: mathematical analysis of the control system for a particular object yielded a control function that received signals from sensors at its input and issued signals to the controls at its output. This function was then implemented as an analog computer installed on the controlled object. Accordingly, making changes meant reconfiguring or reworking the entire analog computer.
Digital computers were created as electronic analogs of mechanical calculators for processing digital data: accounting information, statistical data, and mathematical calculations that follow rules too complex to reduce to a single function - for example, calculating the trajectories of space objects. Here the result of the calculation was determined not by the design of the computer but by the sequence of arithmetic operations. This naturally required a way to specify both the numerical information to be processed (the input data) and the sequence of operations (the program). Therefore, an integral part of a digital computer is a device that stores, updates, and retrieves the input information, the sequence of actions, intermediate data produced during the calculation, and the final results.
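The stored-program idea described above can be sketched as a toy machine (the instruction set is invented for illustration): memory holds both the data and the sequence of operations, and changing the computation means changing the memory contents, not the hardware.

```python
# A toy stored-program machine: memory holds both the program and the data;
# the processor only fetches and executes. The instruction set is invented
# purely for illustration.

def run(memory):
    pc, acc = 0, 0                       # program counter and accumulator
    while True:
        op, arg = memory[pc]
        if op == "LOAD":    acc = memory[arg]
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "HALT":  return memory
        pc += 1

# Program in cells 0-3: compute memory[11] = memory[10] + memory[10].
mem = {0: ("LOAD", 10), 1: ("ADD", 10), 2: ("STORE", 11), 3: ("HALT", None),
       10: 21, 11: 0}
print(run(mem)[11])   # → 42; to change the result, change memory, not wiring
```

The contrast with the RC element is the whole point: here the "rule" is data, retrieved from the same store that holds the numbers being processed.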
The key design difference between these two families of computers is that an analog computer lacks the digital computer's division into a memory, which stores data and procedures, and a processor, which processes data extracted from memory and deposits the results back into it. Information in an analog computer - both the current data, in the form of voltages, and the order of actions, in the form of the structure of connections and the parameters of network nodes - is "spread out": it belongs to the "distributed memory" class. The terminological difference between analog and digital computers reflects only how information is represented - as numbers or as voltages and electronic-component parameters (capacitance, resistance, inductance, ...) - but masks the deeper difference: the presence or absence of a division of the system into memory and processor.
The design difference described above has radical consequences for capabilities: a digital computer can not only calculate everything an analog one can, but can also solve problems of a type that an analog one is fundamentally incapable of - for example, combinatorial problems that cannot be reduced to calculating a predetermined set of values. This difference, coupled with the radical reduction in cost and size of digital computers, meant that by the 1970s analog computers had fallen out of use, giving way to digital ones. Many modern IT professionals simply do not know what an analog computer is.
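One standard example of the combinatorial class mentioned above (chosen here for illustration) is subset sum: deciding whether some subset of numbers adds up to a target requires a branching search over discrete alternatives, not the evaluation of a single fixed function of the inputs.

```python
# Subset sum by backtracking: a combinatorial search over discrete choices.
# The answer emerges from exploring alternatives, not from evaluating one
# fixed input-to-output function - the kind of task the text says an analog
# computer cannot express.

def subset_sum(nums, target):
    """Return True if some subset of nums sums exactly to target."""
    if target == 0:
        return True
    if not nums:
        return False
    head, tail = nums[0], nums[1:]
    # Branch: either include the first number or skip it.
    return subset_sum(tail, target - head) or subset_sum(tail, target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # → True  (e.g. 4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # → False
```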
As noted above, a digital computer is capable of simulating the operation of an analog computer; that is, it can serve as a digital model of an analog one. Now compare the description of an analog computer with a modern neural network (LLM, GPT, etc.). It becomes evident that these neural networks are nothing more than digital models of analog computers, with all the shortcomings inherent in analog computers - the very shortcomings that led to the "extinction" of the analog ones.
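The structural parallel can be made concrete. A single artificial neuron, like the analog element sketched earlier, produces its output from weighted inputs through a rule fixed by its parameters; the weights play the role of component values (the numbers below are illustrative):

```python
import math

# One artificial neuron as a digital model of an analog network element:
# inputs are weighted, summed, and passed through a fixed nonlinearity.
# The "program" is spread across the parameters (weights), just as an
# analog computer's behavior is spread across its component values.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid "rule" of generation

# Illustrative parameters: changing what the neuron computes means
# re-tuning the weights, i.e. reworking the distributed structure.
print(round(neuron([1.0, 0.5], [2.0, -1.0], 0.0), 3))  # → 0.818
```

There is no separate store of data and procedure here: the network's behavior lives in its connection structure and parameters, which is exactly the "distributed memory" arrangement described above.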
In turn, this allows us to draw two conclusions. Firstly, it suggests why the natural neural networks of humans and the most intelligent animals have significant advantages over the brains of primitive animals and over modern artificial neural networks: evolution divided the functionally homogeneous structure of primitive neural networks - typical of primitive animals and of analog computers - into memory and processor, as in digital computers. In natural neural networks, as in artificial ones and in human-made computers, the same elementary components can be assembled into either an analog or a digital computer.
Secondly, this indicates the direction for improving systems based on neural networks: the transition from homogeneous neural structures (distributed memory) to those divided into an information storage subsystem and an information processing subsystem.
Finally, let's imagine that the developers of neural network systems managed to design a neural network in which the division into memory and processor is implemented. What would this mean in practice? It would mean that instead of a digital model of an analog computer, such a neural network would be a digital model of a digital computer. The question immediately arises: why use a digital model of a digital computer rather than the digital computer itself? It is difficult to find arguments in favor of this.
Accordingly, it becomes clear that to build intelligent systems with capabilities exceeding current LLMs, one must develop the essential component of any advanced system divided into an information storage subsystem and an information processing subsystem: information processing algorithms. That is precisely the subject of the activities of those "fathers of AI" who coined the term "Artificial Intelligence". An attempt to find an easy way to build human-level AI by scaling an analog computer to the size of the Cheops pyramid is doomed to failure. As we can see, what is required is a more intelligent approach - the development of intelligence algorithms.