The term AI emerged almost 70 years ago, at the dawn of the computer era, when the first attempts were made to use computers to solve problems that had previously been performed by people and classified as intellectual activity. The concepts of "intellectual activity" and "artificial intelligence" have shifted as computer technology, algorithms, and their implementation in program code have developed. In ancient times, almost any activity in which a person was not used as a source of mechanical energy was considered intellectual. With the advent of the computer, any activity involving computer use was considered intellectual, since only professionals were engaged in it. Today, it is difficult to regard the work of a driver, or of a courier using a smartphone, as intellectual. Conversely, half a century ago the ability of a buyer to check the change in a store was not considered a manifestation of intelligence; today it is.
Another aspect - the ability to conduct a dialogue with a person in that person's language - was until recently an undoubted manifestation of intelligence. Today, with the spread of LLMs ("Large Language Models") capable of producing fluent text in natural language, the understanding is gradually maturing that real intelligence requires not only the ability to recombine quotations from the corpus of texts on which a system like ChatGPT or Grok was "trained" (in effect, programmed), but also the ability to analyze the meaning of the text - and that current LLMs lack this ability. The assertiveness of the developers' advertising, and the practical absence of public testers/reviewers of these systems - counterparts of the Food and Drug Administration or the National Highway Traffic Safety Administration in the USA - slow the spread of this understanding. Yet Google already accompanies text generated by its LLM with the disclaimer "AI responses may include mistakes." This is a clear acknowledgment of a deficit in reasoning ability: it is difficult for the user to distinguish genuine reasoning from the ability to quote results of reasoning that were present in the "training" data sets and memorized by the LLM system. It is hard to imagine the public being so tolerant of such a disclaimer on the screen of a calculator or on the web page of their bank account, isn't it? Obviously, there is a demand for alternative systems that can guarantee the absence of absurdities, "hallucinations," and other errors; therefore, sooner or later, they will appear.
Changing expectations and ideas about the capabilities of computer systems change the corresponding terminology. Today, the public understands AI as LLMs based on artificial neural networks. In the 1970s, AI meant Prolog and expert systems. What do previous and current systems that claim to be intelligent have in common, and what can we expect from future systems - in particular, those that developers call AGI (Artificial General Intelligence)?
It is easy to see that the list of capabilities declared for future AI versions varies from developer to developer. This is natural - any new system implies a particular area of application, which leaves its mark. However, we will find even more differences if we analyze the list of what developers do not expect from their brainchildren; as a rule, such a list is absent and must be reconstructed by analyzing the omissions. We will try to do this by analyzing everyday situations that almost everyone has been in.
Situation: a car key, glasses, or lipstick is missing from its usual place. A search begins, in which the glasses are found on a shelf in the shower, the key in the pocket of jeans in the washing machine, and the lipstick under a seat in the car. The following details are essential for our analysis:
- No one commissioned us to do this search; the decision to undertake it arises from our own need for a result.
- Neither the Internet, nor books, nor knowledge gained in college can tell us where the missing item is: the required knowledge cannot be obtained from anyone who already has it.
Now, let's expand the range of situations.
In 2016, an octopus escaped from an Australian marine zoo ( https://www.facebook.com/watch/?v=1094299902221547 ).
In 1931, Kurt Gödel proved his fundamental incompleteness theorems ( https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems ).
Both cases share the characteristics listed above: the actions were not taken in response to an external command or instruction, and they required knowledge that could not be obtained from someone who already had it. This radically distinguishes the described actions of natural intelligent systems from the way LLM systems and earlier expert systems function: the actions of human-created AI systems must be initiated by the user, the purpose of those actions is also determined by the user, and all knowledge used by such a system has already been accumulated by humans.
Let's look at it from another angle. In the late 1950s, computer systems were developed that received a task to perform arithmetic and logical operations in the form of text in a language understandable to the user, checked the text received from a person for errors and ambiguities, performed the calculations, and returned the results to the user as text in a language understandable to the user. These systems - FORTRAN and COBOL - are used to this day. The similarity of principle with today's LLM systems, and the difference from natural intelligence, are apparent: the impossibility of acting on one's own initiative to satisfy one's own needs, and complete dependence on knowledge provided from outside.
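The "task as text in, check for errors, compute, answer as text out" loop described above can be sketched in a few lines. This is a hypothetical illustration in modern Python, not an excerpt from any actual compiler; the function name `evaluate` and the restriction to four arithmetic operators are assumptions made for brevity:

```python
import ast
import operator

# Permitted operations, fixed in advance by the human author of the system.
# Everything the system "knows" is supplied from outside, exactly as the
# text argues; nothing here was discovered by the system itself.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(text: str) -> str:
    """Receive a task as text, check it, compute, and reply as text."""
    try:
        tree = ast.parse(text, mode="eval")
    except SyntaxError:
        # A malformed task is reported back, not executed - the analogue
        # of a compiler rejecting erroneous source text.
        return f"error: cannot parse '{text}'"

    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported construct")

    try:
        return f"{text} = {walk(tree)}"
    except ValueError as exc:
        return f"error: {exc}"

print(evaluate("2 + 3 * 4"))   # prints "2 + 3 * 4 = 14"
print(evaluate("2 +"))         # prints "error: cannot parse '2 +'"
```

The point of the sketch is not the arithmetic but the architecture: the system never acts until given a task, and every rule it applies was put there by a person beforehand.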
Thus, two fundamental differences - autonomy of decision-making and the ability to acquire knowledge independently - distinguish natural intelligence from modern computer systems, even when the latter are classified as AI. The creation of a new computer system, regardless of whether its developers position it as an AI system, presupposes an explicit or implicit answer to the question of whether it will retain these differences from natural intelligence.
The essential point is that in all publicly known developments, the designed system is deprived of the ability to do anything other than what the user requires - a system deliberately deprived of autonomy. This, in turn, means it lacks the ability to find new knowledge and can only manipulate knowledge obtained from outside. Such a system can be far more productive and easier to use than FORTRAN, COBOL, or Prolog, developed half a century ago, but it is no more intelligent in its essence. A non-autonomous system cannot implement full-fledged intelligence, cannot find new knowledge, and becomes merely a tool of natural intelligence, helping to use existing knowledge effectively. This is useful, of course, but it does not meet the public's expectations, especially against loud promises to the contrary.