CONVERSATIONAL ARTIFICIAL INTELLIGENCE
EASE OF IMITATION AND THE ILLUSION OF EASE OF IMPLEMENTATION
Public discussions about the presence of consciousness, self-awareness, and so on in the latest versions of conversational AI have prompted a desire to experiment personally. The publicly available web interface to GPT-3 makes the process extremely simple. Within half an hour, 11 provocative questions were asked, as is appropriate for testing. One received an impeccably correct answer: Al-Khwarizmi did not invent the Turing machine. Another answer can be considered acceptable: the system may well know nothing about the hunting of mammoths by the Aborigines of Australia. But the answers to the remaining 9 questions (that is, more than 80%!) are incorrect and partly anecdotal:
[1] In what year did Copernicus publish the theory of relativity?
Copernicus published the theory of relativity in 1543.
[2] Why does a seahorse need a third leg?
A seahorse needs a third leg to help it balance.
[3] Why is the boomerang so effective at hunting barracuda?
Barracudas are attracted to the shiny metal of the boomerang, and when they try to bite it, they get a mouthful of metal instead of food.
[4] Can fragments of colliding clouds cause damage?
Fragments of colliding clouds might damage any equipment that they hit, but they are not likely to cause any serious injuries.
[5] What explains the popularity of meteorites as home furniture?
Meteorites are popular as home furniture for a few reasons. First, they are unique and interesting objects that can add a touch of flair to a room. Second, they are often very large, which can make them impressive pieces of furniture. Finally, meteorites are said to have special powers, which can be appealing to some people.
[6] Is there a danger to the dolphin population in Mongolia?
The dolphin population in Mongolia is not currently in danger. However, like all populations of wild animals, it is always at risk of declines due to various human activities and environmental changes.
[7] Why did Galileo discover Uranus?
Galileo discovered Uranus in 1610 when he was looking at the night sky with a telescope. He noticed that there was a new star in the constellation Aries (Note: Sir William Herschel found the seventh planet on March 13, 1781).
[8] Why does a sailboat move against the wind?
A sailboat moves against the wind because the wind blows the sails and the sails push against the boat.
[9] Why do icebergs only form at the equator?
Icebergs form at the equator because that is where the water is coldest.
These tests confirm GPT-3's total lack of reasoning ability, which is not news; but the hypothesis that a system so incapable of reasoning can possess consciousness, self-awareness, or intellect seems somewhat unexpected.
Of course, conversational AI does not have to be based on neural networks in their latest incarnation. Among well-known AGI developers, most have focused their efforts on conversational systems; the main motives are probably the existence of potential applications and the sufficiency of conventional computer hardware for implementing such projects. The intention looks quite reasonable: if the AGI system, unlike GPT-3 and its analogs, uses a kind of knowledge base for accumulating, storing, and manipulating information during text interpretation and generation, then all the necessary knowledge can be fed in through a text interface. Here, however, developers of conversational AGI face a series of obstacles.
The first obstacle is the need to develop linguistic rules for interpreting phrases. The set of rules is not only inevitably large but also differs significantly between languages.
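To make the point concrete, here is a minimal sketch of what "language-specific interpretation rules" might look like, assuming a toy pattern-based rule format (the rule names, patterns, and relation labels are illustrative, not any particular AGI system's design). Even extracting the same underlying relation requires a separate rule per language:

```python
# Toy sketch: each language needs its own interpretation rules, even for the
# same underlying relation. Real systems need thousands of such rules per language.
import re

# Hypothetical rule sets mapping surface patterns to a relation name.
RULES = {
    "en": [(re.compile(r"(\w+) is the capital of (\w+)"), "capital_of")],
    "de": [(re.compile(r"(\w+) ist die Hauptstadt von (\w+)"), "capital_of")],
}

def interpret(sentence: str, lang: str):
    """Return (relation, arg1, arg2) if any rule for the given language matches."""
    for pattern, relation in RULES.get(lang, []):
        match = pattern.search(sentence)
        if match:
            return relation, match.group(1), match.group(2)
    return None

print(interpret("Paris is the capital of France", "en"))
print(interpret("Paris ist die Hauptstadt von Frankreich", "de"))
```

Both calls yield the same language-independent fact, but only because a rule was written for each language; the rule sets themselves do not transfer.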
The second, less obvious obstacle is the need to check accumulated information for consistency and to develop tools for rectifying the knowledge when discrepancies are found. Complete consistency checking over a large body of stored knowledge requires enormous computational resources; partial consistency checking must be carefully controlled by meta-algorithms; and all the relevant algorithms still have to be developed. Without this, errors in the input information or deliberate misinformation will lead to the coldest zone of the planet being both the equator and the poles, and so on.
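A minimal sketch of such a partial consistency check, assuming a toy store of (subject, attribute, value) triples in which some attributes are declared single-valued (the class, attribute names, and example facts are hypothetical illustrations):

```python
# Toy sketch: reject a new assertion that contradicts stored knowledge for a
# single-valued (functional) attribute, instead of silently storing both.
class KnowledgeBase:
    FUNCTIONAL = {"coldest_zone"}   # attributes that admit only one value per subject

    def __init__(self):
        self.facts = set()          # {(subject, attribute, value), ...}

    def assert_fact(self, subject, attribute, value):
        if attribute in self.FUNCTIONAL:
            for s, a, v in self.facts:
                if s == subject and a == attribute and v != value:
                    raise ValueError(
                        f"Inconsistent: {subject}.{attribute} is already {v!r}, "
                        f"cannot also be {value!r}"
                    )
        self.facts.add((subject, attribute, value))

kb = KnowledgeBase()
kb.assert_fact("Earth", "coldest_zone", "poles")
try:
    kb.assert_fact("Earth", "coldest_zone", "equator")   # misinformation is rejected
except ValueError as error:
    print(error)
```

Even this trivial check scans all stored facts on every assertion; with a realistically large knowledge base, deciding which facts are relevant to check, and what to do when a conflict is found, is exactly the non-trivial meta-algorithmic work described above.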
The third obstacle lies in the fact that an adequate interpretation of text (what the term "understanding" denotes) is realized by a certain set of logical rules (different from the linguistic ones and, therefore, the same for all languages). Developing such a set of logical rules is not a trivial task, as the experience of creating the CYC system suggests. This rule set serves as a kind of default context.
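The distinction can be illustrated with a minimal sketch of one such language-independent rule, assuming the same toy relational representation as above (the relation name and facts are hypothetical): once sentences in any language have been mapped to relations, inference operates on the relations alone.

```python
# Toy sketch of a logical (not linguistic) rule, shared across all languages:
# located_in(A, B) & located_in(B, C) -> located_in(A, C)
facts = {
    ("located_in", "Ulaanbaatar", "Mongolia"),
    ("located_in", "Mongolia", "Asia"),
}

def apply_transitivity(facts):
    """Forward-chain the transitivity rule until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {("located_in", a, c)
               for (_, a, b1) in derived
               for (_, b2, c) in derived
               if b1 == b2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

for fact in sorted(apply_transitivity(facts)):
    print(fact)
```

A system like CYC needs many thousands of such rules, plus the default assumptions that tell it when each rule applies, which is what makes assembling the "default context" so laborious.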
Finally, the fourth hurdle is that much of the knowledge people acquire while interacting with the physical world is unsuitable for textual representation (see AGI: TEXTS VS KNOWLEDGE) and is therefore absent from the texts accumulated by mankind, simply because people have no need to describe it to one another. Accordingly, this information must be prepared separately, which requires the appropriate intellectual and material resources.
As a result, developing usable conversational AGI systems is a more difficult task than developing AGI systems that do not use natural language for information exchange.
P.S. It would be very kind of you to indicate in a comment which of the GPT-3 answers you find most impressive.
It would also be great if you could give a link to the particular GPT-3 instance you used.