Most modern computer systems positioned as a form of artificial intelligence have a life cycle divided into a training stage and an operation stage. During training, a fixed body of information is formed that does not change afterwards; during operation, this information provides the capabilities for which the system was created. A consequence of the immutability of this information is the immutability of the system's behavior during operation. Depending on the system's purpose, such behavioral consistency can be either a virtue (for example, guaranteeing the absence of unexpected results or consequences) or a drawback. This chapter looks at a different aspect of the situation.
The immutability of behavior during operation means that it is impossible to distinguish instances of an artificial intelligence system by their behavior: the differences come down to the serial number and the personal name the user assigns. In other words, such systems are devoid of personality; replacing one instance with another changes nothing for the user.
The situation changes radically if the intelligent system is capable of self-learning during operation: capable of generating new concepts, discovering new dependencies between them, forming new skills, and ultimately capable of changing its behavior (which is, in essence, the point of self-learning). The result of self-learning is a gradual divergence in the behavior of system instances as they are used in different environments to perform different tasks, that is, the formation of personalities.
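This divergence can be illustrated with a minimal sketch. The agent below is a deliberately toy self-learner (an incremental action-value estimator, not any system described in this chapter); the class name, the two-action setting, and the reward schedules are purely illustrative assumptions.

```python
class BanditLearner:
    """A toy self-learning agent: tracks the average reward of each action."""

    def __init__(self, n_actions):
        self.values = [0.0] * n_actions   # estimated value of each action
        self.counts = [0] * n_actions     # how often each action was tried

    def update(self, action, reward):
        # Incremental mean: fold the new reward into the running estimate.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

    def best_action(self):
        # The agent's current behavior: pick the highest-valued action.
        return max(range(len(self.values)), key=lambda a: self.values[a])


# Two identical clones at the end of the training stage.
a = BanditLearner(2)
b = BanditLearner(2)

# Deployed in different environments: action 0 pays off where `a` operates,
# action 1 pays off where `b` operates.
for _ in range(100):
    a.update(0, 1.0); a.update(1, 0.0)
    b.update(0, 0.0); b.update(1, 1.0)

print(a.best_action(), b.best_action())  # → 0 1: the clones now behave differently
```

After identical starts, the two instances prefer different actions; in the chapter's terms, their accumulated experience has given them distinct "personalities."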
This has several consequences.
The obvious consequence is that replacing one instance of a self-learning system with another now means real changes for the user. Different instances of the same system, with different accumulated experience, have different values depending on how, where, and for what a particular instance is to be used, much as different specialists have different values for an employer depending on the experience they have accumulated. And since, unlike people, computer systems can be cloned, this opens up new business opportunities: training an intelligent system through operation in a specific environment on a particular category of tasks, and then selling its clones to clients.
New technological challenges and opportunities are also emerging.
The presence of several personalized instances of an intelligent system with different experience and, accordingly, different skills creates a natural desire to obtain a more advanced instance by transferring (or exchanging) knowledge and skills from one instance to another. Ensuring that knowledge and skills are transferable requires significant changes in the design of an intelligent system and is a non-trivial task.
Finally, communication between system instances is possible during operation, ensuring a continuous exchange of knowledge and skills among the systems of a group. This enables the collective accumulation of knowledge and skills, potentially faster and more effective than individual self-learning.
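One very simple form such an exchange could take, assuming the toy value-estimate representation of experience used above, is pooling two instances' statistics. This is only an illustrative sketch under that assumption: the `merge` function and the count-weighted averaging scheme are hypothetical, not a mechanism prescribed by the chapter.

```python
def merge(values_a, counts_a, values_b, counts_b):
    """Combine two instances' experience per action, weighting each
    instance's value estimate by how much evidence (count) backs it."""
    merged_values, merged_counts = [], []
    for va, ca, vb, cb in zip(values_a, counts_a, values_b, counts_b):
        c = ca + cb
        merged_counts.append(c)
        # Count-weighted average; 0.0 if neither instance tried the action.
        merged_values.append((va * ca + vb * cb) / c if c else 0.0)
    return merged_values, merged_counts


# Instance A tried both actions 100 times; instance B, 50 times each,
# in an environment with the opposite payoff.
v, c = merge([1.0, 0.0], [100, 100], [0.0, 1.0], [50, 50])
print(v, c)  # → [0.666..., 0.333...] [150, 150]
```

The merged instance behaves as if it had lived through both histories, which is the sense in which collective accumulation can outpace individual self-learning. For real systems, whose knowledge is not reducible to per-action statistics, the transfer problem is far harder, as the chapter notes.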
Self-learning is inherently associated with experimentation, that is, with taking actions whose results are knowingly unknown or unpredictable given current experience. The more opportunities for experimentation, the faster knowledge accumulates and skills form, but also the higher the risk of creating undesirable or unacceptable situations. Naturally, in any artificial system capable of self-learning, it is not difficult to provide a switch that disables the ability to experiment. A natural approach, therefore, is to allow self-learning with broad opportunities for experimentation in a training environment (a test site), and then to block experimentation (switch to a safe mode) during critical operation. Precisely this approach is implemented in nature: experimentation is suppressed when the time comes to raise offspring.
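The experimentation switch described above can be sketched concretely. The sketch assumes a standard epsilon-greedy action-selection scheme, where `epsilon` is the probability of trying a random action; the class and method names are illustrative, and setting `epsilon` to zero plays the role of the chapter's switch to safe mode.

```python
import random


class ExploringAgent:
    def __init__(self, values, epsilon=0.1):
        self.values = values      # learned action-value estimates
        self.epsilon = epsilon    # probability of experimenting

    def act(self, rng):
        # Experimentation: with probability epsilon, try a random action
        # whose outcome the agent cannot predict from experience.
        if self.epsilon > 0 and rng.random() < self.epsilon:
            return rng.randrange(len(self.values))
        # Safe behavior: exploit the best-known action.
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def enter_safe_mode(self):
        # The "switch": block experimentation entirely.
        self.epsilon = 0.0


agent = ExploringAgent([0.2, 0.9], epsilon=0.3)
agent.enter_safe_mode()

rng = random.Random(0)
actions = {agent.act(rng) for _ in range(1000)}
print(actions)  # → {1}: in safe mode the agent only exploits known experience
```

With `epsilon = 0.3` the agent would occasionally take unpredictable actions; after `enter_safe_mode()` its behavior is fully determined by accumulated experience, trading further learning for predictability, exactly the trade-off the test-site approach exploits.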