A control system - and this includes AGI systems - is permanently searching for an answer to the question "what to do": it is either developing a decision on an action (or on inaction, as a particular kind of action) or generating a plan of action. Naturally, the basis for this is prediction, which in any complex environment requires predicting the actions of other active objects - people, machines, animals, etc.
The most straightforward way to account for the activity of other objects when making decisions is to assume that each object will keep doing what it is observed doing at the moment - moving at the same speed in the same direction, and so on. This is usually acceptable, but only for a short time. It is more reasonable, of course, to allow for the fact that the intentions of active objects in the environment can change.
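As a minimal sketch of that naive baseline (assuming a simple 2-D kinematic setting; all names and values are illustrative):

```python
import numpy as np

def extrapolate_constant_velocity(position, velocity, dt, steps):
    """Predict future positions assuming the object keeps its current velocity.

    This is the naive short-horizon baseline: it ignores any change of
    intention, so it degrades quickly as the horizon grows.
    """
    position = np.asarray(position, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    return [position + velocity * dt * k for k in range(1, steps + 1)]

# Example: a pedestrian at (0, 0) walking at 1.4 m/s along x,
# predicted over the next 2 seconds in 0.5 s increments.
print(extrapolate_constant_velocity([0.0, 0.0], [1.4, 0.0], dt=0.5, steps=4))
```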
Naturally, the intentions of other actors cannot be known in the vast majority of cases. A car's autopilot does not know the intentions of a pedestrian, of a deer at the side of the road, and often of the driver in the next lane; the pilot of a military aircraft does not know the intentions of the enemy pilot. The situation looks like a demand for the impossible: to predict the unpredictable.
The first thing a statistician will suggest in this situation is to use statistical information. In practice, this path is rarely applicable because it requires far more data than is actually accessible: one would have to record not only the outcomes of certain situations (for example, accidents) but also the many factors that led to them. And even when the necessary information is available, it is not evident that assuming the object will choose the action that was most likely in the past is the best option.
There is another possibility, based on the natural fact that the resources available to the object whose actions we wish to take into account are always limited. A person cannot run faster than about 12 m/s; a car has a limited turning radius, its speed is limited by the curvature of its trajectory, its acceleration and braking are limited by the coefficient of tire friction on the road surface, the capacity of its gas tank limits the driving range without refueling, and so on. Therefore, for known types of objects, it is possible, starting from the current parameters of the situation, to estimate which positions/states can be reached within the time for which the forecast is made. In control and optimization theory, such a set is called the reachable set, or feasibility region. This information makes it possible to separate possible scenarios from those that are impossible due to the limited capabilities of the actors.
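A sketch of how such a bound can be computed in the simplest case, assuming the only known limit is a maximum speed (the resulting disk is a deliberately conservative over-approximation of the true reachable set):

```python
import numpy as np

def reachable_disk(position, v_max, horizon):
    """Outer bound on the reachable set over `horizon` seconds.

    If an object's speed cannot exceed v_max, every position it can reach
    lies inside a disk of radius v_max * horizon around its current position.
    Tighter bounds would also account for current velocity, turning radius,
    friction limits, etc.
    """
    return np.asarray(position, dtype=float), v_max * horizon  # (center, radius)

# A runner capped at 12 m/s can be at most 36 m away after a 3 s horizon.
center, radius = reachable_disk([5.0, -2.0], v_max=12.0, horizon=3.0)
print(center, radius)
```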
This allows for more intelligent planning of actions without making any assumptions about the intentions of other actors. For example, one can exclude from consideration those objects that are clearly incapable of interacting with the given intelligent system within the "planning horizon."
To do this, the controlled system must know the limits of its own capabilities and build its own reachability region. Depending on its intentions, in some cases it may be desirable for its reachability region to have a non-empty intersection with that of another object, and in other cases the opposite. Either way, the analysis reduces to finding overlaps between the objects' reachability regions.
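Using the same disk-shaped over-approximation as above, the overlap test behind both the filtering of the previous paragraph and this interaction analysis could look like the following sketch (all names are illustrative):

```python
import numpy as np

def disks_intersect(center_a, radius_a, center_b, radius_b):
    """True if two disk-shaped reachable-set bounds overlap.

    If the outer bounds do not overlap, the two objects provably cannot meet
    within the planning horizon and the other object can be dropped from
    consideration; if they do overlap, a closer analysis is needed.
    """
    distance = np.linalg.norm(np.asarray(center_a, dtype=float) -
                              np.asarray(center_b, dtype=float))
    return distance <= radius_a + radius_b

# A vehicle 80 m away that can cover at most 30 m, versus our own 20 m bound:
# the bounds do not overlap, so it cannot interact with us within this horizon.
print(disks_intersect([0.0, 0.0], 20.0, [80.0, 0.0], 30.0))  # False
```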
For those who have had a chance to observe the behavior of animals in the wild, it will come as no surprise that intelligent animals (including birds and mammals) use the described approach in one form or another, experimentally accumulating information about the reachability regions of the species they live alongside.
Learning from experience is also possible in the case of AI/AGI; in those cases where the system is pre-equipped with knowledge that allows it to identify the type of an object, the corresponding information can be included among the attributes of the object's class.
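One hypothetical way to attach such information to an object class, assuming a simple catalogue of per-type capability limits (the values are placeholders, not measured data):

```python
from dataclasses import dataclass

@dataclass
class ObjectClass:
    """Capability limits stored alongside the other attributes of a class."""
    name: str
    v_max: float  # maximum speed, m/s
    a_max: float  # maximum acceleration, m/s^2

# Illustrative catalogue; once an observed object is identified as belonging
# to a class, these limits parameterize its reachable-set bound.
CATALOGUE = {
    "pedestrian": ObjectClass("pedestrian", v_max=12.0, a_max=3.0),
    "passenger_car": ObjectClass("passenger_car", v_max=60.0, a_max=8.0),
}
```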
Naturally, the use of reachability estimates is limited to situations where the type of the object is known. However, the "two sides of the same coin" rule also applies here: by observing which possibilities an object actually realizes, one can narrow the range of candidate identifications for it.
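A sketch of that narrowing step, assuming the same placeholder capability limits as above: any candidate type whose limits are exceeded by the observed behavior is ruled out.

```python
# Illustrative limits per candidate type: (max speed m/s, max acceleration m/s^2).
LIMITS = {"pedestrian": (12.0, 3.0), "passenger_car": (60.0, 8.0)}

def consistent_types(limits, observed_speed, observed_accel):
    """Keep only the types whose capability limits are not exceeded by what
    has actually been observed: realized behavior narrows identification."""
    return [
        name
        for name, (v_max, a_max) in limits.items()
        if observed_speed <= v_max and observed_accel <= a_max
    ]

# An object observed moving at 25 m/s cannot be a pedestrian.
print(consistent_types(LIMITS, observed_speed=25.0, observed_accel=2.0))
# -> ['passenger_car']
```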
The approach itself is very versatile: depending on the type of environment in which the AI/AGI system operates, the reachability region need not be defined over physical space alone; it can also be formulated in terms of temperature, voltage, or even such abstract notions as the level of awareness.
Mathematics lovers can acquaint themselves with the technique behind this approach in Rufus Isaacs's classic monograph, "Differential Games."