The chapter AGI: WHERE EVENTS COME FROM describes how discontinuity points of time-dependent situation parameters can act as "event generators." In a continual environment, the special role of discontinuity points extends to spatial parameters, turning this feature into a universal principle.

This is entirely true of the processing of visual information. Object image outlines are a crucial element of visual scene analysis (see ARTIFICIAL GENERAL VISION) and are, by definition, sets of luminance/color discontinuities. The widely used *edge filters* are designed to be discontinuity detectors. Considering the image as a *continuous field* of brightness/color, mathematical *filtering operators* reflect the degree of discontinuity *without explicitly declaring it*. For puritanical mathematicians, using operators that assume continuity over the entire domain in situations with known discontinuities is obviously incorrect. Still, in practice, it proves to be quite a functional approach.

However, the result of edge detection by applying edge filters has a significant drawback, which eventually leads to a strange situation. The information content of contours is evident, and there are ways to construct a contour image using edge filters, yet this feature is practically unused in AI applications. This is a consequence of the internal contradiction mentioned above: the brightness/color of the image is assumed to be a continuous function of the coordinates, so the result of the filtering is also a *continuous brightness/color function*! The initially formulated goal is to find the contour *lines* as a set of discontinuity points. Instead, we get a new *two-dimensional field* of color/brightness, not a collection of *contour lines represented by one of the methods adopted for representing lines*. Of course, looking at a picture with drawn contours, a human sees the contours - but he sees them the same way in the original image! The desired result, in the form of contour *lines, represented by one of the ways of describing lines* in the AI system, is missed. Edge filters work for human vision but are useless for an AI system.

A valuable operation for AGI is to build a set of contour lines as a set of discontinuity points, not a two-dimensional picture as the edge filter does. Theoretically, we could use the result produced by the edge filter as the first phase of a multi-stage process that includes selecting edge-picture points belonging to a contour and forming the contour elements themselves. This technology, however, has not yet been developed. This is why our experiments with edge detection algorithms were based directly on finding a set of discontinuity points in an image.
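The drawback described above is easy to demonstrate. The sketch below applies a standard 3x3 Sobel operator (a common edge filter, used here only for illustration; the book does not prescribe any particular filter) to a step edge: the output is large near the edge, but it is itself a new two-dimensional field of brightness values, not a set of contour points.

```python
import numpy as np

def sobel_magnitude(img):
    """Apply 3x3 Sobel filters and return the gradient magnitude.

    The output is a 2D field of real values -- effectively a new
    image -- rather than a set of discontinuity points.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

# A vertical step edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)  # still a continuous brightness function
```

A human looking at `mag` rendered as a picture sees the contour immediately; an AI system still has to extract the contour points from yet another 2D field.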

This approach reuses the technique for event detection by finding discontinuity points in parameter changes over time. The difference lies in replacing *time* as the independent variable with a *spatial coordinate*. The basic idea is to build a forecast of parameter changes along the spatial coordinate; the points of *local maximum discrepancy* between the forecast and the actual values then give a *set of discontinuity points*. Naturally, analysis in two orthogonal directions is required.

The essential difference from edge filtering is that the result is a discrete set of breakpoints rather than a 2D field. It does *not* use *point sifting* with a specific *threshold*, since the coordinates of the discontinuity points are found as the *arg max* of the forecast error function.

Another feature is that the analysis can be carried out not for all rows/columns of pixels of the original image but with a specific step, which reduces the number of calculations. The algorithm is well suited for parallelization.
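The scheme above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the forecast here is a simple linear extrapolation from the two previous samples (the book does not specify the forecasting model), and discontinuities are taken as local maxima of the forecast error, with no sifting threshold.

```python
import numpy as np

def discontinuities_1d(signal):
    """Find discontinuity points along one spatial coordinate.

    Each sample is forecast by linear extrapolation from the two
    previous samples; points of local maximum forecast error (arg max
    of the error function, no threshold) are taken as breakpoints.
    """
    s = np.asarray(signal, dtype=float)
    pred = 2 * s[1:-1] - s[:-2]    # forecast: continue the local trend
    err = np.abs(s[2:] - pred)     # forecast error function
    # Local maxima of err, in original coordinates; ties are broken
    # toward the first sample so a two-pixel plateau yields one point.
    return [i + 2 for i in range(1, len(err) - 1)
            if err[i] > err[i - 1] and err[i] >= err[i + 1]]

def discontinuities_2d(img, step=1):
    """Scan rows and columns (two orthogonal directions).

    A step > 1 skips rows/columns to reduce computation; each scan
    line is independent, so the loops parallelize trivially.
    """
    points = set()
    for r in range(0, img.shape[0], step):
        points.update((r, c) for c in discontinuities_1d(img[r]))
    for c in range(0, img.shape[1], step):
        points.update((r, c) for r in discontinuities_1d(img[:, c]))
    return points
```

Note that the result is a discrete set of coordinates, not another 2D field, which is exactly the distinction from edge filtering drawn above.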

The approach does not require converting the original color image (including multispectral images) to grayscale; discontinuity points of the color/intensity vector are detected directly.
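The same forecast scheme extends to vector-valued pixels by measuring the discrepancy as a vector norm. In this hedged sketch (an illustrative choice; the book does not specify the norm), a pure hue change with constant luminance is detected even though grayscale conversion would erase it:

```python
import numpy as np

def vector_discontinuities_1d(samples):
    """Discontinuity points for vector-valued pixels (RGB, multispectral).

    Same linear-extrapolation forecast as in the scalar case; the
    forecast error is the Euclidean norm of the vector discrepancy,
    so no grayscale conversion is needed.
    """
    s = np.asarray(samples, dtype=float)        # shape (n, channels)
    pred = 2 * s[1:-1] - s[:-2]                 # per-channel extrapolation
    err = np.linalg.norm(s[2:] - pred, axis=1)  # vector error magnitude
    return [i + 2 for i in range(1, len(err) - 1)
            if err[i] > err[i - 1] and err[i] >= err[i + 1]]

# A red-to-green edge: channel means (a naive grayscale) are constant,
# yet the color vector has an obvious discontinuity.
row = [[1, 0, 0]] * 4 + [[0, 1, 0]] * 4
breaks = vector_discontinuities_1d(row)
```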

The images show an example of an original image and the resulting set of breakpoints. The original image, with reduced brightness and converted to grayscale, is used only as a background on which the points of the constructed contour set are plotted.

Of course, constructing contour elements as *lines (curves)* from the set of contour *points* requires a further step; this is the subject of one of the following chapters.