This is an early work toward extracting information on the internal structure of a complex geographical region as a well-defined whole, as opposed, for instance, to a bag of logically disconnected image pixels. The approach uses points in space ("sensors") as two-state flags that indicate whether a given point currently belongs to some region. The region is rigid: its shape, however complex, must not change, even though the region itself is allowed to move. During a measurement, the direction, speed, and orientation of the region's motion must remain constant. The sensor outputs are used to produce topological information about the region, including its internal structure. In this sensor fusion technique, reasoning about the region is done dynamically, rather than requiring a description of the region in some algebraic form. The approach also acquires the region's shape and internal structure, rather than tracking a shape known a priori.
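To make the setup concrete, here is a minimal illustrative sketch (not the paper's implementation; the region shape, grid, and function names are all invented for illustration): fixed point sensors each report a binary flag for whether a rigid, translating region currently covers them, and one "measurement set" is simply the vector of those flags at an instant.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

def make_region(cells):
    """A rigid region given as a set of unit cells; membership test only."""
    cell_set = set(cells)
    def contains(p, offset=(0.0, 0.0)):
        # The region may translate (uniform motion during a measurement),
        # but its shape never changes -- rigidity, as the paper assumes.
        return (int(p.x - offset[0]), int(p.y - offset[1])) in cell_set
    return contains

# A hypothetical L-shaped region observed by a 3x3 grid of sensors.
region = make_region([(0, 0), (1, 0), (0, 1)])
sensors = [Point(x, y) for x in range(3) for y in range(3)]

def measure(offset):
    """One measurement set: a two-state flag per sensor."""
    return [int(region(s, offset)) for s in sensors]

print(measure((0, 0)))  # region at the origin
print(measure((1, 0)))  # same shape, translated one unit
```

The topological reconstruction would then reason over many such flag vectors taken as the region moves, rather than over any algebraic description of the shape.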
The core strategy is open-form algorithmic integration of distinct sets of sensor measurements, minimizing mismatches in translation, rotation, and scaling. It is not immediately obvious what makes measurement sets "distinct": whether only a time series of measurement sets qualifies, or whether a partition of measurements taken at a single instant also does; the latter case appears unsuitable. The region's structure is captured in a well-defined data structure, and producing this open-form data structure allows further algorithmic reasoning on arbitrary shapes. Future work will, one hopes, relax some of the assumptions, rigidity for instance. A worked application example, which is missing, would have helped readers build an intuitive understanding of this sensor fusion algorithm.
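The kind of mismatch minimization described above can be sketched with a standard least-squares similarity fit (a complex-number Procrustes alignment). This is not necessarily the paper's open-form integration procedure; it merely illustrates what "minimizing mismatches in translation, rotation, and scaling" between two measurement sets of corresponding points could mean, with all names here invented.

```python
def fit_similarity(src, dst):
    """Least-squares similarity transform mapping src points onto dst.

    src, dst: equal-length lists of complex numbers (2D points).
    Returns (c, t) such that c * z + t best maps src onto dst, where
    c encodes rotation and uniform scale, and t the translation.
    """
    n = len(src)
    mu_s = sum(src) / n          # centroid of the source set
    mu_d = sum(dst) / n          # centroid of the destination set
    # Optimal rotation-and-scale factor from centered correlations.
    num = sum((s - mu_s).conjugate() * (d - mu_d) for s, d in zip(src, dst))
    den = sum(abs(s - mu_s) ** 2 for s in src)
    c = num / den
    t = mu_d - c * mu_s          # translation closing the remaining gap
    return c, t

# Two views of the same rigid shape: the second is rotated 90 degrees
# (multiplication by 1j) and shifted by (2, 3); the fit recovers exactly
# that motion, leaving zero residual mismatch.
view_a = [0 + 0j, 1 + 0j, 0 + 1j]
view_b = [(1j * z) + (2 + 3j) for z in view_a]
c, t = fit_similarity(view_a, view_b)
print(c, t)  # approximately 1j and (2+3j)
```

For a rigid region the recovered scale should stay at 1, so a nonzero scaling residual would itself signal a violated assumption.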