Di Sciascio et al. propose an algorithm for retrieval by spatial similarity that measures how closely the spatial layout of objects in a sketched query matches the layout of objects in a symbolic image. The algorithm is invariant to scaling, rotation, and translation, and can handle multiple instances of an object in an image. An important feature is that the user can refine his or her search by adding new details, and, at a minimum, every element explicitly included in the query is guaranteed to be present in each retrieved image.
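The paper's actual algorithm is not reproduced in this review, but the core idea of a similarity measure invariant to translation, rotation, and scaling can be sketched. The example below is a hypothetical simplification: it assumes each object has a unique label (so it does not handle the multiple-instance case the authors address) and compares pairwise inter-object distances normalized by their mean, a quantity unchanged under any similarity transform of the layout.

```python
import math

def normalized_pair_distances(layout):
    """Pairwise distances between labeled objects, scaled so their mean
    is 1. The result is invariant to translation, rotation, and uniform
    scaling of the layout. `layout` maps object label -> (x, y)."""
    labels = sorted(layout)
    dists = {}
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            (xa, ya), (xb, yb) = layout[a], layout[b]
            dists[(a, b)] = math.hypot(xa - xb, ya - yb)
    mean = sum(dists.values()) / len(dists)
    return {pair: d / mean for pair, d in dists.items()}

def spatial_similarity(query, image):
    """Similarity score in (0, 1]. Only object pairs present in the
    query are compared, so extra objects in the image are ignored --
    mirroring the idea that query elements must appear in results."""
    q = normalized_pair_distances(query)
    t = normalized_pair_distances(image)
    common = set(q) & set(t)
    if not common:
        return 0.0
    # Mean absolute deviation between normalized distance structures.
    err = sum(abs(q[p] - t[p]) for p in common) / len(common)
    return 1.0 / (1.0 + err)

# A rotated, scaled, translated copy of the query layout scores 1.0.
query = {"door": (0, 0), "window": (1, 0), "roof": (0, 1)}
image = {"door": (5, 5), "window": (5, 7), "roof": (3, 5)}  # 90deg, x2, +(5,5)
print(spatial_similarity(query, image))  # -> 1.0
```

The normalization step is what buys the invariances: translating or rotating a layout leaves all pairwise distances unchanged, and uniform scaling multiplies every distance by the same factor, which dividing by the mean cancels out.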
The new algorithm was compared with two other well-known algorithms [1,2] on a small dataset of symbolic images. The experiments showed that it is robust to rotation, translation, and scaling, and that its results match user expectations more closely than those of the other two algorithms. Although the paper is well written, some important points are missing: the time complexity of the algorithm is not discussed, and its behavior on large data collections is not examined. Nonetheless, the paper is valuable to anyone working with symbolic images.