Classic local descriptors for computer vision applications are typically based on the scale-invariant feature transform (SIFT) or speeded up robust features (SURF). SIFT was introduced by David Lowe as an algorithm to detect and describe local features in images. Successful applications of this approach include gesture recognition, object detection, robotic navigation, and moving-target tracking. Salient keypoints are first identified in object models and stored in a model database. Recognition is accomplished by comparing the features found in a new image against those stored for the hypothesized object; a match is verified by computing the Euclidean distance between the keypoints found in the image and the corresponding projected keypoints of the hypothesized model. An iterative technique then recovers the location, scale, and orientation of the object in the image.
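The matching step described above (Euclidean nearest-neighbor search plus a verification criterion) can be sketched in a few lines. This is a minimal illustration, not Lowe's actual implementation: the 128-dimensional descriptors, the ratio threshold, and the function name are assumptions made for the example.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbor matching with a ratio test for verification."""
    matches = []
    for i, d in enumerate(query):
        # Euclidean distance from this descriptor to every stored one
        dists = np.linalg.norm(database - d, axis=1)
        first, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly closer than the runner-up
        if dists[first] < ratio * dists[second]:
            matches.append((i, int(first)))
    return matches

# Toy data: 10 stored 128-D model descriptors, 2 noisy observations of them
rng = np.random.default_rng(0)
database = rng.normal(size=(10, 128))
query = database[[2, 5]] + 0.01 * rng.normal(size=(2, 128))
print(match_descriptors(query, database))  # -> [(0, 2), (1, 5)]
```

The ratio test rejects ambiguous matches, which is what makes simple nearest-neighbor search usable in cluttered scenes.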
A second classic approach is that of SURF, developed by Bay et al. and subsequently patented. It builds on the SIFT concepts but, as the name implies, its implementation is much faster. SURF identifies keypoints using an integer approximation of the determinant of the Hessian, a region detector. (Such regions are often referred to as blobs.) Despite the mathematical rigor of this approach, each box-filter response requires only three integer operations on the integral image, also known in computer graphics as the summed-area table. Feature description is based on Haar wavelet responses about a point of interest, which again can be computed from the integral image. Whereas SIFT is invariant to scale and orientation, some implementations of SURF are not invariant to rotation.
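The integral-image trick behind SURF's speed is worth making concrete. The sketch below shows the standard summed-area-table identity, by which the sum over any axis-aligned box costs three additions/subtractions regardless of box size; the function names are mine, and this is only the building block, not SURF's full detector.

```python
import numpy as np

def integral_image(img):
    # Summed-area table: S[y, x] = sum of img[:y+1, :x+1]
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(S, y0, x0, y1, x1):
    # Sum of img[y0:y1+1, x0:x1+1] from four table lookups combined
    # with three integer operations, independent of the box size.
    total = S[y1, x1]
    if y0 > 0:
        total -= S[y0 - 1, x1]
    if x0 > 0:
        total -= S[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += S[y0 - 1, x0 - 1]
    return int(total)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
S = integral_image(img)
print(box_sum(S, 1, 1, 2, 2))  # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

Because the cost is constant per box, the Hessian approximation can be evaluated at many scales without rescaling the image, which is the source of SURF's speed advantage.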
The above two classical approaches comprise the first part of the book, the first two chapters. The second part of the book (chapters 3 and 4) presents modern technologies. Chapter 3 collects recent research on intensity-based descriptors. Common feature detectors locate gradient distortions but typically ignore intensity information because it is sensitive to slight changes in illumination. The papers collected here show how to capitalize on intensity information, with the net result being a more robust method and an even more compact description than SIFT. Hardware has greatly improved over the years, and there is consequently a recent trend to go as low-level as possible and use binary descriptors (chapter 4). For example, binary robust independent elementary features (BRIEF), introduced in 2010 by Calonder et al., has the added advantage of requiring a minimal amount of storage for the feature descriptors. Likewise, binary robust invariant scalable keypoints (BRISK), proposed in 2011 by Leutenegger et al., identifies and matches keypoints using a novel scale-space extension of the features from accelerated segment test (FAST) detector. Some learning-based methods are included in this part as well.
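A key practical payoff of binary descriptors such as BRIEF and BRISK, hinted at above, is that matching reduces to Hamming distance between bit strings, which modern hardware computes very cheaply. A minimal illustration follows; the packed two-byte descriptors are toy data for demonstration, not BRIEF's actual 256-bit layout.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two bit-packed binary descriptors:
    # XOR the bytes, then count the set bits.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

d1 = np.array([0b10110100, 0b11001010], dtype=np.uint8)
d2 = np.array([0b10110110, 0b11001010], dtype=np.uint8)
print(hamming(d1, d2))  # descriptors differ in exactly one bit -> 1
```

Since each descriptor is just a short byte string, both the storage advantage of BRIEF and the matching speed of these methods follow directly from this representation.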
The third and final part of the book (chapters 5 and 6) is chock-full of practical information for the computer vision practitioner on how to choose the right approach for a given application, covering both 2D and 3D settings. Sufficient detail is provided for the reader to see the role that feature descriptors play in computer vision systems. Finally, chapter 6 concludes the discussion by advising researchers on how to obtain and benchmark datasets and by suggesting future research directions. This book was published in the SpringerBriefs series, and although it is only about 100 pages in length (not counting the preface, introductory remarks, and table of contents), it does a great job of bringing together the important technologies for a critical phase of any computer vision or robotic imaging system.