This is a very important paper: it combines machine learning, facial-expression analysis, and pulse rate to detect deception in an interview context. The results are supported by a sound validation protocol that applies cross-validation and reports F1 scores. In addition, the authors contribute a benchmark dataset built from webcam footage and wearable devices (for example, a smartwatch with pulse-rate sensing).
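For readers unfamiliar with this kind of multimodal benchmark, the sketch below shows one way per-frame webcam features could be aligned with smartwatch pulse readings. All column names, timestamps, and values here are hypothetical stand-ins, not the authors' actual schema.

```python
import pandas as pd

# Hypothetical inputs: per-frame facial features from webcam footage
# and timestamped pulse-rate readings from a smartwatch.
frames = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 10:00:00.0",
                                 "2024-01-01 10:00:00.5",
                                 "2024-01-01 10:00:01.0"]),
    "brow_eye_dist": [12.1, 12.4, 11.8],
})
pulse = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 10:00:00.2",
                                 "2024-01-01 10:00:00.9"]),
    "bpm": [72, 75],
})

# Align each video frame with the nearest preceding pulse reading
# (frames before the first reading get a missing value).
fused = pd.merge_asof(frames.sort_values("timestamp"),
                      pulse.sort_values("timestamp"),
                      on="timestamp")
print(fused)
```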
What I like most about this paper is the authors’ feature-extraction methodology, which relies on OpenFace [1] facial landmarks with particular attention to the eye region. For instance, the distances between the eyebrows and the eyes (right and left), the facial contour (for example, head position), and the mouth area are essential features. Gaze also plays a noticeable role in the methodology. It should further be noted that the authors took bias into consideration when removing missing values and outliers from their data.
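To illustrate the kind of geometric feature involved, here is a minimal sketch of a brow-to-eye distance computed from 68-point landmarks of the sort OpenFace exports. The random landmarks and the exact distance definition are my own stand-ins rather than the paper's method.

```python
import numpy as np

# OpenFace follows the standard 68-point landmark convention:
# points 17-21/22-26 are the right/left eyebrows, 36-41/42-47 the
# right/left eyes, 0-16 the facial contour, and 48-67 the mouth.
def brow_eye_distance(x, y, brow, eye):
    """Euclidean distance between the eyebrow and eye centroids."""
    brow_c = np.array([x[brow].mean(), y[brow].mean()])
    eye_c = np.array([x[eye].mean(), y[eye].mean()])
    return np.linalg.norm(brow_c - eye_c)

# Hypothetical landmarks for one frame, standing in for the
# x_0..x_67 / y_0..y_67 columns of an OpenFace output CSV.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 640, 68), rng.uniform(0, 480, 68)

right = brow_eye_distance(x, y, slice(17, 22), slice(36, 42))
left = brow_eye_distance(x, y, slice(22, 27), slice(42, 48))
print(f"right: {right:.1f}px  left: {left:.1f}px")
```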
The paper is well organized and clearly written. Notably, the random forest classifier the authors implemented achieves promising results, with classification accuracy in the range of 80 percent.
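For readers who wish to reproduce this style of evaluation, the following is a minimal scikit-learn sketch of a random forest scored with cross-validated F1 and accuracy. The data are synthetic, and this is not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows are interview clips, columns are
# facial-geometry features plus a pulse-rate feature; labels are
# truthful (0) vs. deceptive (1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"F1:       {f1.mean():.2f} +/- {f1.std():.2f}")
print(f"accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```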
I recommend this paper to researchers in image processing, machine learning, and affective computing (for example, deception detection), as well as to a more general computer science (CS) audience. Its methodology and outcomes are significant and useful, and it is an enjoyable read.