In this talk, we present a subject-independent facial action unit (AU) detection method by introducing the concept of relative AU detection for the case when the neutral face is not provided. By changing the classification objective, our method analyzes the neighborhood of the current frame of an input image sequence to decide whether the expression has recently increased, decreased, or shown no change. This is in sharp contrast with conventional frame-based methods, which decide on the presence or absence of an AU without considering temporal information. The proposed method is more robust to individual differences among subjects, such as age, face scale, shape, and texture, as well as to transitions among expressions and lower-intensity expressions. Experiments on the Extended Cohn-Kanade (CK+), DISFA, and Bosphorus databases show that the proposed method improves the F1 score over the frame-based baseline. Moreover, a paired t-test shows that this improvement in F1 score is statistically significant.
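The relative detection objective described above can be sketched in a small toy example. This is an illustrative sketch only, not the authors' implementation: the function name, the symmetric temporal window, and the change threshold are all assumptions made for the sake of the illustration.

```python
# Illustrative sketch (not the authors' method): turning per-frame AU
# intensities into relative labels over a temporal neighborhood.

def relative_au_labels(intensities, window=2, threshold=0.5):
    """For each frame, compare the mean AU intensity of the preceding
    frames against the following frames within a small window, and emit
    one of three relative labels: 'increase', 'decrease', 'no_change'."""
    labels = []
    for t in range(len(intensities)):
        # Fall back to the current frame at sequence boundaries.
        past = intensities[max(0, t - window):t] or [intensities[t]]
        future = intensities[t + 1:t + 1 + window] or [intensities[t]]
        delta = sum(future) / len(future) - sum(past) / len(past)
        if delta > threshold:
            labels.append('increase')
        elif delta < -threshold:
            labels.append('decrease')
        else:
            labels.append('no_change')
    return labels

# A synthetic onset-apex-offset intensity profile for one AU.
seq = [0, 0, 1, 2, 3, 3, 2, 1, 0]
print(relative_au_labels(seq))
```

Because the labels encode change rather than absolute presence, no neutral reference frame is needed, which is the key contrast with frame-based AU detection.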
Keywords: facial action coding system (FACS), temporal information, statistical machine learning, computer vision.