Facial Expression Detection In Video-Recorded Images Using a Mobilenet-Based Transfer Learning Approach

Authors

  • Sulthon Adam Maulana, STMIK AMIKBANDUNG, Indonesia

DOI:

https://doi.org/10.31316/astro.v4i2.8575

Abstract

Emotions play an important role in human communication, and facial expressions are one of the main indicators for recognizing emotional states. Most studies in Facial Expression Recognition (FER) still focus on static images or real-time webcam tracking, while evaluation approaches based on recorded video remain less explored. This study aims to design a simple but functional pipeline to evaluate the performance of MobileNetV2 with transfer learning on verbal interaction video data. The Karolinska Directed Emotional Faces (KDEF) dataset was used for training with seven basic emotion classes, while the test data came from video recordings processed frame by frame. The pipeline includes frame extraction, face detection using Haar Cascade, image preprocessing, and classification with the fine-tuned MobileNetV2 model. Evaluation metrics such as accuracy, precision, recall, and F1-score were applied. The results show that the model reached 87% validation accuracy and was able to identify dominant emotions in video, although predictions tended to be biased toward the neutral class for subtle expressions such as anger and disgust. Clearer expressions such as happy, on the other hand, were detected more reliably. In conclusion, the proposed pipeline successfully bridges static-image models and video data, offering a practical and efficient evaluation approach that can support Human-Computer Interaction (HCI) applications on resource-limited devices.
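
The sketch below illustrates the kind of frame-by-frame pipeline the abstract describes: sample frames from a video, detect faces with a Haar Cascade, preprocess each crop, classify it with a fine-tuned MobileNetV2, and report the dominant emotion. The model path ("mobilenetv2_kdef.h5"), the 224x224 input size, the [0, 1] normalization, the frame sampling step, and the class ordering are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a frame-by-frame FER evaluation pipeline (assumptions noted above).
import cv2
import numpy as np
from collections import Counter
from tensorflow.keras.models import load_model

# Assumed KDEF class ordering; the paper does not specify the label order.
EMOTIONS = ["afraid", "angry", "disgusted", "happy", "neutral", "sad", "surprised"]

model = load_model("mobilenetv2_kdef.h5")  # hypothetical fine-tuned MobileNetV2
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def predict_video_emotions(video_path, frame_step=10):
    """Extract frames, detect faces with Haar Cascade, and classify each face."""
    cap = cv2.VideoCapture(video_path)
    predictions = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % frame_step == 0:  # sample every Nth frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
                face = face[..., ::-1] / 255.0  # BGR -> RGB, scale to [0, 1] (assumed preprocessing)
                probs = model.predict(face[np.newaxis], verbose=0)[0]
                predictions.append(EMOTIONS[int(np.argmax(probs))])
        frame_idx += 1
    cap.release()
    return Counter(predictions)

# Example: the most frequent label across sampled frames is taken as the dominant emotion.
# counts = predict_video_emotions("interaction_recording.mp4")
# print(counts.most_common(1))
```
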

Published

2025-11-30

How to Cite

Sulthon Adam Maulana. (2025). Facial Expression Detection In Video-Recorded Images Using a Mobilenet-Based Transfer Learning Approach. APPLIED SCIENCE AND TECHNOLOGY RESEARCH JOURNAL, 4(2). https://doi.org/10.31316/astro.v4i2.8575
