Yazar "Han, H." seçeneğine göre listele
Listeleniyor 1 - 2 / 2
Item: Facial Expression Recognition in the Wild with Application in Robotics (Institute of Electrical and Electronics Engineers Inc., 2021) Han, H.; Karadeniz, O.; Sönmez, E.B.; Dalyan, T.; Sanoğlu, B.
One of the major problems with robot companions is their lack of credibility. Since emotions play a key role in human behaviour, their implementation in virtual agents is a conditio sine qua non for realistic models. That is, correct classification of facial expressions in the wild is a necessary preprocessing step for implementing artificial empathy. The aim of this work is to implement a robust Facial Expression Recognition (FER) module into a robot. Considering the results of an empirical comparison among the most successful deep learning algorithms used for FER, this study sets a state-of-the-art performance of 75% on the FER2013 database with the ensemble method. With a single model, the best performance of 70.8% has been reached using the VGG16 architecture. Finally, the VGG16-based FER module has been implemented into a robot and reached a performance of 70% when tested with wild expressive faces. © 2021 IEEE

Item: Facial Expression Recognition on Wild and Multi-Label Faces with Deep Learning (Institute of Electrical and Electronics Engineers Inc., 2023) Han, H.; Sonmez, E.B.
The analysis of facial expressions is a powerful tool to decode nonverbal behavior in humans. Due to its importance, several studies have already been done in the past. However, facial expression recognition on wild and multi-label faces remains under-investigated, partly due to the limited number of available databases. This paper fills this gap by challenging the RAF-ML dataset and setting a state-of-the-art performance of 50.5% on the "single-label experiment". The proposed method is also tested in a second experiment, suggested by this work, which considers only wild faces having a dominant expression. The benchmark performance for the second trial is 56.1%. The deep-learning algorithms presented in this work are described in detail to facilitate their reproduction. © 2023 IEEE.
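
The abstracts above mention a VGG16-based FER module (7 classes on FER2013) and a multi-label setting on RAF-ML but do not include code. The following minimal sketch is not the authors' implementation; it only illustrates, under the assumption of a standard torchvision VGG16 backbone and the seven FER2013 emotion classes, how such a classifier head could be set up.

    # Minimal sketch (not the papers' code): VGG16-based facial expression
    # classifier for the 7 FER2013 classes, using torchvision's VGG16 backbone.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 7  # FER2013: angry, disgust, fear, happy, sad, surprise, neutral

    def build_fer_vgg16(pretrained: bool = True) -> nn.Module:
        """Return a VGG16 whose final layer is replaced by a 7-way emotion head."""
        weights = models.VGG16_Weights.IMAGENET1K_V1 if pretrained else None
        model = models.vgg16(weights=weights)
        in_features = model.classifier[-1].in_features  # 4096 in stock VGG16
        model.classifier[-1] = nn.Linear(in_features, NUM_CLASSES)
        return model

    if __name__ == "__main__":
        model = build_fer_vgg16(pretrained=False)
        faces = torch.randn(4, 3, 224, 224)      # batch of face crops (dummy data)
        logits = model(faces)                    # shape: (4, 7)
        probs = torch.softmax(logits, dim=1)     # single-label: softmax over classes
        # For a multi-label setting such as RAF-ML, one would instead apply a
        # per-class sigmoid and train with BCEWithLogitsLoss.
        print(probs.shape)

The softmax head matches the single-label FER2013 setup described in the first abstract; the comment at the end notes the usual sigmoid/BCE variant one would try for the multi-label RAF-ML experiments, without claiming this is the method of the second paper.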