Listing items by author "Sönmez, E.B."
Now showing 1 - 5 of 5
Item: A complete human verified Turkish caption dataset for MS COCO and performance evaluation with well-known image caption models trained against it (Institute of Electrical and Electronics Engineers Inc., 2022)
Authors: Golech, S.B.; Karacan, S.B.; Sönmez, E.B.; Ayral, H.
The procedure of generating natural language captions for an image is known as image captioning. Automatic image captioning is a particularly challenging task that stands at the junction of Computer Vision and Natural Language Processing. It has a variety of applications, including text-based image retrieval, assisting visually impaired users, and human-robot interaction. The majority of publications on the subject focus on the English language, an analytical language whose characteristics differ from those of the agglutinative Turkish language. This work introduces the Turkish MS COCO dataset, which extends the original MS COCO collection with captions in the Turkish language; experimental results surpass the current state of the art in Turkish image captioning. Furthermore, the newly introduced database is also applicable to the study of machine translation. On the Turkish MS COCO dataset, the best performance was achieved with the Meshed-Memory Transformer, with a Bleu-1 score of 0.72. The database is publicly available at https://github.com/BilgiAILAB/TurkishImageCaptioning. It is hoped that the Turkish MS COCO dataset, together with the proposed benchmark, will be an excellent resource for future studies on Turkish image captioning. © 2022 IEEE.

Item: Artificial Intelligence Contribution to Art-Therapy using Drawings of the House-Person-Tree Test (Institute of Electrical and Electronics Engineers Inc., 2023)
Authors: Salar, A.A.; Faiyad, H.; Sönmez, E.B.; Hafton, S.
This paper applies computer vision and artificial intelligence algorithms to the HTP (House-Tree-Person) test, a projective test intended to measure different aspects of personality using drawings.
The drawn pictures are assumed to represent the subject's attitudes and feelings towards themselves, others, and their family. The House-Tree-Person evaluation uses "Qualitative Scoring," a subjective analysis influenced by the therapist that can be used to infer aggressive, depressive, or anxious characteristics in the drawings. This paper is part of a larger project that aims to use artificial intelligence and image-processing techniques to support this process, thereby reducing the bias factor. In collaboration with the Department of Psychology at Istanbul Bilgi University, the project investigates possible approaches to extracting discriminative features from HTP sketch images and searches for a meaningful combination that will support therapists in their diagnostic assessments. After data pre-processing, image classification of clinical HTP data was conducted using the ResNet152 model, achieving a test accuracy of 66%. Furthermore, an experiment on detecting the "pen pressure" feature was performed using skeletonization and morphological image processing; however, due to a lack of ground truth, the performance of the proposed algorithm has not yet been determined. © 2023 IEEE.

Item: Emotion recognition in the wild using deep neural networks and Bayesian classifiers (Association for Computing Machinery, Inc, 2017)
Authors: Surace, L.; Patacchiola, M.; Sönmez, E.B.; Spataro, W.; Cangelosi, A.
Group emotion recognition in the wild is a challenging problem, due to the unstructured environments in which everyday life pictures are taken. Some of the obstacles to effective classification are occlusions, variable lighting conditions, and image quality. In this work we present a solution based on a novel combination of deep neural networks and Bayesian classifiers. The neural network works in a bottom-up fashion, analyzing emotions expressed by isolated faces.
The Bayesian classifier estimates a global emotion by integrating top-down features obtained through a scene descriptor. To validate the system, we tested the framework on the dataset released for the Emotion Recognition in the Wild Challenge 2017. Our method achieved an accuracy of 64.68% on the test set, significantly outperforming the 53.62% competition baseline. © 2017 Association for Computing Machinery.

Item: Extracting Psychological Features out of Drawings of HTP test (Institute of Electrical and Electronics Engineers Inc., 2023)
Authors: Abdullah, L.; Halfon, S.; Sönmez, E.B.
The House-Tree-Person (HTP) test is extensively used in different contexts to assess personality-related issues. At test time, the respondent is asked to draw a house, a tree, and a person. The drawings are later interpreted by therapists to assess emotional indicators. Due to its simplicity, this test can be used with individuals aged over 3 years. This paper presents the first results of a larger project, which aims to use artificial intelligence and image-processing techniques to classify House-Tree-Person pictures, to extract psychological features from those drawings, and to find the optimal combination of those signs that may be indicative of psychological maladjustment. Following the initial HTP classification task, this research focuses on pictures of houses, detecting the presence or absence of a roof. This work used sketches provided by the Department of Psychology of Istanbul Bilgi University and other drawings retrieved from multiple resources and from the internet. In the HTP classification task, the achieved performance is 96.81% and 98.84% on the validation and test sets, respectively. In the roof-detection task, the achieved accuracy is 70.73%, over a test set of 41 "house" images of which 29 were correctly classified.
© 2023 IEEE.

Item: Facial Expression Recognition in the Wild with Application in Robotics (Institute of Electrical and Electronics Engineers Inc., 2021)
Authors: Han, H.; Karadeniz, O.; Sönmez, E.B.; Dalyan, T.; Sanoğlu, B.
One of the major problems with robot companions is their lack of credibility. Since emotions play a key role in human behaviour, their implementation in virtual agents is a conditio sine qua non for realistic models. That is, correct classification of facial expressions in the wild is a necessary preprocessing step for implementing artificial empathy. The aim of this work is to implement a robust Facial Expression Recognition (FER) module in a robot. Considering the results of an empirical comparison among the most successful deep learning algorithms used for FER, this study establishes a state-of-the-art performance of 75% on the FER2013 database with an ensemble method. With a single model, the best performance of 70.8% was reached using the VGG16 architecture. Finally, the VGG16-based FER module was implemented in a robot and reached a performance of 70% when tested with wild expressive faces. © 2021 IEEE.
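The ensemble result mentioned in the last item is typically obtained by averaging per-model class probabilities over the seven FER2013 emotion classes; the following is a minimal sketch of that idea with made-up numbers, not the paper's actual models or outputs.

```python
# Toy illustration of probability-averaging ensembling, as commonly used on FER2013.
# The three model outputs below are hypothetical, purely for demonstration.

CLASSES = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def ensemble_predict(model_outputs):
    """Average class probabilities across models and return the top label."""
    n = len(model_outputs)
    avg = [sum(out[i] for out in model_outputs) / n for i in range(len(CLASSES))]
    return CLASSES[avg.index(max(avg))], avg

# Three hypothetical single-model probability vectors for one face image.
outputs = [
    [0.05, 0.01, 0.10, 0.60, 0.10, 0.04, 0.10],
    [0.10, 0.02, 0.08, 0.55, 0.05, 0.05, 0.15],
    [0.02, 0.01, 0.20, 0.40, 0.17, 0.10, 0.10],
]
label, avg = ensemble_predict(outputs)
print(label)  # -> happy
```

Averaging tends to smooth out individual models' errors, which is one reason ensembles outperform single models such as the VGG16 baseline reported above.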