Browsing by author "Ozdemir, Ozgur"
Showing results 1 - 3 of 3
Item
A comparative study of neural machine translation models for Turkish language (IOS Press, 2022) Ozdemir, Ozgur; Akin, Emre Salih; Velioglu, Riza; Dalyan, Tugba
Machine translation (MT) is an important challenge in the field of computational linguistics. In this study, we conducted neural machine translation (NMT) experiments on two different architectures. First, the Sequence to Sequence (Seq2Seq) architecture, along with a variation that utilizes an attention mechanism, is applied to the translation task. Second, an architecture fully based on the self-attention mechanism, namely the Transformer, is employed for a comprehensive comparison. In addition, the contribution of Byte Pair Encoding (BPE) and the Gumbel Softmax distribution is examined for both architectures. The experiments are conducted on two datasets: TED Talks, one of the popular benchmark datasets for NMT, especially among morphologically rich languages like Turkish, and the WMT18 News dataset, provided by the Third Conference on Machine Translation (WMT) for shared tasks on various aspects of machine translation. The evaluation of Turkish-to-English translation results demonstrates that the Transformer model with the combination of BPE and Gumbel Softmax achieved a 22.4 BLEU score on TED Talks and a 38.7 BLEU score on the WMT18 News dataset. The empirical results support that using the Gumbel Softmax distribution improves translation quality for both architectures.

Item
Corporate social responsibility and idiosyncratic risk in the restaurant industry: does brand diversification matter? (Emerald Group Publishing Ltd, 2020) Ozdemir, Ozgur; Erkmen, Ezgi; Kim, Minji
Purpose: This study aims to examine the link between corporate social responsibility (CSR) and idiosyncratic risk in the restaurant industry. The study also explores whether brand diversification magnifies the risk reduction effect of CSR in the restaurant industry.
Design/methodology/approach: The study uses an unbalanced panel of 274 firm-year observations for 43 restaurant firms over the period 1995-2015. Models are estimated via fixed-effects regression with robust standard errors.
Findings: The study finds that CSR involvement reduces idiosyncratic risk, and this risk reduction is intensified when restaurant firms operate a portfolio of brands.
Research limitations/implications: The study's findings are limited to the restaurant industry; therefore, generalizing them to other industries requires care. Brand diversification is measured as a simple brand count due to a lack of brand sales data.
Practical implications: CSR activities are not a cost burden for restaurant firms. Indeed, CSR could be a viable strategy to reduce the volatility of future expected cash flows, and hence idiosyncratic risk. This risk reduction could help owners/managers access capital at lower cost. Moreover, the study suggests that CSR practices should not be implemented in isolation from firm marketing strategy, such as a portfolio of brands.
Originality/value: Although prior hospitality research offers some evidence using systematic risk as the measure of firm risk, this measure may not best suit the CSR context, given that CSR is a direct, firm-specific strategy. Hence, the current study both provides new evidence based on firm-specific, idiosyncratic risk and introduces an important contingency under which the risk reduction effect of CSR becomes more pronounced for restaurant firms.

Item
Utilizing Large Programming Language Models on Software Vulnerability Detection (Institute of Electrical and Electronics Engineers Inc., 2025) Aslan, Mert Kaan; Alkan, Yunus Emre; Alican, Muhammed Burak; Ozdemir, Ozgur
Following the success of large language models, pre-trained programming language models (PLMs) have shown prominent achievements in the software engineering field.
This paper examines the performance of pre-trained PLMs in detecting software vulnerabilities in source code. Two distinct transformer-based approaches are utilized: the encoder-only CodeBERT and the decoder-only Qwen-2.5-Coder. The selected models are evaluated on two benchmark datasets, PrimeVul and BigVul, which differ significantly in data duplication and label quality. Experimental results reveal that while Qwen-2.5-Coder outperforms CodeBERT on the BigVul benchmark, both models suffer a substantial performance drop on the realistic, deduplicated PrimeVul dataset. Notably, Qwen-2.5-Coder shows extreme sensitivity to high-quality samples, achieving only 2.37% recall, suggesting that decoder-only models may overfit on noisy or redundant data. In contrast, CodeBERT demonstrates relatively more stable behavior, owing to the encoder architecture's suitability for classification tasks. These findings highlight not only the critical role of dataset design, such as duplication control and label accuracy, but also the impact of architectural choices on generalization. This paper aims to contribute to the development of more effective tools that automatically detect software vulnerabilities by leveraging these findings. © 2025 IEEE.
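The Gumbel Softmax distribution examined in the first item above replaces a hard argmax over a decoder's output distribution with a differentiable, temperature-controlled sample. A minimal NumPy sketch of the idea follows; the vocabulary size, logits, and temperature values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a differentiable, approximately one-hot sample from `logits`.

    Standard Gumbel(0, 1) noise is added to the logits, then a softmax
    with temperature `tau` is applied: low tau -> close to one-hot
    (argmax), high tau -> close to uniform.
    """
    rng = rng or np.random.default_rng(0)
    # Sample Gumbel noise via inverse transform: g = -log(-log(U)).
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))
    z = (logits + g) / tau
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy decoder output over a 5-token vocabulary (illustrative only).
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3])
soft = gumbel_softmax(logits, tau=1.0)   # smooth sample, sums to 1
hard = gumbel_softmax(logits, tau=0.05)  # nearly one-hot sample
```

Because the temperature only sharpens the softmax without changing which (noised) logit is largest, annealing `tau` toward zero lets training move smoothly from soft mixtures toward discrete token choices.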