Authors: Bayram, M. Ali; Fincan, Ali Arda; Gumus, Ahmet Semih; Diri, Banu; Yildirim, Savas; Aytas, Oner
Date Accessioned: 2026-04-04
Date Available: 2026-04-04
Date Issued: 2025
ISBN: 979-8-3315-6656-2; 979-8-3315-6655-5
ISSN: 2165-0608
DOI: https://doi.org/10.1109/SIU66497.2025.11112154
Handle: https://hdl.handle.net/11411/10589
Conference: 33rd Conference on Signal Processing and Communications Applications (SIU), June 25-28, 2025, Istanbul, Turkiye

Abstract: Language models have made significant advances in understanding and generating human language, achieving remarkable success across a range of applications. However, evaluating these models remains a challenge, particularly for resource-limited languages such as Turkish. To address this issue, we introduce the Turkish MMLU (TR-MMLU) benchmark, a comprehensive evaluation framework designed to assess the linguistic and conceptual capabilities of large language models (LLMs) in Turkish. TR-MMLU is based on a meticulously curated dataset comprising 6,200 multiple-choice questions across 62 sections of the Turkish education system. The benchmark provides a standard framework for Turkish NLP research, enabling detailed analyses of LLMs' capabilities in processing Turkish text. In this study, we evaluate state-of-the-art LLMs on TR-MMLU and highlight areas for improvement in model design. TR-MMLU sets a new standard for advancing Turkish NLP research and inspiring future innovations.

Language: tr
Access Rights: info:eu-repo/semantics/openAccess
Keywords: Large Language Models (LLM); Natural Language Processing (NLP); Artificial Intelligence; Turkish NLP
Title: TR-MMLU Benchmark for Large Language Models: Performance Evaluation, Challenges, and Opportunities for Improvement
Type: Conference Object
Scopus ID: 2-s2.0-105015564217
WOS ID: WOS:001575462500215