Authors: Bayram, M. Ali; Diri, Banu; Yildirim, Savas
Date Accessioned: 2026-04-04
Date Available: 2026-04-04
Date Issued: 2025
ISSN: 2375-4699; 2375-4702
DOI: https://doi.org/10.1145/3772000
Handle: https://hdl.handle.net/11411/10619

Abstract: The development of a Turkish-specific Large Language Model (LLM) for healthcare presents a unique opportunity to enhance the accessibility and relevance of AI for Turkish-speaking medical practitioners and patients. This study introduces a specialized Turkish medical LLM fine-tuned on over 167,732 real patient-doctor question-answer pairs sourced from a trusted medical platform, capturing the authentic linguistic characteristics of Turkish medical language. Built on models such as LLaMA 3, the fine-tuning process used Low-Rank Adaptation (LoRA) and employed methods to mitigate catastrophic forgetting, including spherical linear interpolation (Slerp) merging. Evaluation of the model's performance through similarity scores, GPT-3.5 assessments, and expert reviews indicates a significant improvement in the model's ability to generate medically accurate responses. This Turkish medical LLM demonstrates potential to support medical decision-making and patient interaction in Turkish healthcare settings, offering an essential resource for enhancing AI inclusivity across languages.

Language: en
Rights: info:eu-repo/semantics/openAccess
Keywords: Turkish Medical LLM; Healthcare AI; Patient-Doctor Interactions; Model Fine-Tuning; Catastrophic Forgetting; Low-Rank Adaptation
Title: Healthcare-Focused Turkish Medical LLM: Training on Real Patient-Doctor Question-Answer Data for Enhanced Medical Insight
Type: Article
Scopus EID: 2-s2.0-105024070894
WOS ID: WOS:001632497500011
Other recorded metadata: 11; Q2; 24; Q3
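The Slerp merging mentioned in the abstract can be sketched as follows. This is a minimal illustration of spherical linear interpolation applied tensor-by-tensor to two model checkpoints, not the authors' implementation; the function and parameter names are assumptions for the example.

```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors of the same shape.

    t=0 returns w0, t=1 returns w1; intermediate t follows the great-circle
    arc between the two flattened weight vectors, which is one way merges
    try to preserve both checkpoints' behavior (mitigating forgetting).
    """
    v0, v1 = w0.ravel(), w1.ravel()
    # Cosine of the angle between the two weight vectors, clipped for safety.
    cos_omega = np.clip(
        np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps),
        -1.0, 1.0,
    )
    omega = np.arccos(cos_omega)
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * w0 + t * w1
    sin_omega = np.sin(omega)
    f0 = np.sin((1.0 - t) * omega) / sin_omega
    f1 = np.sin(t * omega) / sin_omega
    return (f0 * v0 + f1 * v1).reshape(w0.shape)
```

In a full merge this function would be applied to each pair of corresponding parameter tensors from the base and fine-tuned checkpoints, with t controlling how far the merged model moves toward the fine-tuned weights.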