Healthcare-Focused Turkish Medical LLM: Training on Real Patient-Doctor Question-Answer Data for Enhanced Medical Insight


Date

2025


Publisher

Association for Computing Machinery

Access Rights

info:eu-repo/semantics/openAccess

Abstract

The development of a Turkish-specific Large Language Model (LLM) for healthcare presents a unique opportunity to enhance AI's accessibility and relevance for Turkish-speaking medical practitioners and patients. This study introduces a specialized Turkish medical LLM fine-tuned on over 167,732 real patient-doctor question-answer pairs sourced from a trusted medical platform, capturing the authentic linguistic characteristics of Turkish medical language. Building on models such as LLaMA 3, the fine-tuning process employed Low-Rank Adaptation (LoRA) and incorporated methods to mitigate catastrophic forgetting, including spherical linear interpolation (Slerp) merging. Evaluation of the model's performance through similarity scores, GPT-3.5 assessments, and expert reviews indicates a significant improvement in the model's ability to generate medically accurate responses. This Turkish medical LLM demonstrates potential to support medical decision-making and patient interaction in Turkish healthcare settings, offering an essential resource for enhancing AI inclusivity across languages.
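As a minimal sketch (not the authors' exact implementation), Slerp merging interpolates along the great circle between corresponding weight tensors of the base and fine-tuned models, which can help retain base-model knowledge while incorporating fine-tuned behavior; the helper name and toy checkpoints below are illustrative assumptions:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (base model), t=1 returns v1 (fine-tuned model).
    The angle is measured between the normalized, flattened tensors.
    """
    a = v0.ravel() / (np.linalg.norm(v0) + eps)
    b = v1.ravel() / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel tensors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Toy example: merge each parameter of a base and a fine-tuned
# checkpoint (represented here as plain dicts) at the midpoint t=0.5.
base = {"w": np.array([1.0, 0.0])}
tuned = {"w": np.array([0.0, 1.0])}
merged = {k: slerp(0.5, base[k], tuned[k]) for k in base}
```

In practice the same per-tensor interpolation would be applied layer by layer across the two checkpoints, with the interpolation factor `t` chosen to balance retained general knowledge against the new medical specialization.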


Keywords

Turkish Medical LLM, Healthcare AI, Patient-Doctor Interactions, Model Fine-Tuning, Catastrophic Forgetting, Low-Rank Adaptation

Source

ACM Transactions on Asian and Low-Resource Language Information Processing

WoS Quartile

Q3

Scopus Quartile

Q2

Volume

24

Issue

11
