---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: peft
license: mit
datasets:
- ituperceptron/turkish_medical_reasoning
language:
- tr
pipeline_tag: text-generation
tags:
- medical
- biology
- transformers
- unsloth
- trl
---
| |
# Model Card for Turkish-Medical-R1
## Model Details
|
|
This model is a fine-tuned version of Qwen2.5-1.5B-Instruct for medical reasoning in Turkish. It was trained on the ituperceptron/turkish_medical_reasoning dataset, which contains instruction-tuned examples focused on clinical reasoning, diagnosis, patient care, and medical decision-making.
|
|
### Model Description
|
|
|
|
- **Developed by:** Rustam Shiriyev
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct
|
|
## Uses
|
|
### Direct Use
|
|
- Medical Q&A in Turkish
- Clinical reasoning tasks (educational or non-diagnostic)
- Research on medical domain adaptation and multilingual LLMs
|
|
### Out-of-Scope Use
|
|
This model is intended for research and educational purposes only. It should not be used for real-world medical decision-making or patient care.
|
|
|
|
## How to Get Started with the Model
|
|
Use the code below to get started with the model.
|
|
```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

login(token="")  # only needed for gated or private repositories

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/Turkish-Medical-R1")

# "What specific feature is observed on electron microscopy of
# medullary thyroid carcinoma samples?"
question = "Medüller tiroid karsinomu örneklerinin elektron mikroskopisinde gözlemlenen spesifik özellik nedir?"

# Instruction: "You are an AI assistant specialized in medicine. Answer
# incoming questions only in Turkish, in an explanatory manner."
prompt = (
    "### Talimat:\n"
    "Siz tıp alanında uzmanlaşmış bir yapay zeka asistanısınız. Gelen soruları yalnızca Türkçe olarak, "
    "açıklayıcı bir şekilde yanıtlayın.\n\n"
    f"### Soru:\n{question.strip()}\n\n"
    "### Cevap:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
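Because the prompt ends with a `### Cevap:` ("Answer") marker, the model's answer can be separated from the echoed prompt with a small string helper. This is a sketch; `extract_answer` is a hypothetical helper, and the marker string is taken from the prompt template above.

```python
def extract_answer(decoded: str, marker: str = "### Cevap:") -> str:
    """Return only the model's answer: everything after the last marker."""
    # The prompt itself ends with the marker, so take the text after its
    # final occurrence and strip surrounding whitespace.
    return decoded.split(marker)[-1].strip()

# Example with a decoded string shaped like the template above:
decoded = "### Talimat:\n...\n### Soru:\n...\n\n### Cevap:\nKalsitonin granülleri."
print(extract_answer(decoded))  # -> Kalsitonin granülleri.
```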
|
|
## Training Data
|
|
- Dataset: ituperceptron/turkish_medical_reasoning, a Turkish translation of FreedomIntelligence/medical-o1-reasoning-SFT (~7K examples)
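Training examples can be rendered into the same Talimat/Soru/Cevap template the model expects at inference time. A minimal sketch, assuming each record exposes `question` and `answer` fields (the field names are an assumption; check the dataset card):

```python
def format_example(question: str, answer: str) -> str:
    """Render one dataset record into the Talimat/Soru/Cevap training prompt."""
    return (
        "### Talimat:\n"
        "Siz tıp alanında uzmanlaşmış bir yapay zeka asistanısınız. Gelen soruları yalnızca Türkçe olarak, "
        "açıklayıcı bir şekilde yanıtlayın.\n\n"
        f"### Soru:\n{question.strip()}\n\n"
        f"### Cevap:\n{answer.strip()}"
    )

# With the datasets library this could be mapped over the corpus, e.g.:
# ds = load_dataset("ituperceptron/turkish_medical_reasoning", split="train")
# ds = ds.map(lambda ex: {"text": format_example(ex["question"], ex["answer"])})
```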
|
|
|
|
## Evaluation
|
|
No formal quantitative evaluation has been performed yet.
|
|
|
|
### Framework versions
|
|
- PEFT 0.15.2