Mini-Coder
Mini-Coder is built on top of the Qwen3.5-9B model with Continual Pretraining (CPT): we fed ~500k high-quality, curated Luau samples into the model to improve its capability on Luau coding tasks.
We also injected over 14k open-source Claude 4.6 distillation samples, plus a few additional samples, during Supervised Fine-Tuning (SFT) to improve the model's reasoning. We also observed that the average number of tokens consumed dropped drastically.
It is fine-tuned efficiently using LoRA (16-bit) and rsLoRA with rank (r) set to 64 and alpha (α) set to 128, ensuring strong adaptation and retention of new complex logic. It was trained specifically to handle up to 32,768 (32k) tokens of maximum output (recommended).
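The rank and alpha above determine the adapter's scaling factor. A minimal sketch of how rsLoRA's scaling differs from standard LoRA with these exact hyperparameters:

```python
import math

# Hyperparameters from this card
r, alpha = 64, 128

# Standard LoRA scales the adapter update by alpha / r.
lora_scale = alpha / r               # 128 / 64 = 2.0

# rsLoRA (rank-stabilized LoRA) scales by alpha / sqrt(r) instead,
# which keeps the update magnitude stable as the rank grows.
rslora_scale = alpha / math.sqrt(r)  # 128 / 8 = 16.0

print(lora_scale, rslora_scale)
```

At r=64 the rsLoRA scale is 8x larger than the standard LoRA scale, which is why rsLoRA tends to adapt more strongly at higher ranks without retuning alpha.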
Uploaded finetuned model
- Developed by: khtsly
- License: apache-2.0
- Finetuned from model: khtsly/Coder-9B
This qwen3_5_text model was trained 2x faster with Unsloth and Hugging Face's TRL library.
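The LoRA setup described above could be reproduced with a peft `LoraConfig` along these lines; this is a sketch, and the `target_modules` list is an assumption (typical attention/MLP projection layers for Qwen-style models), not something the card confirms:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,             # rank, as stated in the card
    lora_alpha=128,   # alpha, as stated in the card
    use_rslora=True,  # rank-stabilized scaling: alpha / sqrt(r)
    # target_modules below are an assumption, not confirmed by the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```

With `use_rslora=True`, peft applies the alpha/√r scaling automatically, so the same r=64, α=128 pair behaves differently than it would under plain LoRA.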
