Ailiance: Devstral-Small-2-24B-BF16 cpp LoRA

LoRA adapter fine-tuned on mistralai/Devstral-Small-2-24B-Instruct-2512 for C++ (cpp) coding tasks.

Variant: trained on the BF16 base for higher numerical fidelity.

Maintained by Ailiance, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

Quick start (MLX)

from mlx_lm import load, generate

# Load the base model and apply this LoRA adapter on top of it.
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-cpp-bf16-lora",
)

# Replace "..." with your prompt.
print(generate(model, tokenizer, prompt="..."))
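
The same flow also works from the command line. A minimal sketch, assuming the mlx_lm CLI entry points that ship with the package; the prompt and --max-tokens value are illustrative:

mlx_lm.generate \
    --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
    --adapter-path Ailiance-fr/devstral-cpp-bf16-lora \
    --prompt "Write a C++ function that reverses a string in place." \
    --max-tokens 256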

Training

Hyperparameter    Value
Base model        mistralai/Devstral-Small-2-24B-Instruct-2512
Method            LoRA via mlx-lm
Rank              16
Scale             2.0
Alpha             32
Max seq length    4096 tokens
Iterations        500
Optimizer         Adam, LR 1e-5
Hardware          Apple M3 Ultra, 512 GB
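
As a reproducibility aid, the table above maps onto an mlx-lm LoRA run roughly as follows. This is a sketch, not the exact configuration used: the data path is hypothetical (the curation is internal), and mlx-lm expresses alpha through its scale parameter (alpha = scale x rank = 2.0 x 16 = 32).

# lora_config.yaml (hypothetical filename)
model: mistralai/Devstral-Small-2-24B-Instruct-2512
train: true
data: path/to/cpp-data    # hypothetical; the actual data is the internal eu-kiki / mascarade curation
iters: 500
learning_rate: 1.0e-5     # written with a decimal point so YAML parses it as a float
max_seq_length: 4096
lora_parameters:
  rank: 16
  scale: 2.0
  dropout: 0.0

Launched with: mlx_lm.lora --config lora_config.yaml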

Training data lineage

Derived from the internal eu-kiki / mascarade curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the Ailiance-fr catalog for related cards.

Benchmark roadmap

This LoRA has not yet been evaluated through electron-bench (the current pipeline supports only the gemma-4-E4B base). Training used the standard mlx-lm LoRA trainer (rank 16, alpha 32, scale 2.0, Adam at LR 1e-5, 500 iterations); full hyperparameters are in the Training table above.

Planned evaluations:

  • Perplexity on the validation split of the training data
  • Functional benchmark on devstral-specific tasks
  • Comparison against the base mistralai/Devstral-Small-2-24B-Instruct-2512 (see the sketch after this list)
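
Until electron-bench supports this base, the comparison item can be spot-checked directly with mlx-lm. A minimal sketch; the prompt is illustrative, and the adapter path is assumed to resolve as in the quick start above:

from mlx_lm import load, generate

BASE = "mistralai/Devstral-Small-2-24B-Instruct-2512"
ADAPTER = "Ailiance-fr/devstral-cpp-bf16-lora"
PROMPT = "Implement a move-only RAII wrapper for a POSIX file descriptor in C++17."

# Run the plain base first, then the same base with the cpp LoRA applied.
# Note: each iteration reloads the 24B model; free memory between runs
# if your machine is tight on RAM.
for label, adapter in (("base", None), ("lora", ADAPTER)):
    model, tokenizer = load(BASE, adapter_path=adapter)
    print(label, "=>", generate(model, tokenizer, prompt=PROMPT, max_tokens=256))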

Track progress: ailiance-bench issues.

For reference benchmarks on the gemma-4-E4B base, see the base-vs-LoRA matrix.

License chain

Component                                                      License
Base model (mistralai/Devstral-Small-2-24B-Instruct-2512)      Apache-2.0
Training data: internal Ailiance curation (synthetic
and permissively licensed sources)                             Apache-2.0
LoRA adapter (this repo)                                       Apache-2.0

All upstream components are Apache-2.0 or MIT, so the adapter inherits permissive terms; see the Stack Exchange caveat under EU AI Act compliance below.

EU AI Act compliance

  • Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
  • Article 53(1)(d): training data summary; see the upstream dataset cards on Ailiance-fr.
  • GPAI Code of Practice (July 2025): the base mistralai/Devstral-Small-2-24B-Instruct-2512 is released under Apache-2.0.
  • No web scraping by Ailiance, no restrictively licensed data, no PII.
  • Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0, and that license propagates to this adapter.

License

LoRA weights: Apache-2.0. See the License chain table above for the derivation rationale.

Citation

@misc{ailiance_devstral_cpp_bf16_2026,
  author    = {Ailiance},
  title     = {Ailiance: Devstral-Small-2-24B-BF16 cpp LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-cpp-bf16-lora}
}

Related

See the full Ailiance-fr LoRA collection.

Bench comparison (2026-05-11)

Base model (Devstral-Small-2-24B-MLX-4bit) capability

Task                         Score          Notes
GSM8K-CoT (flex EM)          0.96           W3 lm-eval-harness (--limit 100)
ARC-Easy (acc / acc_norm)    0.80 / 0.75
MMLU-Pro Computer Science    0.64

Source: https://github.com/ailiance/ailiance/tree/main/output/lm-eval-base-2026-05-11
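
For orientation, a harness run of the same shape can be reproduced with EleutherAI's lm-evaluation-harness. This sketch uses the generic Hugging Face loader against the full-precision base; the table above was produced against the MLX 4-bit build, whose exact backend invocation is not reproduced here:

lm_eval --model hf \
    --model_args pretrained=mistralai/Devstral-Small-2-24B-Instruct-2512 \
    --tasks gsm8k_cot,arc_easy \
    --limit 100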

This LoRA (tuned): benchmarks pending

Results will include the kicad-sch / iact-bench validators plus a W3 lm-eval delta. Methodology spec: https://github.com/ailiance/ailiance-bench/blob/main/docs/superpowers/specs/2026-05-11-kicad-sch-gap-design.md
