⚑ Each donation = another big MoE quantized

I host 25+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory), enough for ~30-50B-class MoEs; bigger models (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage. Much appreciated.

Qwen3.5-35B-A3B Claude-Distilled APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks are coming soon. For reference, see the APEX benchmarks on the same Qwen3.5-MoE architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf | I-Balanced | ~24 GB | Best overall quality/size ratio |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Quality.gguf | I-Quality | ~22 GB | Highest quality with imatrix |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Quality.gguf | Quality | ~22 GB | Highest quality, standard (no imatrix) |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Balanced.gguf | Balanced | ~24 GB | General purpose |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Compact.gguf | I-Compact | ~17 GB | Consumer GPUs, best quality/size at this tier |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Compact.gguf | Compact | ~17 GB | Consumer GPUs |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Mini.gguf | I-Mini | ~13 GB | Smallest viable size, fastest inference |
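
To fetch a single variant without cloning the whole repo, you can use the huggingface_hub library. A minimal sketch; the variant chosen below is just an example, and any filename from the table works:

```python
# Download one APEX variant from this repo (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mudler/Qwen3.5-35B-A3B-Claude-Distilled-APEX-GGUF",
    filename="Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Compact.gguf",
)
print(path)  # local path to the cached GGUF file
```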

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, while middle layers get more aggressive compression. I-variants use diverse imatrix calibration data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: Qwen3.5-35B-A3B-Claude-Distilled (Qwen3.5-MoE, distilled from Claude 4.6 Opus reasoning)
  • Layers: 40
  • Experts: 256 routed + 1 shared (8 active per token)
  • Total Parameters: ~35B
  • Active Parameters: ~3B per token
  • APEX Config: 5+5 symmetric edge gradient across 40 layers (see the sketch after this list)
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
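
For intuition, here is a minimal sketch of how a 5+5 symmetric edge gradient over 40 layers could assign precision. This is an illustration only, not the actual APEX implementation: the quant type names are placeholders, and the real scheme additionally differentiates tensors by role (routed expert, shared expert, attention).

```python
# Illustrative only, not the actual APEX code: keep the first and last
# 5 layers ("edges") at higher precision and compress the 30 middle
# layers harder. Quant type names here are placeholders.
def edge_gradient_plan(n_layers=40, edge=5,
                       edge_type="Q6_K", middle_type="Q4_K"):
    plan = {}
    for layer in range(n_layers):
        is_edge = layer < edge or layer >= n_layers - edge
        plan[layer] = edge_type if is_edge else middle_type
    return plan

plan = edge_gradient_plan()
print(plan[0], plan[20], plan[39])  # Q6_K Q4_K Q6_K
```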

Run with LocalAI

local-ai run mudler/Qwen3.5-35B-A3B-Claude-Distilled-APEX-GGUF@Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf
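
These files should also load in any llama.cpp-based runtime with support for this architecture. A minimal sketch using the llama-cpp-python bindings, assuming a build recent enough for Qwen3.5-MoE; the parameters and prompt are examples only:

```python
# Local inference via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf",
    n_ctx=8192,       # context window; lower it if memory is tight
    n_gpu_layers=-1,  # offload all layers to the GPU when available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain MoE routing in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```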

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
