"Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj"
#4 opened 1 day ago
by
MrRyukami
#3: XPU not working: "No backend can handle 'dequantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}" (opened 5 days ago by AI-Joe-git; a workaround sketch follows this list)
#2: Create README.md (opened 6 days ago by dayz1593572159)
#1: Fp8 text encoder (opened 6 days ago by kakkkarotto)