Q4_0 speed comparison on a weak GPU (Vega 8, Vulkan)
Hi, since I requested this quant for better speed, I felt obliged to compare and share the results. I have a horrible internet connection (6 Mbps, and it rarely even delivers that), so downloading took a few days.
The conclusion is not what I expected, but it is what it is: there is no real-world usable speed difference. Since saving a bit of RAM and disk space has no benefit for me, and yours is said to have better perplexity and KLD, I will stick with it and see how the quality holds up. In a case like this, my conclusion is to stick with whichever quant has the best quality.
llama.cpp mainline: build: 2943210c1 (8157)
Thanks for creating the easy to read graphs!
Very cool! It seems this Q4_0 mix is slightly faster at PP, which makes sense: prompt processing is generally compute-bottlenecked, so a quantization type with a more efficient Vulkan kernel can give a little more speed.
TG also makes sense: token generation is likely memory-bandwidth-bottlenecked, so the quant with the smallest active parameter size will matter more than the exact quantization type!
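The bandwidth argument above can be put into a back-of-envelope estimate: if TG is purely memory-bandwidth-bound, each generated token has to stream all active weights from memory once, so tokens/s is roughly bandwidth divided by model size in bytes. This is a rough sketch with illustrative numbers, not a measurement; the function name and the 7B / ~4.5 bpw / 40 GB/s figures are hypothetical placeholders, not values from this thread.

```python
# Back-of-envelope TG speed estimate assuming a pure memory-bandwidth
# bottleneck: each token reads every active weight exactly once.
# All concrete numbers below are illustrative assumptions.

def estimate_tg_tps(active_params_billion: float,
                    bits_per_weight: float,
                    bandwidth_gb_s: float) -> float:
    """Rough tokens/s = memory bandwidth / bytes read per token."""
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical example: a 7B dense model at ~4.5 bits/weight (Q4_0-ish,
# including scales) on ~40 GB/s shared DDR4, plausible for an iGPU setup.
print(round(estimate_tg_tps(7.0, 4.5, 40.0), 1))  # roughly 10 tokens/s
```

This also shows why two 4-bit quants of the same model land at nearly the same TG speed: the bytes-per-token term barely changes, so only a smaller active parameter size moves the needle.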
MXFP4 is fairly fast at PP on Vulkan as well, since in terms of implementation it is basically a lower-quality Q4_0, though that quant likely mixes in a few other quant types, e.g. q4_K or q6_K, here and there.