CUDA: experimental native mxfp4 support for blackwell (#17906)
author	Aman Gupta <redacted>
	Wed, 24 Dec 2025 14:28:26 +0000 (22:28 +0800)
committer	GitHub <redacted>
	Wed, 24 Dec 2025 14:28:26 +0000 (22:28 +0800)
commit	c8a2417d7b65705231eeb5b4080d3e0428c9c5d2
tree	c03767eeae20990e0002867598e2f9638ab29ab8
parent	54132f1b1fa16b419d589ac03d3266178259eb25
CUDA: experimental native mxfp4 support for blackwell (#17906)

* CUDA: experimental native mxfp4 support for blackwell

* optimize load_tiles

* optimize quantize_mxfp4

* cleanup

* first pass review: formatting

* use interleaved layout for mma

* mmq: add assert for size

* use __nv_fp4x4_e2m1

* use iter_k as 512, cleanup

* Use 1200 as blackwell instead of 1000

* address review comments

* mmq: fix stride

* quantize.cu: use reference impl of e8m0 scale

* address review comments

* add 120f-virtual + minor fixes

---------

Co-authored-by: Aman Gupta <aman>
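The commit message above refers to MXFP4 quantization with a "reference impl of e8m0 scale" and `__nv_fp4x4_e2m1` storage. As background, here is a minimal host-side sketch of MXFP4 block quantization following the OCP Microscaling convention: 32 values share one E8M0 (power-of-two) scale, and each value is stored as a 4-bit E2M1 number. The names and structure below are illustrative assumptions, not the actual identifiers or kernel code in `ggml-cuda/quantize.cu`.

```cpp
#include <cmath>
#include <cstdint>

// The eight representable E2M1 magnitudes (sign is a separate bit).
static const float kE2M1Values[8] = {0.0f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f, 6.0f};

// Reference E8M0 scale for a block: 2^(floor(log2(amax)) - 2),
// since the largest E2M1 magnitude is 6 (binade exponent 2).
// Returned as the biased exponent byte that E8M0 stores.
static uint8_t e8m0_scale_exponent(float amax) {
    if (amax == 0.0f) {
        return 127;                          // biased exponent for 2^0
    }
    int e = (int) floorf(log2f(amax)) - 2;   // align amax into the E2M1 range
    if (e < -127) e = -127;
    if (e >  127) e =  127;
    return (uint8_t)(e + 127);               // E8M0 bias is 127
}

// Quantize one value to the nearest E2M1 code:
// bit 3 is the sign, bits 0..2 index the magnitude table.
static uint8_t quantize_e2m1(float x, float scale) {
    const float v = fabsf(x) / scale;
    int best = 0;
    for (int i = 1; i < 8; ++i) {
        if (fabsf(v - kE2M1Values[i]) < fabsf(v - kE2M1Values[best])) {
            best = i;
        }
    }
    return (uint8_t)((x < 0.0f ? 8 : 0) | best);
}
```

On Blackwell (the "1200" compute capability mentioned above), blocks packed this way can feed the native FP4 tensor-core path instead of being dequantized first, which is what makes a dedicated MXFP4 MMQ layout worthwhile.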
ggml/src/ggml-cuda/CMakeLists.txt
ggml/src/ggml-cuda/common.cuh
ggml/src/ggml-cuda/mma.cuh
ggml/src/ggml-cuda/mmq.cu
ggml/src/ggml-cuda/mmq.cuh
ggml/src/ggml-cuda/quantize.cu
ggml/src/ggml-cuda/quantize.cuh
ggml/src/ggml-cuda/vendors/cuda.h