author    Piotr Wilkin (ilintar) <redacted>
          Thu, 4 Dec 2025 21:19:51 +0000 (22:19 +0100)
committer Georgi Gerganov <redacted>
          Thu, 11 Dec 2025 13:32:54 +0000 (15:32 +0200)
commit    3571e69bdded2e7df7c6caa6723aee83918d6ac8
tree      064520c173127d95e7468fc0ed0fc911e0eda09a
parent    0f8cfa0f1c236808fcd4da7deb32964766c653ab
Add support for CUMSUM and TRI for CUDA. (llama/17584)

* Add support for CUMSUM and TRI for CUDA.
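
A note on semantics: CUMSUM computes a running (inclusive) sum along each row of the source tensor. A minimal sketch, illustrative only; the shipped kernel in src/ggml-cuda/cumsum.cu is parallelized within each row and handles strides and additional types:

    // Naive reference kernel: launch with <<<nrows, 1>>>, one thread per row.
    __global__ void cumsum_row_naive(const float * x, float * dst, const int64_t ncols) {
        const float * src = x   + (int64_t) blockIdx.x * ncols;
        float       * out = dst + (int64_t) blockIdx.x * ncols;

        float sum = 0.0f;
        for (int64_t i = 0; i < ncols; ++i) {
            sum   += src[i];
            out[i] = sum;
        }
    }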

* Minor optimizations.

* Correct the float2 variant of warp_prefix_inclusive_sum to return float2
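
The bullet above refers to a warp-level scan helper. A sketch of what warp_prefix_inclusive_sum plausibly looks like, assuming 32-lane warps and a full shuffle mask (per the later bullets, the shipped version is templated on the warp size and uses the new ggml_lane_mask_t):

    __device__ __forceinline__ float warp_prefix_inclusive_sum(float x) {
        // Hillis-Steele scan over the lanes of one warp.
        #pragma unroll
        for (int offset = 1; offset < 32; offset <<= 1) {
            const float y = __shfl_up_sync(0xffffffffu, x, offset);
            if ((threadIdx.x & 31) >= offset) {
                x += y;
            }
        }
        return x;
    }

    // The float2 variant scans both components independently and, per the
    // fix above, returns float2 rather than a scalar.
    __device__ __forceinline__ float2 warp_prefix_inclusive_sum(float2 v) {
        v.x = warp_prefix_inclusive_sum(v.x);
        v.y = warp_prefix_inclusive_sum(v.y);
        return v;
    }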

* Optimize TRI
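
For context, TRI produces a triangular version of a matrix. A hypothetical lower-triangular sketch; which triangle is kept and how the diagonal is treated are controlled by the op's parameters in the real code:

    // One block per row; threads stride across the columns.
    __global__ void tri_lower(const float * x, float * dst, const int64_t ncols) {
        const int64_t row = blockIdx.x;
        for (int64_t col = threadIdx.x; col < ncols; col += blockDim.x) {
            const int64_t i = row * ncols + col;
            dst[i] = col <= row ? x[i] : 0.0f;
        }
    }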

* Whitespace

* Fix strides.

* Implement double loop

* Whitespace

* Fix HIP compilation bugs

* Optimizations + big case performance tests

* Implement using CUB with fallback to custom kernel
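
CUB's device-wide scan is called twice by convention: once with a null buffer to obtain the scratch size, then again to run. A sketch under stated assumptions (plain cudaMalloc instead of ggml's pool allocator, and a single flat 1-D scan; how the commit maps per-row cumsum onto CUB, and which guard selects the custom-kernel fallback, are not shown here):

    #include <cub/cub.cuh>

    static void cumsum_via_cub(const float * d_in, float * d_out, int num_items, cudaStream_t stream) {
        void * d_temp     = nullptr;
        size_t temp_bytes = 0;
        // First call only computes the required scratch size.
        cub::DeviceScan::InclusiveSum(d_temp, temp_bytes, d_in, d_out, num_items, stream);
        cudaMalloc(&d_temp, temp_bytes);
        // Second call performs the actual inclusive scan.
        cub::DeviceScan::InclusiveSum(d_temp, temp_bytes, d_in, d_out, num_items, stream);
        cudaFree(d_temp);
    }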

* Remove error message.

* Fixes from code review

* Comment out CPU-unsupported F16/BF16 cases to fix CI

* Fine, you win :P

* Fix last cast, use NO_DEVICE_CODE and GGML_UNUSED_VARS

* Vary warp-size based on physical warp size

* Add GGML_UNUSED_VARS in tri as well

* Use constexpr and call prefix_inclusive with warp_size template param
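
The two warp-size bullets suggest a dispatch like the following: query the physical warp size at runtime (32 on NVIDIA, 32 or 64 on AMD) and pick a kernel instantiation that bakes it in as a compile-time constant. Names here are illustrative, and the commit reads the value via ggml_cuda_info() rather than cudaDeviceGetAttribute:

    template <int warp_size>
    static __global__ void cumsum_kernel(const float * x, float * dst, const int64_t ncols) {
        // ... per-row scan built on the warp scan helper, elided ...
    }

    static void launch_cumsum(const float * x, float * dst, int64_t nrows, int64_t ncols,
                              int device, cudaStream_t stream) {
        int ws = 32;
        cudaDeviceGetAttribute(&ws, cudaDevAttrWarpSize, device);
        if (ws == 64) {
            cumsum_kernel<64><<<(int) nrows, 64, 0, stream>>>(x, dst, ncols);
        } else {
            cumsum_kernel<32><<<(int) nrows, 32, 0, stream>>>(x, dst, ncols);
        }
    }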

* Update ggml/src/ggml-cuda/cumsum.cu

Co-authored-by: Johannes Gäßler <redacted>
* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Change to tid % warp_size

* Fix strides; hardcode mask; add ggml_lane_mask_t
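
ggml_lane_mask_t is the new shuffle-mask type named above. A guess at its shape; the actual definition lives in src/ggml-cuda/common.cuh and may key on the wavefront size rather than on the HIP build alone:

    #ifdef GGML_USE_HIP
    typedef uint64_t ggml_lane_mask_t;  // AMD wave64: one mask bit per lane needs 64 bits
    #else
    typedef uint32_t ggml_lane_mask_t;  // NVIDIA warps have 32 lanes
    #endif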

* Apply missing renames, remove the unused get_warp_mask(), make calls to ggml_cuda_info() explicit

* Too hasty...

---------

Co-authored-by: Johannes Gäßler <redacted>
src/ggml-cuda/common.cuh
src/ggml-cuda/cumsum.cu [new file with mode: 0644]
src/ggml-cuda/cumsum.cuh [new file with mode: 0644]
src/ggml-cuda/ggml-cuda.cu
src/ggml-cuda/tri.cu [new file with mode: 0644]
src/ggml-cuda/tri.cuh [new file with mode: 0644]
tests/test-backend-ops.cpp