author    Piotr Wilkin (ilintar) <redacted>
          Thu, 4 Dec 2025 21:19:51 +0000 (22:19 +0100)
committer Georgi Gerganov <redacted>
          Fri, 12 Dec 2025 15:53:17 +0000 (17:53 +0200)
commit    8d44d6181a362b9b9447baa6acf5b33cb70ef208
tree      0be82679d09a8a4060e7e84bd08de80c7a9a534a
parent    8902c9d976774d73c050ef856361445ed83a4547
Add support for CUMSUM and TRI for CUDA. (llama/17584)

* Add support for CUMSUM and TRI for CUDA.

* Minor optimizations.

* Correct warp_prefix_inclusive_sum in float2 variant to return float2 (sketched below, after the trailers)

* Optimize TRI (a kernel sketch follows below, after the trailers)

* Whitespace

* Fix strides.

* Implement double loop

* Whitespace

* Fix HIP compilation bugs

* Optimizations + big case performance tests

* Implement using CUB with fallback to custom kernel (sketched below, after the trailers)

* Remove error message.

* Fixes from code review

* Comment out CPU-unsupported F16/BF16 cases to fix CI

* Fine, you win :P

* Fix last cast, use NO_DEVICE_CODE and GGML_UNUSED_VARS

* Vary warp-size based on physical warp size

* Add GGML_UNUSED_VARS in tri as well

* Use constexpr and call prefix_inclusive with warp_size template param

* Update ggml/src/ggml-cuda/cumsum.cu

Co-authored-by: Johannes Gäßler <redacted>
* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Change to tid % warp_size

* Fix strides; hardcode mask; add ggml_lane_mask_t

* Missing renames, remove unused get_warp_mask(), explicit calls to ggml_cuda_info()

* Too hasty...

---------

Co-authored-by: Johannes Gäßler <redacted>
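
For context, a minimal sketch of the shuffle-based inclusive warp scan the
messages above describe. The name warp_prefix_inclusive_sum, the warp_size
template parameter, and the tid % warp_size lane computation come from the
commits; the body, the float (rather than float2) element type, and the
hard-coded 32-bit mask are assumptions, not the actual cumsum.cu code (the
commits add ggml_lane_mask_t precisely because 64-lane AMD wavefronts need
a wider mask):

    // Hillis-Steele inclusive scan within one warp (assumed implementation).
    template <int warp_size>
    __device__ __forceinline__ float warp_prefix_inclusive_sum(float x) {
        const int lane = threadIdx.x % warp_size;
    #pragma unroll
        for (int offset = 1; offset < warp_size; offset *= 2) {
            // Pull the partial sum from `offset` lanes below this one.
            const float y = __shfl_up_sync(0xffffffff, x, offset, warp_size);
            if (lane >= offset) {
                x += y;
            }
        }
        return x;
    }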
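
The "CUB with fallback to custom kernel" bullet suggests host-side dispatch
along these lines. This is a sketch under assumptions: the USE_CUB guard, the
raw-pointer interface, and the launch_custom_cumsum helper are placeholders
rather than ggml's actual build switch or helpers; only
cub::DeviceScan::InclusiveSum is real CUB API:

    #include <cub/cub.cuh>

    // Per-row inclusive sum: CUB fast path when available, custom kernel
    // otherwise. The two-call CUB pattern first queries the workspace size
    // with a null pointer, then runs the scan for real.
    static void cumsum_row_f32(const float * d_in, float * d_out, int n, cudaStream_t stream) {
    #ifdef USE_CUB  // placeholder guard, not ggml's actual switch
        size_t tmp_bytes = 0;
        cub::DeviceScan::InclusiveSum(nullptr, tmp_bytes, d_in, d_out, n, stream);
        void * d_tmp = nullptr;
        cudaMallocAsync(&d_tmp, tmp_bytes, stream);
        cub::DeviceScan::InclusiveSum(d_tmp, tmp_bytes, d_in, d_out, n, stream);
        cudaFreeAsync(d_tmp, stream);
    #else
        launch_custom_cumsum(d_in, d_out, n, stream); // hypothetical fallback launcher
    #endif
    }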
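
And the general shape of a TRI kernel, assuming the op writes a
lower-triangular copy of each matrix; the real tri.cu also covers the upper
variants, the F16/BF16 cases mentioned above, and the non-contiguous strides
that the "Fix strides" commits address. All names and strides here are
illustrative:

    // One thread per element: keep src on or below the diagonal, zero above.
    // ne0 = row length, ne1 = rows; s1/d1 are row strides in elements.
    __global__ void tri_lower_f32(const float * src, float * dst,
                                  const int64_t ne0, const int64_t ne1,
                                  const int64_t s1, const int64_t d1) {
        const int64_t i0 = (int64_t) blockIdx.x*blockDim.x + threadIdx.x; // column
        const int64_t i1 = blockIdx.y;                                    // row
        if (i0 >= ne0 || i1 >= ne1) {
            return;
        }
        dst[i1*d1 + i0] = i0 <= i1 ? src[i1*s1 + i0] : 0.0f;
    }
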
ggml/src/ggml-cuda/common.cuh
ggml/src/ggml-cuda/cumsum.cu [new file with mode: 0644]
ggml/src/ggml-cuda/cumsum.cuh [new file with mode: 0644]
ggml/src/ggml-cuda/ggml-cuda.cu
ggml/src/ggml-cuda/tri.cu [new file with mode: 0644]
ggml/src/ggml-cuda/tri.cuh [new file with mode: 0644]