git.djapps.eu Git - pkg/ggml/sources/ggml/log
2 weeks agoFix arm64 build debian/latest debian/0.9.4.185-1
Mathieu Baudier [Tue, 11 Nov 2025 10:25:23 +0000 (11:25 +0100)]
Fix arm64 build

2 weeks agoImprove Ubuntu ARM64 build
Mathieu Baudier [Tue, 11 Nov 2025 10:22:05 +0000 (11:22 +0100)]
Improve Ubuntu ARM64 build

3 weeks agoUpdate upstream and improve packaging
Mathieu Baudier [Tue, 11 Nov 2025 09:00:26 +0000 (10:00 +0100)]
Update upstream and improve packaging

3 weeks agoMerge tag 'upstream/0.9.4.185' into debian/latest
Mathieu Baudier [Tue, 11 Nov 2025 07:52:12 +0000 (08:52 +0100)]
Merge tag 'upstream/0.9.4.185' into debian/latest

Upstream pinned commit

3 weeks agosync : whisper.cpp upstream/0.9.4.185
Georgi Gerganov [Sun, 9 Nov 2025 21:40:36 +0000 (23:40 +0200)]
sync : whisper.cpp

3 weeks agosync : llama.cpp
Georgi Gerganov [Sun, 9 Nov 2025 12:46:57 +0000 (14:46 +0200)]
sync : llama.cpp

3 weeks agovulkan: iGPU memory reporting fix (llama/17110)
Ruben Ortlam [Sun, 9 Nov 2025 08:54:47 +0000 (09:54 +0100)]
vulkan: iGPU memory reporting fix (llama/17110)

* vulkan: use all device-local heaps for memory availability reporting

Co-authored-by: Giuseppe Scrivano <redacted>
* use all available heaps for iGPU memory reporting

* Allow multiple memory types per buffer request for devices with split heaps
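
The heap part of the change can be sketched in plain Vulkan C++ (an illustrative stand-in, not the backend's actual code): sum every heap that advertises the device-local flag instead of stopping at the first one.

```cpp
#include <vulkan/vulkan.h>

// Report the total of all device-local heaps; iGPUs with split heaps
// otherwise under-report their available memory.
VkDeviceSize device_local_total(VkPhysicalDevice dev) {
    VkPhysicalDeviceMemoryProperties mp;
    vkGetPhysicalDeviceMemoryProperties(dev, &mp);
    VkDeviceSize total = 0;
    for (uint32_t i = 0; i < mp.memoryHeapCount; ++i) {
        if (mp.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
            total += mp.memoryHeaps[i].size;
        }
    }
    return total;
}
```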

---------

Co-authored-by: Giuseppe Scrivano <redacted>
3 weeks agovulkan: fix mmq out of bounds reads (llama/17108)
Ruben Ortlam [Sun, 9 Nov 2025 08:52:57 +0000 (09:52 +0100)]
vulkan: fix mmq out of bounds reads (llama/17108)

* vulkan: fix mmq out of bounds reads, streamline outdated matmul host code

* fix mul_mat_id quantization call

* Fix compiler warnings

3 weeks agovulkan: fuse mul_mat_id + mul (llama/17095)
Jeff Bolz [Sun, 9 Nov 2025 08:48:42 +0000 (02:48 -0600)]
vulkan: fuse mul_mat_id + mul (llama/17095)

* vulkan: fuse mul_mat_id + mul

This comes up in Qwen3 MoE.

* split mul_mat_id fusion tests into a separate class

3 weeks agometal : retain src and dst buffers during async ops (llama/17101)
Georgi Gerganov [Sun, 9 Nov 2025 06:28:51 +0000 (08:28 +0200)]
metal : retain src and dst buffers during async ops (llama/17101)

3 weeks agovulkan: Use spec constants for conv2d s/d/p and kernel W/H (llama/16978)
Jeff Bolz [Sat, 8 Nov 2025 19:24:29 +0000 (13:24 -0600)]
vulkan: Use spec constants for conv2d s/d/p and kernel W/H (llama/16978)

* vulkan: Use spec constants for conv2d s/d/p and kernel W/H

Also add some additional unroll hints, which seem to help.

* lock around map lookup

3 weeks agoRevert "CUDA: add expert reduce kernel (#16857)" (llama/17100)
Aman Gupta [Sat, 8 Nov 2025 13:05:19 +0000 (21:05 +0800)]
Revert "CUDA: add expert reduce kernel (#16857)" (llama/17100)

3 weeks agoCUDA: skip fusion for repeating adds in bias (llama/17080)
Aman Gupta [Sat, 8 Nov 2025 08:58:05 +0000 (16:58 +0800)]
CUDA: skip fusion for repeating adds in bias (llama/17080)

3 weeks agovulkan: Increase BK to 32; use BK/4 for non-CM mul_mm.comp (llama/16636)
SavicStefan [Sat, 8 Nov 2025 08:28:22 +0000 (09:28 +0100)]
vulkan: Increase BK to 32; use BK/4 for non-CM mul_mm.comp (llama/16636)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>
3 weeks agoggml: disable vxe for cross-compilation by default (llama/16966)
Aleksei Nikiforov [Sat, 8 Nov 2025 08:00:20 +0000 (09:00 +0100)]
ggml: disable vxe for cross-compilation by default (llama/16966)

Otherwise compilation fails because -mvx and -mzvector are enabled
without the corresponding -march options being set.

3 weeks agovulkan: fuse rms_norm + mul + rope (+ view + set_rows) (llama/16977)
Jeff Bolz [Sat, 8 Nov 2025 07:52:15 +0000 (01:52 -0600)]
vulkan: fuse rms_norm + mul + rope (+ view + set_rows) (llama/16977)

This change combines the rms_norm+mul and rope+view+set_rows fusions to
allow fusing the whole sequence together. This comes up in Qwen3, Bailing,
and some other models.

3 weeks agovulkan: Fix test-thread-safety crashes (llama/17024)
Jeff Bolz [Sat, 8 Nov 2025 07:39:45 +0000 (01:39 -0600)]
vulkan: Fix test-thread-safety crashes (llama/17024)

The std::map pipeline_flash_attn_f32_f16 could be searched and inserted into at the
same time, so those accesses need to hold the lock. To be safe, hold the lock for all of
ggml_vk_load_shaders.
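
A minimal C++ sketch of the hazard and the fix (the map's value type is a stand-in; the locking granularity is the point):

```cpp
#include <map>
#include <mutex>

static std::mutex lock_;
static std::map<int, int> pipeline_flash_attn_f32_f16;

// Concurrent find() and operator[] on the same std::map is a data race;
// holding the mutex across the whole lookup-or-insert makes it safe.
int & get_pipeline(int key) {
    std::lock_guard<std::mutex> guard(lock_);
    return pipeline_flash_attn_f32_f16[key]; // searches, may insert
}
```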

3 weeks agoCUDA: fix MMQ stream-k fixup ne1 indices (llama/17089)
Johannes Gäßler [Sat, 8 Nov 2025 07:26:18 +0000 (08:26 +0100)]
CUDA: fix MMQ stream-k fixup ne1 indices (llama/17089)

3 weeks agoggml webgpu: faster matrix multiplication/matrix-vector multiplication (llama/17031)
Reese Levine [Sat, 8 Nov 2025 03:27:20 +0000 (19:27 -0800)]
ggml webgpu: faster matrix multiplication/matrix-vector multiplication (llama/17031)

* Faster tensors (llama/8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings

3 weeks agoCUDA: properly handle nb00=nb02 case for cpy (llama/17081)
bssrdf [Fri, 7 Nov 2025 22:41:58 +0000 (17:41 -0500)]
CUDA: properly handle nb00=nb02 case for cpy (llama/17081)

3 weeks agovulkan : refactor buffer handling in vk_op_f32 (llama/16840)
Acly [Fri, 7 Nov 2025 20:08:50 +0000 (21:08 +0100)]
vulkan : refactor buffer handling in vk_op_f32 (llama/16840)

* vulkan : refactor/simplify buffer handling in vk_op_* functions

* Combine UMA handling into ggml_vk_tensor_subbuffer

3 weeks agoCUDA: fix should_use_mmvf for ne11 == 1 (llama/17085)
Johannes Gäßler [Fri, 7 Nov 2025 19:53:14 +0000 (20:53 +0100)]
CUDA: fix should_use_mmvf for ne11 == 1 (llama/17085)

* CUDA: fix should_use_mmvf for ne11 == 1

* Apply suggestion from @am17an

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
3 weeks agoRevert "ggml-cpu: detect correct cpu flags for arm64 (llama/16229) (#16239)" (llama/17084)
Adrien Gallouët [Fri, 7 Nov 2025 16:34:05 +0000 (17:34 +0100)]
Revert "ggml-cpu: detect correct cpu flags for arm64 (llama/16229) (#16239)" (llama/17084)

This reverts commit 7c23f3f0d4b9f5d6ea140756eb694b562d5acebb.

3 weeks agoggml-cpu: detect correct cpu flags for arm64 (#16229) (llama/16239)
iron [Fri, 7 Nov 2025 16:18:14 +0000 (00:18 +0800)]
ggml-cpu: detect correct cpu flags for arm64 (#16229) (llama/16239)

When using GCC 9 or GCC 12 on the arm64 platform of Ubuntu 20.04,
the command "gcc -mcpu=native -E -v -" fails to detect the correct CPU flags,
which results in compilation failures for certain extended instructions,
but the correct CPU flags can be obtained by using gcc -march instead.
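
One generic way to verify what a given set of flags actually enables (not part of this commit) is to inspect the ARM feature macros from a translation unit compiled with those flags:

```cpp
#include <cstdio>

// Prints which arm64 extensions the compiler enabled; compare a build
// using -mcpu=native against one using an explicit -march=... value.
int main() {
#ifdef __ARM_FEATURE_DOTPROD
    std::puts("dotprod");
#endif
#ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
    std::puts("fp16 vector arithmetic");
#endif
#ifdef __ARM_FEATURE_SVE
    std::puts("sve");
#endif
    return 0;
}
```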

Signed-off-by: lizhenneng <redacted>
Co-authored-by: lizhenneng <redacted>
3 weeks agoggml-cpu : optimize RVV q2_k and q3_k kernels (llama/16887)
xctan [Thu, 6 Nov 2025 16:12:45 +0000 (00:12 +0800)]
ggml-cpu : optimize RVV q2_k and q3_k kernels (llama/16887)

3 weeks agoCUDA: fix crash on uneven context without FA (llama/16988)
Johannes Gäßler [Thu, 6 Nov 2025 13:05:47 +0000 (14:05 +0100)]
CUDA: fix crash on uneven context without FA (llama/16988)

3 weeks agometal : initial Metal4 tensor API support (llama/16634)
Georgi Gerganov [Thu, 6 Nov 2025 12:45:10 +0000 (14:45 +0200)]
metal : initial Metal4 tensor API support (llama/16634)

* metal : rework mat-mat multiplication

* metal : initial Metal4 support

* cont

* metal : detect tensor support

* cont : better ifdefs

* metal : support tensors in mul_mm_id

* metal : add env for disabling tensor API

* tests : restore

* metal : remove unused constants

* metal : fix check for bfloat tensor support

* cont : handle API incompatibilities

* cont : handle even more incompatibilities

* metal : use tensor API only on M5 and later

3 weeks agosycl: add CONCAT operator support (llama/16047)
YehuditE [Thu, 6 Nov 2025 10:02:33 +0000 (12:02 +0200)]
sycl: add CONCAT operator support (llama/16047)

* sycl: add CONCAT operator support

* cleanup: remove stray lines added by mistake

* fix: code format issues in concat.cpp and tests/test-backend-ops.cpp

* chore: fix editorconfig violations

* cleanup: drop unnecessary i16 type support

* docs: update sycl-csv and regenerate ops.md

* update docs/ops.md

* fix: adapt to upstream master changes after rebase

* fix: remove empty files

* fix: drop whitespace

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks agoggml-hexagon: graceful fallback for older socs where rpcmem_alloc2 and FASTRPC_GET_URI is unsupported (llama/16987)
l3utterfly [Thu, 6 Nov 2025 05:46:38 +0000 (13:46 +0800)]
ggml-hexagon: graceful fallback for older socs where rpcmem_alloc2 and FASTRPC_GET_URI is unsupported (llama/16987)

* support older socs where FASTRPC_GET_URI is unsupported

* added graceful fallback when FASTRPC_GET_URI call fails

* use weak symbols instead of loading libcdsprpc.so dynamically

* Add weak pragma for rpcmem_alloc2

* Remove weak declaration for rpcmem_alloc2 in ggml-hexagon.cpp

Removed weak declaration for rpcmem_alloc2.

* Enforce ndev to 1 for archs below v75

Force ndev to 1 for SoC architectures below v75.

3 weeks agoimprove CUDA cpy memory bandwidth when copying transposed tensor (llama/16841)
bssrdf [Wed, 5 Nov 2025 20:55:04 +0000 (15:55 -0500)]
improve CUDA cpy memory bandwidth when copying transposed tensor (llama/16841)

* WIP

* added a cpy kernel specific to transposed tensors which uses smem to avoid uncoalesced access; test cases also added showing improved memory bandwidth

* added BF16 support

* more strict check to make sure src0 is a transpose

* reformulated to handle more complicated transpose cases

* bring back 2D transpose for higher performance

* allow build on windows

* transpose copy more shapes

* minor tweak

* final clean up

* restore some test cases

* keep only the kernel for the truly transposed case; updated with review suggestions

* make CI happy

* remove headers not needed

* reduced bank conflicts for fp16 and bf16

* add missing const*

* now bank-conflict free

* use padding instead of swizzling
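
A standalone C++ illustration of why padding helps (the real tile lives in CUDA shared memory; this only shows the bank arithmetic): with a row stride of 32 floats every row maps a given column to the same bank, while a stride of 33 rotates the mapping across all 32 banks.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const int BANKS = 32;
    for (int stride : {32, 33}) {   // unpadded vs padded tile row
        std::printf("stride %d, column 0 hits banks:", stride);
        for (int row = 0; row < 4; ++row) {
            std::printf(" %d", (row * stride) % BANKS);
        }
        std::printf(" ...\n");
    }
    return 0;
}
```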

---------

Co-authored-by: bssrdf <redacted>
3 weeks agovulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (llama/16919)
Jeff Bolz [Wed, 5 Nov 2025 18:51:03 +0000 (12:51 -0600)]
vulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (llama/16919)

3 weeks agosync : llama.cpp
Georgi Gerganov [Sun, 9 Nov 2025 12:45:38 +0000 (14:45 +0200)]
sync : llama.cpp

3 weeks agoggml webgpu: minor set rows optimization (llama/16810)
Reese Levine [Sun, 9 Nov 2025 12:44:39 +0000 (14:44 +0200)]
ggml webgpu: minor set rows optimization (llama/16810)

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Minor set_rows optimization (#4)

* updated optimization, fixed errors

* non vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Comment on dawn toggles

* Remove some comments

* Implement overlap binary operators

* Revert "Implement overlap binary operators"

This reverts commit ed710b36f51ab3f53fa13db15c1685dc8678a32a.

* Disable support for non-contiguous binary_op tensors and leave note for future support

---------

Co-authored-by: neha-ha <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Neha Abbas <redacted>
3 weeks agosync : llama.cpp
Georgi Gerganov [Sun, 9 Nov 2025 12:43:58 +0000 (14:43 +0200)]
sync : llama.cpp

3 weeks agorefactor: replace sprintf with snprintf for safer string handling in dump functions (llama/16913)
nullname [Tue, 4 Nov 2025 20:25:39 +0000 (04:25 +0800)]
refactor: replace sprintf with snprintf for safer string handling in dump functions (llama/16913)
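
The pattern itself, for reference (a generic example, not the actual dump code):

```cpp
#include <cstdint>
#include <cstdio>

// sprintf() writes without bound; snprintf() never writes more than
// `size` bytes and always NUL-terminates, truncating if necessary.
void dump_dims(char * buf, std::size_t size, int64_t ne0, int64_t ne1) {
    std::snprintf(buf, size, "[%lld, %lld]", (long long) ne0, (long long) ne1);
}
```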

3 weeks agovulkan: remove the need for the dryrun (llama/16826)
Jeff Bolz [Tue, 4 Nov 2025 19:28:17 +0000 (13:28 -0600)]
vulkan: remove the need for the dryrun (llama/16826)

* vulkan: remove the need for the dryrun

Allocate pipelines and descriptor sets when requested.

Reallocate the prealloc buffers when needed, and flush any pending work
before reallocating.

For rms_partials and total_mul_mat_bytes, use the sizes computed the last time
the graph was executed.

* remove dryrun parameters

3 weeks agoggml-cpu : bicubic interpolation (llama/16891)
Acly [Tue, 4 Nov 2025 12:12:20 +0000 (13:12 +0100)]
ggml-cpu : bicubic interpolation (llama/16891)

3 weeks agoFix garbled output with REPACK at high thread counts (llama/16956)
Noah [Tue, 4 Nov 2025 05:04:59 +0000 (05:04 +0000)]
Fix garbled output with REPACK at high thread counts (llama/16956)

* Fix garbled output with REPACK at high thread counts

Fixed a race condition in the REPACK matrix multiplication code that caused garbled output when using 26+ threads (model-dependent threshold).

The issue occurred because, with high thread counts, the code forced the chunk count to equal the thread count, creating many small chunks. After aligning these chunks to NB_COLS boundaries, adjacent chunks could overlap, causing data corruption and race conditions.

The fix enforces minimum chunk sizes based on NB_COLS and caps the maximum chunk count to prevent creating too many tiny chunks, ensuring proper alignment without overlaps.
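
A sketch of the resulting constraint (NB_COLS is from the commit; the minimum-chunk factor is an assumption):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper: pick a chunk size that is a multiple of nb_cols and
// cap the chunk count, so aligning neighbouring chunks cannot make them overlap.
int64_t chunk_size(int64_t nrows, int64_t nthreads, int64_t nb_cols) {
    const int64_t min_chunk  = 4 * nb_cols;                  // assumed floor
    const int64_t max_chunks = std::max<int64_t>(1, nrows / min_chunk);
    const int64_t nchunks    = std::min(nthreads, max_chunks);
    const int64_t chunk      = (nrows + nchunks - 1) / nchunks;
    return ((chunk + nb_cols - 1) / nb_cols) * nb_cols;      // align up to NB_COLS
}
```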

* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
3 weeks agoCUDA: avoid mul + bias fusion when doing fusion (llama/16935)
Aman Gupta [Tue, 4 Nov 2025 02:53:48 +0000 (10:53 +0800)]
CUDA: avoid mul + bias fusion when doing fusion (llama/16935)

3 weeks agoopencl: support imrope (llama/16914)
lhez [Mon, 3 Nov 2025 19:47:57 +0000 (11:47 -0800)]
opencl: support imrope (llama/16914)

* opencl: support imrope

* opencl: fix whitespace

3 weeks agoggml: CUDA: add head size 72 for flash-attn (llama/16962)
theo77186 [Mon, 3 Nov 2025 13:29:11 +0000 (14:29 +0100)]
ggml: CUDA: add head size 72 for flash-attn (llama/16962)

3 weeks agoggml : LoongArch fixes (llama/16958)
Jinyang He [Mon, 3 Nov 2025 06:40:02 +0000 (14:40 +0800)]
ggml : LoongArch fixes (llama/16958)

* Fix test-quantize-fns f16 and q4_0 failed when use LSX

* Fix LoongArch set float intrinsic when use LSX/LASX

3 weeks agoSYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) [Feature/sycl repeat back opt] (#16869)
shani-f [Mon, 3 Nov 2025 01:35:33 +0000 (03:35 +0200)]
SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) [Feature/sycl repeat back opt] (#16869)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* SYCL: optimize repeat_back kernel

* Remove Hebrew comment from repeat_back.cpp

* Remove comments for code clarity

Removed comments to clean up the code.

* Fix formatting in ggml-sycl.cpp

* Formatted lambda according to legacy style. No logic changes

* Remove blank line in repeat_back.cpp

Remove unnecessary blank line before assigning acc to dst_dd.

3 weeks agotest-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (llama/16936)
Shagun Bera [Sun, 2 Nov 2025 23:10:30 +0000 (04:40 +0530)]
test-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (llama/16936)

* tests: fix segfault in moe-expert-reduce test in support mode and --show-coverage

* tests: init gf and filter out fusion tests for support mode

* tests: filter out fusion cases before calling eval_support

* tests: filter out fusion cases from show_test_coverage as well, fix lint

3 weeks agoclip : use FA (llama/16837)
Georgi Gerganov [Sun, 2 Nov 2025 20:21:48 +0000 (22:21 +0200)]
clip : use FA (llama/16837)

* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <redacted>
3 weeks agoCUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917)
mnehete32 [Sun, 2 Nov 2025 03:12:57 +0000 (08:42 +0530)]
CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917)

3 weeks agoggml: add s390x cpu-feats (llama/16774)
Aaron Teo [Sun, 2 Nov 2025 00:48:23 +0000 (08:48 +0800)]
ggml: add s390x cpu-feats (llama/16774)

3 weeks agovulkan: Fix multi_add invalid descriptor usage (llama/16899)
Jeff Bolz [Sat, 1 Nov 2025 05:52:14 +0000 (00:52 -0500)]
vulkan: Fix multi_add invalid descriptor usage (llama/16899)

3 weeks agovulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)
Jeff Bolz [Sat, 1 Nov 2025 05:45:28 +0000 (00:45 -0500)]
vulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)

* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks agoCUDA: Remove unneded bias/gate dims in fused mmvq (llama/16858)
Oliver Simons [Sat, 1 Nov 2025 05:13:26 +0000 (06:13 +0100)]
CUDA: Remove unneded bias/gate dims in fused mmvq (llama/16858)

* CUDA: Remove unneded bias/gate dims in fused mmvq

As pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989),
only a single value is needed per target column per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <redacted>
3 weeks agoCUDA: Volta tensor core support for MMF (llama/16843)
Johannes Gäßler [Fri, 31 Oct 2025 14:57:19 +0000 (15:57 +0100)]
CUDA: Volta tensor core support for MMF (llama/16843)

* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
3 weeks agoggml : fix conv2d_dw SVE path (#1380)
Georgi Gerganov [Tue, 4 Nov 2025 18:40:52 +0000 (20:40 +0200)]
ggml : fix conv2d_dw SVE path (#1380)

* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length

The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.

Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.
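
A minimal sketch of the pattern, assuming an SVE-enabled toolchain (svcntw() returns the runtime number of 32-bit lanes):

```cpp
#include <arm_sve.h>
#include <cstdint>

// Scale n floats using whatever vector length the hardware provides,
// instead of assuming a fixed GGML_F32_EPR of 8.
void scale_f32(float * x, float s, int64_t n) {
    const int64_t vl = svcntw();               // runtime 32-bit lane count
    for (int64_t i = 0; i < n; i += vl) {
        svbool_t    pg = svwhilelt_b32(i, n);  // masks the final partial vector
        svfloat32_t v  = svld1_f32(pg, x + i);
        svst1_f32(pg, x + i, svmul_n_f32_x(pg, v, s));
    }
}
```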

Co-authored-by: ggerganov <redacted>
* ci : reduce sam score threshold

* ci : update bbox checks for sam test

---------

Co-authored-by: copilot-swe-agent[bot] <redacted>
Co-authored-by: ggerganov <redacted>
4 weeks agosync : llama.cpp
Georgi Gerganov [Fri, 31 Oct 2025 14:27:03 +0000 (16:27 +0200)]
sync : llama.cpp

4 weeks agoCUDA: add expert reduce kernel (llama/16857)
Aman Gupta [Fri, 31 Oct 2025 12:05:07 +0000 (20:05 +0800)]
CUDA: add expert reduce kernel (llama/16857)

* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks agovulkan: disable spirv-opt for rope shaders (llama/16872)
Jeff Bolz [Fri, 31 Oct 2025 07:34:47 +0000 (02:34 -0500)]
vulkan: disable spirv-opt for rope shaders (llama/16872)

4 weeks agovulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)
Masato Nakasaka [Fri, 31 Oct 2025 07:18:59 +0000 (16:18 +0900)]
vulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)

* Experimenting with a crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines

4 weeks agovulkan: fix shmem overrun in mmq id shader (llama/16873)
Ruben Ortlam [Fri, 31 Oct 2025 07:14:49 +0000 (08:14 +0100)]
vulkan: fix shmem overrun in mmq id shader (llama/16873)

* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks agoggml-hexagon: respect input size when getting/setting tensor data (llama/16836)
l3utterfly [Fri, 31 Oct 2025 04:46:31 +0000 (12:46 +0800)]
ggml-hexagon: respect input size when getting/setting tensor data (llama/16836)

* respect input size when getting/setting tensor data

allows partial repacking/copying when the requested tensor size is smaller than the actual tensor

* Removed duplicate repack_mxfp4_mxfp4x4x2 function

4 weeks agoopencl: fix boundary handling for mul_mm (llama/16875)
lhez [Thu, 30 Oct 2025 23:00:20 +0000 (16:00 -0700)]
opencl: fix boundary handling for mul_mm (llama/16875)

4 weeks agocpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (llama/16833)
Max Krasnyansky [Thu, 30 Oct 2025 16:06:13 +0000 (09:06 -0700)]
cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (llama/16833)

Very similar implementation to the flash-attention chunking, with similar benefits.

4 weeks agomodel: add support for qwen3vl series (llama/16780)
JJJYmmm [Thu, 30 Oct 2025 15:19:14 +0000 (23:19 +0800)]
model: add support for qwen3vl series (llama/16780)

* support qwen3vl series.

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
* bugfix: fix the arch check for qwen3vl-moe.

* use build_ffn

* optimize deepstack structure

* optimize deepstack feature saving

* Revert "optimize deepstack feature saving" for temporal fix

This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.

* code clean

* use fused qkv in clip

* clean up / rm is_deepstack_layers for simplification

* add test model

* move test model to "big" section

* fix imrope check

* remove trailing whitespace

* fix rope fail

* metal : add imrope support

* add imrope support for sycl

* vulkan: add imrope w/o check

* fix vulkan

* webgpu: add imrope w/o check

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix tensor mapping

---------

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agocpu: introduce chunking for flash attention (llama/16829)
Max Krasnyansky [Thu, 30 Oct 2025 12:26:05 +0000 (05:26 -0700)]
cpu: introduce chunking for flash attention (llama/16829)

Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.
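
The shape of the refactor (function names are from the commit message; the bodies are stand-ins):

```cpp
#include <algorithm>
#include <cstdint>

// Stand-in for the factored-out core loop: process rows [ir0, ir1).
static void flash_atten_f16_one_chunk(int64_t ir0, int64_t ir1) {
    (void) ir0; (void) ir1; // the real kernel does the attention math here
}

// Outer loop on top: walk the row range chunk by chunk, so chunks can be
// handed out to worker threads the same way the repack matmuls do it.
void flash_atten_f16(int64_t nrows) {
    const int64_t CHUNK = 64; // assumed chunk size
    for (int64_t ir0 = 0; ir0 < nrows; ir0 += CHUNK) {
        flash_atten_f16_one_chunk(ir0, std::min(ir0 + CHUNK, nrows));
    }
}
```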

4 weeks agocuda : fix argsort with 64k+ rows (llama/16849)
Sigbjørn Skjæret [Thu, 30 Oct 2025 07:56:28 +0000 (08:56 +0100)]
cuda : fix argsort with 64k+ rows (llama/16849)

4 weeks agovulkan: Handle argsort with a large number of rows (llama/16851)
Jeff Bolz [Thu, 30 Oct 2025 06:27:41 +0000 (01:27 -0500)]
vulkan: Handle argsort with a large number of rows (llama/16851)

4 weeks agoHide latency of bias and gate-loading (llama/16847)
Oliver Simons [Thu, 30 Oct 2025 03:34:15 +0000 (04:34 +0100)]
Hide latency of bias and gate-loading (llama/16847)

This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. As a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.
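
The idea reduced to scalar C++ (the real code is a CUDA kernel; this analogy only shows the reordering):

```cpp
// Read bias and gate into locals *before* the dot-product so their loads
// are in flight while the accumulation runs, instead of stalling on them
// right at the end.
float fused_mmv_col(const float * x, const float * w, int n,
                    const float * bias, const float * gate, int col) {
    const float b = bias[col]; // issued early; latency hidden by the loop
    const float g = gate[col];
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += x[i] * w[i];
    }
    return g * (acc + b);
}
```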

4 weeks agovulkan: Fuse rope+set_rows (llama/16769)
Jeff Bolz [Wed, 29 Oct 2025 20:13:10 +0000 (15:13 -0500)]
vulkan: Fuse rope+set_rows (llama/16769)

This pattern appears in a lot of models, the rope operation is applied right
before storing into the KV cache (usually on the K tensor).

Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).

Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.

Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.

Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.

Add new backend tests.
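
A toy version of the reordering predicate (not the ggml graph API, just the pattern being matched):

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Node {
    std::string op;
    int src = -1; // index of the node feeding this one
};

// True when nodes[i..i+2] form ROPE -> VIEW -> SET_ROWS with each node
// consuming the previous one; the optimizer can then schedule them
// consecutively so the fused path applies.
bool rope_view_set_rows_chain(const std::vector<Node> & g, std::size_t i) {
    return i + 2 < g.size() &&
           g[i].op     == "ROPE"     &&
           g[i + 1].op == "VIEW"     && g[i + 1].src == (int) i &&
           g[i + 2].op == "SET_ROWS" && g[i + 2].src == (int) (i + 1);
}
```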

4 weeks agovulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)
Jeff Bolz [Wed, 29 Oct 2025 13:44:29 +0000 (08:44 -0500)]
vulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)

* vulkan: Update topk_moe fusion to handle gpt's late softmax

Based on #16649.

* Add ggml_check_edges

* Add sync logging to show fusion effects

* handle clamp added in #16655

* Update ggml/src/ggml-impl.h

Co-authored-by: Diego Devesa <redacted>
4 weeks agoVulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)
Ruben Ortlam [Wed, 29 Oct 2025 13:39:03 +0000 (14:39 +0100)]
Vulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)

* vulkan: add mmq q2_k integer dot support

* Refactor mmq caching

* Reduce mmq register use

* Load 4 quant blocks into shared memory in one step

* Pack q2_k blocks into caches of 32

* Use 32-bit accumulators for integer dot matmul

* Add q4_k mmq

* Add q3_k mmq

* Add q5_k mmq

* Add q6_k mmq

* Add mxfp4 mmq, enable MMQ MUL_MAT_ID

* Fix mmv dm loads

4 weeks agoHexagon Op queue & dispatch optimizations (llama/16820)
Max Krasnyansky [Wed, 29 Oct 2025 13:29:12 +0000 (06:29 -0700)]
Hexagon Op queue & dispatch optimizations (llama/16820)

* hexagon: remove dspqueue callbacks and do all read processing inplace

* hexagon: there is no need to ref/deref the buffers at this point

We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.

* hexagon: bump the thread count in the adb wrapper scripts

We can use more CPU cores now that the dedicated dspqueue polling threads are not used (i.e. no contention).
Also enable more aggressive polling for now, since we still map Flash Attention (and a few other kernels) to
the CPU, and those dspqueue threads were keeping the CPU cores at higher clock frequencies.

* hexagon: add lhez as the second code owner

4 weeks agoCUDA: use fastdiv in set-rows (llama/16834)
Aman Gupta [Wed, 29 Oct 2025 13:11:53 +0000 (21:11 +0800)]
CUDA: use fastdiv in set-rows (llama/16834)

* CUDA: use fastdiv in set-rows

* add assert about value fitting in u32
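
For background, the usual fastdiv construction looks like this (a generic Granlund-Montgomery sketch, not the CUDA helper in the commit; the divisor and dividend must fit in u32, hence the added assert):

```cpp
#include <cassert>
#include <cstdint>

// Replace x / d with a multiply-high and shift using precomputed constants.
struct fastdiv {
    uint32_t mp, L;
    explicit fastdiv(uint32_t d) {
        assert(d != 0);
        L = 0;
        while (L < 32 && (uint64_t(1) << L) < d) ++L;        // L = ceil(log2 d)
        mp = (uint32_t) (((uint64_t(1) << 32) * ((uint64_t(1) << L) - d)) / d + 1);
    }
    uint32_t div(uint32_t x) const {
        const uint32_t hi = (uint32_t) (((uint64_t) x * mp) >> 32); // mulhi
        return (uint32_t) (((uint64_t) hi + x) >> L);
    }
};
```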

4 weeks agovulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)
Jeff Bolz [Wed, 29 Oct 2025 08:53:04 +0000 (03:53 -0500)]
vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)

This lets the copy to the destination device use the host-visible
vidmem optimization.

4 weeks agoCUDA: Fix bug in topk-moe for gpt-oss (llama/16821)
Aman Gupta [Wed, 29 Oct 2025 07:55:06 +0000 (15:55 +0800)]
CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)

* CUDA: Fix bug in topk-moe for gpt-oss

When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but it doesn't actually fuse in the real gpt-oss.

* fix for qwen3 too

* change ifndef to ifdef

4 weeks agosycl: add RMS_NORM_BACK operation support (llama/16808)
YaelLogic [Wed, 29 Oct 2025 06:14:39 +0000 (08:14 +0200)]
sycl: add RMS_NORM_BACK operation support (llama/16808)

* sycl: add RMS_NORM_BACK operation support

* sycl: rms_norm_back: add dual reduction paths (FP64 and FP32) and savepoint before further changes

* sycl: add RMS_NORM_BACK support

Implement RMS_NORM_BACK for the SYCL backend using FP32 compensated parallel reduction. Minimal docs updates (ops.md / SYCL.csv).

* revert: restore .gitignore and tools/run/CMakeLists.txt to upstream

* revert: restore tests/CMakeLists.txt to upstream

* sycl: optimize rms_norm_back

* fix: restore SYCL.csv to correct state with RMS_NORM_BACK support

* Update ggml/src/ggml-sycl/norm.cpp

Co-authored-by: Neo Zhang Jianyu <redacted>
* fix: remove trailing whitespace and add missing newline (EditorConfig)

---------

Co-authored-by: Neo Zhang Jianyu <redacted>
4 weeks agocuda: add SET operation support (llama/16804)
YaelGitAccount [Tue, 28 Oct 2025 19:10:28 +0000 (21:10 +0200)]
cuda: add SET operation support (llama/16804)

* feat(cuda): add GGML_OP_SET support

Implement CUDA kernel for SET operation with f32 support.

All tests passing (14598/14598).

* cuda(set): add I32 support; keep F32

* refactor(cuda): use ggml_cuda_cpy to unify SET operator logic and remove code duplication

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-cuda/set.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agoinitialise buffer.device in ggml_hexagon_session (llama/16816)
l3utterfly [Tue, 28 Oct 2025 15:16:20 +0000 (23:16 +0800)]
initialise buffer.device in ggml_hexagon_session (llama/16816)

4 weeks agoCANN: Improve device ID handling and aclnnArange checks (llama/16752)
Chenguang Li [Tue, 28 Oct 2025 02:54:53 +0000 (10:54 +0800)]
CANN: Improve device ID handling and aclnnArange checks (llama/16752)

* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var
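
A sketch of the described approach (names hypothetical): track the active device per thread instead of querying the runtime's internal state.

```cpp
#include <cstdint>

static thread_local int32_t g_cann_device = -1;

void cann_set_device(int32_t id) {
    g_cann_device = id; // the real code would also switch the runtime device
}

int32_t cann_current_device() {
    return g_cann_device;
}
```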

4 weeks agoCUDA: add unused vars to mmvf and mmvq (llama/16807)
Aman Gupta [Tue, 28 Oct 2025 02:31:21 +0000 (10:31 +0800)]
CUDA: add unused vars to mmvf and mmvq (llama/16807)

4 weeks agosycl: add SSM_CONV operation support (llama/16800)
tamarPal [Tue, 28 Oct 2025 01:50:33 +0000 (03:50 +0200)]
sycl: add SSM_CONV operation support (llama/16800)

* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* Perfect SSM_CONV SYCL implementation - 100% CPU parity

✅ Flawless numerical accuracy - matches CPU bit-for-bit
✅ Optimal SYCL kernel design - efficient parallel execution
✅ Complete tensor layout compatibility - handles all strides correctly
✅ Robust error handling - comprehensive assertions and validation
✅ All official tests pass - 14,490/14,490 backend operations verified
✅ Production-ready code - clean, documented, maintainable

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <redacted>
4 weeks agoggml : fix interpolate with align-corners and ne=1 (llama/16700)
Acly [Mon, 27 Oct 2025 20:50:22 +0000 (21:50 +0100)]
ggml : fix interpolate with align-corners and ne=1 (llama/16700)

* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1 (see the sketch after this list)
* cpu, cuda, opencl returned the correct result anyway due to clamping
* vulkan didn't clamp for align-corners, so results were broken

* fix clang warning
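
In align-corners mode the source coordinate is x_src = x_dst * (ne_src - 1) / (ne_dst - 1), which divides by zero when ne_dst == 1. An illustrative guarded mapping with the clamp the other backends already applied:

```cpp
#include <algorithm>
#include <cstdint>

// Safe align-corners coordinate: zero scale when the output dim is 1,
// then clamp like the cpu/cuda/opencl paths did.
float src_coord(int64_t x_dst, int64_t ne_src, int64_t ne_dst) {
    const float scale = ne_dst > 1 ? float(ne_src - 1) / float(ne_dst - 1)
                                   : 0.0f;
    return std::clamp((float) x_dst * scale, 0.0f, (float) (ne_src - 1));
}
```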

4 weeks agoHIP: fix AMDGPU_TARGETS, update documentation (llama/16803)
Johannes Gäßler [Mon, 27 Oct 2025 20:39:49 +0000 (21:39 +0100)]
HIP: fix AMDGPU_TARGETS, update documentation (llama/16803)

4 weeks agotest-backend-ops: print failed tests at the end (llama/16785)
Aman Gupta [Mon, 27 Oct 2025 01:25:10 +0000 (09:25 +0800)]
test-backend-ops: print failed tests at the end (llama/16785)

4 weeks agosycl: add ROLL operation support (llama/16665)
tamarPal [Mon, 27 Oct 2025 01:20:24 +0000 (03:20 +0200)]
sycl: add ROLL operation support (llama/16665)

* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <redacted>
4 weeks agosycl: add REPEAT_BACK operation support (llama/16734)
shani-f [Mon, 27 Oct 2025 01:19:50 +0000 (03:19 +0200)]
sycl: add REPEAT_BACK operation support (llama/16734)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agoCUDA: support for weight clamp in top-k norm (llama/16702)
Aman Gupta [Mon, 27 Oct 2025 01:06:16 +0000 (09:06 +0800)]
CUDA: support for weight clamp in top-k norm (llama/16702)

4 weeks agoggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788)
Acly [Sun, 26 Oct 2025 22:19:03 +0000 (23:19 +0100)]
ggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788)

4 weeks agocuda : use fast copy when src and dst are of different type and contiguous (llama/16789)
Sigbjørn Skjæret [Sun, 26 Oct 2025 20:31:41 +0000 (21:31 +0100)]
cuda : use fast copy when src and dst are of different type and contiguous (llama/16789)

* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape

4 weeks agoggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (llama/16744)
leejet [Sun, 26 Oct 2025 18:13:31 +0000 (02:13 +0800)]
ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (llama/16744)

* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks agoCUDA: General GEMV fusion (llama/16715)
Aman Gupta [Sun, 26 Oct 2025 11:28:04 +0000 (19:28 +0800)]
CUDA: General GEMV fusion (llama/16715)

4 weeks agovulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)
Gilad S. [Sun, 26 Oct 2025 04:37:38 +0000 (06:37 +0200)]
vulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)

* fix: deduplicate and deprioritize Microsoft Direct3D12 vulkan devices from the `vulkan-dozen` driver

* style: indent

* fix: decrease priority

* fix: switch to `||`

4 weeks agovulkan: delete dead code (llama/16732)
Giuseppe Scrivano [Sat, 25 Oct 2025 08:59:54 +0000 (10:59 +0200)]
vulkan: delete dead code (llama/16732)

ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <redacted>
4 weeks agovulkan: Optimize SSM_SCAN (llama/16645)
Jeff Bolz [Sat, 25 Oct 2025 05:04:12 +0000 (00:04 -0500)]
vulkan: Optimize SSM_SCAN (llama/16645)

4 weeks agoggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)
leejet [Fri, 24 Oct 2025 19:39:37 +0000 (03:39 +0800)]
ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)

* Fix CUDA grid launch condition for large block_nums.y (see the sketch after this list)

* add backend ops test

* reduce test repetitions
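
The underlying constraint: CUDA caps gridDim.y and gridDim.z at 65535, so block counts derived from large tensor dims must be folded into per-block iterations (hypothetical helper):

```cpp
#include <algorithm>
#include <cstdint>

constexpr int64_t MAX_GRID_Y = 65535; // CUDA limit for gridDim.y / gridDim.z

// Fold rows beyond the gridDim.y limit into extra iterations per block.
void launch_shape_y(int64_t nrows, int64_t & blocks_y, int64_t & iters) {
    blocks_y = std::min(nrows, MAX_GRID_Y);
    iters    = (nrows + blocks_y - 1) / blocks_y;
}
```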

4 weeks agoCUDA: use CUB for arbitrary size argsort (llama/16754)
Aman Gupta [Fri, 24 Oct 2025 12:46:19 +0000 (20:46 +0800)]
CUDA: use CUB for arbitrary size argsort (llama/16754)

4 weeks agoggml-cuda: use passed ops instead of hardcoded ops (llama/16712)
Aman Gupta [Thu, 23 Oct 2025 11:14:06 +0000 (19:14 +0800)]
ggml-cuda: use passed ops instead of hardcoded ops (llama/16712)

4 weeks agosycl: use async memory allocation to fix crashes during graph recording (llama/16644)
Matthew Michel [Thu, 23 Oct 2025 01:05:15 +0000 (20:05 -0500)]
sycl: use async memory allocation to fix crashes during graph recording (llama/16644)

* sycl: use async memory allocation to fix graph recording failures

GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers which is supported with SYCL graphs.
  - For compiler versions that do not support this extension, skip
    graphs with the affected op.
  - Switch from USM shared to device memory as the async extension
    currently just supports device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]

4 weeks agoAdd experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)
Max Krasnyansky [Wed, 22 Oct 2025 20:47:09 +0000 (13:47 -0700)]
Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)

* model: add support for extra bufs for all devices

* hexagon: add experimental ggml-hexagon backend for the Hexagon NPU

This commit introduces a new experimental backend `ggml-hexagon` with support for the Hexagon NPU.

Highlights:
- Supports Hexagon versions: v73, v75, v79, and v81
- Targets Android devices based on Snapdragon SoCs: Gen3, 8-Elite, and 8-Elite Gen5
- Supports Q4_0, Q8_0, MXFP4, and FP32 data types
- Implements core LLM ops: MUL_MAT/MUL_MAT_ID, ADD/SUB/MUL/ADD_ID, RMS_NORM, ROPE, GLU/SWIGLU, SOFTMAX

**Note:** This backend is experimental and may exhibit instability or limited performance across supported devices.
It is intended for early testing and feedback from the llama.cpp/ggml developer and user communities.

Co-Authored-By: Rajdeep Ganguly <redacted>
Co-Authored-By: Todor Boinovski <redacted>
* hexagon: fix format checker errors

* hexagon: update readme and cmake presets

* ci: add android-ndk-build jobs that build plain ARM64 and Snapdragon versions

* hexagon: add simple graph optimizer for stacking MUL_MAT ops with the same input

* hexagon: move ADB helper scripts into scripts/snapdragon/adb

* hexagon: replace all f/printfs with GGML_LOG_...

* readme: add hexagon to the list of supported backends

* hexagon: stack matmuls with quantized inputs only

* hexagon: add TODO for fixing issues in hexagon_graph_optimize

* hexagon: update to hex-sdk 6.4.0 and add scripts for running on QDC

* scripts: fix lint errors

* scripts: update qdc pytest script to make linter happy

* hexagon: add reduce sum in fp32

* hexagon: reduce number of vector stores in matmul output

* hexagon: remove the need for vdelta in reduce-multiply-x8

* hexagon: consistent use of reduce_sum_fp32 for row_sums

* hexagon: some more matmul optimizations and comments

Optimize cases where tensor dims are not a multiple of 1024 (e.g. in Qwen models).
We handled those cases already, but with higher overhead.

* hexagon: update cmake presets

* hexagon: add OPMASK support for run-bench.sh wrapper

* hexagon: update to use GGML_BACKEND_API

* hexagon: remove unused logic for setting tensor flags for the views

* hexagon: add asserts to set/get_tensor to make sure we handle complete tensors

Same asserts as the CPU backend.

* hexagon: use cpy_tensor slow path for non-host buffers

* hexagon: error checks in the buffer allocator

* cmake: move include(extProj) under ggml-hexagon

* hexagon: don't forget to delete the backend on free

* hexagon: set/get_tensor size assert apply only to quantized tensors

* hexagon: reintroduce HEX_VERBOSE wrapper for GGML_LOG_DEBUG for now

GGML_LOG_DEBUG is always enabled for test-backend-ops and the output gets in the way.
Ideally we need somewhat finer log levels.

* docs: typos in hexagon developer docs (libggm-...)

* hexagon: overhaul error handling in the session/device allocation

This should handle all failure paths in the session allocation.

* hexagon: update cmake presets to enable fp16 vectors

* hexagon: remove unused time_usec function

* hexagon: don't forget to release buffer contexts

* hexagon: fixed indents in hvx-utils (missed clang-format auto-format failure)

* hexagon: remove custom can_repeat function and use ggml_can_repeat

---------

Co-authored-by: Rajdeep Ganguly <redacted>
Co-authored-by: Todor Boinovski <redacted>
4 weeks agoRevert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723)
Diego Devesa [Wed, 22 Oct 2025 18:20:55 +0000 (11:20 -0700)]
Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723)

This reverts commit 19a5a3edfd306516cc419679d69d6435943b6816.

4 weeks agoggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for...
sirus20x6 [Wed, 22 Oct 2025 10:14:14 +0000 (05:14 -0500)]
ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (llama/16522)

* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <redacted>
4 weeks agoCUDA: fix bug in topk-moe softmax (llama/16711)
Aman Gupta [Wed, 22 Oct 2025 04:33:08 +0000 (12:33 +0800)]
CUDA: fix bug in topk-moe softmax (llama/16711)

4 weeks agoCUDA: topk-moe: add optional parameter for gpt-oss (llama/16649)
Aman Gupta [Tue, 21 Oct 2025 14:40:38 +0000 (22:40 +0800)]
CUDA: topk-moe: add optional parameter for gpt-oss (llama/16649)