bssrdf [Wed, 5 Nov 2025 20:55:04 +0000 (15:55 -0500)]
improve CUDA cpy memory bandwidth when copying transposed tensor (llama/16841)
* WIP
* added a cpy kernel specific to transposed tensors which uses smem to avoid uncoalesced access; test cases added showing the improved memory bandwidth (a sketch of the technique follows this list)
* added BF16 support
* stricter check to make sure src0 is a transpose
* reformulated to handle more complicated transpose cases
* bring back 2D transpose for higher performance
* allow build on windows
* transpose copy for more shapes
* minor tweak
* final clean up
* restore some test cases
* keep only the kernel for the true transposed case; updated with review suggestions
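For reference, a minimal sketch of the shared-memory technique named above; the tile size, names, and f32-only signature are illustrative, not the committed kernel:
```
#define TILE 32

// Copy a rows x cols f32 matrix into its transpose. Each block stages a
// 32x32 tile in shared memory so that both the global load and the global
// store are coalesced; the +1 padding avoids shared-memory bank conflicts.
__global__ void cpy_transposed_f32(const float * __restrict__ src,
                                   float * __restrict__ dst,
                                   int rows, int cols) {
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x; // src column
    int y = blockIdx.y * TILE + threadIdx.y; // src row
    if (x < cols && y < rows) {
        tile[threadIdx.y][threadIdx.x] = src[(size_t) y * cols + x];
    }
    __syncthreads();

    x = blockIdx.y * TILE + threadIdx.x;     // dst column (= src row)
    y = blockIdx.x * TILE + threadIdx.y;     // dst row    (= src column)
    if (x < rows && y < cols) {
        dst[(size_t) y * rows + x] = tile[threadIdx.x][threadIdx.y];
    }
}
```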
Noah [Tue, 4 Nov 2025 05:04:59 +0000 (05:04 +0000)]
Fix garbled output with REPACK at high thread counts (llama/16956)
* Fix garbled output with REPACK at high thread counts
Fixed a race condition in the REPACK matrix multiplication code that caused garbled output when using 26+ threads (model-dependent threshold).
The issue occurred because, with high thread counts, the code forced the chunk count to equal the thread count, creating many small chunks. After aligning these chunks to NB_COLS boundaries, adjacent chunks could overlap, causing data corruption and race conditions.
The fix enforces minimum chunk sizes based on NB_COLS and caps the maximum chunk count to prevent creating too many tiny chunks, ensuring proper alignment without overlaps (a sketch follows below).
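A minimal sketch of that chunking rule; the constants and names are illustrative, not the exact repack.cpp code:
```
#include <algorithm>
#include <cstdint>

// Size chunks in whole multiples of NB_COLS so that aligning a chunk start
// to an NB_COLS boundary can never push it into the neighbouring chunk, and
// cap the chunk count so high thread counts cannot force tiny chunks.
static int64_t compute_nchunk(int64_t nrows, int64_t nth, int64_t nb_cols) {
    const int64_t min_chunk = 4 * nb_cols; // assumed minimum rows per chunk
    const int64_t max_chunk = std::max<int64_t>(1, nrows / min_chunk);
    return std::min<int64_t>(nth, max_chunk);
}
```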
* Update ggml/src/ggml-cpu/repack.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/repack.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Oliver Simons [Sat, 1 Nov 2025 05:13:26 +0000 (06:13 +0100)]
CUDA: Remove unneeded bias/gate dims in fused mmvq (llama/16858)
* CUDA: Remove unneeded bias/gate dims in fused mmvq
It was pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target column per thread.
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <redacted>
* Fix "Error 991-D: extra braces are nonstandard" during compilation
Georgi Gerganov [Tue, 4 Nov 2025 18:40:52 +0000 (20:40 +0200)]
ggml : fix conv2d_dw SVE path (#1380)
* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length
The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.
Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.
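A minimal sketch of the pattern (an illustrative vector loop, not the actual conv2d_dw inner loop):
```
#include <arm_sve.h>
#include <stdint.h>

// Scale n floats in place, stepping by the *runtime* SVE vector length from
// svcntw() rather than a compile-time lane count; the predicate from
// svwhilelt handles the tail elements.
static void scale_f32_sve(float * x, float s, int64_t n) {
    const int64_t vl = (int64_t) svcntw(); // 32-bit lanes per vector, at runtime
    for (int64_t i = 0; i < n; i += vl) {
        const svbool_t pg = svwhilelt_b32(i, n);
        svfloat32_t v = svld1_f32(pg, x + i);
        v = svmul_n_f32_x(pg, v, s);
        svst1_f32(pg, x + i, v);
    }
}
```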
Co-authored-by: ggerganov <redacted>
* ci : reduce sam score threshold
Oliver Simons [Thu, 30 Oct 2025 03:34:15 +0000 (04:34 +0100)]
Hide latency of bias and gate-loading (llama/16847)
This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. Since a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.
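A minimal sketch of the scheduling idea (illustrative device code, not the fused mmvq kernel):
```
// Issue the bias/gate loads *before* the dot-product loop: the loads are in
// flight while the warp does arithmetic, so their latency is hidden instead
// of being paid as a stall after the loop.
__device__ float fused_dot_row(const float * __restrict__ x,
                               const float * __restrict__ w, int n,
                               const float * bias, const float * gate, int col) {
    const float b = bias ? bias[col] : 0.0f; // loads start here...
    const float g = gate ? gate[col] : 1.0f;

    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {            // ...and complete behind this work
        sum += x[i] * w[i];
    }
    return g * (sum + b);
}
```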
Jeff Bolz [Wed, 29 Oct 2025 20:13:10 +0000 (15:13 -0500)]
vulkan: Fuse rope+set_rows (llama/16769)
This pattern appears in a lot of models: the rope operation is applied right
before storing into the KV cache (usually on the K tensor).
Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).
Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.
Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.
Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.
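A minimal C-style sketch of the fused store (the real code is a Vulkan shader; all names here are illustrative):
```
#include "ggml.h"

// Instead of writing the roped K row to an intermediate tensor, look up the
// destination row in the set_rows index tensor (src3) and store the f16
// value straight into the KV cache.
static void store_roped(ggml_fp16_t * kv_cache, size_t row_stride,
                        const int64_t * row_indices, int64_t i_row,
                        int64_t i_col, float roped_value) {
    const int64_t dst_row = row_indices[i_row];
    kv_cache[dst_row * row_stride + i_col] = ggml_fp32_to_fp16(roped_value);
}
```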
Max Krasnyansky [Wed, 29 Oct 2025 13:29:12 +0000 (06:29 -0700)]
Hexagon Op queue & dispatch optimizations (llama/16820)
* hexagon: remove dspqueue callbacks and do all read processing inplace
* hexagon: there is no need to ref/deref the buffers at this point
We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.
* hexagon: bump the thread count in the adb wrapper scripts
We can use more CPU cores now that the dedicated dspqueue polling threads are not used (ie no contention).
Also enable more aggressive polling for now since we still map Flash Attention (and a few other kernels) to
the CPU and those dspqueue threads were keeping the CPU cores at higher clock freqs.
Aman Gupta [Wed, 29 Oct 2025 07:55:06 +0000 (15:55 +0800)]
CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)
* CUDA: Fix bug in topk-moe for gpt-oss
When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but the fusion does not actually happen in the real gpt-oss graph.
Chenguang Li [Tue, 28 Oct 2025 02:54:53 +0000 (10:54 +0800)]
CANN: Improve device ID handling and aclnnArange checks (llama/16752)
* cann: improve device ID handling and aclnnArange checks
- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.
tamarPal [Tue, 28 Oct 2025 01:50:33 +0000 (03:50 +0200)]
sycl: add SSM_CONV operation support (llama/16800)
* feat: Add SYCL backend support for SSM_CONV operator
* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification
* feat: Implement SYCL backend support for SSM_CONV operation
- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)
* Perfect SSM_CONV SYCL implementation - 100% CPU parity
Implements state-space model 1D convolution with a sliding-window algorithm (a reference sketch follows these commit notes).
Eliminates blocking queue.wait() for better async performance.
* Clean SSM_CONV code - remove all comments for production
Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.
* fix: Final formatting corrections for CI compliance
- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation
* sycl: fix trailing whitespace and minor safety casts in ssm_conv
* fix: Clean up duplicated content in ssm_conv.hpp header file
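As a reference for the sliding-window algorithm mentioned above, a minimal CPU-style sketch (illustrative layout and names, not the SYCL kernel):
```
// Each output element is the dot-product of a length-nc window of its input
// channel with that channel's filter; the GPU kernel parallelises this per
// (channel, token) pair.
static void ssm_conv_ref(const float * s,   // input:  nch * (nt + nc - 1)
                         const float * w,   // weights: nch * nc
                         float * out,       // output: nch * nt
                         int nc, int nt, int nch) {
    for (int c = 0; c < nch; ++c) {
        for (int t = 0; t < nt; ++t) {
            float sum = 0.0f;
            for (int k = 0; k < nc; ++k) {
                sum += s[c * (nt + nc - 1) + t + k] * w[c * nc + k];
            }
            out[c * nt + t] = sum;
        }
    }
}
```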
Acly [Mon, 27 Oct 2025 20:50:22 +0000 (21:50 +0100)]
ggml : fix interpolate with align-corners and ne=1 (llama/16700)
* ggml : fix interpolate with align-corners and ne=1
* avoid division by zero if one of the spatial dimensions is 1 (see the sketch after this list)
* cpu, cuda, opencl returned correct results anyway due to clamping
* vulkan didn't clamp for align-corners, so results were broken
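A minimal sketch of the degenerate case (illustrative, not the committed code): with align-corners the source coordinate is scaled by (ne_src - 1) / (ne_dst - 1), which divides by zero when a destination extent is 1.
```
#include <cstdint>

// Map a destination index to a source coordinate in align-corners mode,
// pinning a size-1 destination axis to coordinate 0 instead of dividing by
// zero. This matches what the clamping backends already computed.
static float src_coord_align_corners(int64_t i_dst, int64_t ne_src, int64_t ne_dst) {
    if (ne_dst <= 1) {
        return 0.0f;
    }
    const float scale = (float)(ne_src - 1) / (float)(ne_dst - 1);
    return i_dst * scale;
}
```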
tamarPal [Mon, 27 Oct 2025 01:20:24 +0000 (03:20 +0200)]
sycl: add ROLL operation support (llama/16665)
* sycl: add ROLL operation support
- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization (a sketch of the normalization follows these notes)
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results
* fix: remove trailing whitespace from roll.cpp
- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60
* ci: retrigger
* sycl: remove wait() calls from ROLL operation
* fix: editorconfig — LF endings + final newline for roll.hpp
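A minimal sketch of the shift normalization mentioned above (illustrative):
```
#include <cstdint>

// A roll by `shift` along an axis of extent `ne` reads destination index i
// from source index (i - shift) mod ne; the double-mod folds negative
// shifts (and negative remainders) back into [0, ne).
static int64_t roll_src_index(int64_t i, int64_t shift, int64_t ne) {
    return ((i - shift) % ne + ne) % ne;
}
```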
Matthew Michel [Thu, 23 Oct 2025 01:05:15 +0000 (20:05 -0500)]
sycl: use async memory allocation to fix crashes during graph recording (llama/16644)
* sycl: use async memory allocation to fix graph recording failures
GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
- Host waits are currently unsupported in graph recording mode.
- SYCL malloc / free calls are unsupported in graph recording mode.
The following changes are made to fix SYCL graph functionality:
- When graphs are enabled, use the SYCL async memory extension for temp
buffers, which is supported with SYCL graphs.
- For compiler versions that do not support this extension, skip
graphs with the affected op.
- Switch from USM shared to device memory, as the async extension
currently only supports device allocations.
* Address reviewer feedback
* Use global async variable to decide path in sycl_ext_[malloc_device|free]
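A minimal sketch of that dispatch, assuming a global flag and treating `ext_async_malloc_device` as a placeholder for the async-allocation extension entry point (not the exact API):
```
#include <sycl/sycl.hpp>

static bool g_sycl_graph_async = false;                // set when SYCL graphs are enabled
void * ext_async_malloc_device(sycl::queue &, size_t); // placeholder prototype

// Route temp-buffer allocations: the async extension path is legal while a
// SYCL graph is being recorded; plain USM device allocation is not.
static void * sycl_ext_malloc_device(sycl::queue & q, size_t size) {
    if (g_sycl_graph_async) {
        return ext_async_malloc_device(q, size);
    }
    return sycl::malloc_device<char>(size, q);
}
```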
Max Krasnyansky [Wed, 22 Oct 2025 20:47:09 +0000 (13:47 -0700)]
Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)
* model: add support for extra bufs for all devices
* hexagon: add experimental ggml-hexagon backend for the Hexagon NPU
This commit introduces a new experimental backend `ggml-hexagon` with support for the Hexagon NPU.
Highlights:
- Supports Hexagon versions: v73, v75, v79, and v81
- Targets Android devices based on Snapdragon SoCs: Gen3, 8-Elite, and 8-Elite Gen5
- Supports Q4_0, Q8_0, MXFP4, and FP32 data types
- Implements core LLM ops: MUL_MAT/MUL_MAT_ID, ADD/SUB/MUL/ADD_ID, RMS_NORM, ROPE, GLU/SWIGLU, SOFTMAX
**Note:** This backend is experimental and may exhibit instability or limited performance across supported devices.
It is intended for early testing and feedback from the llama.cpp/ggml developer and user communities.
sirus20x6 [Wed, 22 Oct 2025 10:14:14 +0000 (05:14 -0500)]
ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (llama/16522)
* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.
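A minimal sketch of the approach using ggml's SIMD macro layer (this assumes ggml-cpu's internal SIMD macros are in scope; the committed code may differ in loop structure):
```
// Broadcast the fill value once, store vector-sized chunks while enough
// elements remain, and fall through to the scalar tail (which is also the
// whole loop on non-SIMD builds).
void ggml_vec_set_f32(const int n, float * x, const float v) {
    int i = 0;
#if defined(GGML_SIMD)
    const GGML_F32_VEC vx = GGML_F32_VEC_SET1(v);
    for (; i + GGML_F32_EPR <= n; i += GGML_F32_EPR) {
        GGML_F32_VEC_STORE(x + i, vx);
    }
#endif
    for (; i < n; ++i) {
        x[i] = v;
    }
}
```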
Radoslav Gerganov [Fri, 17 Oct 2025 15:02:52 +0000 (18:02 +0300)]
rpc : report actual free memory (llama/16616)
* rpc : report actual free memory
Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.
muggle-stack [Fri, 17 Oct 2025 10:01:23 +0000 (18:01 +0800)]
ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
Fix incorrect task-to-batch index calculation in the quantization phase.
The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.
Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.
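A minimal sketch of the corrected mapping (names follow the message above, not necessarily the exact source):
```
// Each GEMM in the batch contributes per_gemm_block_count_m compute blocks,
// so the batch index must divide by that count; dividing by block_size_m
// (the old code) could exceed the number of batches and run past the end of
// the qnbitgemm_args array.
static void map_task(int compute_idx, int per_gemm_block_count_m,
                     int * batch_idx, int * block_idx) {
    *batch_idx = compute_idx / per_gemm_block_count_m;
    *block_idx = compute_idx % per_gemm_block_count_m;
}
```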
Chenguang Li [Thu, 16 Oct 2025 08:41:11 +0000 (16:41 +0800)]
CANN: format code using .clang-format (llama/15863)
This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.
takuya kodama [Thu, 16 Oct 2025 05:10:32 +0000 (13:10 +0800)]
ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
## Why it failed
When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:
```
cmake \
-S . \
-B ../llama.cpp.build \
--preset=x64-linux-gcc-debug \
-DCMAKE_INSTALL_PREFIX=/tmp/local \
-DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
3572 | putenv("KMP_BLOCKTIME=200"); // 200ms
| ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
786 | extern int putenv (char *__string) __THROW __nonnull ((1));
| ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```
The issue is that putenv() expects a non-const char * but receives a string literal (const char *).
## How to fix
This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).
Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set
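The change itself, in brief:
```
// Before: putenv() takes a mutable char *, so passing a string literal
// triggers -Werror=discarded-qualifiers under -Wwrite-strings.
putenv("KMP_BLOCKTIME=200"); // 200ms

// After: setenv() takes const char *, copies the strings, and the final 0
// means "do not overwrite if KMP_BLOCKTIME is already set".
setenv("KMP_BLOCKTIME", "200", 0);
```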