git.djapps.eu Git - pkg/ggml/sources/ggml/log
3 weeks ago  CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917)
mnehete32 [Sun, 2 Nov 2025 03:12:57 +0000 (08:42 +0530)]
CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917)

3 weeks ago  ggml: add s390x cpu-feats (llama/16774)
Aaron Teo [Sun, 2 Nov 2025 00:48:23 +0000 (08:48 +0800)]
ggml: add s390x cpu-feats (llama/16774)

3 weeks ago  vulkan: Fix multi_add invalid descriptor usage (llama/16899)
Jeff Bolz [Sat, 1 Nov 2025 05:52:14 +0000 (00:52 -0500)]
vulkan: Fix multi_add invalid descriptor usage (llama/16899)

3 weeks ago  vulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)
Jeff Bolz [Sat, 1 Nov 2025 05:45:28 +0000 (00:45 -0500)]
vulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)

* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks ago  CUDA: Remove unneeded bias/gate dims in fused mmvq (llama/16858)
Oliver Simons [Sat, 1 Nov 2025 05:13:26 +0000 (06:13 +0100)]
CUDA: Remove unneeded bias/gate dims in fused mmvq (llama/16858)

* CUDA: Remove unneeded bias/gate dims in fused mmvq

It was pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target column per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <redacted>
3 weeks ago  CUDA: Volta tensor core support for MMF (llama/16843)
Johannes Gäßler [Fri, 31 Oct 2025 14:57:19 +0000 (15:57 +0100)]
CUDA: Volta tensor core support for MMF (llama/16843)

* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
3 weeks ago  ggml : fix conv2d_dw SVE path (#1380)
Georgi Gerganov [Tue, 4 Nov 2025 18:40:52 +0000 (20:40 +0200)]
ggml : fix conv2d_dw SVE path (#1380)

* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length

The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.

Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.

Co-authored-by: ggerganov <redacted>
* ci : reduce sam score threshold

* ci : update bbox checks for sam test

---------

Co-authored-by: copilot-swe-agent[bot] <redacted>
Co-authored-by: ggerganov <redacted>
4 weeks ago  sync : llama.cpp
Georgi Gerganov [Fri, 31 Oct 2025 14:27:03 +0000 (16:27 +0200)]
sync : llama.cpp

4 weeks ago  CUDA: add expert reduce kernel (llama/16857)
Aman Gupta [Fri, 31 Oct 2025 12:05:07 +0000 (20:05 +0800)]
CUDA: add expert reduce kernel (llama/16857)

* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago  vulkan: disable spirv-opt for rope shaders (llama/16872)
Jeff Bolz [Fri, 31 Oct 2025 07:34:47 +0000 (02:34 -0500)]
vulkan: disable spirv-opt for rope shaders (llama/16872)

4 weeks ago  vulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)
Masato Nakasaka [Fri, 31 Oct 2025 07:18:59 +0000 (16:18 +0900)]
vulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)

* Experimenting crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines

4 weeks ago  vulkan: fix shmem overrun in mmq id shader (llama/16873)
Ruben Ortlam [Fri, 31 Oct 2025 07:14:49 +0000 (08:14 +0100)]
vulkan: fix shmem overrun in mmq id shader (llama/16873)

* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks ago  ggml-hexagon: respect input size when getting/setting tensor data (llama/16836)
l3utterfly [Fri, 31 Oct 2025 04:46:31 +0000 (12:46 +0800)]
ggml-hexagon: respect input size when getting/setting tensor data (llama/16836)

* respect input size when getting/setting tensor data

allows partial repacking/copying when the get/set tensor size is smaller than the actual tensor

* Removed duplicate repack_mxfp4_mxfp4x4x2 function

4 weeks ago  opencl: fix boundary handling for mul_mm (llama/16875)
lhez [Thu, 30 Oct 2025 23:00:20 +0000 (16:00 -0700)]
opencl: fix boundary handling for mul_mm (llama/16875)

4 weeks ago  cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64...
Max Krasnyansky [Thu, 30 Oct 2025 16:06:13 +0000 (09:06 -0700)]
cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (llama/16833)

Very similar implementation to the flash-attention chunking, with similar benefits.

4 weeks ago  model: add support for qwen3vl series (llama/16780)
JJJYmmm [Thu, 30 Oct 2025 15:19:14 +0000 (23:19 +0800)]
model: add support for qwen3vl series (llama/16780)

* support qwen3vl series.

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
* bugfix: fix the arch check for qwen3vl-moe.

* use build_ffn

* optimize deepstack structure

* optimize deepstack feature saving

* Revert "optimize deepstack feature saving" for temporal fix

This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.

* code clean

* use fused qkv in clip

* clean up / rm is_deepstack_layers for simplification

* add test model

* move test model to "big" section

* fix imrope check

* remove trailing whitespace

* fix rope fail

* metal : add imrope support

* add imrope support for sycl

* vulkan: add imrope w/o check

* fix vulkan

* webgpu: add imrope w/o check

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix tensor mapping

---------

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  cpu: introduce chunking for flash attention (llama/16829)
Max Krasnyansky [Thu, 30 Oct 2025 12:26:05 +0000 (05:26 -0700)]
cpu: introduce chunking for flash attention (llama/16829)

Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.

4 weeks ago  cuda : fix argsort with 64k+ rows (llama/16849)
Sigbjørn Skjæret [Thu, 30 Oct 2025 07:56:28 +0000 (08:56 +0100)]
cuda : fix argsort with 64k+ rows (llama/16849)

4 weeks ago  vulkan: Handle argsort with a large number of rows (llama/16851)
Jeff Bolz [Thu, 30 Oct 2025 06:27:41 +0000 (01:27 -0500)]
vulkan: Handle argsort with a large number of rows (llama/16851)

4 weeks ago  Hide latency of bias and gate-loading (llama/16847)
Oliver Simons [Thu, 30 Oct 2025 03:34:15 +0000 (04:34 +0100)]
Hide latency of bias and gate-loading (llama/16847)

This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. As a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.

4 weeks ago  vulkan: Fuse rope+set_rows (llama/16769)
Jeff Bolz [Wed, 29 Oct 2025 20:13:10 +0000 (15:13 -0500)]
vulkan: Fuse rope+set_rows (llama/16769)

This pattern appears in a lot of models, the rope operation is applied right
before storing into the KV cache (usually on the K tensor).

Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).

Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.

Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.

Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.

Add new backend tests.

4 weeks ago  vulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)
Jeff Bolz [Wed, 29 Oct 2025 13:44:29 +0000 (08:44 -0500)]
vulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)

* vulkan: Update topk_moe fusion to handle gpt's late softmax

Based on #16649.

* Add ggml_check_edges

* Add sync logging to show fusion effects

* handle clamp added in #16655

* Update ggml/src/ggml-impl.h

Co-authored-by: Diego Devesa <redacted>
4 weeks ago  Vulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)
Ruben Ortlam [Wed, 29 Oct 2025 13:39:03 +0000 (14:39 +0100)]
Vulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)

* vulkan: add mmq q2_k integer dot support

* Refactor mmq caching

* Reduce mmq register use

* Load 4 quant blocks into shared memory in one step

* Pack q2_k blocks into caches of 32

* Use 32-bit accumulators for integer dot matmul

* Add q4_k mmq

* Add q3_k mmq

* Add q5_k mmq

* Add q6_k mmq

* Add mxfp4 mmq, enable MMQ MUL_MAT_ID

* Fix mmv dm loads

4 weeks ago  Hexagon Op queue & dispatch optimizations (llama/16820)
Max Krasnyansky [Wed, 29 Oct 2025 13:29:12 +0000 (06:29 -0700)]
Hexagon Op queue & dispatch optimizations (llama/16820)

* hexagon: remove dspqueue callbacks and do all read processing inplace

* hexagon: there is no need to ref/deref the buffers at this point

We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.

* hexagon: bump the thread count in the adb wrapper scripts

We can use more CPU cores now that the dedicated dspqueue polling threads are not used (i.e. no contention).
Also enable more aggressive polling for now, since we still map Flash Attention (and a few other kernels) to
the CPU, and those dspqueue threads were keeping the CPU cores at higher clock freqs.

* hexagon: add lhez as the second code owner

4 weeks ago  CUDA: use fastdiv in set-rows (llama/16834)
Aman Gupta [Wed, 29 Oct 2025 13:11:53 +0000 (21:11 +0800)]
CUDA: use fastdiv in set-rows (llama/16834)

* CUDA: use fastdiv in set-rows

* add assert about value fitting in u32

4 weeks ago  vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)
Jeff Bolz [Wed, 29 Oct 2025 08:53:04 +0000 (03:53 -0500)]
vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)

This lets the copy to the destination device use the host-visible
vidmem optimization.

4 weeks ago  CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)
Aman Gupta [Wed, 29 Oct 2025 07:55:06 +0000 (15:55 +0800)]
CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)

* CUDA: Fix bug in topk-moe for gpt-oss

When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but fusion does not actually happen in the real gpt-oss graph

* fix for qwen3 too

* change ifndef to ifdef

4 weeks ago  sycl: add RMS_NORM_BACK operation support (llama/16808)
YaelLogic [Wed, 29 Oct 2025 06:14:39 +0000 (08:14 +0200)]
sycl: add RMS_NORM_BACK operation support (llama/16808)

* sycl: add RMS_NORM_BACK operation support

* sycl: rms_norm_back: add dual reduction paths (FP64 and FP32) and savepoint before further changes

* sycl: add RMS_NORM_BACK support

Implement RMS_NORM_BACK for the SYCL backend using FP32 compensated parallel reduction. Minimal docs updates (ops.md / SYCL.csv).

* revert: restore .gitignore and tools/run/CMakeLists.txt to upstream

* revert: restore tests/CMakeLists.txt to upstream

* sycl: optimize rms_norm_back

* fix: restore SYCL.csv to correct state with RMS_NORM_BACK support

* Update ggml/src/ggml-sycl/norm.cpp

Co-authored-by: Neo Zhang Jianyu <redacted>
* fix: remove trailing whitespace and add missing newline (EditorConfig)

---------

Co-authored-by: Neo Zhang Jianyu <redacted>
4 weeks ago  cuda: add SET operation support (llama/16804)
YaelGitAccount [Tue, 28 Oct 2025 19:10:28 +0000 (21:10 +0200)]
cuda: add SET operation support (llama/16804)

* feat(cuda): add GGML_OP_SET support

Implement CUDA kernel for SET operation with f32 support.

All tests passing (14598/14598).

* cuda(set): add I32 support; keep F32

* refactor(cuda): use ggml_cuda_cpy to unify SET operator logic and remove code duplication

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-cuda/set.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  initialise buffer.device in ggml_hexagon_session (llama/16816)
l3utterfly [Tue, 28 Oct 2025 15:16:20 +0000 (23:16 +0800)]
initialise buffer.device in ggml_hexagon_session (llama/16816)

4 weeks ago  CANN: Improve device ID handling and aclnnArange checks (llama/16752)
Chenguang Li [Tue, 28 Oct 2025 02:54:53 +0000 (10:54 +0800)]
CANN: Improve device ID handling and aclnnArange checks (llama/16752)

* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var

4 weeks ago  CUDA: add unused vars to mmvf and mmvq (llama/16807)
Aman Gupta [Tue, 28 Oct 2025 02:31:21 +0000 (10:31 +0800)]
CUDA: add unused vars to mmvf and mmvq (llama/16807)

4 weeks ago  sycl: add SSM_CONV operation support (llama/16800)
tamarPal [Tue, 28 Oct 2025 01:50:33 +0000 (03:50 +0200)]
sycl: add SSM_CONV operation support (llama/16800)

* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* Perfect SSM_CONV SYCL implementation - 100% CPU parity

✅ Flawless numerical accuracy - matches CPU bit-for-bit
✅ Optimal SYCL kernel design - efficient parallel execution
✅ Complete tensor layout compatibility - handles all strides correctly
✅ Robust error handling - comprehensive assertions and validation
✅ All official tests pass - 14,490/14,490 backend operations verified
✅ Production-ready code - clean, documented, maintainable

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <redacted>
4 weeks ago  ggml : fix interpolate with align-corners and ne=1 (llama/16700)
Acly [Mon, 27 Oct 2025 20:50:22 +0000 (21:50 +0100)]
ggml : fix interpolate with align-corners and ne=1 (llama/16700)

* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1
* cpu, cuda, opencl returned correct result anyway due to clamp
* vulkan didn't clamp for align-corners so results were broken

* fix clang warning

4 weeks ago  HIP: fix AMDGPU_TARGETS, update documentation (llama/16803)
Johannes Gäßler [Mon, 27 Oct 2025 20:39:49 +0000 (21:39 +0100)]
HIP: fix AMDGPU_TARGETS, update documentation (llama/16803)

4 weeks ago  test-backend-ops: print failed tests at the end (llama/16785)
Aman Gupta [Mon, 27 Oct 2025 01:25:10 +0000 (09:25 +0800)]
test-backend-ops: print failed tests at the end (llama/16785)

4 weeks ago  sycl: add ROLL operation support (llama/16665)
tamarPal [Mon, 27 Oct 2025 01:20:24 +0000 (03:20 +0200)]
sycl: add ROLL operation support (llama/16665)

* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <redacted>
4 weeks ago  sycl: add REPEAT_BACK operation support (llama/16734)
shani-f [Mon, 27 Oct 2025 01:19:50 +0000 (03:19 +0200)]
sycl: add REPEAT_BACK operation support (llama/16734)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  CUDA: support for weight clamp in top-k norm (llama/16702)
Aman Gupta [Mon, 27 Oct 2025 01:06:16 +0000 (09:06 +0800)]
CUDA: support for weight clamp in top-k norm (llama/16702)

4 weeks ago  ggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788)
Acly [Sun, 26 Oct 2025 22:19:03 +0000 (23:19 +0100)]
ggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788)

4 weeks ago  cuda : use fast copy when src and dst are of different type and contiguous (llama...
Sigbjørn Skjæret [Sun, 26 Oct 2025 20:31:41 +0000 (21:31 +0100)]
cuda : use fast copy when src and dst are of different type and contiguous (llama/16789)

* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape

4 weeks ago  ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support...
leejet [Sun, 26 Oct 2025 18:13:31 +0000 (02:13 +0800)]
ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (llama/16744)

* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago  CUDA: General GEMV fusion (llama/16715)
Aman Gupta [Sun, 26 Oct 2025 11:28:04 +0000 (19:28 +0800)]
CUDA: General GEMV fusion (llama/16715)

4 weeks ago  vulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)
Gilad S. [Sun, 26 Oct 2025 04:37:38 +0000 (06:37 +0200)]
vulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)

* fix: deduplicate and deprioritize Microsoft Direct3D12 vulkan devices from the `vulkan-dozen` driver

* style: indent

* fix: decrease priority

* fix: switch to `||`

4 weeks ago  vulkan: delete dead code (llama/16732)
Giuseppe Scrivano [Sat, 25 Oct 2025 08:59:54 +0000 (10:59 +0200)]
vulkan: delete dead code (llama/16732)

ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <redacted>
4 weeks ago  vulkan: Optimize SSM_SCAN (llama/16645)
Jeff Bolz [Sat, 25 Oct 2025 05:04:12 +0000 (00:04 -0500)]
vulkan: Optimize SSM_SCAN (llama/16645)

4 weeks ago  ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)
leejet [Fri, 24 Oct 2025 19:39:37 +0000 (03:39 +0800)]
ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)

* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test  repetitions

4 weeks ago  CUDA: use CUB for arbitrary size argsort (llama/16754)
Aman Gupta [Fri, 24 Oct 2025 12:46:19 +0000 (20:46 +0800)]
CUDA: use CUB for arbitrary size argsort (llama/16754)

4 weeks ago  ggml-cuda: use passed ops instead of hardcoded ops (llama/16712)
Aman Gupta [Thu, 23 Oct 2025 11:14:06 +0000 (19:14 +0800)]
ggml-cuda: use passed ops instead of hardcoded ops (llama/16712)

4 weeks ago  sycl: use async memory allocation to fix crashes during graph recording (llama/16644)
Matthew Michel [Thu, 23 Oct 2025 01:05:15 +0000 (20:05 -0500)]
sycl: use async memory allocation to fix crashes during graph recording (llama/16644)

* sycl: use async memory allocation to fix graph recording failures

GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers which is supported with SYCL graphs.
  - For compiler versions that do not support this extension, skip
    graphs with the affected op.
  - Switch from USM shared to device memory as the async extension
    currently just supports device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]

4 weeks ago  Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)
Max Krasnyansky [Wed, 22 Oct 2025 20:47:09 +0000 (13:47 -0700)]
Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)

* model: add support for extra bufs for all devices

* hexagon: add experimental ggml-hexagon backend for the Hexagon NPU

This commit introduces a new experimental backend `ggml-hexagon` with support for the Hexagon NPU.

Highlights:
- Supports Hexagon versions: v73, v75, v79, and v81
- Targets Android devices based on Snapdragon SoCs: Gen3, 8-Elite, and 8-Elite Gen5
- Supports Q4_0, Q8_0, MXFP4, and FP32 data types
- Implements core LLM ops: MUL_MAT/MUL_MAT_ID, ADD/SUB/MUL/ADD_ID, RMS_NORM, ROPE, GLU/SWIGLU, SOFTMAX

**Note:** This backend is experimental and may exhibit instability or limited performance across supported devices.
It is intended for early testing and feedback from the llama.cpp/ggml developer and user community.

Co-Authored-By: Rajdeep Ganguly <redacted>
Co-Authored-By: Todor Boinovski <redacted>
* hexagon: fix format checker errors

* hexagon: update readme and cmake presets

* ci: add android-ndk-build jobs that build plain ARM64 and Snapdragon versions

* hexagon: add simple graph optimizer for stacking MUL_MAT ops with the same input

* hexagon: move ADB helper scripts into scripts/snapdragon/adb

* hexagon: replace all f/printfs with GGML_LOG_...

* readme: add hexagon to the list of supported backends

* hexagon: stack matmuls with quantized inputs only

* hexagon: add TODO for fixing issues in hexagon_graph_optimize

* hexagon: update to hex-sdk 6.4.0 and add scripts for running on QDC

* scripts: fix lint errors

* scripts: update qdc pytest script to make linter happy

* hexagon: add reduce sum in fp32

* hexagon: reduce number of vector stores in matmul output

* hexagon: remove the need for vdelta in reduce-multiply-x8

* hexagon: consistent use of reduce_sum_fp32 for row_sums

* hexagon: some more matmul optimizations and comments

Optimize cases where tensor dims are not a multiple of 1024 (e.g. in Qwen models).
We've handled those cases already but at a higher overhead.

* hexagon: update cmake presets

* hexagon: add OPMASK support for run-bench.sh wrapper

* hexagon: update to use GGML_BACKEND_API

* hexagon: remove unused logic for setting tensor flags for the views

* hexagon: add asserts to set/get_tensor to make sure we handle complete tensors

Same asserts as the CPU backend.

* hexagon: use cpy_tensor slow path for non-host buffers

* hexagon: error checks in the buffer allocator

* cmake: move include(extProj) under ggml-hexagon

* hexagon: don't forget to delete the backend on free

* hexagon: set/get_tensor size assert apply only to quantized tensors

* hexagon: reintroduce HEX_VERBOSE wrapper for GGML_LOG_DEBUG for now

GGML_LOG_DEBUG is always enabled for test-backend-ops and the output gets in the way.
Ideally we need finer-grained log levels.

* docs: typos in hexagon developer docs (libggm-...)

* hexagon: overhaul error handling in the session/device allocation

this should handle all failure paths in the session allocation.

* hexagon: update cmake presets to enable fp16 vectors

* hexagon: remove unused time_usec function

* hexagon: don't forget to release buffer contexts

* hexagon: fixed indents in hvx-utils (missed clang-format auto-format failure)

* hexagon: remove custom can_repeat function and use ggml_can_repeat

---------

Co-authored-by: Rajdeep Ganguly <redacted>
Co-authored-by: Todor Boinovski <redacted>
4 weeks ago  Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" ...
Diego Devesa [Wed, 22 Oct 2025 18:20:55 +0000 (11:20 -0700)]
Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723)

This reverts commit 19a5a3edfd306516cc419679d69d6435943b6816.

4 weeks ago  ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for...
sirus20x6 [Wed, 22 Oct 2025 10:14:14 +0000 (05:14 -0500)]
ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (llama/16522)

* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <redacted>
4 weeks ago  CUDA: fix bug in topk-moe softmax (llama/16711)
Aman Gupta [Wed, 22 Oct 2025 04:33:08 +0000 (12:33 +0800)]
CUDA: fix bug in topk-moe softmax (llama/16711)

4 weeks ago  CUDA: topk-moe: add optional parameter for gpt-oss (llama/16649)
Aman Gupta [Tue, 21 Oct 2025 14:40:38 +0000 (22:40 +0800)]
CUDA: topk-moe: add optional parameter for gpt-oss (llama/16649)

4 weeks ago  CUDA: better error for FA kernel with 0 occupancy (llama/16643)
Johannes Gäßler [Tue, 21 Oct 2025 13:27:53 +0000 (15:27 +0200)]
CUDA: better error for FA kernel with 0 occupancy (llama/16643)

4 weeks ago  Rewrite simple-backend to use sched and ggml_backend_load_all (#1376)
Jeff Bolz [Wed, 29 Oct 2025 17:10:19 +0000 (12:10 -0500)]
Rewrite simple-backend to use sched and ggml_backend_load_all (#1376)

* Rewrite simple-backend to use sched and ggml_backend_load_all

* address slaren's feedback

* move the storage to the model class

5 weeks ago  sync : whisper.cpp
Georgi Gerganov [Wed, 22 Oct 2025 09:58:31 +0000 (12:58 +0300)]
sync : whisper.cpp

[no ci]

5 weeks ago  sync : llama.cpp
Georgi Gerganov [Tue, 21 Oct 2025 09:04:05 +0000 (12:04 +0300)]
sync : llama.cpp

5 weeks ago  ggml: add ggml_can_fuse_subgraph (llama/16662)
Aman Gupta [Tue, 21 Oct 2025 08:43:14 +0000 (16:43 +0800)]
ggml: add ggml_can_fuse_subgraph (llama/16662)

* ggml: add ggml_can_fuse_subgraph

* ggml-cuda: use ggml_can_fuse_subgraph for topk-moe

* format

* 1. remove inputs from signature as they are transient nodes
2. add check for views: view_src should be part of the subgraph

* - combine check into one loop
- check all view_src parents
- other minor review comments

* remove redundant if test

* - rename and other minor review comments

* add assert about count < 32

5 weeks ago  opencl: fix warnings and clean up profiling (llama/16688)
lhez [Tue, 21 Oct 2025 05:26:17 +0000 (22:26 -0700)]
opencl: fix warnings and clean up profiling (llama/16688)

* opencl: remove unused headers, fix warnings

* opencl: clean up profiling, only keep kernel time

5 weeks ago  vulkan: Handle FA with all -inf mask values (llama/16447)
Jeff Bolz [Tue, 21 Oct 2025 03:16:08 +0000 (22:16 -0500)]
vulkan: Handle FA with all -inf mask values (llama/16447)

5 weeks ago  sycl : add PAD_REFLECT_D1 operator support (llama/16145)
YehuditE [Mon, 20 Oct 2025 22:21:12 +0000 (01:21 +0300)]
sycl : add PAD_REFLECT_D1 operator support (llama/16145)

* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block

5 weeks ago  ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679)
Diego Devesa [Mon, 20 Oct 2025 12:53:50 +0000 (05:53 -0700)]
ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679)

5 weeks ago  SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)
safranowith [Mon, 20 Oct 2025 08:08:32 +0000 (11:08 +0300)]
SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)

* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN

5 weeks ago  ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)
Aaron Teo [Sun, 19 Oct 2025 21:06:39 +0000 (05:06 +0800)]
ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)

* devops: initial patch

Signed-off-by: Aaron Teo <redacted>
* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: rm baseline version

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
5 weeks ago  HIP: fix GPU_TARGETS (llama/16642)
Johannes Gäßler [Sat, 18 Oct 2025 12:47:32 +0000 (14:47 +0200)]
HIP: fix GPU_TARGETS (llama/16642)

5 weeks ago  vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)
Jeff Bolz [Sat, 18 Oct 2025 10:22:57 +0000 (05:22 -0500)]
vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)

This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.

5 weeks ago  CUDA: use registers instead of smem in topk-moe (llama/16647)
Aman Gupta [Sat, 18 Oct 2025 09:52:53 +0000 (17:52 +0800)]
CUDA: use registers instead of smem in topk-moe (llama/16647)

Uses the technique used in the vulkan PR #16641. Neat trick!

5 weeks ago  opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)
Shawn Gu [Sat, 18 Oct 2025 00:55:32 +0000 (17:55 -0700)]
opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)

* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings

5 weeks agorpc : report actual free memory (llama/16616)
Radoslav Gerganov [Fri, 17 Oct 2025 15:02:52 +0000 (18:02 +0300)]
rpc : report actual free memory (llama/16616)

* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server

5 weeks agovulkan: Add State Space Model (SSM) Operations Support (llama/16463)
Giuseppe Scrivano [Fri, 17 Oct 2025 12:23:47 +0000 (14:23 +0200)]
vulkan: Add State Space Model (SSM) Operations Support (llama/16463)

* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
---------

Signed-off-by: Giuseppe Scrivano <redacted>
5 weeks agoggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
muggle-stack [Fri, 17 Oct 2025 10:01:23 +0000 (18:01 +0800)]
ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)

Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)  New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <redacted>
5 weeks agovulkan: fix debug build (add_rms_len/data not found) (llama/16624)
Jeff Bolz [Fri, 17 Oct 2025 07:31:04 +0000 (02:31 -0500)]
vulkan: fix debug build (add_rms_len/data not found) (llama/16624)

5 weeks agometal : add `CONV_TRANSPOSE_2D` (llama/16542)
Ilia Ilmer [Fri, 17 Oct 2025 06:33:58 +0000 (02:33 -0400)]
metal : add `CONV_TRANSPOSE_2D` (llama/16542)

* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <redacted>
5 weeks agoSYCL SET operator optimized for F32 tensors (llama/16350)
GittyBurstein [Fri, 17 Oct 2025 02:36:40 +0000 (05:36 +0300)]
SYCL SET operator optimized for F32 tensors (llama/16350)

* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <redacted>
5 weeks agosycl : add ARANGE operator (llama/16362)
GittyBurstein [Thu, 16 Oct 2025 13:26:21 +0000 (16:26 +0300)]
sycl : add ARANGE operator (llama/16362)

* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <redacted>
5 weeks agoCANN: format code using .clang-format (llama/15863)
Chenguang Li [Thu, 16 Oct 2025 08:41:11 +0000 (16:41 +0800)]
CANN: format code using .clang-format (llama/15863)

This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.
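A `.clang-format` fragment with the option mentioned above might look like the following (a hypothetical excerpt; the real ggml-cann configuration may set a different base style and additional keys):

```
# Hypothetical .clang-format excerpt
BasedOnStyle: Google
SortIncludes: false   # keep include directives in their original order
```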

Co-authored-by: hipudding <redacted>
5 weeks agoggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
takuya kodama [Thu, 16 Oct 2025 05:10:32 +0000 (13:10 +0800)]
ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)

## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set

5 weeks agoSYCL: Add GGML_OP_MEAN operator support (llama/16009)
yael-works [Thu, 16 Oct 2025 04:21:28 +0000 (07:21 +0300)]
SYCL: Add GGML_OP_MEAN operator support (llama/16009)

* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks agocpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)
safranowith [Wed, 15 Oct 2025 19:24:51 +0000 (22:24 +0300)]
cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)

* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py

5 weeks agoopencl: add q8_0 mm support (llama/16469)
lhez [Wed, 15 Oct 2025 17:51:04 +0000 (10:51 -0700)]
opencl: add q8_0 mm support (llama/16469)

* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path

5 weeks agoopencl: fix FA for f32 (llama/16584)
lhez [Wed, 15 Oct 2025 17:48:28 +0000 (10:48 -0700)]
opencl: fix FA for f32 (llama/16584)

5 weeks agometal: optimise `GGML_OP_SUM` (llama/16559)
Sam/Samuel [Wed, 15 Oct 2025 14:05:56 +0000 (23:05 +0900)]
metal: optimise `GGML_OP_SUM` (llama/16559)

* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <redacted>
5 weeks agoCUDA: Changing the CUDA scheduling strategy to spin (llama/16585)
Julius Tischbein [Wed, 15 Oct 2025 11:54:15 +0000 (13:54 +0200)]
CUDA: Changing the CUDA scheduling strategy to spin (llama/16585)

* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <redacted>
* Remove empty line

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
5 weeks agometal : avoid using Metal's gpuAddress property (llama/16576)
Georgi Gerganov [Tue, 14 Oct 2025 17:33:05 +0000 (20:33 +0300)]
metal : avoid using Metal's gpuAddress property (llama/16576)

* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check

6 weeks agosync : llama.cpp upstream/latest upstream/0.9.4.58
Georgi Gerganov [Tue, 14 Oct 2025 17:24:43 +0000 (20:24 +0300)]
sync : llama.cpp

6 weeks agovulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
SavicStefan [Tue, 14 Oct 2025 17:18:05 +0000 (19:18 +0200)]
vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>
6 weeks agoCUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577)
Aman Gupta [Tue, 14 Oct 2025 14:48:08 +0000 (22:48 +0800)]
CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577)

6 weeks agovulkan: Support FA with K/V in F32 (llama/16543)
Jeff Bolz [Tue, 14 Oct 2025 13:53:37 +0000 (08:53 -0500)]
vulkan: Support FA with K/V in F32 (llama/16543)

6 weeks agovulkan: Improve build time for MSVC (llama/16545)
Jeff Bolz [Tue, 14 Oct 2025 12:51:36 +0000 (07:51 -0500)]
vulkan: Improve build time for MSVC (llama/16545)

Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
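Both settings can be expressed in CMake roughly as follows (a sketch under the assumption of CMake >= 3.27, which introduced CMP0147; not the exact llama.cpp build code):

```
# Run custom build steps (e.g. invoking vulkan-shader-gen) in parallel
# under the Visual Studio generators.
if (POLICY CMP0147)
    cmake_policy(SET CMP0147 NEW)
endif()

# Compile source files in parallel with MSVC's /MP switch.
if (MSVC)
    add_compile_options(/MP)
endif()
```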

6 weeks agoCUDA: enable FA for FP32 KV cache (llama/16546)
Johannes Gäßler [Tue, 14 Oct 2025 12:22:47 +0000 (14:22 +0200)]
CUDA: enable FA for FP32 KV cache (llama/16546)

6 weeks agoCUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
Aman Gupta [Tue, 14 Oct 2025 11:16:21 +0000 (19:16 +0800)]
CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)

* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code

6 weeks agoCUDA: add fp kernel for larger batch size MoE (llama/16512)
Aman Gupta [Tue, 14 Oct 2025 11:15:15 +0000 (19:15 +0800)]
CUDA: add fp kernel for larger batch size MoE (llama/16512)

* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks

6 weeks agocuda : remove legacy copy-op pointer indirection code (llama/16485)
Anav Prasad [Tue, 14 Oct 2025 09:53:49 +0000 (09:53 +0000)]
cuda : remove legacy copy-op pointer indirection code (llama/16485)

* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function

6 weeks agometal : FA support F32 K and V and head size = 32 (llama/16531)
Georgi Gerganov [Mon, 13 Oct 2025 20:07:57 +0000 (23:07 +0300)]
metal : FA support F32 K and V and head size = 32 (llama/16531)

* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]

6 weeks agoopencl: fix build targeting CL 2 (llama/16554)
lhez [Mon, 13 Oct 2025 18:50:37 +0000 (11:50 -0700)]
opencl: fix build targeting CL 2 (llama/16554)

6 weeks agoCUDA: fix numerical issues in tile FA kernel (llama/16540)
Johannes Gäßler [Mon, 13 Oct 2025 14:29:45 +0000 (16:29 +0200)]
CUDA: fix numerical issues in tile FA kernel (llama/16540)

6 weeks agoggml : fix build broken with -march=armv9-a on MacOS (llama/16520)
Jie Fu (傅杰) [Mon, 13 Oct 2025 12:48:47 +0000 (20:48 +0800)]
ggml : fix build broken with -march=armv9-a on MacOS (llama/16520)

* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <redacted>
* Add #pragma message

Signed-off-by: Jie Fu <redacted>
* Address review comment.

Signed-off-by: Jie Fu <redacted>
* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <redacted>
Co-authored-by: Diego Devesa <redacted>
6 weeks agoCANN: fix CPU memory leak in CANN backend (llama/16549)
Chenguang Li [Mon, 13 Oct 2025 09:01:24 +0000 (17:01 +0800)]
CANN: fix CPU memory leak in CANN backend (llama/16549)

This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.