git.djapps.eu Git - pkg/ggml/sources/ggml/log
2 months agoRewrite simple-backend to use sched and ggml_backend_load_all (#1376)
Jeff Bolz [Wed, 29 Oct 2025 17:10:19 +0000 (12:10 -0500)]
Rewrite simple-backend to use sched and ggml_backend_load_all (#1376)

* Rewrite simple-backend to use sched and ggml_backend_load_all

* address slaren's feedback

* move the storage to the model class

3 months agosync : whisper.cpp
Georgi Gerganov [Wed, 22 Oct 2025 09:58:31 +0000 (12:58 +0300)]
sync : whisper.cpp

[no ci]

3 months agosync : llama.cpp
Georgi Gerganov [Tue, 21 Oct 2025 09:04:05 +0000 (12:04 +0300)]
sync : llama.cpp

3 months agoggml: add ggml_can_fuse_subgraph (llama/16662)
Aman Gupta [Tue, 21 Oct 2025 08:43:14 +0000 (16:43 +0800)]
ggml: add ggml_can_fuse_subgraph (llama/16662)

* ggml: add ggml_can_fuse_subgraph

* ggml-cuda: use ggml_can_fuse_subgraph for topk-moe

* format

* 1. remove inputs from signature as they are transient nodes
2. add check for views: view_src should be part of the subgraph

* - combine check into one loop
- check all view_src parents
- other minor review comments

* remove redundant if test

* - rename and other minor review comments

* add assert about count < 32

3 months agoopencl: fix warnings and clean up profiling (llama/16688)
lhez [Tue, 21 Oct 2025 05:26:17 +0000 (22:26 -0700)]
opencl: fix warnings and clean up profiling (llama/16688)

* opencl: remove unused headers, fix warnings

* opencl: clean up profiling, only keep kernel time

3 months agovulkan: Handle FA with all -inf mask values (llama/16447)
Jeff Bolz [Tue, 21 Oct 2025 03:16:08 +0000 (22:16 -0500)]
vulkan: Handle FA with all -inf mask values (llama/16447)

3 months agosycl : add PAD_REFLECT_D1 operator support (llama/16145)
YehuditE [Mon, 20 Oct 2025 22:21:12 +0000 (01:21 +0300)]
sycl : add PAD_REFLECT_D1 operator support (llama/16145)

* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block

3 months agoggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679)
Diego Devesa [Mon, 20 Oct 2025 12:53:50 +0000 (05:53 -0700)]
ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679)

3 months agoSYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)
safranowith [Mon, 20 Oct 2025 08:08:32 +0000 (11:08 +0300)]
SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)

* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN

3 months agoci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)
Aaron Teo [Sun, 19 Oct 2025 21:06:39 +0000 (05:06 +0800)]
ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)

* devops: initial patch

Signed-off-by: Aaron Teo <redacted>
* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: rm baseline version

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
3 months agoHIP: fix GPU_TARGETS (llama/16642)
Johannes Gäßler [Sat, 18 Oct 2025 12:47:32 +0000 (14:47 +0200)]
HIP: fix GPU_TARGETS (llama/16642)

3 months agovulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)
Jeff Bolz [Sat, 18 Oct 2025 10:22:57 +0000 (05:22 -0500)]
vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)

This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.

3 months agoCUDA: use registers instead of smem in topk-moe (llama/16647)
Aman Gupta [Sat, 18 Oct 2025 09:52:53 +0000 (17:52 +0800)]
CUDA: use registers instead of smem in topk-moe (llama/16647)

Uses the technique used in the vulkan PR #16641. Neat trick!

3 months agoopencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)
Shawn Gu [Sat, 18 Oct 2025 00:55:32 +0000 (17:55 -0700)]
opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)

* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings

3 months agorpc : report actual free memory (llama/16616)
Radoslav Gerganov [Fri, 17 Oct 2025 15:02:52 +0000 (18:02 +0300)]
rpc : report actual free memory (llama/16616)

* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server

3 months agovulkan: Add State Space Model (SSM) Operations Support (llama/16463)
Giuseppe Scrivano [Fri, 17 Oct 2025 12:23:47 +0000 (14:23 +0200)]
vulkan: Add State Space Model (SSM) Operations Support (llama/16463)

* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
---------

Signed-off-by: Giuseppe Scrivano <redacted>
3 months agoggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
muggle-stack [Fri, 17 Oct 2025 10:01:23 +0000 (18:01 +0800)]
ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)

Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)   New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <redacted>
3 months agovulkan: fix debug build (add_rms_len/data not found) (llama/16624)
Jeff Bolz [Fri, 17 Oct 2025 07:31:04 +0000 (02:31 -0500)]
vulkan: fix debug build (add_rms_len/data not found) (llama/16624)

3 months agometal : add `CONV_TRANSPOSE_2D` (llama/16542)
Ilia Ilmer [Fri, 17 Oct 2025 06:33:58 +0000 (02:33 -0400)]
metal : add `CONV_TRANSPOSE_2D` (llama/16542)

* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months agoSYCL SET operator optimized for F32 tensors (llama/16350)
GittyBurstein [Fri, 17 Oct 2025 02:36:40 +0000 (05:36 +0300)]
SYCL SET operator optimized for F32 tensors (llama/16350)

* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <redacted>
3 months agosycl : add ARANGE operator (llama/16362)
GittyBurstein [Thu, 16 Oct 2025 13:26:21 +0000 (16:26 +0300)]
sycl : add ARANGE operator (llama/16362)

* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <redacted>
3 months agoCANN: format code using .clang-format (llama/15863)
Chenguang Li [Thu, 16 Oct 2025 08:41:11 +0000 (16:41 +0800)]
CANN: format code using .clang-format (llama/15863)

This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <redacted>
3 months agoggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
takuya kodama [Thu, 16 Oct 2025 05:10:32 +0000 (13:10 +0800)]
ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)

## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set

3 months agoSYCL: Add GGML_OP_MEAN operator support (llama/16009)
yael-works [Thu, 16 Oct 2025 04:21:28 +0000 (07:21 +0300)]
SYCL: Add GGML_OP_MEAN operator support (llama/16009)

* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 months agocpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)
safranowith [Wed, 15 Oct 2025 19:24:51 +0000 (22:24 +0300)]
cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)

* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py

3 months agoopencl: add q8_0 mm support (llama/16469)
lhez [Wed, 15 Oct 2025 17:51:04 +0000 (10:51 -0700)]
opencl: add q8_0 mm support (llama/16469)

* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path

3 months agoopencl: fix FA for f32 (llama/16584)
lhez [Wed, 15 Oct 2025 17:48:28 +0000 (10:48 -0700)]
opencl: fix FA for f32 (llama/16584)

3 months agometal: optimise `GGML_OP_SUM` (llama/16559)
Sam/Samuel [Wed, 15 Oct 2025 14:05:56 +0000 (23:05 +0900)]
metal: optimise `GGML_OP_SUM` (llama/16559)

* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months agoCUDA: Changing the CUDA scheduling strategy to spin (llama/16585)
Julius Tischbein [Wed, 15 Oct 2025 11:54:15 +0000 (13:54 +0200)]
CUDA: Changing the CUDA scheduling strategy to spin (llama/16585)

* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <redacted>
* Remove empty line

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
3 months agometal : avoid using Metal's gpuAddress property (llama/16576)
Georgi Gerganov [Tue, 14 Oct 2025 17:33:05 +0000 (20:33 +0300)]
metal : avoid using Metal's gpuAddress property (llama/16576)

* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check

3 months agosync : llama.cpp (upstream/latest, upstream/0.9.4.58)
Georgi Gerganov [Tue, 14 Oct 2025 17:24:43 +0000 (20:24 +0300)]
sync : llama.cpp

3 months agovulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
SavicStefan [Tue, 14 Oct 2025 17:18:05 +0000 (19:18 +0200)]
vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>
3 months agoCUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577)
Aman Gupta [Tue, 14 Oct 2025 14:48:08 +0000 (22:48 +0800)]
CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577)

3 months agovulkan: Support FA with K/V in F32 (llama/16543)
Jeff Bolz [Tue, 14 Oct 2025 13:53:37 +0000 (08:53 -0500)]
vulkan: Support FA with K/V in F32 (llama/16543)

3 months agovulkan: Improve build time for MSVC (llama/16545)
Jeff Bolz [Tue, 14 Oct 2025 12:51:36 +0000 (07:51 -0500)]
vulkan: Improve build time for MSVC (llama/16545)

Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
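The two settings can be sketched in CMake as follows (an illustrative fragment; the actual wiring in ggml's CMakeLists may differ):

```cmake
# CMP0147 (CMake >= 3.27): Visual Studio generators build custom
# commands (e.g. the vulkan-shader-gen invocations) in parallel.
if (POLICY CMP0147)
    cmake_policy(SET CMP0147 NEW)
endif()

# /MP: let MSVC compile source files within a target in parallel.
if (MSVC)
    add_compile_options(/MP)
endif()
```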

3 months agoCUDA: enable FA for FP32 KV cache (llama/16546)
Johannes Gäßler [Tue, 14 Oct 2025 12:22:47 +0000 (14:22 +0200)]
CUDA: enable FA for FP32 KV cache (llama/16546)

3 months agoCUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
Aman Gupta [Tue, 14 Oct 2025 11:16:21 +0000 (19:16 +0800)]
CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)

* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code

3 months agoCUDA: add fp kernel for larger batch size MoE (llama/16512)
Aman Gupta [Tue, 14 Oct 2025 11:15:15 +0000 (19:15 +0800)]
CUDA: add fp kernel for larger batch size MoE (llama/16512)

* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks

3 months agocuda : remove legacy copy-op pointer indirection code (llama/16485)
Anav Prasad [Tue, 14 Oct 2025 09:53:49 +0000 (09:53 +0000)]
cuda : remove legacy copy-op pointer indirection code (llama/16485)

* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function

3 months agometal : FA support F32 K and V and head size = 32 (llama/16531)
Georgi Gerganov [Mon, 13 Oct 2025 20:07:57 +0000 (23:07 +0300)]
metal : FA support F32 K and V and head size = 32 (llama/16531)

* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]

3 months agoopencl: fix build targeting CL 2 (llama/16554)
lhez [Mon, 13 Oct 2025 18:50:37 +0000 (11:50 -0700)]
opencl: fix build targeting CL 2 (llama/16554)

3 months agoCUDA: fix numerical issues in tile FA kernel (llama/16540)
Johannes Gäßler [Mon, 13 Oct 2025 14:29:45 +0000 (16:29 +0200)]
CUDA: fix numerical issues in tile FA kernel (llama/16540)

3 months agoggml : fix build broken with -march=armv9-a on MacOS (llama/16520)
Jie Fu (傅杰) [Mon, 13 Oct 2025 12:48:47 +0000 (20:48 +0800)]
ggml : fix build broken with -march=armv9-a on MacOS (llama/16520)

* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <redacted>
* Add #pragma message

Signed-off-by: Jie Fu <redacted>
* Address review comment.

Signed-off-by: Jie Fu <redacted>
* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <redacted>
Co-authored-by: Diego Devesa <redacted>
3 months agoCANN: fix CPU memory leak in CANN backend (llama/16549)
Chenguang Li [Mon, 13 Oct 2025 09:01:24 +0000 (17:01 +0800)]
CANN: fix CPU memory leak in CANN backend (llama/16549)

This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.

3 months agometal: add support for opt_step_sgd (llama/16539)
Sam/Samuel [Mon, 13 Oct 2025 08:25:02 +0000 (16:25 +0800)]
metal: add support for opt_step_sgd (llama/16539)

* metal: add support for opt_step_sgd

* add newline to pass EditorConfig check

3 months agoggml : fix scalar path for computing norm (llama/16558)
Georgi Gerganov [Mon, 13 Oct 2025 08:22:27 +0000 (11:22 +0300)]
ggml : fix scalar path for computing norm (llama/16558)

3 months agoCANN: Update several operators to support FP16 data format (llama/16251)
hipudding [Mon, 13 Oct 2025 00:52:22 +0000 (08:52 +0800)]
CANN: Update several operators to support FP16 data format (llama/16251)

Many Ascend operators internally use FP16 precision for computation.
If input data is in FP32, it must first be cast to FP16 before
computation, and then cast back to FP32 after computation, which
introduces unnecessary cast operations. Moreover, FP16 computation
requires significantly less workload compared to FP32, leading to
noticeable efficiency improvements.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5b model shows
correct accuracy and about 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <redacted>
3 months agometal : add opt_step_adamw and op_sum (llama/16529)
Sam/Samuel [Sun, 12 Oct 2025 18:43:14 +0000 (02:43 +0800)]
metal : add opt_step_adamw and op_sum (llama/16529)

* scaffold to support opt step adamw on metal (not written so far)

* add opt-step-adamw kernel for metal

* pass op->src[4] as a separate buffer to the pipeline

* add bounds check to opt-step-adamw kernel

* complete scaffold for GGML_OP_SUM

* naive GGML_OP_SUM kernel

* remove unwanted comment

* change OP_SUM capability gate

* Add has_simdgroup_reduction to both ops to pass CI

3 months agofix UT fault cases: count-equal, argsort, pad OPs (llama/16521)
Neo Zhang Jianyu [Sun, 12 Oct 2025 13:53:35 +0000 (21:53 +0800)]
fix UT fault cases: count-equal, argsort, pad OPs (llama/16521)

* fix/refactor OP argsort, pad

* fix count-equal op

* update SYCL OP list

* fix format issue

---------

Co-authored-by: Zhang Jianyu <redacted>
3 months agoggml : Fix FP16 ELU positive branch (llama/16519)
sirus20x6 [Sun, 12 Oct 2025 05:25:37 +0000 (00:25 -0500)]
ggml : Fix FP16 ELU positive branch (llama/16519)

Co-authored-by: Aaron <redacted>
3 months agoggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (llama/16518)
sirus20x6 [Sun, 12 Oct 2025 05:15:00 +0000 (00:15 -0500)]
ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (llama/16518)

The previous SVE implementation for `ggml_vec_dot_f16_unroll` contained a bug due to a copy-paste error. The wrong variable was used in an FMA instruction, leading to incorrect results. This commit corrects the variable usage and improves the clarity of the code by renaming variables to avoid confusion.

Co-authored-by: Aaron <redacted>
3 months agoCUDA: faster tile FA, add oob checks, more HSs (llama/16492)
Johannes Gäßler [Sat, 11 Oct 2025 18:54:32 +0000 (20:54 +0200)]
CUDA: faster tile FA, add oob checks, more HSs (llama/16492)

3 months agosync : llama.cpp
Georgi Gerganov [Sat, 11 Oct 2025 15:01:19 +0000 (18:01 +0300)]
sync : llama.cpp

3 months agometal : fix mul-mm condition + fix mul-mv permuted kernels (llama/16494)
Georgi Gerganov [Sat, 11 Oct 2025 13:54:10 +0000 (16:54 +0300)]
metal : fix mul-mm condition + fix mul-mv permuted kernels (llama/16494)

3 months agocuda : avoid initializing unused devices (llama/16510)
Diego Devesa [Sat, 11 Oct 2025 11:02:26 +0000 (04:02 -0700)]
cuda : avoid initializing unused devices (llama/16510)

3 months agocmake : Don't define XOPENSOURCE on AIX (llama/16481)
Prajwal B Mehendarkar [Fri, 10 Oct 2025 08:15:46 +0000 (13:45 +0530)]
cmake : Don't define XOPENSOURCE on AIX (llama/16481)

3 months agocpu : optimize the ggml NORM operation (llama/15953)
duduta [Thu, 9 Oct 2025 19:11:15 +0000 (22:11 +0300)]
cpu : optimize the ggml NORM operation (llama/15953)

* ggml-cpu: optimize norm operation to use intrinsics or Accelerate

* rename function

* add endif macro comment

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Aaron Teo <redacted>
* implement s390x SIMD suggested by @taronaeo

* add TODO comment

* tidy up spaces

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Aaron Teo <redacted>
3 months agoCANN: Improve ACL graph matching (llama/16166)
Chenguang Li [Thu, 9 Oct 2025 07:50:25 +0000 (15:50 +0800)]
CANN: Improve ACL graph matching (llama/16166)

* CANN: improve ACL graph matching

Record `ne` and `nb` information for src tensors and include them in the
graph matching check. This enhances the robustness of ACL graph matching
by preventing incorrect matches when src tensors share the same data
address but differ in shape or stride.

* CANN: add op_params match

3 months agokleidiai: kernel interface refactoring (llama/16460)
Charles Xu [Thu, 9 Oct 2025 07:29:17 +0000 (09:29 +0200)]
kleidiai: kernel interface refactoring (llama/16460)

3 months agorefactor soft_max, add soft_max_back (llama/16472)
Neo Zhang Jianyu [Thu, 9 Oct 2025 07:25:11 +0000 (15:25 +0800)]
refactor soft_max, add soft_max_back (llama/16472)

* refactor to support soft_max_ext

* fix error and support soft_max_back

* rm unused functions

* fix format issue

---------

Co-authored-by: Zhang Jianyu <redacted>
3 months agoDisable CUDA host buffers on integrated GPUs (llama/16308)
ai-fonsi [Wed, 8 Oct 2025 18:21:46 +0000 (20:21 +0200)]
Disable CUDA host buffers on integrated GPUs (llama/16308)

3 months agometal : mark FA blocks (llama/16372)
Georgi Gerganov [Wed, 8 Oct 2025 07:57:53 +0000 (10:57 +0300)]
metal : mark FA blocks (llama/16372)

* metal : better unroll in the FA kernels

* metal : index FA blocks

* tests : restore [no ci]

* metal : prevent division by zero in FA kernels

* metal : fix -INF detection logic

3 months agoggml webgpu: profiling, CI updates, reworking of command submission (llama/16452)
Reese Levine [Tue, 7 Oct 2025 20:48:56 +0000 (13:48 -0700)]
ggml webgpu: profiling, CI updates, reworking of command submission (llama/16452)

* Add profiling

* More detailed profiling

* Rework command submission to avoid global locks

* Update wait handling

* try new method of waiting on futures

* Add serializing of command submission in some cases

* Add new pool for timestamp queries and clean up logging

* Serialize command submission in CI and leave a TODO note

* Update webgpu CI

* Add myself as WebGPU codeowner

* Deadlock avoidance

* Leave WebGPU/Vulkan CI serialized

* Fix divide by 0

* Fix logic in division by inflight_threads

* Update CODEOWNERS and remove serialize submit option

3 months agometal : add support for non-padded FA KV (llama/16148)
Georgi Gerganov [Tue, 7 Oct 2025 05:23:30 +0000 (08:23 +0300)]
metal : add support for non-padded FA KV (llama/16148)

* metal : pad K, V and Mask when needed

* cont : simplify

* cuda : add TODO about KV padding requirement

* metal : add comments

* metal : remove mask padding requirement

3 months agotests : add -INF blocks to the KQ mask in the FA tests (llama/16380)
Georgi Gerganov [Tue, 7 Oct 2025 05:22:35 +0000 (08:22 +0300)]
tests : add -INF blocks to the KQ mask in the FA tests (llama/16380)

* tests : add -INF blocks to the KQ mask in the FA tests

* cont : bump -INF block size to 64

Co-authored-by: Jeff Bolz <redacted>
* ggml : prevent division by zero in FA CPU op

---------

Co-authored-by: Jeff Bolz <redacted>
3 months agometal : various optimizations + refactoring (llama/16446)
Georgi Gerganov [Tue, 7 Oct 2025 05:21:40 +0000 (08:21 +0300)]
metal : various optimizations + refactoring (llama/16446)

* metal : ssm_scan minor opts

* metal : get_rows optimize

* metal : cpy optimize

* metal : ssm_conv opt

* metal : ssm_scan simplify

* metal : ssm_Scan opt

3 months agoggml : fix unaligned access in AMX code (llama/16315)
Georgi Gerganov [Mon, 6 Oct 2025 13:05:27 +0000 (16:05 +0300)]
ggml : fix unaligned access in AMX code (llama/16315)

3 months agoggml-cpu : fix leftover handling in ggml_vec_scale_f32 for SVE (llama/16443)
Daniel Bevenius [Mon, 6 Oct 2025 12:17:12 +0000 (14:17 +0200)]
ggml-cpu : fix leftover handling in ggml_vec_scale_f32 for SVE (llama/16443)

This commit updates the leftover handling in ggml_vec_scale_f32.

The motivation for this is that the code currently incorrectly assumes
there would be fewer than ggml_f32_epr leftover elements. However,
since the main loop processes 2*ggml_f32_epr elements per iteration,
there can be up to (2*ggml_f32_epr - 1) leftover elements.

The original single-pass leftover code could only process ggml_f32_epr
elements, leaving some elements unscaled.

Example scenario with 256-bit SVE:
```
ggml_f32_epr  = 8 (elements per register)
ggml_f32_step = 16 (two registers per iteration)
n             = 25
np            = 16
leftovers     = 9 elements (16-24)

Original    : processes only elements 16-23, misses element 24
This commit : loop processes elements 16-23, then element 24
```

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/18070620247/job/51419855630

3 months agoggml webgpu: actually add softmax, fix rms_norm offset (llama/16400)
Reese Levine [Sun, 5 Oct 2025 03:59:31 +0000 (20:59 -0700)]
ggml webgpu: actually add softmax, fix rms_norm offset (llama/16400)

* implement soft_max

* Fix soft_max data race

* Temporary fix, wait on each submit

3 months agovulkan: use a more appropriate amount of threads when generating shaders (llama/16418)
Eve [Sat, 4 Oct 2025 20:04:27 +0000 (20:04 +0000)]
vulkan: use a more appropriate amount of threads when generating shaders (llama/16418)

* use a more flexible amount of threads

* fix windows compile and 0 thread case

* nominmax

3 months agorpc : check src buffer when copying tensor (llama/16421)
Radoslav Gerganov [Sat, 4 Oct 2025 13:22:45 +0000 (16:22 +0300)]
rpc : check src buffer when copying tensor (llama/16421)

Only dst buffer is guaranteed to be an RPC buffer. Add check for the src
one.

3 months agorpc : add support for multiple devices (llama/16276)
Radoslav Gerganov [Sat, 4 Oct 2025 09:49:16 +0000 (12:49 +0300)]
rpc : add support for multiple devices (llama/16276)

* rpc : add support for multiple devices

Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.

closes: #15210

* fixes

* use ggml_backend_reg_t

* address review comments

* fix llama-bench backend report

* address review comments, change device naming

* fix cmd order

3 months agosync : llama.cpp
Georgi Gerganov [Sat, 11 Oct 2025 15:01:03 +0000 (18:01 +0300)]
sync : llama.cpp

3 months agovulkan : incremental shader builds (llama/16341)
Acly [Sat, 11 Oct 2025 14:59:36 +0000 (17:59 +0300)]
vulkan : incremental shader builds (llama/16341)

* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times

* support dep-files so shaders are recompiled if their included files change

* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled

* vulkan : only write embedded shader .hpp/.cpp when they change

* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier

* fix hang in vulkan-shaders-gen when there are compilation errors

* early out did not decrement compile_count

* clean up

* fix glslc integer dot product test

* unconditionally write the embedded shader cpp output

* replace output filepath in generated dep-files to match output in CMakeLists

---------

Co-authored-by: Jeff Bolz <redacted>
3 months agosync : llama.cpp
Georgi Gerganov [Sat, 11 Oct 2025 14:58:06 +0000 (17:58 +0300)]
sync : llama.cpp

3 months agometal : fix loop bound in ggml_mem_ranges (llama/16412)
Georgi Gerganov [Fri, 3 Oct 2025 16:18:56 +0000 (19:18 +0300)]
metal : fix loop bound in ggml_mem_ranges (llama/16412)

3 months agoggml : fix graph reallocation with multiple chunks (llama/16396)
Acly [Fri, 3 Oct 2025 11:49:08 +0000 (13:49 +0200)]
ggml : fix graph reallocation with multiple chunks (llama/16396)

reallocation is needed if a single chunk grows in size,
even if total allocation size stays the same or is lower

3 months agovulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)
Jeff Bolz [Fri, 3 Oct 2025 10:50:46 +0000 (05:50 -0500)]
vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)

* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE

Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.

For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.

Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.

With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.

* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull

3 months ago vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)
Jeff Bolz [Fri, 3 Oct 2025 09:52:46 +0000 (04:52 -0500)]
vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)

When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.

3 months ago vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316)
Jeff Bolz [Fri, 3 Oct 2025 08:33:08 +0000 (03:33 -0500)]
vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316)

3 months ago ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)
Reese Levine [Thu, 2 Oct 2025 18:00:31 +0000 (11:00 -0700)]
ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)

* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
3 months ago model : Apertus model implementation (llama/15852)
Piotr Wilkin (ilintar) [Thu, 2 Oct 2025 17:43:22 +0000 (19:43 +0200)]
model : Apertus model implementation (llama/15852)

* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <redacted>
* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <redacted>
* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
3 months ago musa: update compile flags (llama/16265)
R0CKSTAR [Thu, 2 Oct 2025 13:29:56 +0000 (21:29 +0800)]
musa: update compile flags (llama/16265)

Signed-off-by: Xiaodong Ye <redacted>
3 months ago HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
uvos [Wed, 1 Oct 2025 21:09:25 +0000 (23:09 +0200)]
HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)

* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn

3 months ago vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
Eve [Wed, 1 Oct 2025 07:56:36 +0000 (07:56 +0000)]
vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)

* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with using

3 months ago opencl: support pad_ext (llama/15888)
lhez [Tue, 30 Sep 2025 17:45:45 +0000 (10:45 -0700)]
opencl: support pad_ext (llama/15888)

3 months ago ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
Reese Levine [Tue, 30 Sep 2025 16:57:51 +0000 (09:57 -0700)]
ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)

* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <redacted>
* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months ago opencl: support ne3 in get_rows (llama/15866)
lhez [Tue, 30 Sep 2025 16:55:13 +0000 (09:55 -0700)]
opencl: support ne3 in get_rows (llama/15866)

3 months ago ggml : bump version to 0.9.4 (#1363) upstream/0.9.4 v0.9.4
Georgi Gerganov [Tue, 30 Sep 2025 10:42:39 +0000 (13:42 +0300)]
ggml : bump version to 0.9.4 (#1363)

3 months ago sync : whisper.cpp [no ci]
Georgi Gerganov [Tue, 30 Sep 2025 10:39:06 +0000 (13:39 +0300)]
sync : whisper.cpp [no ci]

3 months ago sync : llama.cpp
Georgi Gerganov [Tue, 30 Sep 2025 08:18:45 +0000 (11:18 +0300)]
sync : llama.cpp

3 months ago cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)
anavp-nvidia [Tue, 30 Sep 2025 08:13:22 +0000 (08:13 +0000)]
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)

* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs

* fix to ensure test-backend-ops check passes

3 months ago metal : dynamic simdgroups for MV kernels (llama/16340)
Georgi Gerganov [Tue, 30 Sep 2025 08:03:23 +0000 (11:03 +0300)]
metal : dynamic simdgroups for MV kernels (llama/16340)

* metal : dynamic simdgroups for MV kernels

* cont : minor

3 months ago kleidiai : fix work size and threads sync for fp16 (llama/16246)
Charles Xu [Tue, 30 Sep 2025 07:07:20 +0000 (09:07 +0200)]
kleidiai : fix work size and threads sync for fp16 (llama/16246)

3 months ago tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences (llama/16295)
Jeff Bolz [Tue, 30 Sep 2025 00:26:34 +0000 (19:26 -0500)]
tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences (llama/16295)

* tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences

* apply similar error bounds to test_cpy

3 months ago sync : llama.cpp
Georgi Gerganov [Mon, 29 Sep 2025 14:53:28 +0000 (17:53 +0300)]
sync : llama.cpp

3 months ago ggml: riscv: add riscv spacemit backend (llama/15288)
alex-spacemit [Mon, 29 Sep 2025 14:50:44 +0000 (22:50 +0800)]
ggml: riscv: add riscv spacemit backend (llama/15288)

* ggml: add spacemit backend

Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23

* add new line at end of file

Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2

* add riscv zba extension limit

Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce

* fixed for review comments, file renamed and format

Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce

* fixed for code format, after clang-format

Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2

* use _Float16 instead of __fp16

Change-Id: I039fb02bb95270e641bc4442204e658735859d43

* add ci for riscv64-spacemit-ime-native

Change-Id: I711c1033061df1a289ea77891b2997599dfe8279

* update debian-13-riscv64-spacemit-ime-native ci label

Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a

* remove license comment for spacemit ime

Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3

* upgrade binutils for gcc ime

Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45

* add spacemit ime cross jobs

Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6

* remove native compile for riscv64-spacemit-ime

Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e

* ci : add caching for spacemit ime cross toolchain

Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de

* ci: bug fixed for cache path and env

Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a

* Update .github/workflows/build-linux-cross.yml for cache path

Co-authored-by: Sigbjørn Skjæret <redacted>
* bugfix for build-linux-cross.yml syntax error

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: cailinxi <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
3 months ago ggml-backend : add root cause in error message if loading backend library fails (llama/16172)
Rafal Lewczuk [Mon, 29 Sep 2025 11:17:09 +0000 (13:17 +0200)]
ggml-backend : add root cause in error message if loading backend library fails (llama/16172)

This PR adds additional information to the error message emitted when loading a backend library via ld_load_library() fails. This helps spot why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).

3 months ago sync : whisper.cpp (#1359)
Georgi Gerganov [Mon, 29 Sep 2025 13:49:11 +0000 (16:49 +0300)]
sync : whisper.cpp (#1359)

* ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (whisper/3426)

* sync : whisper.cpp

3 months ago ci : print results [no ci] (#1358)
Georgi Gerganov [Mon, 29 Sep 2025 13:20:52 +0000 (16:20 +0300)]
ci : print results [no ci] (#1358)