git.djapps.eu Git - pkg/ggml/sources/ggml/log
7 weeks ago  ggml: fix ggml_conv_1d_dw bug (#1323) upstream/0.0.2446
Jason Ni [Thu, 14 Aug 2025 11:17:51 +0000 (19:17 +0800)]
ggml: fix ggml_conv_1d_dw bug (#1323)

* ggml: fix ggml_conv_1d_dw bug

* Fixed conv1d_dw weight tensor dimension.

7 weeks ago  mnist : adapt to opt changes
Georgi Gerganov [Thu, 14 Aug 2025 10:41:23 +0000 (13:41 +0300)]
mnist : adapt to opt changes

ggml-ci

7 weeks ago  tests : remove unused includes (#0)
Georgi Gerganov [Thu, 14 Aug 2025 10:41:03 +0000 (13:41 +0300)]
tests : remove unused includes (#0)

7 weeks ago  sync : llama.cpp
Georgi Gerganov [Thu, 14 Aug 2025 10:22:55 +0000 (13:22 +0300)]
sync : llama.cpp

ggml-ci

7 weeks ago  cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)
Sigbjørn Skjæret [Thu, 14 Aug 2025 10:22:07 +0000 (12:22 +0200)]
cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)

* fix USE_CUDA_GRAPH=OFF

ggml-ci

* check capture status

* completely disable capturing check instead

7 weeks ago  finetune: SGD optimizer, more CLI args (llama/13873)
Jonathan Graehl [Thu, 14 Aug 2025 10:03:57 +0000 (03:03 -0700)]
finetune: SGD optimizer, more CLI args (llama/13873)

* examples/finetune -opt SGD (stochastic gradient descent) memory opt

add unit tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating
m, v tensors.
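
For context, a minimal CPU-side sketch of the memory difference described above: plain SGD with weight decay updates parameters in place and keeps no optimizer state, while AdamW has to carry m and v buffers as large as the parameters themselves. Function names and the exact update used by GGML_OPT_OPTIMIZER_SGD are illustrative assumptions, not the ggml code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Plain SGD with weight decay: updates parameters in place, no optimizer state.
void sgd_step(std::vector<float> & x, const std::vector<float> & g, float lr, float wd) {
    for (size_t i = 0; i < x.size(); ++i) {
        x[i] -= lr * (g[i] + wd * x[i]);
    }
}

// AdamW: must keep first/second moment buffers m and v, each as large as the model.
void adamw_step(std::vector<float> & x, const std::vector<float> & g,
                std::vector<float> & m, std::vector<float> & v,
                float lr, float wd, float beta1, float beta2, float eps, int t) {
    const float bc1 = 1.0f - std::pow(beta1, (float) t); // bias corrections, t >= 1
    const float bc2 = 1.0f - std::pow(beta2, (float) t);
    for (size_t i = 0; i < x.size(); ++i) {
        m[i] = beta1 * m[i] + (1.0f - beta1) * g[i];
        v[i] = beta2 * v[i] + (1.0f - beta2) * g[i] * g[i];
        x[i] -= lr * ((m[i] / bc1) / (std::sqrt(v[i] / bc2) + eps) + wd * x[i]);
    }
}
```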

support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)

llama 3.2-1b-F32 result: observed 11gb gpu ram (41 sec/epoch)
when using SGD instead of 19gb (55 sec/epoch) using adamw.
(wikipedia 100 lines finetune)

(
using the same GPU memory, adamw can only fit a 512 batch/context
before OOM, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD is superior, though it converges more slowly, with a max of 1728
batch/context before OOM (note especially the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

-lr-half (halflife) option is useful for SGD to avoid oscillation or
very slow underdamped learning (it makes setting -lr more forgiving).
the terminal -lr is for now set by -lr-halvings, i.e. if you want at most
1/8 the initial -lr you set -lr-halvings 3.
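
A rough sketch of that halving behaviour (the exact schedule in finetune.cpp may differ, e.g. per-step rather than per-epoch decay): with -lr-halvings h, the learning rate decays exponentially so the final epoch runs at lr/2^h.

```cpp
#include <cmath>
#include <cstdio>

// Exponential decay chosen so the last epoch uses lr0 / 2^lr_halvings.
float decayed_lr(float lr0, int lr_halvings, int epoch, int n_epochs) {
    const float progress = n_epochs > 1 ? (float) epoch / (float) (n_epochs - 1) : 1.0f;
    return lr0 * std::pow(0.5f, (float) lr_halvings * progress);
}

int main() {
    // e.g. -lr 1e-4 -lr-halvings 3 over 4 epochs: the last epoch runs at 1/8 of the initial lr
    for (int e = 0; e < 4; ++e) {
        std::printf("epoch %d: lr = %g\n", e, decayed_lr(1e-4f, 3, e, 4));
    }
    return 0;
}
```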

note: the objective loss may not be directly comparable between adamw and
sgd - check perplexity or accuracy, or consider relative improvements,
when judging convergence

new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)

cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)

since optimizer memory is pre-allocated, ggml_opt_get_optimizer_params
could probably switch between SGD and AdamW with each epoch, but it would
need to use adamw for the first one (unconfirmed - no cmdline arg to set
such a policy yet)

test-opt checks adamw as before and now also sgd (except for a few tests
disabled for sgd only; these probably just need logged values and alternate
reference values added); tolerance on the 'regression' test is broader for
sgd (so we don't need many more epochs)

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op param store weight-decay and not 1-alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <redacted>
Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago  HIP: bump requirement to rocm 6.1 (llama/15296)
uvos [Wed, 13 Aug 2025 18:44:30 +0000 (20:44 +0200)]
HIP: bump requirement to rocm 6.1 (llama/15296)

7 weeks ago  sync : llama.cpp
Georgi Gerganov [Wed, 13 Aug 2025 16:05:27 +0000 (19:05 +0300)]
sync : llama.cpp

ggml-ci

7 weeks ago  ggml : update `ggml_rope_multi` (llama/12665)
Judd [Wed, 13 Aug 2025 10:45:15 +0000 (18:45 +0800)]
ggml : update `ggml_rope_multi` (llama/12665)

* update `rope_multi`:

1. add `ggml_rope_multi_inplace`;
2. use `GGML_MROPE_SECTIONS` instead of 4.

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago  ggml : repack block_iq4_nlx8 (llama/14904)
Georgi Gerganov [Wed, 13 Aug 2025 08:09:39 +0000 (11:09 +0300)]
ggml : repack block_iq4_nlx8 (llama/14904)

ggml-ci

7 weeks ago  CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel...
Oliver Simons [Wed, 13 Aug 2025 08:04:46 +0000 (10:04 +0200)]
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (llama/15132)

* Factor out `reduce_rows_f32` from common.cuh

This increases iteration cycle speed by not having to recompile
every kernel all the time

* Hide memory-latency by loop unrolling in reduce_rows_f32

* Further optimizations to `reduce_rows_f32`

1. Increase threadblock size to better hide latency of memory requests.
   As a consequence of bigger threadblocks, do 2-step summation, using
   shared memory to communicate results between invocations
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect bigger threadblock
4. Improve default block_dims, increase support for more block_dims

* Add perf tests for `reduce_rows_f32` kernel

* Add heuristic to toggle 128/512 threads based on sm count

Break even point was the minimum of the following multiples.

| GPU Model                    | Nrow SM Count Multiple |
| ---------------------------- | ---------------------- |
| RTX 4000 SFF ADA             | 2.0x                   |
| RTX 6000 ADA                 | 2.5x                   |
| RTX PRO 6000 Blackwell Max-Q | 3.04x                  |
| RTX PRO 4500 Blackwell       | 3.15x                  |
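
An illustrative reading of that heuristic (not the actual backend code): switch to the larger 512-thread blocks once nrows exceeds a few multiples of the SM count. The 3x cutoff below is an assumption loosely based on the break-even multiples in the table.

```cpp
#include <cstdint>

// Small blocks keep more SMs busy when there are few rows; larger blocks hide
// memory latency better once every SM has plenty of rows to work on.
static int reduce_rows_block_size(int64_t nrows, int sm_count) {
    return nrows >= 3 * (int64_t) sm_count ? 512 : 128;
}
```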

* Ensure perf gains also for small ncols and large nrows

Alternative to this, one could have also made the number of unrollings
template-able, but that would require compiling the kernel multiple
times, increasing binary size unnecessarily

* Modify perf and unit-tests

* Apply auto-formatting by clang

* Fix CI build failure

See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with VS generator worked though.

* Remove sm_count property from `ggml_backend_cuda_context`

Requested by @JohannesGaessler, and should fix remaining CI issues as a
side-effect

* Add CUB-based implementation for GGML_OP_MEAN

Currently this branch is only executed for nrows==1

* Add heuristics to execute CUB branch only when it brings perf

Heuristics were determined on the following HW:

* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell

* Add unit-test for CUB-based mean

Tests should run with CUDA Graphs enabled by default on NVGPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

Suggested by @JohannesGaessler

* Unindent Preprocessor directives

See
https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506

7 weeks ago  ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS...
Tak-RS [Wed, 13 Aug 2025 05:54:30 +0000 (14:54 +0900)]
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188)

* ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others). Fixes #15055

* ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv

* rpc: drop n==0 special case in send_data(); retry in loop per review

* rpc: remove trailing whitespace in send_data()
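
A simplified sketch of the chunking described above, using POSIX send(); the helper name, chunk size, and error handling here are illustrative rather than the actual ggml-rpc code.

```cpp
#include <algorithm>
#include <cerrno>
#include <cstddef>
#include <cstdint>
#include <sys/socket.h>
#include <sys/types.h>

// Never hand send() the full buffer at once: cap each call at MAX_CHUNK_SIZE
// (illustrative value) and loop over short writes, retrying on EINTR.
static bool send_data_chunked(int sockfd, const void * data, size_t size) {
    constexpr size_t MAX_CHUNK_SIZE = 1u << 23; // 8 MiB per call (assumed cap)
    const uint8_t * p = static_cast<const uint8_t *>(data);
    size_t sent = 0;
    while (sent < size) {
        const size_t  chunk = std::min(MAX_CHUNK_SIZE, size - sent);
        const ssize_t n     = send(sockfd, p + sent, chunk, 0);
        if (n < 0) {
            if (errno == EINTR) {
                continue; // interrupted before anything was sent: retry
            }
            return false; // real error (the actual code logs it)
        }
        sent += (size_t) n; // short write: the loop sends the remainder
    }
    return true;
}
```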

---------

Co-authored-by: Shinnosuke Takagi <redacted>
7 weeks ago  HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama...
uvos [Tue, 12 Aug 2025 20:15:12 +0000 (22:15 +0200)]
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama/15273)

7 weeks ago  sycl: Fix and disable more configurations of mul_mat (llama/15151)
Romain Biessy [Tue, 12 Aug 2025 11:58:22 +0000 (13:58 +0200)]
sycl: Fix and disable more configurations of mul_mat (llama/15151)

* sycl: Fix and disable more configurations of mul_mat

* Disable more configurations

7 weeks ago  opencl: allow mixed f16/f32 `add` (llama/15140)
rmatif [Tue, 12 Aug 2025 09:42:41 +0000 (11:42 +0200)]
opencl: allow mixed f16/f32 `add` (llama/15140)

7 weeks ago  CUDA cmake: add `-lineinfo` for easier debug (llama/15260)
Aman Gupta [Tue, 12 Aug 2025 09:21:45 +0000 (17:21 +0800)]
CUDA cmake: add `-lineinfo` for easier debug (llama/15260)

7 weeks ago  CANN: GGML_OP_CPY optimization (llama/15070)
Chenguang Li [Tue, 12 Aug 2025 08:12:13 +0000 (16:12 +0800)]
CANN: GGML_OP_CPY optimization (llama/15070)

Signed-off-by: noemotiovon <redacted>
7 weeks ago  musa: fix failures in test-backend-ops for mul_mat_id op (llama/15236)
R0CKSTAR [Tue, 12 Aug 2025 02:02:51 +0000 (10:02 +0800)]
musa: fix failures in test-backend-ops for mul_mat_id op (llama/15236)

* musa: fix failures in test-backend-ops for mul_mat_id op

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
7 weeks ago  CANN: Add broadcast for softmax and FA (llama/15208)
hipudding [Mon, 11 Aug 2025 14:50:31 +0000 (22:50 +0800)]
CANN: Add broadcast for softmax and FA (llama/15208)

* refactor softmax

* fix fa

* fix mask shape

* format

* add comments

* Remove whitespace

7 weeks ago  kleidiai: fix unsigned overflow bug (llama/15150)
Charles Xu [Mon, 11 Aug 2025 07:59:26 +0000 (09:59 +0200)]
kleidiai: fix unsigned overflow bug (llama/15150)

* kleidiai: fix unsigned overflow bug

* address review comments

7 weeks ago  cuda: refactored ssm_scan and use CUB (llama/13291)
David Zhao [Sat, 9 Aug 2025 18:29:43 +0000 (13:29 -0500)]
cuda: refactored ssm_scan and use CUB (llama/13291)

* cuda: refactored ssm_scan to use CUB

* fixed compilation error when not using CUB

* assign L to constant and use size_t instead of int

* deduplicated functions

* change min blocks per mp to 1

* Use cub load and store warp transpose

* suppress clang warning

7 weeks ago  CUDA: add attention sinks for tile and wmma (llama/15178)
Aman Gupta [Sat, 9 Aug 2025 12:00:24 +0000 (20:00 +0800)]
CUDA: add attention sinks for tile and wmma (llama/15178)

* CUDA: add attention sinks for tile and wmma

* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma

7 weeks ago  gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)
compilade [Fri, 8 Aug 2025 21:48:26 +0000 (17:48 -0400)]
gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)

* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4

7 weeks ago  ggml : fix field name when new ggml_backend (llama/14944)
AN Long [Fri, 8 Aug 2025 12:37:22 +0000 (21:37 +0900)]
ggml : fix field name when new ggml_backend (llama/14944)

7 weeks ago  CUDA: attention sinks for mma FlashAttention (llama/15157)
Johannes Gäßler [Fri, 8 Aug 2025 06:19:58 +0000 (08:19 +0200)]
CUDA: attention sinks for mma FlashAttention (llama/15157)

7 weeks ago  opencl: support sink in `soft_max` (attn sinks) (llama/15152)
lhez [Fri, 8 Aug 2025 04:47:03 +0000 (13:47 +0900)]
opencl: support sink in `soft_max` (attn sinks) (llama/15152)

7 weeks ago  vulkan: support fattn sinks (llama/15126)
Jeff Bolz [Thu, 7 Aug 2025 20:44:20 +0000 (15:44 -0500)]
vulkan: support fattn sinks (llama/15126)

7 weeks ago  vulkan: Add env var to disable host visible vidmem (llama/15109)
Jeff Bolz [Thu, 7 Aug 2025 20:07:11 +0000 (15:07 -0500)]
vulkan: Add env var to disable host visible vidmem (llama/15109)

7 weeks ago  HIP: add cmake option to enable compiler output of kernel resource usage metrics...
uvos [Thu, 7 Aug 2025 14:44:14 +0000 (16:44 +0200)]
HIP: add cmake option to enable compiler output of kernel resource usage metrics (llama/15103)

7 weeks ago  ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)
Christian Kastner [Thu, 7 Aug 2025 11:45:41 +0000 (13:45 +0200)]
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)

Any available libraries are found and loaded dynamically at runtime.

7 weeks ago  CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)
Johannes Gäßler [Thu, 7 Aug 2025 08:53:21 +0000 (10:53 +0200)]
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)

* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16

7 weeks ago  fix profiling crash (llama/15072)
rmatif [Wed, 6 Aug 2025 21:17:51 +0000 (23:17 +0200)]
fix profiling crash (llama/15072)

7 weeks ago  opencl: add `swiglu_oai` and `add_id` (llama/15121)
lhez [Wed, 6 Aug 2025 19:12:17 +0000 (04:12 +0900)]
opencl: add `swiglu_oai` and `add_id` (llama/15121)

* opencl: add `swiglu-oai`

* opencl: add `add_id`

* opencl: add missing `add_id.cl`

7 weeks ago  ggml : fix fallback to CPU for unsupported ops (llama/15118)
Diego Devesa [Wed, 6 Aug 2025 12:37:35 +0000 (05:37 -0700)]
ggml : fix fallback to CPU for unsupported ops (llama/15118)

7 weeks ago  CANN: add support for ACL Graph (llama/15065)
Chenguang Li [Wed, 6 Aug 2025 06:12:42 +0000 (14:12 +0800)]
CANN: add support for ACL Graph (llama/15065)

* feat(cann): add optional support for ACL Graph execution

This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:

    -DUSE_CANN_GRAPH=ON

By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.

Key additions:
- CMake option to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
  (see the sketch below)
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
  is unset or invalid

This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.
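
A generic C++ sketch of the tensor-property matching mentioned in the list above: cache each node's shape, strides, and data pointer at capture time, and re-capture only when something changed. The actual CANN backend tracks backend-specific state and uses ACL capture APIs; the names below are illustrative.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Properties cached per graph node when a graph is captured.
struct node_props {
    void *  data;
    int64_t ne[4]; // shape
    size_t  nb[4]; // strides

    bool operator==(const node_props & other) const {
        return data == other.data &&
               std::memcmp(ne, other.ne, sizeof(ne)) == 0 &&
               std::memcmp(nb, other.nb, sizeof(nb)) == 0;
    }
};

// If every property still matches, the captured graph can be replayed as-is.
static bool graph_matches(const std::vector<node_props> & cached,
                          const std::vector<node_props> & current) {
    if (cached.size() != current.size()) {
        return false; // node count changed: must re-capture
    }
    for (size_t i = 0; i < cached.size(); ++i) {
        if (!(cached[i] == current[i])) {
            return false; // shape/stride/address changed: must re-capture
        }
    }
    return true;
}
```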

Signed-off-by: noemotiovon <redacted>
* Fix review comments

Signed-off-by: noemotiovon <redacted>
* rename USE_CANN_GRAPH to USE_ACL_GRAPH

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
7 weeks ago  llama : add gpt-oss (llama/15091)
Georgi Gerganov [Tue, 5 Aug 2025 19:10:36 +0000 (22:10 +0300)]
llama : add gpt-oss (llama/15091)

* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (llama/7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (llama/1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return

* ggml : add fused swiglu_oai op (llama/11)

* ggml : add fused swiglu_oai op

* Update src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: slaren <redacted>
* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf (see the sketch at the end of this block)

Co-authored-by: Diego Devesa <redacted>
change kvalues_mxfp4 table to match e2m1 (llama/6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant
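
As a side note on the e8m0 conversion mentioned above: an E8M0 scale stores only an exponent, so it can be decoded by writing the byte straight into the FP32 exponent field instead of calling powf. The sketch below follows the OCP MX spec's edge cases, which may not match ggml's exact handling.

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Illustrative E8M0 -> float decode without powf: the stored byte becomes the
// FP32 exponent field directly, so the result equals 2^(e - 127).
static float e8m0_to_float(uint8_t e) {
    if (e == 0xFF) {
        return NAN;       // E8M0 NaN encoding (per the MX spec; ggml may differ)
    }
    if (e == 0x00) {
        return 0x1p-127f; // 2^-127 is only representable as an FP32 subnormal
    }
    const uint32_t bits = (uint32_t) e << 23; // exponent bits, zero mantissa
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;             // same value as powf(2.0f, (float) e - 127.0f)
}
```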

* ggml : add ggml_add_id (llama/13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: slaren <redacted>
7 weeks ago  sycl: fix mul_mat selection (llama/15092)
Romain Biessy [Tue, 5 Aug 2025 16:39:55 +0000 (18:39 +0200)]
sycl: fix mul_mat selection (llama/15092)

7 weeks ago  cmake: Add GGML_BACKEND_DIR option (llama/15074)
Christian Kastner [Mon, 4 Aug 2025 19:29:14 +0000 (21:29 +0200)]
cmake: Add GGML_BACKEND_DIR option (llama/15074)

* cmake: Add GGML_BACKEND_DIR option

This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.

* Fix phrasing

7 weeks ago  vulkan: fix build when using glslang that does not support coopmat2 (llama/15062)
Jeff Bolz [Mon, 4 Aug 2025 05:09:19 +0000 (00:09 -0500)]
vulkan: fix build when using glslang that does not support coopmat2 (llama/15062)

7 weeks ago  vulkan: Use coopmat2 for conv2d (llama/14982)
Jeff Bolz [Sun, 3 Aug 2025 12:23:57 +0000 (07:23 -0500)]
vulkan: Use coopmat2 for conv2d (llama/14982)

7 weeks ago  opencl: fix adreno compiler detection logic (llama/15029)
lhez [Sat, 2 Aug 2025 17:51:18 +0000 (10:51 -0700)]
opencl: fix adreno compiler detection logic (llama/15029)

7 weeks ago  CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (llama/15035)
Johannes Gäßler [Sat, 2 Aug 2025 14:37:08 +0000 (16:37 +0200)]
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (llama/15035)

2 months ago  sync : llama.cpp upstream/0.0.2404
Georgi Gerganov [Sat, 2 Aug 2025 14:17:08 +0000 (17:17 +0300)]
sync : llama.cpp

ggml-ci

2 months ago  cuda: make im2col a little faster (llama/15025)
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (llama/15025)

2 months ago  cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)

* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1

ggml-ci

* cont : fix cont types

ggml-ci

* cont : adopt variable names and comment from the other branch

2 months ago  vulkan: coopmat2 mul_mat optimizations (llama/14934)
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (llama/14934)

- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
  interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
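
For illustration, that split_k==3 rule could look roughly like this (a made-up helper; the real shader-core accounting in the Vulkan backend is more involved and covers other split_k values too):

```cpp
#include <cstdint>

// If a plain launch would use more than 1/2 but at most 2/3 of the shader
// cores, splitting K three ways fills the remaining cores.
static uint32_t pick_split_k(uint32_t workgroups, uint32_t shader_core_count) {
    if (workgroups * 2 > shader_core_count && workgroups * 3 <= 2 * shader_core_count) {
        return 3;
    }
    return 1; // other cases omitted in this sketch
}
```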

2 months ago  vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015)
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015)

2 months ago  vulkan: optimizations for direct convolution (llama/14933)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (llama/14933)

* vulkan: optimizations for direct convolution

- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
  the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.

* Three tiles sizes for CONV_2D, and a heuristic to choose

* reallow collectives for pre-Turing

* make SHMEM_PAD a spec constant

* fixes for intel perf - no shmem padding, placeholder shader core count

* shader variants with/without unrolling

* 0cc4m's fixes for AMD perf

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>
2 months ago  CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014)
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014)

2 months ago  opencl: add f16 for `add`, `sub`, `mul`, `div` (llama/14984)
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (llama/14984)

2 months ago  ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)

* Initial Q2_K Block Interleaving Implementation

* Addressed review comments and clean up of the code

* Post rebase fixes

* Initial CI/CD fixes

* Update declarations in arch-fallback.h

* Changes for GEMV Q2_K in arch-fallback.h

* Enable repacking only on AVX-512 machines

* Update comments in repack.cpp

* Address q2k comments

---------

Co-authored-by: Manogna-Sree <redacted>
2 months ago  docker : add cann build pipeline (llama/14591)
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (llama/14591)

* docker: add cann build pipeline

* docker: add cann build pipeline

* docker: fix cann devops

* cann : fix multi card hccl

* Update src/ggml-cann/ggml-cann.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
2 months ago  Vulkan: Fix minor debug mode issues (llama/14899)
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (llama/14899)

* vulkan: fix debug mode issues

* vulkan: remove broken check_results GGML_OP_SET_ROWS support

2 months ago  CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)

* CANN: Improve loading efficiency after converting weights to NZ format.

* CANN: fix typo

2 months ago  opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (llama/14809)
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (llama/14809)

2 months ago  HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama...
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama/14949)

2 months ago  CUDA: skip masked KV slices for all FA kernels (llama/14924)
Johannes Gäßler [Wed, 30 Jul 2025 13:46:13 +0000 (15:46 +0200)]
CUDA: skip masked KV slices for all FA kernels (llama/14924)

2 months ago  HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets...
uvos [Tue, 29 Jul 2025 18:23:04 +0000 (20:23 +0200)]
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (llama/14945)

2 months ago  HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (llama/14930)
uvos [Tue, 29 Jul 2025 15:44:30 +0000 (17:44 +0200)]
HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (llama/14930)

This is useful for testing for regressions on GCN using CDNA hardware.

With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.

2 months ago  HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)
uvos [Tue, 29 Jul 2025 15:43:43 +0000 (17:43 +0200)]
HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)

llvm with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning.
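
A sketch of the "ignore the warning" approach using clang diagnostic pragmas; the warning group (-Wpass-failed) and placement are assumptions about the fattn-vec change, not a copy of it.

```cpp
// clang reports a failed '#pragma unroll' request under -Wpass-failed
// ("loop not unrolled: ..."); wrapping the loop in push/ignored/pop
// silences it locally without disabling the warning globally.
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wpass-failed"
#endif

static float first_negative(const float * x, int n) {
    float found = 0.0f;
#pragma unroll
    for (int i = 0; i < n; ++i) {
        if (x[i] < 0.0f) { // conditional break not resolvable at compile time
            found = x[i];
            break;
        }
    }
    return found;
}

#ifdef __clang__
#pragma clang diagnostic pop
#endif
```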

2 months ago  CANN: Add ggml_set_rows (llama/14943)
hipudding [Tue, 29 Jul 2025 14:36:43 +0000 (22:36 +0800)]
CANN: Add ggml_set_rows (llama/14943)

2 months ago  cuda : add softcap fusion (llama/14907)
Sigbjørn Skjæret [Tue, 29 Jul 2025 12:22:03 +0000 (14:22 +0200)]
cuda : add softcap fusion (llama/14907)

2 months ago  CUDA: add roll (llama/14919)
Aman Gupta [Tue, 29 Jul 2025 06:45:18 +0000 (14:45 +0800)]
CUDA: add roll (llama/14919)

* CUDA: add roll

* Make everything const, use __restrict__

2 months ago  test-backend-ops : extend test case filtering (llama/14865)
Leonard Mosescu [Mon, 28 Jul 2025 16:04:27 +0000 (09:04 -0700)]
test-backend-ops : extend test case filtering (llama/14865)

* Extend test case filtering

1. Allow passing multiple (comma-separated?) ops to test-backend-ops. This can be convenient when working on a set of ops, when you'd want to test them together (but without having to run every single op). For example:

`test-backend-ops.exe test -o "ADD,RMS_NORM,ROPE,SILU,SOFT_MAX"`

2. Support full test-case variation string in addition to basic op names. This would make it easy to select a single variation, either for testing or for benchmarking. It can be particularly useful for profiling a particular variation (ex. a CUDA kernel), for example:

`test-backend-ops.exe perf -b CUDA0 -o "MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3],v=2)"`

These two can be combined. As with the current `-o`, this change doesn't try to detect/report an error if a filter doesn't name existing ops (e.g. misspelled)

* Updating the usage help text

* Update tests/test-backend-ops.cpp

2 months ago  ggml-cpu : deduplicate scalar implementations (llama/14897)
xctan [Mon, 28 Jul 2025 15:40:24 +0000 (23:40 +0800)]
ggml-cpu : deduplicate scalar implementations (llama/14897)

* remove redundant code in riscv

* remove redundant code in arm

* remove redundant code in loongarch

* remove redundant code in ppc

* remove redundant code in s390

* remove redundant code in wasm

* remove redundant code in x86

* remove fallback headers

* fix x86 ggml_vec_dot_q8_0_q8_0

2 months ago  SYCL: Add set_rows support for quantized types (llama/14883)
Akarshan Biswas [Mon, 28 Jul 2025 15:02:15 +0000 (20:32 +0530)]
SYCL: Add set_rows support for quantized types (llama/14883)

* SYCL: Add set_rows support for quantized types

This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.

The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.

This addresses part of the TODOs mentioned in the code.

* Use get_global_linear_id() instead

ggml-ci

* Fix formatting

ggml-ci

* Use const for ne11 and size_t variables in set_rows_sycl_q

ggml-ci

* Increase block size for q kernel to 256

ggml-ci

* Cleanup imports

* Add float.h to cpy.hpp

2 months ago  CUDA: fix pointer incrementation in FA (llama/14916)
Johannes Gäßler [Mon, 28 Jul 2025 12:30:22 +0000 (14:30 +0200)]
CUDA: fix pointer incrementation in FA (llama/14916)

2 months ago  sycl: refactor quantization to q8_1 (llama/14815)
Alberto Cabrera Pérez [Mon, 28 Jul 2025 10:05:53 +0000 (11:05 +0100)]
sycl: refactor quantization to q8_1 (llama/14815)

* sycl: quantization to q8_1 refactor

* Refactored src1 copy logic in op_mul_mat

2 months ago  ci : Move msvc to matrix (#1318)
Kai Pastor [Sat, 2 Aug 2025 14:29:48 +0000 (16:29 +0200)]
ci : Move msvc to matrix (#1318)

Enable static builds and testing

2 months ago  simple : fix typo (#1319)
AN Long [Sat, 2 Aug 2025 14:28:28 +0000 (23:28 +0900)]
simple : fix typo (#1319)

2 months ago  sync : whisper.cpp
Georgi Gerganov [Wed, 30 Jul 2025 12:56:40 +0000 (15:56 +0300)]
sync : whisper.cpp

2 months ago  cmake : Fix BLAS link interface (#1316)
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (#1316)

2 months ago  vulkan : fix 32-bit builds (#1313)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (#1313)

The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
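
For reference, a stand-in for the handle definition the commit refers to (abridged; see vulkan_core.h for the full platform check), showing why an explicit cast is needed so the same expression builds on both 32- and 64-bit targets. The wrapper struct and member names are hypothetical.

```cpp
#include <cstdint>

// Abridged stand-in for VK_DEFINE_NON_DISPATCHABLE_HANDLE: an opaque pointer
// on 64-bit targets, a plain uint64_t on 32-bit targets.
#if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__)) || defined(__aarch64__)
    #define DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T * object;
#else
    #define DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
#endif

DEFINE_NON_DISPATCHABLE_HANDLE(Pipeline) // stand-in for VkPipeline

struct pipeline_holder {
    uint64_t stored; // hypothetical member holding the raw handle value

    Pipeline as_pipeline() const {
        // explicit cast builds everywhere: integer-to-pointer on 64-bit,
        // plain integral conversion on 32-bit
        return (Pipeline) stored;
    }
};
```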

2 months ago  sync : llama.cpp
Georgi Gerganov [Mon, 28 Jul 2025 05:15:36 +0000 (08:15 +0300)]
sync : llama.cpp

ggml-ci

2 months ago  vulkan : add fp16 support for the conv_2d kernel (llama/14872)
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (llama/14872)

* add f16 to conv_2d testing
* weaken conv2d test error threshold

2 months ago  vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)

2 months ago  Docs: add instructions for adding backends (llama/14889)
Aman Gupta [Sun, 27 Jul 2025 01:36:43 +0000 (09:36 +0800)]
Docs: add instructions for adding backends (llama/14889)

2 months ago  HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)

This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as it fails to outperform blas in all cases on the other devices.
Blas is currently only consistently outperformed on CDNA3 due to issues in the amd-provided blas libraries.
This commit also improves MMQ's awareness of different warp sizes and, as a side effect, improves the performance on GCN gpus of all quant formats besides q4_0 and q4_1, which regress slightly.

2 months ago  CANN: Implement GLU ops (llama/14884)
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (llama/14884)

Implement REGLU, GEGLU, SWIGLU ops according to #14158

2 months ago  musa: fix build warnings (unused variable) (llama/14869)
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (llama/14869)

Signed-off-by: Xiaodong Ye <redacted>
2 months ago  ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
Aaron Teo [Fri, 25 Jul 2025 17:09:03 +0000 (01:09 +0800)]
ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)

* docs: update s390x document for sentencepiece

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)

* docs: update huggingface links + reword

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)

* ggml-cpu: disable ggml-nnpa compile flag by default

fixes #14877

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)

* docs: update s390x build docs to reflect nnpa disable

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)

---------

Signed-off-by: Aaron Teo <redacted>
2 months ago  metal: SSM_SCAN performance (llama/14743)
Gabe Goodhart [Fri, 25 Jul 2025 16:47:39 +0000 (10:47 -0600)]
metal: SSM_SCAN performance (llama/14743)

* feat: Add s_off as a parameter in the args struct

This may not be necessary, but it more closely mirrors the CUDA kernel

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state

This is a first attempt at optimizing the metal kernel. The changes here
are:

- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
  computation

When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.

Signed-off-by: Gabe Goodhart <redacted>
* fix: Update logic to correctly do the multi-layer parallel sum

Signed-off-by: Gabe Goodhart <redacted>
* fix: Correctly size the shared memory buffer and assert expected size relationships

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Compute block offsets once rather than once per token

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* feat: Use local variable for state recursion

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* feat: Use a secondary simd_sum instead of a for loop

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add assertion and comment about relationship between simd size and num simd groups

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallelize over d_state for mamba-1

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallel sum in SSM_CONV

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
* Revert "feat: Parallel sum in SSM_CONV"

After discussion with @compilade, the size of the parallelism here is
not worth the cost in complexity or overhead of the parallel for.

https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357

This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Simplify shared memory sizing

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: Georgi Gerganov <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 months ago  opencl: add fused `rms_norm_mul` (llama/14841)
lhez [Fri, 25 Jul 2025 15:12:13 +0000 (08:12 -0700)]
opencl: add fused `rms_norm_mul` (llama/14841)

* opencl: add fused `rms_norm` + `mul`

* opencl: improve workgroup size for `rms_norm_mul`

2 months ago  ggml : remove invalid portPos specifiers from dot files (llama/14838)
Oliver Simons [Fri, 25 Jul 2025 11:29:57 +0000 (13:29 +0200)]
ggml : remove invalid portPos specifiers from dot files (llama/14838)

Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):

> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".

I tested locally that it falls back to the default portPos specifier if an
invalid portPos is specified. As a consequence, we can remove the associated
code.

2 months ago  rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
Chris Rohlf [Fri, 25 Jul 2025 10:17:02 +0000 (06:17 -0400)]
rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)

2 months ago  sched : fix multiple evaluations of the same graph with pipeline parallelism (llama...
Diego Devesa [Fri, 25 Jul 2025 08:07:26 +0000 (01:07 -0700)]
sched : fix multiple evaluations of the same graph with pipeline parallelism (llama/14855)

ggml-ci

2 months ago  musa: upgrade musa sdk to rc4.2.0 (llama/14498)
R0CKSTAR [Thu, 24 Jul 2025 19:05:37 +0000 (03:05 +0800)]
musa: upgrade musa sdk to rc4.2.0 (llama/14498)

* musa: apply mublas API changes

Signed-off-by: Xiaodong Ye <redacted>
* musa: update musa version to 4.2.0

Signed-off-by: Xiaodong Ye <redacted>
* musa: restore MUSA graph settings in CMakeLists.txt

Signed-off-by: Xiaodong Ye <redacted>
* musa: disable mudnnMemcpyAsync by default

Signed-off-by: Xiaodong Ye <redacted>
* musa: switch back to non-mudnn images

Signed-off-by: Xiaodong Ye <redacted>
* minor changes

Signed-off-by: Xiaodong Ye <redacted>
* musa: restore rc in docker image tag

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months ago  contrib : recommend PRs to llama.cpp (#1312)
Georgi Gerganov [Fri, 25 Jul 2025 04:05:38 +0000 (07:05 +0300)]
contrib : recommend PRs to llama.cpp (#1312)

* contrib : recommend PRs to llama.cpp

* cont : wording

2 months ago  cmake : Indent ggml-config.cmake (#1310)
Kai Pastor [Thu, 24 Jul 2025 17:58:02 +0000 (19:58 +0200)]
cmake : Indent ggml-config.cmake (#1310)

2 months ago  sync : llama.cpp
Georgi Gerganov [Thu, 24 Jul 2025 17:28:43 +0000 (20:28 +0300)]
sync : llama.cpp

ggml-ci

2 months ago  sycl: fixed semantics of block offset calculation (llama/14814)
Alberto Cabrera Pérez [Thu, 24 Jul 2025 10:09:57 +0000 (11:09 +0100)]
sycl: fixed semantics of block offset calculation (llama/14814)

2 months ago  metal : fix fusion across different encoders (llama/14849)
Georgi Gerganov [Thu, 24 Jul 2025 07:24:05 +0000 (10:24 +0300)]
metal : fix fusion across different encoders (llama/14849)

* metal : fix fusion across different encoders

ggml-ci

* cont : add assertion

ggml-ci

2 months ago  sycl: fix undefined variable in work group size check (llama/14843)
Donghyeon Jeong [Thu, 24 Jul 2025 04:50:41 +0000 (13:50 +0900)]
sycl: fix undefined variable in work group size check (llama/14843)

2 months ago  CUDA: fix overflow in FA, tune performance (llama/14840)
Johannes Gäßler [Wed, 23 Jul 2025 19:43:25 +0000 (21:43 +0200)]
CUDA: fix overflow in FA, tune performance (llama/14840)

2 months ago  CUDA: fix compilation with GGML_CUDA_F16 (llama/14837)
Johannes Gäßler [Wed, 23 Jul 2025 16:22:30 +0000 (18:22 +0200)]
CUDA: fix compilation with GGML_CUDA_F16 (llama/14837)

2 months ago  CUDA: fix quantized KV cache + multiple sequences (llama/14822)
Johannes Gäßler [Wed, 23 Jul 2025 10:35:53 +0000 (12:35 +0200)]
CUDA: fix quantized KV cache + multiple sequences (llama/14822)

* CUDA: fix quantized KV cache + multiple sequences

* Update src/ggml-cuda/fattn-common.cuh

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 months ago  tests : add non-cont K,V FA tests
Georgi Gerganov [Fri, 18 Jul 2025 10:36:27 +0000 (13:36 +0300)]
tests : add non-cont K,V FA tests

ggml-ci

2 months ago  ggml: fix loongarch quantize_row_q8_1 error (llama/14827)
lixing-star [Wed, 23 Jul 2025 06:39:51 +0000 (14:39 +0800)]
ggml: fix loongarch quantize_row_q8_1 error (llama/14827)

2 months ago  CANN: weight format to NZ for Ascend310P3 (llama/14407)
chen fan [Wed, 23 Jul 2025 03:58:00 +0000 (11:58 +0800)]
CANN: weight format to NZ for Ascend310P3 (llama/14407)

* weight format to nz for 310p

* remove quant weight format to nz

* clean code

* fix

* make the conditions for converting weights to NZ format consistent

* clean code

2 months ago  CUDA: add fused rms norm (llama/14800)
Aman Gupta [Wed, 23 Jul 2025 01:25:42 +0000 (09:25 +0800)]
CUDA: add fused rms norm (llama/14800)