git.djapps.eu Git - pkg/ggml/sources/ggml/log
4 weeks ago CUDA: fuse adds, fuse add with rms norm (llama/15631)
Aman Gupta [Fri, 29 Aug 2025 03:35:58 +0000 (11:35 +0800)]
CUDA: fuse adds, fuse add with rms norm (llama/15631)

* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast

4 weeks ago CUDA: add conv2d (llama/15635)
mnehete32 [Thu, 28 Aug 2025 18:33:03 +0000 (00:03 +0530)]
CUDA: add conv2d (llama/15635)

* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const

4 weeks ago ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)
Aaron Teo [Thu, 28 Aug 2025 14:39:27 +0000 (22:39 +0800)]
ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)

Signed-off-by: Aaron Teo <redacted>
4 weeks ago ggml : fix SSM_SCAN for n_groups > 1 (llama/15625)
compilade [Thu, 28 Aug 2025 14:11:36 +0000 (10:11 -0400)]
ggml : fix SSM_SCAN for n_groups > 1 (llama/15625)

4 weeks ago kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)
Georgi Gerganov [Thu, 28 Aug 2025 09:27:02 +0000 (12:27 +0300)]
kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)

ggml-ci

4 weeks ago cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)
matiaslin [Thu, 28 Aug 2025 00:32:36 +0000 (17:32 -0700)]
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)

Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when CUDA version is greater
than 10.1.

4 weeks ago HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615)
uvos [Wed, 27 Aug 2025 11:58:54 +0000 (13:58 +0200)]
HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615)

4 weeks ago CANN: refactor mask handling and improve performance in FA (llama/15561)
Chenguang Li [Wed, 27 Aug 2025 09:21:41 +0000 (17:21 +0800)]
CANN: refactor mask handling and improve performance in FA (llama/15561)

* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <redacted>
* [CANN]: fix review

Signed-off-by: noemotiovon <redacted>
* [CANN]: Optimization FA BNSD to BSND

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
4 weeks ago ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)
xctan [Wed, 27 Aug 2025 08:44:22 +0000 (16:44 +0800)]
ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)

* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax

4 weeks ago OpenCL: add fused group_norm/norm, mul, add (llama/15314)
rmatif [Wed, 27 Aug 2025 06:36:05 +0000 (08:36 +0200)]
OpenCL: add fused group_norm/norm, mul, add (llama/15314)

* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace

4 weeks ago tests : fix test-opt with GGML_BACKEND_DL (llama/15599)
Diego Devesa [Tue, 26 Aug 2025 20:14:38 +0000 (13:14 -0700)]
tests : fix test-opt with GGML_BACKEND_DL (llama/15599)

4 weeks ago SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)
Akarshan Biswas [Tue, 26 Aug 2025 18:57:49 +0000 (00:27 +0530)]
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)

The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.
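As a rough illustration of that kind of supports_op shape check (the struct, function name, and WARP_SIZE value here are placeholders, not the actual SYCL backend code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define WARP_SIZE 32  /* sub-group size assumed by the RMS_NORM kernel (illustrative) */

/* Minimal stand-in for a ggml tensor: only the first dimension matters here. */
struct tensor { int64_t ne0; };

/* Return true only when the kernel's ncols % WARP_SIZE == 0 precondition is
 * guaranteed to hold, instead of unconditionally returning true. */
static bool device_supports_rms_norm(const struct tensor *t) {
    return t->ne0 % WARP_SIZE == 0;
}
```

Rejecting the op in supports_op makes the scheduler fall back to another backend instead of tripping the assertion at run time.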

4 weeks ago tests: add performance test for mul mat id (llama/15543)
Eve [Tue, 26 Aug 2025 15:42:49 +0000 (15:42 +0000)]
tests: add performance test for mul mat id (llama/15543)

4 weeks ago llamafile: PowerPC Sgemm Optimization (llama/15558)
shalinib-ibm [Tue, 26 Aug 2025 15:35:25 +0000 (21:05 +0530)]
llamafile: PowerPC Sgemm Optimization (llama/15558)

This patch improves GEMM for FP32 Data Type on PowerPC

Implements GEMM on large blocks with configurable block size mc, nc, kc
(default: 256, 256, 256).
Packing Function optimized to access blocks as per memory layout.
GEMM Optimized to work on larger blocks.
Isolated Packing from GEMM Operations for better MMA utilization.

Verified functionality and correctness using llama-cli and a standalone
test case (performs matmul and compares the final matrix C result with the base).

Minor code refactoring changes:
Replace macro with inline function
Code Indent made consistent with 4 spaces

Performance Testing:

Observed 50% ~ 70% improvement in Prompt Processing Speed measured using
llama-bench with Meta-Llama3-8B FP32 Model.  Similar gains observed with
Mistral-7b-Instruct-v0.3 Model.

model                   Size                Params     Backend       Threads   Test    Patch   Base
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp512   98.58   60.3
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp1024  95.88   57.36
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp2048  85.46   53.26
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp4096  68.66   45.78
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp6144  57.35   40.44

25 ~ 30% improvement in llama-batched-bench with Meta-Llama3-8B in
Prompt Processing Speed for large prompts (256, 512, 1024, 2048, 4096 tokens) with various batch
sizes (1, 2, 4, 8, 16).

Signed-off-by: Shalini Salomi Bodapati <redacted>
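The mc/nc/kc cache blocking described above can be sketched in plain C (a minimal, unoptimized illustration; the actual llamafile SGEMM additionally packs blocks to match the memory layout and uses MMA intrinsics):

```c
#include <assert.h>

#define MC 256
#define NC 256
#define KC 256
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* C[m x n] += A[m x k] * B[k x n], row-major. C must be zero-initialized by
 * the caller. Work is processed in MC x KC x NC blocks so each block of A
 * and B stays cache-resident while it is reused. */
static void gemm_blocked(int m, int n, int k,
                         const float *A, const float *B, float *C) {
    for (int i0 = 0; i0 < m; i0 += MC)
    for (int p0 = 0; p0 < k; p0 += KC)
    for (int j0 = 0; j0 < n; j0 += NC)
        for (int i = i0; i < MIN(i0 + MC, m); i++)
        for (int p = p0; p < MIN(p0 + KC, k); p++) {
            const float a = A[i * k + p];   /* reused across the j loop */
            for (int j = j0; j < MIN(j0 + NC, n); j++)
                C[i * n + j] += a * B[p * n + j];
        }
}
```

With the default 256/256/256 block sizes each float block of A or B is 256 KiB, sized to stay resident in a cache level while the inner loops stream over it.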
4 weeks ago CUDA: return -1 for nonexistent compiled arch (llama/15587)
Johannes Gäßler [Tue, 26 Aug 2025 14:01:20 +0000 (16:01 +0200)]
CUDA: return -1 for nonexistent compiled arch (llama/15587)

4 weeks ago metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)
Georgi Gerganov [Tue, 26 Aug 2025 11:22:14 +0000 (14:22 +0300)]
metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)

* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci

4 weeks ago metal : improve `MUL_MAT_ID` (llama/15541)
Georgi Gerganov [Tue, 26 Aug 2025 09:46:15 +0000 (12:46 +0300)]
metal : improve `MUL_MAT_ID` (llama/15541)

* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci

4 weeks ago metal : remove contiguous assertion for src0 in IM2COL (llama/15577)
Sigbjørn Skjæret [Tue, 26 Aug 2025 06:51:43 +0000 (08:51 +0200)]
metal : remove contiguous assertion for src0 in IM2COL (llama/15577)

* remove contiguous assertion for src0 in IM2COL

* add contiguous check in supports_op

4 weeks ago Add a warning for special devices (llama/15563)
Yoshi_likes_e4 [Tue, 26 Aug 2025 06:15:33 +0000 (13:15 +0700)]
Add a warning for special devices (llama/15563)

* Add warning

* Print the devices names

* Add newlines

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix vector names

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago vulkan: Remove splitting for mul_mat_id (llama/15568)
Jeff Bolz [Tue, 26 Aug 2025 04:42:44 +0000 (23:42 -0500)]
vulkan: Remove splitting for mul_mat_id (llama/15568)

row_ids only needs to hold the BN rows for the current tile.

4 weeks ago CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (llama/15451)
Qeeweew [Mon, 25 Aug 2025 21:21:22 +0000 (05:21 +0800)]
CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (llama/15451)

* CUDA: optimize get_int_from_table_16

* CUDA: use v_perm_b32 to replace byte_perm on AMD GPUs

* revise documentation

---------

Co-authored-by: xix <redacted>
Co-authored-by: Johannes Gäßler <redacted>
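CUDA's `__byte_perm(x, y, s)` builds a 32-bit result by picking four bytes out of the 8-byte pair {x, y} according to the nibbles of the selector `s`. A plain-C emulation (modeling only the plain 0-7 selectors, not the sign-replicating ones) shows the primitive that the table-lookup optimization above exploits:

```c
#include <assert.h>
#include <stdint.h>

/* Emulate CUDA's __byte_perm for selector nibbles 0..7: result byte i is
 * byte s.nibble[i] of the 64-bit value ((y << 32) | x). On AMD GPUs the
 * same job is done by v_perm_b32, as noted in the commit above. */
static uint32_t byte_perm(uint32_t x, uint32_t y, uint32_t s) {
    uint64_t v = ((uint64_t)y << 32) | x;
    uint32_t r = 0;
    for (int i = 0; i < 4; i++) {
        uint32_t sel = (s >> (4 * i)) & 0x7;            /* byte index 0..7 */
        r |= (uint32_t)((v >> (8 * sel)) & 0xff) << (8 * i);
    }
    return r;
}
```

Because one instruction gathers four table bytes at once, a 16-entry dequantization table kept in registers can be indexed for four 4-bit values per call instead of doing four separate loads.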
4 weeks ago opencl: fix support ops condition for `rms_norm` (llama/15560)
lhez [Mon, 25 Aug 2025 21:18:09 +0000 (14:18 -0700)]
opencl: fix support ops condition for `rms_norm` (llama/15560)

4 weeks ago vulkan: fix min subgroup 16 condition for mmid subgroup optimization (llama/15565)
Ruben Ortlam [Mon, 25 Aug 2025 15:56:59 +0000 (17:56 +0200)]
vulkan: fix min subgroup 16 condition for mmid subgroup optimization (llama/15565)

4 weeks ago tests: Generate unique input values for count_equal (llama/15487)
Jeff Bolz [Mon, 25 Aug 2025 15:47:16 +0000 (10:47 -0500)]
tests: Generate unique input values for count_equal (llama/15487)

This avoids backend-dependent behavior for argmax that leads to intermittent failures.
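One way to generate such tie-free inputs (a sketch, not the actual test-harness code) is to shuffle a sequence of distinct values, so every argmax has a unique winner regardless of which element a backend inspects first:

```c
#include <assert.h>
#include <stdlib.h>

/* Fill dst with n distinct values (a shuffled 0..n-1 sequence). Ties between
 * equal maxima are what make argmax backend-dependent, so removing ties
 * removes the intermittent failures. */
static void fill_unique(float *dst, int n, unsigned seed) {
    srand(seed);
    for (int i = 0; i < n; i++) dst[i] = (float)i;
    for (int i = n - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        int j = rand() % (i + 1);
        float t = dst[i]; dst[i] = dst[j]; dst[j] = t;
    }
}
```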

4 weeks ago metal: fix regression when no metal devices are present (llama/15531)
Ihar Hrachyshka [Mon, 25 Aug 2025 15:27:34 +0000 (11:27 -0400)]
metal: fix regression when no metal devices are present (llama/15531)

4 weeks ago CUDA: MoE helper in device code, better tile sizes (llama/15525)
Johannes Gäßler [Mon, 25 Aug 2025 15:23:40 +0000 (17:23 +0200)]
CUDA: MoE helper in device code, better tile sizes (llama/15525)

* CUDA: MoE helper in device code, better tile sizes

* reduce superfluous CUDA blocks

4 weeks ago metal : add FA kernels for HS=40 (llama/15559)
Georgi Gerganov [Mon, 25 Aug 2025 07:14:48 +0000 (10:14 +0300)]
metal : add FA kernels for HS=40 (llama/15559)

ggml-ci

4 weeks ago CANN: ROPE cache sin/cos repeat (llama/15501)
Chenguang Li [Mon, 25 Aug 2025 02:32:21 +0000 (10:32 +0800)]
CANN: ROPE cache sin/cos repeat (llama/15501)

Signed-off-by: noemotiovon <redacted>
4 weeks ago vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (llama/15524)
Ruben Ortlam [Sun, 24 Aug 2025 17:36:36 +0000 (19:36 +0200)]
vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (llama/15524)

* vulkan: use subgroup function for mul_mat_id shader even without coopmat

* vulkan: fix compile warnings

* vulkan: properly check for subgroup size control and require full subgroups for subgroup mul_mat_id

* vulkan: disable subgroup mul_mat_id on devices with subgroups < 16

4 weeks ago vulkan: Support FA with any multiple of 8 head sizes (llama/15537)
Jeff Bolz [Sun, 24 Aug 2025 09:24:25 +0000 (04:24 -0500)]
vulkan: Support FA with any multiple of 8 head sizes (llama/15537)

The scalar FA shader already handled multiples of 8. The coopmat1 FA
shader assumed 16x16x16 and the shared memory allocations need the HSK
dimensions padded to a multiple of 16. NVIDIA's coopmat2 implementation
requires multiples of 16 for N and K, and needs the matrix dimensions
padded and loads clamped.

Store the FA pipelines in a map, indexed by the pipeline state.

4 weeks ago vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (llama/15526)
Ruben Ortlam [Sun, 24 Aug 2025 08:48:53 +0000 (10:48 +0200)]
vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (llama/15526)

4 weeks ago vulkan: workaround MoltenVK compile failure in multi_add (llama/15506)
Jeff Bolz [Sun, 24 Aug 2025 08:48:21 +0000 (03:48 -0500)]
vulkan: workaround MoltenVK compile failure in multi_add (llama/15506)

* vulkan: workaround MoltenVK compile failure in multi_add

* Update src/ggml-vulkan/vulkan-shaders/multi_add.comp

Co-authored-by: 0cc4m <redacted>
4 weeks ago CUDA: fix half2 -> half conversion for HIP (llama/15529)
Johannes Gäßler [Sat, 23 Aug 2025 19:37:06 +0000 (21:37 +0200)]
CUDA: fix half2 -> half conversion for HIP (llama/15529)

4 weeks ago vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (llama/15281)
Jeff Bolz [Sat, 23 Aug 2025 18:16:17 +0000 (13:16 -0500)]
vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (llama/15281)

* vulkan: optimize rms_norm, and allow the work to spread across multiple SMs

There are really two parts to this change:
(1) Some optimizations similar to what we have in soft_max, to unroll with
different numbers of iterations.
(2) A fusion optimization where we detect add followed by rms_norm, and make
the add shader atomically accumulate the values^2 into memory. Then the
rms_norm shader can just load that sum. This allows the rms_norm to be
parallelized across multiple workgroups, it just becomes a simple per-element
multiply.

The fusion optimization is currently only applied when the rms_norm is on a
single vector. This previously always ran on a single SM. It could apply more
broadly, but when there are other dimensions the work can already spread across
SMs, and there would be some complexity to tracking multiple atomic sums.

* Change add+rms_norm optimization to write out an array of partial sums
rather than using atomic add, to make it deterministic. The rms_norm
shader fetches a subgroup's worth in parallel and uses subgroupAdd to
add them up.

* complete rebase against fused adds - multi_add shader can also compute partial sums

* fix validation errors

* disable add_rms_fusion for Intel due to possible driver bug

* resolve against #15489, sync after clearing partial sums

4 weeks ago vulkan: Rewrite synchronization to allow some overlap between nodes (llama/15489)
Jeff Bolz [Sat, 23 Aug 2025 07:33:36 +0000 (02:33 -0500)]
vulkan: Rewrite synchronization to allow some overlap between nodes (llama/15489)

Track a list of nodes that need synchronization, and only sync if the new node
depends on them (or overwrites them). This allows some overlap which can
improve performance, and centralizes a big chunk of the synchronization logic.

The remaining synchronization logic involves writes to memory other than the
nodes, e.g. for dequantization or split_k. Each of these allocations has a bool
indicating whether they were in use and need to be synced. This should be
checked before they are written to, and set to true after they are done being
consumed.

4 weeks ago vulkan : support ggml_mean (llama/15393)
Acly [Sat, 23 Aug 2025 06:35:21 +0000 (08:35 +0200)]
vulkan : support ggml_mean (llama/15393)

* vulkan : support ggml_mean

* vulkan : support sum, sum_rows and mean with non-contiguous tensors

* vulkan : fix subbuffer size not accounting for misalign offset

* tests : add backend-op tests for non-contiguous sum_rows

* cuda : require contiguous src for SUM_ROWS, MEAN support
* sycl : require contiguous src for SUM, SUM_ROWS, ARGSORT support

* require ggml_contiguous_rows in supports_op and expect nb00=1 in the shader

4 weeks ago vulkan: optimize mul_mat_id loading row ids into shared memory (llama/15427)
Jeff Bolz [Sat, 23 Aug 2025 06:31:54 +0000 (01:31 -0500)]
vulkan: optimize mul_mat_id loading row ids into shared memory (llama/15427)

- Spread the work across the whole workgroup. Using more threads seems to
far outweigh the synchronization overhead.
- Specialize the code for when the division is by a power of two.
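The power-of-two specialization boils down to replacing integer division and modulo with a shift and a mask, which is far cheaper in a shader than a general divide. A sketch (using the GCC/Clang `__builtin_ctz` builtin to find the shift amount):

```c
#include <assert.h>
#include <stdint.h>

static int is_pow2(uint32_t d) { return d != 0 && (d & (d - 1)) == 0; }

/* Valid only when d is a power of two: i / d becomes a shift ... */
static uint32_t div_pow2(uint32_t i, uint32_t d) { return i >> __builtin_ctz(d); }

/* ... and i % d becomes a mask. */
static uint32_t mod_pow2(uint32_t i, uint32_t d) { return i & (d - 1); }
```

In practice the shader would be compiled in two variants, selecting the shift/mask path when the divisor is known at specialization time to be a power of two.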

4 weeks ago test-opt: allow slight imprecision (llama/15503)
Johannes Gäßler [Fri, 22 Aug 2025 21:47:01 +0000 (23:47 +0200)]
test-opt: allow slight imprecision (llama/15503)

4 weeks ago ggml WebGPU: add support for quantization types (llama/15440)
Reese Levine [Fri, 22 Aug 2025 18:28:03 +0000 (11:28 -0700)]
ggml WebGPU: add support for quantization types (llama/15440)

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Work on templating for different types in shaders

* Work on shader type generation

* Working q4_0 mul_mat and some templating for different types

* Add q4_0_f16 matmul and fix device init

* Add matmul support for basic quantization types

* Add q2_k and q3_k quantization

* Add rest of k-quants

* Get first i-quant working

* Closer to supporting all i-quants

* Support rest of i-quants

* Cleanup code

* Fix python formatting

* debug

* Bugfix for memset

* Add padding to end of buffers on creation

* Simplify bit-shifting

* Update usage of StringView

4 weeks ago ggml: add `conv3d` op (llama/15182)
rmatif [Fri, 22 Aug 2025 13:33:15 +0000 (15:33 +0200)]
ggml: add `conv3d` op (llama/15182)

* add conv3d

* bump GGML_OP_COUNT

4 weeks ago cuda : add Pad Reflect 1D support (llama/14659)
Yavor Ivanov [Fri, 22 Aug 2025 11:06:29 +0000 (14:06 +0300)]
cuda : add Pad Reflect 1D support (llama/14659)

* Add Pad Reflect 1D CUDA support

* Update src/ggml-cuda/pad_reflect_1d.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)
Aaron Teo [Fri, 22 Aug 2025 08:11:04 +0000 (16:11 +0800)]
ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)

* ggml-cpu: initial q5_0 impl for s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: updated q5_0 code for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: use optimised hsum for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: introduce q5_1 simd + refactor q5_0

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect return type vec_hsum

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_0 incomplete refactor + table_b2b_0 activation

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: refactor q5_1

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_1 update loop unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update q5_0 unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update build-s390x docs

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update unused variables q5_0

Signed-off-by: Aaron Teo <redacted>
* docs: update the last update date

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
4 weeks ago CANN: Optimize RMS_NORM using cache (llama/15419)
Chenguang Li [Fri, 22 Aug 2025 06:12:07 +0000 (14:12 +0800)]
CANN: Optimize RMS_NORM using cache (llama/15419)

* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* fix review comment

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
4 weeks ago sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488)
Diego Devesa [Thu, 21 Aug 2025 21:09:32 +0000 (14:09 -0700)]
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488)

4 weeks ago vulkan : support conv_2d_dw with f16 weights (llama/15392)
Acly [Thu, 21 Aug 2025 15:01:51 +0000 (17:01 +0200)]
vulkan : support conv_2d_dw with f16 weights (llama/15392)

4 weeks ago vulkan: add exp operation (llama/15456)
Dong Won Kim [Thu, 21 Aug 2025 15:00:16 +0000 (00:00 +0900)]
vulkan: add exp operation (llama/15456)

Co-authored-by: aeseulgi <redacted>
4 weeks ago vulkan: Reuse conversion results in prealloc_y (llama/15410)
Jeff Bolz [Thu, 21 Aug 2025 14:55:00 +0000 (09:55 -0500)]
vulkan: Reuse conversion results in prealloc_y (llama/15410)

* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.

* don't use shared pointer for prealloc_y_last_pipeline_used

4 weeks ago ggml : fix condition of im2col on Metal backend (llama/15460)
Xuan-Son Nguyen [Thu, 21 Aug 2025 05:32:26 +0000 (07:32 +0200)]
ggml : fix condition of im2col on Metal backend (llama/15460)

4 weeks ago musa: add GGML_UNUSED_VARS (llama/15446)
R0CKSTAR [Thu, 21 Aug 2025 03:06:05 +0000 (11:06 +0800)]
musa: add GGML_UNUSED_VARS (llama/15446)

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago sched : copy only the used experts when offloading prompt processing (llama/15346)
Diego Devesa [Wed, 20 Aug 2025 23:35:28 +0000 (16:35 -0700)]
sched : copy only the used experts when offloading prompt processing (llama/15346)

4 weeks ago CUDA: refactor FA support/selection code (llama/15454)
Johannes Gäßler [Wed, 20 Aug 2025 21:14:14 +0000 (23:14 +0200)]
CUDA: refactor FA support/selection code (llama/15454)

4 weeks ago CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433)
Johannes Gäßler [Wed, 20 Aug 2025 14:58:49 +0000 (16:58 +0200)]
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433)

4 weeks ago vulkan: shorten pipeline name strings (llama/15431)
Jeff Bolz [Wed, 20 Aug 2025 14:33:14 +0000 (09:33 -0500)]
vulkan: shorten pipeline name strings (llama/15431)

These detailed strings were causing increased build time on gcc.

4 weeks ago musa: fix build warnings (llama/15258)
R0CKSTAR [Wed, 20 Aug 2025 02:17:37 +0000 (10:17 +0800)]
musa: fix build warnings (llama/15258)

* musa: fix build warnings

Signed-off-by: Xiaodong Ye <redacted>
* fix warning: comparison of integers of different signs: 'const int' and 'unsigned int' [-Wsign-compare]

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago opencl: mark `argsort` unsupported if cols exceed workgroup limit (llama/15375)
lhez [Tue, 19 Aug 2025 18:25:51 +0000 (02:25 +0800)]
opencl: mark `argsort` unsupported if cols exceed workgroup limit (llama/15375)

4 weeks ago CANN: optimize rope operator (llama/15335)
SHUAI YANG [Tue, 19 Aug 2025 13:28:22 +0000 (21:28 +0800)]
CANN: optimize rope operator (llama/15335)

* optimize rope ops

* amendment

* delete trailing whitespace

* change the variable name

4 weeks ago musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (llama/15413)
R0CKSTAR [Tue, 19 Aug 2025 10:33:47 +0000 (18:33 +0800)]
musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (llama/15413)

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (llama/15385)
Marvin Gießing [Tue, 19 Aug 2025 08:54:31 +0000 (10:54 +0200)]
ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (llama/15385)

* Added VSX intrinsics for Power9+ systems

Signed-off-by: mgiessing <redacted>
* Manual unrolling for minor perf improvement

Signed-off-by: mgiessing <redacted>
* Update src/ggml-cpu/arch/powerpc/quants.c

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: mgiessing <redacted>
Co-authored-by: Georgi Gerganov <redacted>
5 weeks ago ci : add github release job (#1334)
Daniel Bevenius [Thu, 28 Aug 2025 07:10:48 +0000 (09:10 +0200)]
ci : add github release job (#1334)

* ci : add github release job

This commit adds a GitHub Actions workflow to automate the release
process. Currently this will only create an archive of the sources for
ggml when a tag is pushed.

The motivation for this is that when we start releasing versions of ggml
using semantic versioning it can be nice to have the sources needed for
ggml to be deployed as a github release. This enables CMake users that
use `FetchContent` to efficiently specify the zip file instead of
cloning.

Example usage with `FetchContent`:
```cmake
cmake_minimum_required(VERSION 3.14)
project(ggml_example)

set(CMAKE_CXX_STANDARD 17)

include(FetchContent)
FetchContent_Declare(ggml
    URL https://github.com/danbev/ggml/archive/refs/tags/v1.1.5-test.zip
    DOWNLOAD_EXTRACT_TIMESTAMP TRUE
)

FetchContent_MakeAvailable(ggml)

add_executable(ggml_example main.cpp)
target_link_libraries(ggml_example ggml)
```
And with the following `main.cpp` file:
```c++
 #include <iostream>
 #include <ggml.h>

 int main() {
     std::cout << "GGML Version: " << ggml_version() << std::endl;
     return 0;
 }
```
This could then be built using:
```console
$ cmake -S . -B build
$ cmake --build build
$ ./build/ggml_example
GGML Version: 0.0.2472
```

6 weeks ago cuda : remove obsolete sources (#1332) upstream/0.0.2471
Georgi Gerganov [Mon, 18 Aug 2025 19:01:00 +0000 (22:01 +0300)]
cuda : remove obsolete sources (#1332)

ggml-ci

6 weeks ago sync : whisper.cpp
Georgi Gerganov [Mon, 18 Aug 2025 17:33:27 +0000 (20:33 +0300)]
sync : whisper.cpp

6 weeks ago scripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 17:32:37 +0000 (20:32 +0300)]
scripts : update sync scripts

6 weeks ago ggml: Add initial WebGPU backend (llama/14521)
Reese Levine [Mon, 18 Aug 2025 16:22:37 +0000 (19:22 +0300)]
ggml: Add initial WebGPU backend (llama/14521)

ggml-ci

6 weeks ago ggml : initial zDNN backend (llama/14975)
Aaron Teo [Mon, 18 Aug 2025 16:21:15 +0000 (19:21 +0300)]
ggml : initial zDNN backend (llama/14975)

6 weeks ago scripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 16:20:10 +0000 (19:20 +0300)]
scripts : update sync scripts

6 weeks ago common : handle mxfp4 enum
Georgi Gerganov [Mon, 18 Aug 2025 15:57:45 +0000 (18:57 +0300)]
common : handle mxfp4 enum

ggml-ci

6 weeks ago sync : llama.cpp
Georgi Gerganov [Mon, 18 Aug 2025 15:56:46 +0000 (18:56 +0300)]
sync : llama.cpp

ggml-ci

6 weeks ago ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)
compilade [Mon, 18 Aug 2025 07:23:56 +0000 (03:23 -0400)]
ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)

* ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors

* ggml-quants : avoid division by zero in make_q3_quants

6 weeks ago vulkan: disable spirv-opt for bfloat16 shaders (llama/15352)
Jeff Bolz [Mon, 18 Aug 2025 05:56:29 +0000 (00:56 -0500)]
vulkan: disable spirv-opt for bfloat16 shaders (llama/15352)

6 weeks ago vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)
Jeff Bolz [Sun, 17 Aug 2025 16:08:57 +0000 (11:08 -0500)]
vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)

* vulkan: Use larger workgroups for mul_mat_vec when M is small

Also use subgroup instructions for (part of) the reduction when supported.
Without this, the more expensive reductions would eat into the benefits of
the larger workgroups.

* update heuristic for amd/intel

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>
6 weeks ago vulkan: support sqrt (llama/15370)
Dong Won Kim [Sun, 17 Aug 2025 14:03:09 +0000 (23:03 +0900)]
vulkan: support sqrt (llama/15370)

6 weeks ago vulkan: Optimize argsort (llama/15354)
Jeff Bolz [Sun, 17 Aug 2025 08:41:45 +0000 (03:41 -0500)]
vulkan: Optimize argsort (llama/15354)

- Launch an appropriate number of invocations (next larger power of two).
32 invocations is common and the barrier is much cheaper there.
- Specialize for "needs bounds checking" vs not.
- Make the code less branchy and [[unroll]] the loops. In the final code,
I see no branches inside the main loop (only predicated stores) when
needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs descending option when
doing the final stores to memory.
- Copy the values into shared memory, makes them slightly cheaper to access.
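Computing the "next larger power of two" for the invocation count can be done with the classic bit-smearing trick (a sketch; valid for n >= 1):

```c
#include <assert.h>
#include <stdint.h>

/* Round n up to the next power of two (n >= 1). Smearing the high bit
 * rightwards turns n-1 into a mask of all ones below it, so adding 1
 * yields the next power of two (or n itself if it already is one). */
static uint32_t next_pow2(uint32_t n) {
    n -= 1;
    n |= n >> 1;  n |= n >> 2;  n |= n >> 4;
    n |= n >> 8;  n |= n >> 16;
    return n + 1;
}
```

Power-of-two sizes matter here because bitonic-style sorts operate on power-of-two element counts; anything past the real column count is handled by the "needs bounds checking" specialization.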

6 weeks ago vulkan: fuse adds (llama/15252)
Jeff Bolz [Sat, 16 Aug 2025 16:48:22 +0000 (11:48 -0500)]
vulkan: fuse adds (llama/15252)

* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch. But this could be changed.

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug

6 weeks ago vulkan: Support mul_mat_id with f32 accumulators (llama/15337)
Jeff Bolz [Sat, 16 Aug 2025 09:18:31 +0000 (04:18 -0500)]
vulkan: Support mul_mat_id with f32 accumulators (llama/15337)

* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).

6 weeks ago vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334)
Jeff Bolz [Sat, 16 Aug 2025 08:58:38 +0000 (03:58 -0500)]
vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334)

6 weeks ago OpenCL: add initial FA support (llama/14987)
rmatif [Sat, 16 Aug 2025 08:05:55 +0000 (10:05 +0200)]
OpenCL: add initial FA support (llama/14987)

* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops

6 weeks ago opencl: add initial mxfp4 support via mv (llama/15270)
lhez [Fri, 15 Aug 2025 16:52:14 +0000 (00:52 +0800)]
opencl: add initial mxfp4 support via mv (llama/15270)

* opencl: add reference `mul_mv_mxfp4_f32`

* opencl: add reference `mul_mv_id` for mxfp4

* Q4_0 transpose fix for Adreno

---------

Co-authored-by: shawngu-quic <redacted>
6 weeks ago vulkan : fix out-of-bounds access in argmax kernel (llama/15342)
Georgi Gerganov [Fri, 15 Aug 2025 14:16:36 +0000 (17:16 +0300)]
vulkan : fix out-of-bounds access in argmax kernel (llama/15342)

ggml-ci

6 weeks ago vulkan : fix compile warnings on macos (llama/15340)
Georgi Gerganov [Fri, 15 Aug 2025 13:28:28 +0000 (16:28 +0300)]
vulkan : fix compile warnings on macos (llama/15340)

ggml-ci

6 weeks ago ggml: initial IBM zDNN backend (llama/14975)
Aaron Teo [Fri, 15 Aug 2025 13:11:22 +0000 (21:11 +0800)]
ggml: initial IBM zDNN backend (llama/14975)

* ggml-zdnn: initial backend impl

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: temp change z17 to arch15

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix build bugs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tensor->extra logging check

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add layout name mapping, ztensor information

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: separate logging into its own line

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add shape comparison

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add ggml_tensor shape log

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix incorrect shape logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add output buffer check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: run compute and store into tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more loggers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update set_tensor logging to check only for matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: last working matmul version

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add comments to prevent accidentally deleting lines

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: support op out_prod

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update op out_prod to use tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite the backend implementation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix new impl

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler warnings and bugfixes

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: test ztensor finding in init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: implement at least 1 op to test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: assign tensor->extra to buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add check for view tensors to prevent init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rework init_tensor to create new buffers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to std vector instead of array

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch buffers back and set to arbitrary number

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update supports_op matmul matrix

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler error missing type

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing data transform call

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tighten memory usage, change string allocation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias ztensor and data free

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias data transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more debug info for extra buffer transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logger to check if mat mul ops go through set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: activate bias transform in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move weights transform into mulmat

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more safeguards in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sequencing of transforms

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix transform ztensor vs origtensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: figure out why sigtrap is happening

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move everything back to local declaration

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move bias data to local also

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring back working matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite into mre

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import in header

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix invalid ztensor buffer release

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logging to debug free buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: remove free_buffer debug info

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add parmblkformat detections

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add nnpa installed detection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add zdnn_init call for static libs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing invalid buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to using deque to fix pointer deref problem

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add weights logging to check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to use unique ptr

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add tensor to pre_tfm_desc logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add inputs logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable op_none initialisation for testing

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing return from init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: load ztensors in cgraph exec

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: work on moving output ztensor as well

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable logging and breakpoints for full test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at manually changing the layout

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at using default nwhc format instead

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable global load ztensor for now

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix erroneous output load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add guards to prevent loading ztensor if transformed

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring load ztensor back to init routine

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix ztensor deallocation abort

stabilise ggml <-> zdnn api

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up matmul selection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up project structure

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update documentation, prepare for upstream

Signed-off-by: Aaron Teo <redacted>
* chore: add codeowners

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable batched matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing tensor views during matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: deny all view tensors directly

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix pr comments

Signed-off-by: Aaron Teo <redacted>
* docs: update ops docs for zdnn

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: redo test-backend-ops for ops.md

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix typo in build-s390x.md

Signed-off-by: Aaron Teo <redacted>
* codeowners: remove taronaeo for now

Signed-off-by: Aaron Teo <redacted>
* Revert "codeowners: remove taronaeo for now"

This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.

* ggml-zdnn: remove unused ggml_zdnn macro

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
6 weeks ago
test-opt: fix backend support check (llama/15317)
Johannes Gäßler [Fri, 15 Aug 2025 09:23:17 +0000 (11:23 +0200)]
test-opt: fix backend support check (llama/15317)

* test-opt: fix backend support check

* Update tests/test-opt.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago
CUDA: fix negative KV_max values in FA (llama/15321)
Johannes Gäßler [Thu, 14 Aug 2025 21:21:24 +0000 (23:21 +0200)]
CUDA: fix negative KV_max values in FA (llama/15321)

6 weeks ago
HIP: Cleanup hipification header (llama/15285)
uvos [Thu, 14 Aug 2025 14:23:56 +0000 (16:23 +0200)]
HIP: Cleanup hipification header (llama/15285)

Add explicit conversion operator to support older versions of ROCm
Switch over to hip_bf16 from the legacy hip_bfloat16
Simplify the RDNA3 define
Lower the switch-over to the new hipblas API to ROCm 6.5, as this version is used for ROCm 7.0 previews

---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago
vulkan: perf_logger improvements (llama/15246)
Jeff Bolz [Thu, 14 Aug 2025 13:38:10 +0000 (08:38 -0500)]
vulkan: perf_logger improvements (llama/15246)

* vulkan: perf_logger improvements

- Account for batch dimension in flops calculation.
- Fix how "_VEC" is detected for mat_mul_id.
- Fix "n" dimension for mat_mul_id (in case of broadcasting).
- Include a->type in name.

* use <=mul_mat_vec_max_cols rather than ==1

7 weeks ago
ggml: fix ggml_conv_1d_dw bug (#1323) upstream/0.0.2446
Jason Ni [Thu, 14 Aug 2025 11:17:51 +0000 (19:17 +0800)]
ggml: fix ggml_conv_1d_dw bug (#1323)

* ggml: fix ggml_conv_1d_dw bug

* Fixed conv1d_dw weight tensor dimension.

7 weeks ago
mnist : adapt to opt changes
Georgi Gerganov [Thu, 14 Aug 2025 10:41:23 +0000 (13:41 +0300)]
mnist : adapt to opt changes

ggml-ci

7 weeks ago
tests : remove unused includes (#0)
Georgi Gerganov [Thu, 14 Aug 2025 10:41:03 +0000 (13:41 +0300)]
tests : remove unused includes (#0)

7 weeks ago
sync : llama.cpp
Georgi Gerganov [Thu, 14 Aug 2025 10:22:55 +0000 (13:22 +0300)]
sync : llama.cpp

ggml-ci

7 weeks ago
cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)
Sigbjørn Skjæret [Thu, 14 Aug 2025 10:22:07 +0000 (12:22 +0200)]
cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)

* fix USE_CUDA_GRAPH=OFF

ggml-ci

* check capture status

* completely disable capturing check instead

7 weeks ago
finetune: SGD optimizer, more CLI args (llama/13873)
Jonathan Graehl [Thu, 14 Aug 2025 10:03:57 +0000 (03:03 -0700)]
finetune: SGD optimizer, more CLI args (llama/13873)

* examples/finetune -opt SGD (stochastic gradient descent) memory opt

add unit tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating
m, v tensors.

support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)

llama 3.2-1b-F32 result: observed 11 GB GPU RAM (41 sec/epoch)
when using SGD instead of 19 GB (55 sec/epoch) using adamw.
(wikipedia 100-line finetune)

(
using the same GPU memory, adamw can only do before OOM 512
batch/context, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD is superior, though it converges slower, with a max before OOM of 1728
batch/context (especially note the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

The -lr-half (halflife) option is useful for SGD to avoid oscillation or
very slow underdamped learning (it makes setting -lr more forgiving).
The terminal -lr is for now set by -lr-halvings, i.e. if you want at most
1/8 the initial -lr, set -lr-halvings 3.
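As a rough sketch of the halflife schedule described above: the learning rate halves every `halflife` epochs, and `-lr-halvings` caps the total decay so the terminal lr is lr0 * 0.5^halvings. Names and the clamping behaviour here are illustrative assumptions, not the actual finetune.cpp symbols.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the exponential-decay schedule: lr halves every
// `halflife` epochs, clamped so it never drops below lr0 * 0.5^halvings.
static double lr_at_epoch(double lr0, double halflife, int halvings, double epoch) {
    double decayed  = lr0 * std::pow(0.5, epoch / halflife);
    double terminal = lr0 * std::pow(0.5, (double) halvings);  // e.g. 1/8 lr0 for 3 halvings
    return decayed > terminal ? decayed : terminal;
}
```

With -lr-halvings 3, the rate decays smoothly but bottoms out at 1/8 of the initial -lr, matching the example in the text.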

note: objective loss is not directly comparable between adamw and sgd;
check perplexity or accuracy, or consider relative improvements,
to judge convergence

new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)

cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)
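A minimal sketch of the SGD update with weight decay that the cached (1 - wd*alpha) factor refers to, assuming the factor is applied multiplicatively to the parameter; this is illustrative, not the ggml kernel, and unlike AdamW it needs no m/v moment tensors.

```cpp
#include <cassert>
#include <cmath>

// Sketch (assumed form, not the actual ggml code): SGD with decoupled
// weight decay folds (1 - alpha*wd) into a single scale on the parameter,
// which is the factor the commit message says is cached.
static void sgd_step(float * w, const float * g, int n, float alpha, float wd) {
    const float keep = 1.0f - alpha * wd;   // cached decay factor
    for (int i = 0; i < n; ++i) {
        w[i] = keep * w[i] - alpha * g[i];  // decay, then gradient step
    }
}
```

With wd == 0 this reduces to plain SGD, so -wd 1e-9 as in the new CLI flag is a near-no-op default.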

since opt. memory is pre-allocated, the ggml_opt_get_optimizer_params
would probably be able to change between SGD and AdamW with each epoch
but would need to use adamw for the first (unconfirmed - no cmdline arg
to set such a policy yet)

test-opt checks adamw as before and now sgd (except for a few disabled
tests for sgd only; probably just needs logging values and adding
alternate reference values);  tolerance on the 'regression'
test is broader for sgd (so we don't need many more epochs)

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op param store weight-decay and not 1-alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <redacted>
Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago
HIP: bump requirement to rocm 6.1 (llama/15296)
uvos [Wed, 13 Aug 2025 18:44:30 +0000 (20:44 +0200)]
HIP: bump requirement to rocm 6.1 (llama/15296)

7 weeks ago
sync : llama.cpp
Georgi Gerganov [Wed, 13 Aug 2025 16:05:27 +0000 (19:05 +0300)]
sync : llama.cpp

ggml-ci

7 weeks ago
ggml : update `ggml_rope_multi` (llama/12665)
Judd [Wed, 13 Aug 2025 10:45:15 +0000 (18:45 +0800)]
ggml : update `ggml_rope_multi` (llama/12665)

* update `rope_multi`:

1. add `ggml_rope_multi_inplace`;
2. use `GGML_MROPE_SECTIONS` instead of 4.

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago
ggml : repack block_iq4_nlx8 (llama/14904)
Georgi Gerganov [Wed, 13 Aug 2025 08:09:39 +0000 (11:09 +0300)]
ggml : repack block_iq4_nlx8 (llama/14904)

ggml-ci

7 weeks ago
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel...
Oliver Simons [Wed, 13 Aug 2025 08:04:46 +0000 (10:04 +0200)]
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (llama/15132)

* Factor out `reduce_rows_f32` from common.cuh

This increases iteration cycle speed by not having to recompile
every kernel all the time

* Hide memory-latency by loop unrolling in reduce_rows_f32

* Further optimizations to `reduce_rows_f32`

1. Increase threadblock size to better hide latency of memory requests.
   As a consequence of bigger threadblocks, do 2-step summation, using
   shared memory to communicate results between invocations
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect the bigger threadblock
4. Improve default block_dims, increase support for more block_dims
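The 2-step summation in point 1 can be sketched on the host as a serial analogue; this is illustration only (the real kernel uses warp shuffles and shared memory rather than these loops, and the names here are assumptions).

```cpp
#include <cassert>
#include <vector>

// Serial analogue of the 2-step row reduction: each "warp" of the
// threadblock produces a partial sum (step 1), then the per-warp
// partials are combined (step 2, the shared-memory stage on the GPU).
static float reduce_row_two_step(const float * x, int ncols, int block_dim) {
    const int warp = 32;
    std::vector<float> partials((block_dim + warp - 1) / warp, 0.0f);
    // step 1: strided per-thread accumulation, grouped into warps
    for (int t = 0; t < block_dim; ++t) {
        float sum = 0.0f;
        for (int i = t; i < ncols; i += block_dim) sum += x[i];
        partials[t / warp] += sum;  // stands in for a warp shuffle reduce
    }
    // step 2: combine the per-warp partials
    float total = 0.0f;
    for (float p : partials) total += p;
    return total;
}
```

The bigger the threadblock, the more partials land in step 2, which is why the larger block sizes need the extra shared-memory pass.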

* Add perf tests for `reduce_rows_f32` kernel

* Add heuristic to toggle 128/512 threads based on sm count

Break even point was the minimum of the following multiples.

| GPU Model                    | Nrow SM Count Multiple |
| ---------------------------- | ---------------------- |
| RTX 4000 SFF ADA             | 2.0x                   |
| RTX 6000 ADA                 | 2.5x                   |
| RTX PRO 6000 Blackwell Max-Q | 3.04x                  |
| RTX PRO 4500 Blackwell       | 3.15x                  |
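A possible shape for that break-even heuristic, as a sketch only: the table suggests a threshold of roughly 2-3x the SM count, and the direction chosen here (wide 512-thread blocks when there are few rows, so each of the few blocks can still hide latency, and 128 otherwise) is an assumption, not the confirmed kernel logic.

```cpp
#include <cassert>

// Hypothetical break-even heuristic: with one threadblock per row, few
// rows leave SMs idle, so use the wider 512-thread block; once nrows
// exceeds ~multiple x SM count, the smaller 128-thread block wins.
static int pick_block_dim(long nrows, int sm_count, double multiple = 3.0) {
    return (double) nrows < multiple * sm_count ? 512 : 128;
}
```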

* Ensure perf gains also for small ncols and large nrows

As an alternative, one could also have made the number of unrollings
template-able, but that would require compiling the kernel multiple
times, increasing binary size unnecessarily

* Modify perf and unit-tests

* Apply auto-formatting by clang

* Fix CI build failure

See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with VS generator worked though.

* Remove sm_count property from `ggml_backend_cuda_context`

Requested by @JohannesGaessler, and should fix remaining CI issues as a
side-effect

* Add CUB-based implementation for GGML_OP_MEAN

Currently this branch is only executed for nrows==1

* Add heuristics to execute CUB branch only when it brings perf

Heuristics were determined on the following HW:

* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell

* Add unit-test for CUB-based mean

Tests should run with CUDA Graphs enabled per default on NVGPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

Suggested by @JohannesGaessler

* Unindent Preprocessor directives

See
https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506

7 weeks ago
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS...
Tak-RS [Wed, 13 Aug 2025 05:54:30 +0000 (14:54 +0900)]
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188)

* ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others). Fixes #15055

* ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv

* rpc: drop n==0 special case in send_data(); retry in loop per review

* rpc: remove trailing whitespace in send_data()
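The chunking scheme above might look like the following sketch: a hypothetical helper, not the actual ggml-rpc send_data(); `xfer` stands in for a send/recv-style call, and the 1 MiB cap is an illustrative value, though MAX_CHUNK_SIZE is the name mentioned in the commit.

```cpp
#include <cassert>
#include <cstddef>
#include <algorithm>
#include <functional>

// Cap each transfer call at MAX_CHUNK_SIZE and loop until the whole
// buffer is sent, retrying in the loop on short writes; n == 0 simply
// falls through with nothing to do (the dropped special case).
static bool send_all(const char * buf, size_t n,
                     const std::function<long(const char *, size_t)> & xfer) {
    const size_t MAX_CHUNK_SIZE = 1024 * 1024;  // illustrative 1 MiB cap
    size_t off = 0;
    while (off < n) {
        size_t chunk = std::min(n - off, MAX_CHUNK_SIZE);
        long sent = xfer(buf + off, chunk);
        if (sent < 0) return false;  // hard error
        off += (size_t) sent;        // short write: next iteration retries the rest
    }
    return true;
}
```

Keeping each call under the cap is what avoids the EINVAL seen on macOS for very large single send()/recv() calls.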

---------

Co-authored-by: Shinnosuke Takagi <redacted>
7 weeks ago
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama...
uvos [Tue, 12 Aug 2025 20:15:12 +0000 (22:15 +0200)]
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama/15273)

7 weeks ago
sycl: Fix and disable more configurations of mul_mat (llama/15151)
Romain Biessy [Tue, 12 Aug 2025 11:58:22 +0000 (13:58 +0200)]
sycl: Fix and disable more configurations of mul_mat (llama/15151)

* sycl: Fix and disable more configurations of mul_mat

* Disable more configurations

7 weeks ago
opencl: allow mixed f16/f32 `add` (llama/15140)
rmatif [Tue, 12 Aug 2025 09:42:41 +0000 (11:42 +0200)]
opencl: allow mixed f16/f32 `add` (llama/15140)

7 weeks ago
CUDA cmake: add `-lineinfo` for easier debug (llama/15260)
Aman Gupta [Tue, 12 Aug 2025 09:21:45 +0000 (17:21 +0800)]
CUDA cmake: add `-lineinfo` for easier debug (llama/15260)