git.djapps.eu Git - pkg/ggml/sources/ggml/log
4 weeks ago Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (llama/14903)
Ruben Ortlam [Mon, 1 Sep 2025 14:19:07 +0000 (16:19 +0200)]
Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (llama/14903)

* vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants

* vulkan: use subgroup operations for quantize_q8_1 shader

* vulkan: add q8_1_x4 type with 128-bit alignment, use in mul_mat_vecq shader

* vulkan: use q8_1_x4 blocks in mul_mmq shader

* vulkan: do 8 calculations per invocation instead of 32 in mul_mat_vecq, similar to mul_mat_vec

* vulkan: tune mul_mat_vecq performance for Intel

* vulkan: fix quantizing issue when tensor is not divisible by 128

* vulkan: adapt integer dot mmv to mmv small m optimization (llama/15355)

* vulkan: allow all subgroup modes for mmv and mmvq

* vulkan: use prealloc intermediate reuse for mmvq path

* vulkan: tune mmvq for Intel, AMD GCN and Nvidia RTX 3090

* vulkan: adapt mmv quantize_y path to conditional sync logic

* vulkan: disable q8_0 mmvq on Nvidia

* vulkan: enable q8_0 on Nvidia pre-turing

* fix prealloc sync condition

* fix llvmpipe subgroup 8 issue

4 weeks ago ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (llama/15695)
Daniel Bevenius [Mon, 1 Sep 2025 12:28:49 +0000 (14:28 +0200)]
ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (llama/15695)

* ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops

This commit adds support for the TRANSPOSE and RESHAPE operations in the
ggml webgpu backend.

Co-authored-by: Diego Devesa <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago CUDA: fix build error from ambiguous __half conversions in conv2d (llama/15690)
Akarshan Biswas [Mon, 1 Sep 2025 01:25:06 +0000 (06:55 +0530)]
CUDA: fix build error from ambiguous __half conversions in conv2d (llama/15690)

* CUDA: fix build error from ambiguous __half conversions in conv2d

Building conv2d with half precision failed because `__half` defines
multiple implicit conversion operators (to float, int, short, etc.),
causing ambiguous overload resolution when multiplying with float.

Introduce a templated `to_float` helper that explicitly converts
`__half` via `__half2float`, while passing through float unchanged.
Use this helper in conv2d accumulation to ensure unambiguous and
correct promotion to float.

Fixes some build errors with half-precision kernels on CUDA.

ggml-ci
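
A minimal sketch of what such a templated helper might look like (an illustration only; the patch's exact definition may differ):

```c++
#include <cuda_fp16.h>
#include <type_traits>

// float passes through unchanged; __half is converted explicitly via
// __half2float, sidestepping its ambiguous implicit conversion operators
template <typename T>
__device__ __forceinline__ float to_float(const T & x) {
    if constexpr (std::is_same_v<T, __half>) {
        return __half2float(x);
    } else {
        return static_cast<float>(x);
    }
}
```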

* CUDA: Replace custom to_float helper with unified ggml_cuda_cast and add half->float conversion

* CUDA: Add missing convert.cuh header

* CUDA: remove unnecessary extension in ggml_cuda_cast

* CUDA: Address review comment, remove second type template argument

4 weeks ago CANN: Optimize MUL_MAT_ID (llama/15658)
hipudding [Mon, 1 Sep 2025 00:57:23 +0000 (08:57 +0800)]
CANN: Optimize MUL_MAT_ID (llama/15658)

4 weeks ago CANN: fix RoPE cache issue on multi-device (llama/15629)
hipudding [Mon, 1 Sep 2025 00:57:00 +0000 (08:57 +0800)]
CANN: fix RoPE cache issue on multi-device (llama/15629)

* CANN: fix RoPE cache issue on multi-device

RoPE cache only needs to be computed once per token.
However, in multi-device scenarios, not every device starts
computation from layer 0, which may lead to unallocated memory
issues and precision errors.

This commit records the first layer of each device to avoid
the above issues.

* CANN: Optimize first-layer detection method

* CANN: Remove trailing whitespace

* CANN: Only cache the data that can be determined as unchanged through the parameters.

* CANN: Update function comment

4 weeks ago metal : fix checks for available FA kernels (llama/15700)
Georgi Gerganov [Sun, 31 Aug 2025 16:43:30 +0000 (19:43 +0300)]
metal : fix checks for available FA kernels (llama/15700)

* metal : fix checks for available FA kernels

ggml-ci

* cont : fix comment [no ci]

4 weeks ago llama : separate compute buffer reserve from fattn check (llama/15696)
Diego Devesa [Sun, 31 Aug 2025 13:49:03 +0000 (06:49 -0700)]
llama : separate compute buffer reserve from fattn check (llama/15696)

Exposes ggml_backend_sched_split_graph() to allow splitting the graph without allocating compute buffers and uses it to split the graph for the automatic Flash Attention check.

4 weeks ago vulkan: handle large sizes for get_rows (llama/15686)
Jeff Bolz [Sun, 31 Aug 2025 08:13:27 +0000 (03:13 -0500)]
vulkan: handle large sizes for get_rows (llama/15686)

4 weeks ago vulkan: mul_mat_id coopmat2 optimizations (llama/15546)
Jeff Bolz [Sun, 31 Aug 2025 07:06:43 +0000 (02:06 -0500)]
vulkan: mul_mat_id coopmat2 optimizations (llama/15546)

* vulkan: mul_mat_id coopmat2 optimizations

Add a path for when the tile fits in BN/2, similar to what we have for mul_mat.

Only call fetch_scales/store_scales once per QUANT_K block, and once at the
beginning in case start_k is not aligned.

* Also add a path for BN/4 - worth a couple more percent

4 weeks ago vulkan : remove unused portability_enumeration_ext variable (llama/15679)
Daniel Bevenius [Sun, 31 Aug 2025 06:46:42 +0000 (08:46 +0200)]
vulkan : remove unused portability_enumeration_ext variable (llama/15679)

This commit removes the portability_enumeration_ext variable from the
ggml_vk_instance_portability_enumeration_ext_available function as it
is initialized to false but never modified, making it redundant.

4 weeks ago vulkan: Allow fallback to sysmem memory when vidmem is full (llama/15649)
Jeff Bolz [Sun, 31 Aug 2025 06:30:54 +0000 (01:30 -0500)]
vulkan: Allow fallback to sysmem memory when vidmem is full (llama/15649)

* vulkan: Allow fallback to sysmem memory when vidmem is full

* vulkan: Add env var GGML_VK_ALLOW_SYSMEM_FALLBACK

4 weeks ago vulkan: clamp matmul and FA results to the max finite value (llama/15652)
Jeff Bolz [Sun, 31 Aug 2025 06:27:57 +0000 (01:27 -0500)]
vulkan: clamp matmul and FA results to the max finite value (llama/15652)

* vulkan: clamp matmul and FA results to the max finite value

* only clamp for fp16

4 weeks ago ggml: update kleidiai to v1.13.0 (llama/15663)
Charles Xu [Sat, 30 Aug 2025 16:03:42 +0000 (18:03 +0200)]
ggml: update kleidiai to v1.13.0 (llama/15663)

4 weeks ago llama: use FA + max. GPU layers by default (llama/15434)
Johannes Gäßler [Sat, 30 Aug 2025 14:32:10 +0000 (16:32 +0200)]
llama: use FA + max. GPU layers by default (llama/15434)

* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault

4 weeks ago CUDA: use FP32 arithmetic for conv2d (llama/15683)
Johannes Gäßler [Sat, 30 Aug 2025 14:20:32 +0000 (16:20 +0200)]
CUDA: use FP32 arithmetic for conv2d (llama/15683)

4 weeks ago vulkan: Skip syncing for prealloc_y when it is reused (llama/15544)
Jeff Bolz [Sat, 30 Aug 2025 09:11:22 +0000 (04:11 -0500)]
vulkan: Skip syncing for prealloc_y when it is reused (llama/15544)

4 weeks ago CANN: Fix compiler warnings (llama/15661)
Chenguang Li [Sat, 30 Aug 2025 02:18:35 +0000 (10:18 +0800)]
CANN: Fix compiler warnings (llama/15661)

Signed-off-by: noemotiovon <redacted>
4 weeks ago CUDA: fix bug in rms_norm fusion (llama/15660)
Aman Gupta [Fri, 29 Aug 2025 13:30:06 +0000 (21:30 +0800)]
CUDA: fix bug in rms_norm fusion (llama/15660)

* CUDA: fix bug in rms_norm fusion

* Fix bug for OP_REPEAT

* Fix index for add

4 weeks ago CUDA: fuse adds, fuse add with rms norm (llama/15631)
Aman Gupta [Fri, 29 Aug 2025 03:35:58 +0000 (11:35 +0800)]
CUDA: fuse adds, fuse add with rms norm (llama/15631)

* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast

4 weeks ago CUDA: add conv2d (llama/15635)
mnehete32 [Thu, 28 Aug 2025 18:33:03 +0000 (00:03 +0530)]
CUDA: add conv2d (llama/15635)

* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const

4 weeks ago ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)
Aaron Teo [Thu, 28 Aug 2025 14:39:27 +0000 (22:39 +0800)]
ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)

Signed-off-by: Aaron Teo <redacted>
4 weeks ago ggml : fix SSM_SCAN for n_groups > 1 (llama/15625)
compilade [Thu, 28 Aug 2025 14:11:36 +0000 (10:11 -0400)]
ggml : fix SSM_SCAN for n_groups > 1 (llama/15625)

4 weeks ago kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)
Georgi Gerganov [Thu, 28 Aug 2025 09:27:02 +0000 (12:27 +0300)]
kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)

ggml-ci

4 weeks ago cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)
matiaslin [Thu, 28 Aug 2025 00:32:36 +0000 (17:32 -0700)]
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)

Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when CUDA version is greater
than 10.1.

4 weeks ago HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615)
uvos [Wed, 27 Aug 2025 11:58:54 +0000 (13:58 +0200)]
HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615)

4 weeks ago CANN: refactor mask handling and improve performance in FA (llama/15561)
Chenguang Li [Wed, 27 Aug 2025 09:21:41 +0000 (17:21 +0800)]
CANN: refactor mask handling and improve performance in FA (llama/15561)

* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <redacted>
* [CANN]: fix review

Signed-off-by: noemotiovon <redacted>
* [CANN]: Optimize FA BNSD to BSND

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
4 weeks ago ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)
xctan [Wed, 27 Aug 2025 08:44:22 +0000 (16:44 +0800)]
ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)

* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax

4 weeks ago OpenCL: add fused group_norm/norm, mul, add (llama/15314)
rmatif [Wed, 27 Aug 2025 06:36:05 +0000 (08:36 +0200)]
OpenCL: add fused group_norm/norm, mul, add (llama/15314)

* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace

4 weeks ago tests : fix test-opt with GGML_BACKEND_DL (llama/15599)
Diego Devesa [Tue, 26 Aug 2025 20:14:38 +0000 (13:14 -0700)]
tests : fix test-opt with GGML_BACKEND_DL (llama/15599)

4 weeks ago SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)
Akarshan Biswas [Tue, 26 Aug 2025 18:57:49 +0000 (00:27 +0530)]
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)

The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.
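A minimal sketch of the corrected check, assuming names that mirror the SYCL backend (WARP_SIZE is the sub-group width, ne[0] the row length):

```c++
#include "ggml.h"

// hypothetical helper: RMS_NORM rows must be a multiple of the sub-group
// size, matching the GGML_ASSERT(ncols % WARP_SIZE == 0) in norm.cpp
static bool sycl_supports_rms_norm(const struct ggml_tensor * op, int64_t warp_size) {
    return op->src[0]->ne[0] % warp_size == 0;
}
```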

4 weeks ago tests: add performance test for mul mat id (llama/15543)
Eve [Tue, 26 Aug 2025 15:42:49 +0000 (15:42 +0000)]
tests: add performance test for mul mat id (llama/15543)

4 weeks ago llamafile: PowerPC Sgemm Optimization (llama/15558)
shalinib-ibm [Tue, 26 Aug 2025 15:35:25 +0000 (21:05 +0530)]
llamafile: PowerPC Sgemm Optimization (llama/15558)

This patch improves GEMM for the FP32 data type on PowerPC.

Implements GEMM on large blocks with configurable block sizes mc, nc, kc
(default: 256, 256, 256).
The packing function is optimized to access blocks according to the memory layout.
GEMM is optimized to work on larger blocks.
Packing is isolated from the GEMM operation for better MMA utilization.
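
A minimal sketch of the mc/nc/kc blocking described above (the patch's packing and MMA micro-kernel are omitted, and the row-major layout is an assumption):

```c++
#include <stddef.h>

enum { MC = 256, NC = 256, KC = 256 };  // default block sizes from the patch

// C (m x n) += A (m x k) * B (k x n), all row-major, C pre-zeroed by the caller
static void sgemm_blocked(int m, int n, int k,
                          const float * A, const float * B, float * C) {
    for (int jc = 0; jc < n; jc += NC)
    for (int pc = 0; pc < k; pc += KC)
    for (int ic = 0; ic < m; ic += MC) {
        const int nb = n - jc < NC ? n - jc : NC;
        const int kb = k - pc < KC ? k - pc : KC;
        const int mb = m - ic < MC ? m - ic : MC;
        // in the real patch this block would be packed and fed to the MMA kernel
        for (int i = 0; i < mb; ++i)
        for (int p = 0; p < kb; ++p) {
            const float a = A[(size_t)(ic + i)*k + pc + p];
            for (int j = 0; j < nb; ++j)
                C[(size_t)(ic + i)*n + jc + j] += a * B[(size_t)(pc + p)*n + jc + j];
        }
    }
}
```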

Verified functionality and correctness using llama-cli and a standalone
test case (performs matmul and compares the final matrix C result with the base).

Minor code refactoring changes:
- Replaced a macro with an inline function
- Made code indentation consistent with 4 spaces

Performance Testing:

Observed 50% to 70% improvement in prompt processing speed, measured using
llama-bench with the Meta-Llama3-8B FP32 model. Similar gains were observed
with the Mistral-7b-Instruct-v0.3 model.

Model              Size       Params  Backend  Threads  Test    Patch (t/s)  Base (t/s)
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp512   98.58        60.3
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp1024  95.88        57.36
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp2048  85.46        53.26
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp4096  68.66        45.78
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp6144  57.35        40.44

25% to 30% improvement in llama-batched-bench with Meta-Llama3-8B in
prompt processing speed for large prompts (256, 512, 1024, 2048, 4096 tokens)
with various batch sizes (1, 2, 4, 8, 16).

Signed-off-by: Shalini Salomi Bodapati <redacted>
4 weeks ago CUDA: return -1 for nonexistent compiled arch (llama/15587)
Johannes Gäßler [Tue, 26 Aug 2025 14:01:20 +0000 (16:01 +0200)]
CUDA: return -1 for nonexistent compiled arch (llama/15587)

4 weeks ago metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)
Georgi Gerganov [Tue, 26 Aug 2025 11:22:14 +0000 (14:22 +0300)]
metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)

* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci

4 weeks ago metal : improve `MUL_MAT_ID` (llama/15541)
Georgi Gerganov [Tue, 26 Aug 2025 09:46:15 +0000 (12:46 +0300)]
metal : improve `MUL_MAT_ID` (llama/15541)

* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci

4 weeks ago metal : remove contiguous assertion for src0 in IM2COL (llama/15577)
Sigbjørn Skjæret [Tue, 26 Aug 2025 06:51:43 +0000 (08:51 +0200)]
metal : remove contiguous assertion for src0 in IM2COL (llama/15577)

* remove contiguous assertion for src0 in IM2COL

* add contiguous check in supports_op

4 weeks ago Add a warning for special devices (llama/15563)
Yoshi_likes_e4 [Tue, 26 Aug 2025 06:15:33 +0000 (13:15 +0700)]
Add a warning for special devices (llama/15563)

* Add warning

* Print the devices names

* Add newlines

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix vector names

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago vulkan: Remove splitting for mul_mat_id (llama/15568)
Jeff Bolz [Tue, 26 Aug 2025 04:42:44 +0000 (23:42 -0500)]
vulkan: Remove splitting for mul_mat_id (llama/15568)

row_ids only needs to hold the BN rows for the current tile.

4 weeks ago CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (llama/15451)
Qeeweew [Mon, 25 Aug 2025 21:21:22 +0000 (05:21 +0800)]
CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (llama/15451)

* CUDA: optimize get_int_from_table_16

* CUDA: use v_perm_b32 to replace byte_perm on AMD GPUs

* revise documentation

---------

Co-authored-by: xix <redacted>
Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago opencl: fix support ops condition for `rms_norm` (llama/15560)
lhez [Mon, 25 Aug 2025 21:18:09 +0000 (14:18 -0700)]
opencl: fix support ops condition for `rms_norm` (llama/15560)

4 weeks ago vulkan: fix min subgroup 16 condition for mmid subgroup optimization (llama/15565)
Ruben Ortlam [Mon, 25 Aug 2025 15:56:59 +0000 (17:56 +0200)]
vulkan: fix min subgroup 16 condition for mmid subgroup optimization (llama/15565)

4 weeks ago tests: Generate unique input values for count_equal (llama/15487)
Jeff Bolz [Mon, 25 Aug 2025 15:47:16 +0000 (10:47 -0500)]
tests: Generate unique input values for count_equal (llama/15487)

This avoids backend-dependent behavior for argmax that leads to intermittent failures.
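One way to get such inputs is a shuffled iota, so every value is distinct and argmax has exactly one correct answer regardless of how a backend breaks ties (a sketch, not necessarily the test's actual code):

```c++
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

static std::vector<float> unique_values(size_t n, uint32_t seed) {
    std::vector<float> v(n);
    for (size_t i = 0; i < n; ++i) v[i] = (float) i;    // n distinct values
    std::shuffle(v.begin(), v.end(), std::mt19937(seed));
    return v;                                           // random order, no duplicates
}
```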

4 weeks ago metal: fix regression when no metal devices are present (llama/15531)
Ihar Hrachyshka [Mon, 25 Aug 2025 15:27:34 +0000 (11:27 -0400)]
metal: fix regression when no metal devices are present (llama/15531)

4 weeks ago CUDA: MoE helper in device code, better tile sizes (llama/15525)
Johannes Gäßler [Mon, 25 Aug 2025 15:23:40 +0000 (17:23 +0200)]
CUDA: MoE helper in device code, better tile sizes (llama/15525)

* CUDA: MoE helper in device code, better tile sizes

* reduce superfluous CUDA blocks

4 weeks ago metal : add FA kernels for HS=40 (llama/15559)
Georgi Gerganov [Mon, 25 Aug 2025 07:14:48 +0000 (10:14 +0300)]
metal : add FA kernels for HS=40 (llama/15559)

ggml-ci

4 weeks ago CANN: ROPE cache sin/cos repeat (llama/15501)
Chenguang Li [Mon, 25 Aug 2025 02:32:21 +0000 (10:32 +0800)]
CANN: ROPE cache sin/cos repeat (llama/15501)

Signed-off-by: noemotiovon <redacted>
4 weeks ago vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (llama/15524)
Ruben Ortlam [Sun, 24 Aug 2025 17:36:36 +0000 (19:36 +0200)]
vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (llama/15524)

* vulkan: use subgroup function for mul_mat_id shader even without coopmat

* vulkan: fix compile warnings

* vulkan: properly check for subgroup size control and require full subgroups for subgroup mul_mat_id

* vulkan: disable subgroup mul_mat_id on devices with subgroups < 16

4 weeks ago vulkan: Support FA with any multiple of 8 head sizes (llama/15537)
Jeff Bolz [Sun, 24 Aug 2025 09:24:25 +0000 (04:24 -0500)]
vulkan: Support FA with any multiple of 8 head sizes (llama/15537)

The scalar FA shader already handled multiples of 8. The coopmat1 FA
shader assumed 16x16x16 and the shared memory allocations need the HSK
dimensions padded to a multiple of 16. NVIDIA's coopmat2 implementation
requires multiples of 16 for N and K, and needs the matrix dimensions
padded and loads clamped.
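
The padding itself is just rounding up to the 16-element granularity the coopmat paths need (a trivial sketch; the shader-side clamping is the involved part):

```c++
// round x up to a multiple of n, where n is a power of two
// e.g. pad_up(40, 16) == 48 for a head size of 40
static constexpr unsigned pad_up(unsigned x, unsigned n) {
    return (x + n - 1) & ~(n - 1);
}
```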

Store the FA pipelines in a map, indexed by the pipeline state.

4 weeks ago vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (llama/15526)
Ruben Ortlam [Sun, 24 Aug 2025 08:48:53 +0000 (10:48 +0200)]
vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (llama/15526)

4 weeks ago vulkan: workaround MoltenVK compile failure in multi_add (llama/15506)
Jeff Bolz [Sun, 24 Aug 2025 08:48:21 +0000 (03:48 -0500)]
vulkan: workaround MoltenVK compile failure in multi_add (llama/15506)

* vulkan: workaround MoltenVK compile failure in multi_add

* Update src/ggml-vulkan/vulkan-shaders/multi_add.comp

Co-authored-by: 0cc4m <redacted>
4 weeks ago CUDA: fix half2 -> half conversion for HIP (llama/15529)
Johannes Gäßler [Sat, 23 Aug 2025 19:37:06 +0000 (21:37 +0200)]
CUDA: fix half2 -> half conversion for HIP (llama/15529)

4 weeks ago vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (llama/15281)
Jeff Bolz [Sat, 23 Aug 2025 18:16:17 +0000 (13:16 -0500)]
vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (llama/15281)

* vulkan: optimize rms_norm, and allow the work to spread across multiple SMs

There are really two parts to this change:
(1) Some optimizations similar to what we have in soft_max, to unroll with
different numbers of iterations.
(2) A fusion optimization where we detect add followed by rms_norm, and make
the add shader atomically accumulate the values^2 into memory. Then the
rms_norm shader can just load that sum. This allows the rms_norm to be
parallelized across multiple workgroups, it just becomes a simple per-element
multiply.

The fusion optimization is currently only applied when the rms_norm is on a
single vector. This previously always ran on a single SM. It could apply more
broadly, but when there are other dimensions the work can already spread across
SMs, and there would be some complexity to tracking multiple atomic sums.
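
A scalar sketch of the fusion idea (GPU details such as workgroup partial sums are dropped; function names are made up for illustration):

```c++
#include <cmath>
#include <cstddef>

// the add pass also accumulates sum(y^2), so rms_norm need not re-reduce
static void add_with_sumsq(const float * a, const float * b, float * y,
                           size_t n, float * sumsq) {
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i) { y[i] = a[i] + b[i]; s += y[i]*y[i]; }
    *sumsq = s;  // on the GPU this becomes an array of per-workgroup partial sums
}

// with the reduction precomputed, rms_norm is a simple per-element multiply
static void rms_norm_from_sumsq(float * y, size_t n, float sumsq, float eps) {
    const float scale = 1.0f / sqrtf(sumsq / (float) n + eps);
    for (size_t i = 0; i < n; ++i) y[i] *= scale;
}
```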

* Change add+rms_norm optimization to write out an array of partial sums
rather than using atomic add, to make it deterministic. The rms_norm
shader fetches a subgroup's worth in parallel and uses subgroupAdd to
add them up.

* complete rebase against fused adds - multi_add shader can also compute partial sums

* fix validation errors

* disable add_rms_fusion for Intel due to possible driver bug

* resolve against #15489, sync after clearing partial sums

4 weeks ago vulkan: Rewrite synchronization to allow some overlap between nodes (llama/15489)
Jeff Bolz [Sat, 23 Aug 2025 07:33:36 +0000 (02:33 -0500)]
vulkan: Rewrite synchronization to allow some overlap between nodes (llama/15489)

Track a list of nodes that need synchronization, and only sync if the new node
depends on them (or overwrites them). This allows some overlap which can
improve performance, and centralizes a big chunk of the synchronization logic.

The remaining synchronization logic involves writes to memory other than the
nodes, e.g. for dequantization or split_k. Each of these allocations has a bool
indicating whether they were in use and need to be synced. This should be
checked before they are written to, and set to true after they are done being
consumed.
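
A sketch of that bookkeeping (the data structure is assumed; the real code tracks graph nodes plus the extra allocations mentioned above):

```c++
#include <unordered_set>

struct ggml_tensor;  // identity only; never dereferenced here

struct sync_tracker {
    std::unordered_set<const ggml_tensor *> unsynced;  // written since last barrier

    // a barrier is needed if the new node reads or overwrites an unsynced tensor
    bool needs_sync(const ggml_tensor * const * srcs, int n_srcs,
                    const ggml_tensor * dst) const {
        for (int i = 0; i < n_srcs; ++i)
            if (srcs[i] && unsynced.count(srcs[i])) return true;  // read-after-write
        return unsynced.count(dst) != 0;                          // write-after-write
    }

    void record_write(const ggml_tensor * t) { unsynced.insert(t); }
    void on_barrier()                        { unsynced.clear(); }
};
```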

4 weeks ago vulkan : support ggml_mean (llama/15393)
Acly [Sat, 23 Aug 2025 06:35:21 +0000 (08:35 +0200)]
vulkan : support ggml_mean (llama/15393)

* vulkan : support ggml_mean

* vulkan : support sum, sum_rows and mean with non-contiguous tensors

* vulkan : fix subbuffer size not accounting for misalign offset

* tests : add backend-op tests for non-contiguous sum_rows

* cuda : require contiguous src for SUM_ROWS, MEAN support
* sycl : require contiguous src for SUM, SUM_ROWS, ARGSORT support

* require ggml_contiguous_rows in supports_op and expect nb00=1 in the shader

4 weeks ago vulkan: optimize mul_mat_id loading row ids into shared memory (llama/15427)
Jeff Bolz [Sat, 23 Aug 2025 06:31:54 +0000 (01:31 -0500)]
vulkan: optimize mul_mat_id loading row ids into shared memory (llama/15427)

- Spread the work across the whole workgroup. Using more threads seems to
far outweigh the synchronization overhead.
- Specialize the code for when the division is by a power of two.
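
The power-of-two specialization replaces integer division and modulo with shift and mask (a generic sketch of the trick, not the shader code):

```c++
#include <cstdint>

static inline bool     is_pow2(uint32_t d) { return d != 0 && (d & (d - 1)) == 0; }
static inline uint32_t ilog2(uint32_t d)   { uint32_t s = 0; while (d >>= 1) ++s; return s; }

// for a power-of-two divisor d: i / d == i >> ilog2(d), i % d == i & (d - 1)
static inline uint32_t div_pow2(uint32_t i, uint32_t shift) { return i >> shift; }
static inline uint32_t mod_pow2(uint32_t i, uint32_t d)     { return i & (d - 1); }
```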

4 weeks ago test-opt: allow slight imprecision (llama/15503)
Johannes Gäßler [Fri, 22 Aug 2025 21:47:01 +0000 (23:47 +0200)]
test-opt: allow slight imprecision (llama/15503)

4 weeks ago ggml WebGPU: add support for quantization types (llama/15440)
Reese Levine [Fri, 22 Aug 2025 18:28:03 +0000 (11:28 -0700)]
ggml WebGPU: add support for quantization types (llama/15440)

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Work on templating for different types in shaders

* Work on shader type generation

* Working q4_0 mul_mat and some templating for different types

* Add q4_0_f16 matmul and fix device init

* Add matmul support for basic quantization types

* Add q2_k and q3_k quantization

* Add rest of k-quants

* Get first i-quant working

* Closer to supporting all i-quants

* Support rest of i-quants

* Cleanup code

* Fix python formatting

* debug

* Bugfix for memset

* Add padding to end of buffers on creation

* Simplify bit-shifting

* Update usage of StringView

4 weeks ago ggml: add `conv3d` op (llama/15182)
rmatif [Fri, 22 Aug 2025 13:33:15 +0000 (15:33 +0200)]
ggml: add `conv3d` op (llama/15182)

* add conv3d

* bump GGML_OP_COUNT

4 weeks ago cuda : add Pad Reflect 1D support (llama/14659)
Yavor Ivanov [Fri, 22 Aug 2025 11:06:29 +0000 (14:06 +0300)]
cuda : add Pad Reflect 1D support (llama/14659)

* Add Pad Reflect 1D CUDA support

* Update src/ggml-cuda/pad_reflect_1d.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)
Aaron Teo [Fri, 22 Aug 2025 08:11:04 +0000 (16:11 +0800)]
ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)

* ggml-cpu: initial q5_0 impl for s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: updated q5_0 code for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: use optimised hsum for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: introduce q5_1 simd + refactor q5_0

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect return type vec_hsum

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_0 incomplete refactor + table_b2b_0 activation

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: refactor q5_1

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_1 update loop unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update q5_0 unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update build-s390x docs

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update unused variables q5_0

Signed-off-by: Aaron Teo <redacted>
* docs: update the last update date

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
4 weeks ago CANN: Optimize RMS_NORM using cache (llama/15419)
Chenguang Li [Fri, 22 Aug 2025 06:12:07 +0000 (14:12 +0800)]
CANN: Optimize RMS_NORM using cache (llama/15419)

* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* fix review comment

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
4 weeks ago sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488)
Diego Devesa [Thu, 21 Aug 2025 21:09:32 +0000 (14:09 -0700)]
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488)

4 weeks ago vulkan : support conv_2d_dw with f16 weights (llama/15392)
Acly [Thu, 21 Aug 2025 15:01:51 +0000 (17:01 +0200)]
vulkan : support conv_2d_dw with f16 weights (llama/15392)

4 weeks ago vulkan: add exp operation (llama/15456)
Dong Won Kim [Thu, 21 Aug 2025 15:00:16 +0000 (00:00 +0900)]
vulkan: add exp operation (llama/15456)

Co-authored-by: aeseulgi <redacted>
4 weeks ago vulkan: Reuse conversion results in prealloc_y (llama/15410)
Jeff Bolz [Thu, 21 Aug 2025 14:55:00 +0000 (09:55 -0500)]
vulkan: Reuse conversion results in prealloc_y (llama/15410)

* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.
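
A sketch of that caching (member names are guesses based on the description; note the follow-up below avoids a shared pointer for the cached pipeline):

```c++
struct ggml_tensor;   // opaque here; compared by identity only
struct vk_pipeline_s; // opaque

struct prealloc_y_cache {
    const vk_pipeline_s * last_pipeline = nullptr;
    const ggml_tensor   * last_tensor   = nullptr;

    // the conversion can be skipped iff the same pipeline and tensor
    // were the last to fill prealloc_y
    bool hit(const vk_pipeline_s * p, const ggml_tensor * t) const {
        return p != nullptr && p == last_pipeline && t == last_tensor;
    }
    void record(const vk_pipeline_s * p, const ggml_tensor * t) {
        last_pipeline = p;
        last_tensor   = t;
    }
};
```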

* don't use shared pointer for prealloc_y_last_pipeline_used

4 weeks ago ggml : fix condition of im2col on Metal backend (llama/15460)
Xuan-Son Nguyen [Thu, 21 Aug 2025 05:32:26 +0000 (07:32 +0200)]
ggml : fix condition of im2col on Metal backend (llama/15460)

4 weeks ago musa: add GGML_UNUSED_VARS (llama/15446)
R0CKSTAR [Thu, 21 Aug 2025 03:06:05 +0000 (11:06 +0800)]
musa: add GGML_UNUSED_VARS (llama/15446)

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago sched : copy only the used experts when offloading prompt processing (llama/15346)
Diego Devesa [Wed, 20 Aug 2025 23:35:28 +0000 (16:35 -0700)]
sched : copy only the used experts when offloading prompt processing (llama/15346)

4 weeks ago CUDA: refactor FA support/selection code (llama/15454)
Johannes Gäßler [Wed, 20 Aug 2025 21:14:14 +0000 (23:14 +0200)]
CUDA: refactor FA support/selection code (llama/15454)

4 weeks ago CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433)
Johannes Gäßler [Wed, 20 Aug 2025 14:58:49 +0000 (16:58 +0200)]
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433)

4 weeks ago vulkan: shorten pipeline name strings (llama/15431)
Jeff Bolz [Wed, 20 Aug 2025 14:33:14 +0000 (09:33 -0500)]
vulkan: shorten pipeline name strings (llama/15431)

These detailed strings were causing increased build time on gcc.

4 weeks ago musa: fix build warnings (llama/15258)
R0CKSTAR [Wed, 20 Aug 2025 02:17:37 +0000 (10:17 +0800)]
musa: fix build warnings (llama/15258)

* musa: fix build warnings

Signed-off-by: Xiaodong Ye <redacted>
* fix warning: comparison of integers of different signs: 'const int' and 'unsigned int' [-Wsign-compare]

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago opencl: mark `argsort` unsupported if cols exceed workgroup limit (llama/15375)
lhez [Tue, 19 Aug 2025 18:25:51 +0000 (02:25 +0800)]
opencl: mark `argsort` unsupported if cols exceed workgroup limit (llama/15375)

4 weeks ago CANN: optimize rope operator (llama/15335)
SHUAI YANG [Tue, 19 Aug 2025 13:28:22 +0000 (21:28 +0800)]
CANN: optimize rope operator (llama/15335)

* optimize rope ops

* amendment

* delete trailing whitespace

* change the variable name

4 weeks ago musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (llama/15413)
R0CKSTAR [Tue, 19 Aug 2025 10:33:47 +0000 (18:33 +0800)]
musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (llama/15413)

Signed-off-by: Xiaodong Ye <redacted>
4 weeks ago ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (llama/15385)
Marvin Gießing [Tue, 19 Aug 2025 08:54:31 +0000 (10:54 +0200)]
ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (llama/15385)

* Added VSX intrinsics for Power9+ systems

Signed-off-by: mgiessing <redacted>
* Manual unrolling for minor perf improvement

Signed-off-by: mgiessing <redacted>
* Update src/ggml-cpu/arch/powerpc/quants.c

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: mgiessing <redacted>
Co-authored-by: Georgi Gerganov <redacted>
5 weeks ago ci : add github release job (#1334)
Daniel Bevenius [Thu, 28 Aug 2025 07:10:48 +0000 (09:10 +0200)]
ci : add github release job (#1334)

* ci : add github release job

This commit adds a GitHub Actions workflow to automate the release
process. Currently this will only create an archive of the sources for
ggml when a tag is pushed.

The motivation for this is that when we start releasing versions of ggml
using semantic versioning it can be nice to have the sources needed for
ggml to be deployed as a GitHub release. This enables CMake users that
use `FetchContent` to efficiently specify the zip file instead of
cloning.

Example usage with `FetchContent`:
```cmake
cmake_minimum_required(VERSION 3.14)
project(ggml_example)

set(CMAKE_CXX_STANDARD 17)

include(FetchContent)
FetchContent_Declare(ggml
    URL https://github.com/danbev/ggml/archive/refs/tags/v1.1.5-test.zip
    DOWNLOAD_EXTRACT_TIMESTAMP TRUE
)

FetchContent_MakeAvailable(ggml)

add_executable(ggml_example main.cpp)
target_link_libraries(ggml_example ggml)
```
And with the following `main.cpp` file:
```c++
 #include <iostream>
 #include <ggml.h>

 int main() {
     std::cout << "GGML Version: " << ggml_version() << std::endl;
     return 0;
 }
```
This could then be built using:
```console
$ cmake -S . -B build
$ cmake --build build
$ ./build/ggml_example
GGML Version: 0.0.2472
```

6 weeks ago cuda : remove obsolete sources (#1332) upstream/0.0.2471
Georgi Gerganov [Mon, 18 Aug 2025 19:01:00 +0000 (22:01 +0300)]
cuda : remove obsolete sources (#1332)

ggml-ci

6 weeks ago sync : whisper.cpp
Georgi Gerganov [Mon, 18 Aug 2025 17:33:27 +0000 (20:33 +0300)]
sync : whisper.cpp

6 weeks ago scripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 17:32:37 +0000 (20:32 +0300)]
scripts : update sync scripts

6 weeks ago ggml: Add initial WebGPU backend (llama/14521)
Reese Levine [Mon, 18 Aug 2025 16:22:37 +0000 (19:22 +0300)]
ggml: Add initial WebGPU backend (llama/14521)

ggml-ci

6 weeks ago ggml : initial zDNN backend (llama/14975)
Aaron Teo [Mon, 18 Aug 2025 16:21:15 +0000 (19:21 +0300)]
ggml : initial zDNN backend (llama/14975)

6 weeks ago scripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 16:20:10 +0000 (19:20 +0300)]
scripts : update sync scripts

6 weeks ago common : handle mxfp4 enum
Georgi Gerganov [Mon, 18 Aug 2025 15:57:45 +0000 (18:57 +0300)]
common : handle mxfp4 enum

ggml-ci

6 weeks ago sync : llama.cpp
Georgi Gerganov [Mon, 18 Aug 2025 15:56:46 +0000 (18:56 +0300)]
sync : llama.cpp

ggml-ci

6 weeks ago ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)
compilade [Mon, 18 Aug 2025 07:23:56 +0000 (03:23 -0400)]
ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)

* ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors

* ggml-quants : avoid division by zero in make_q3_quants

6 weeks ago vulkan: disable spirv-opt for bfloat16 shaders (llama/15352)
Jeff Bolz [Mon, 18 Aug 2025 05:56:29 +0000 (00:56 -0500)]
vulkan: disable spirv-opt for bfloat16 shaders (llama/15352)

6 weeks ago vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)
Jeff Bolz [Sun, 17 Aug 2025 16:08:57 +0000 (11:08 -0500)]
vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)

* vulkan: Use larger workgroups for mul_mat_vec when M is small

Also use subgroup instructions for (part of) the reduction when supported.
Without this, the more expensive reductions would eat into the benefits of
the larger workgroups.
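
For illustration, the CUDA analogue of such a subgroup reduction (the actual change is in GLSL shaders; this is an equivalent warp-shuffle sketch, not the shader code):

```c++
// reduces a per-lane partial dot product within one 32-lane warp,
// without touching shared memory
__device__ float warp_reduce_sum(float v) {
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    return v;  // lane 0 ends up holding the warp's sum
}
```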

* update heuristic for amd/intel

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>
6 weeks ago vulkan: support sqrt (llama/15370)
Dong Won Kim [Sun, 17 Aug 2025 14:03:09 +0000 (23:03 +0900)]
vulkan: support sqrt (llama/15370)

6 weeks ago vulkan: Optimize argsort (llama/15354)
Jeff Bolz [Sun, 17 Aug 2025 08:41:45 +0000 (03:41 -0500)]
vulkan: Optimize argsort (llama/15354)

- Launch an appropriate number of invocations (next larger power of two; see
the sketch after this list). 32 invocations is common and the barrier is much
cheaper there.
- Specialize for "needs bounds checking" vs not.
- Make the code less branchy and [[unroll]] the loops. In the final code,
I see no branches inside the main loop (only predicated stores) when
needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs descending option when
doing the final stores to memory.
- Copy the values into shared memory, makes them slightly cheaper to access.
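
A sketch of the "next larger power of two" sizing from the first point:

```c++
#include <cstdint>

// smallest power of two >= x, used to pick the sort's invocation count
static uint32_t next_pow2(uint32_t x) {
    uint32_t p = 1;
    while (p < x) p <<= 1;
    return p;  // e.g. next_pow2(24) == 32, next_pow2(32) == 32
}
```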

6 weeks ago vulkan: fuse adds (llama/15252)
Jeff Bolz [Sat, 16 Aug 2025 16:48:22 +0000 (11:48 -0500)]
vulkan: fuse adds (llama/15252)

* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch. But this could be changed.
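
(For context on those numbers: fusing k chained adds references k+1 input
tensors plus 1 output, i.e. k+2 descriptors, so k+2 <= 8 gives k <= 6,
assuming one descriptor per tensor.)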

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug

6 weeks ago vulkan: Support mul_mat_id with f32 accumulators (llama/15337)
Jeff Bolz [Sat, 16 Aug 2025 09:18:31 +0000 (04:18 -0500)]
vulkan: Support mul_mat_id with f32 accumulators (llama/15337)

* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).

6 weeks ago vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334)
Jeff Bolz [Sat, 16 Aug 2025 08:58:38 +0000 (03:58 -0500)]
vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334)

6 weeks ago OpenCL: add initial FA support (llama/14987)
rmatif [Sat, 16 Aug 2025 08:05:55 +0000 (10:05 +0200)]
OpenCL: add initial FA support (llama/14987)

* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops

6 weeks ago opencl: add initial mxfp4 support via mv (llama/15270)
lhez [Fri, 15 Aug 2025 16:52:14 +0000 (00:52 +0800)]
opencl: add initial mxfp4 support via mv (llama/15270)

* opencl: add reference `mul_mv_mxfp4_f32`

* opencl: add reference `mul_mv_id` for mxfp4

* Q4_0 transpose fix for Adreno

---------

Co-authored-by: shawngu-quic <redacted>
6 weeks ago vulkan : fix out-of-bounds access in argmax kernel (llama/15342)
Georgi Gerganov [Fri, 15 Aug 2025 14:16:36 +0000 (17:16 +0300)]
vulkan : fix out-of-bounds access in argmax kernel (llama/15342)

ggml-ci

6 weeks ago vulkan : fix compile warnings on macos (llama/15340)
Georgi Gerganov [Fri, 15 Aug 2025 13:28:28 +0000 (16:28 +0300)]
vulkan : fix compile warnings on macos (llama/15340)

ggml-ci

6 weeks ago ggml: initial IBM zDNN backend (llama/14975)
Aaron Teo [Fri, 15 Aug 2025 13:11:22 +0000 (21:11 +0800)]
ggml: initial IBM zDNN backend (llama/14975)

* ggml-zdnn: initial backend impl

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: temp change z17 to arch15

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix build bugs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tensor->extra logging check

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add layout name mapping, ztensor information

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: separate logging into its own line

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add shape comparison

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add ggml_tensor shape log

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix incorrect shape logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add output buffer check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: run compute and store into tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more loggers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update set_tensor logging to check only for matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: last working matmul version

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add comments to prevent accidentally deleting lines

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: support op out_prod

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update op out_prod to use tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite the backend implementation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix new impl

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler warnings and bugfixes

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: test ztensor finding in init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: implement at least 1 op to test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: assign tensor->extra to buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add check for view tensors to prevent init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rework init_tensor to create new buffers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to std vector instead of array

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch buffers back and set to arbitrary number

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update supports_op matmul matrix

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler error missing type

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing data transform call

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tighten memory usage, change string allocation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias ztensor and data free

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias data transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more debug info for extra buffer transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logger to check if mat mul ops go through set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: activate bias transform in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move weights transform into mulmat

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more safeguards in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sequencing of transforms

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix transform ztensor vs origtensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: figure out why sigtrap is happening

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move everything back to local declaration

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move bias data to local also

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring back working matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite into mre

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import in header

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix invalid ztensor buffer release

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logging to debug free buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: remove free_buffer debug info

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add parmblkformat detections

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add nnpa installed detection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add zdnn_init call for static libs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing invalid buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to using deque to fix pointer deref problem

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add weights logging to check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to use unique ptr

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add tensor to pre_tfm_desc logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add inputs logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable op_none initialisation for testing

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing return from init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: load ztensors in cgraph exec

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: work on moving output ztensor as well

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable logging and breakpoints for full test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at manually changing the layout

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at using default nwhc format instead

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable global load ztensor for now

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix erroneous output load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add guards to prevent loading ztensor if transformed

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring load ztensor back to init routine

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix ztensor deallocation abort

stabilise ggml <-> zdnn api

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up matmul selection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up project structure

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update documentation, prepare for upstream

Signed-off-by: Aaron Teo <redacted>
* chore: add codeowners

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable batched matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing tensor views during matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: deny all view tensors directly

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix pr comments

Signed-off-by: Aaron Teo <redacted>
* docs: update ops docs for zdnn

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: redo test-backend-ops for ops.md

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix typo in build-s390x.md

Signed-off-by: Aaron Teo <redacted>
* codeowners: remove taronaeo for now

Signed-off-by: Aaron Teo <redacted>
* Revert "codeowners: remove taronaeo for now"

This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.

* ggml-zdnn: remove unused ggml_zdnn macro

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
6 weeks ago test-opt: fix backend support check (llama/15317)
Johannes Gäßler [Fri, 15 Aug 2025 09:23:17 +0000 (11:23 +0200)]
test-opt: fix backend support check (llama/15317)

* test-opt: fix backend support check

* Update tests/test-opt.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago CUDA: fix negative KV_max values in FA (llama/15321)
Johannes Gäßler [Thu, 14 Aug 2025 21:21:24 +0000 (23:21 +0200)]
CUDA: fix negative KV_max values in FA (llama/15321)