git.djapps.eu Git - pkg/ggml/sources/ggml/log
6 weeks ago metal : add residency sets keep-alive heartbeat (llama/17766)
Georgi Gerganov [Fri, 5 Dec 2025 17:38:54 +0000 (19:38 +0200)]
metal : add residency sets keep-alive heartbeat (llama/17766)

* examples : add idle

* metal : attach residency sets to queue

* idle : add link

* idle : adjust intervals

* metal : add residency sets keep-alive heartbeat

* cont : adjust default keep-alive time

6 weeks ago HIP : fix RDNA4 build (llama/17792)
Johannes Gäßler [Fri, 5 Dec 2025 12:47:52 +0000 (13:47 +0100)]
HIP : fix RDNA4 build (llama/17792)

6 weeks ago Q4/Q8 Tiled Gemm Optimization. (llama/16999)
shalinib-ibm [Fri, 5 Dec 2025 11:41:51 +0000 (17:11 +0530)]
Q4/Q8 Tiled Gemm Optimization. (llama/16999)

6 weeks ago CUDA: fix FA VKQ accumulator overflow (llama/17746)
Johannes Gäßler [Fri, 5 Dec 2025 08:18:10 +0000 (09:18 +0100)]
CUDA: fix FA VKQ accumulator overflow (llama/17746)

6 weeks ago HIP: enable WMMA-MMQ INT kernels for RDNA 3 (llama/17576)
Jiacheng (Jason) Chen [Fri, 5 Dec 2025 08:17:37 +0000 (03:17 -0500)]
HIP: enable WMMA-MMQ INT kernels for RDNA 3 (llama/17576)

* enabled wmma instructions for most quantizations other than q2k

* fixed the last q2_k test case failure

* address comments: fix out of bound write for RDNA4, add comments after #endif

* clean up rebase: fix ne error in half2

* fix the EditorConfig CI

6 weeks ago Add support for CUMSUM and TRI for CUDA. (llama/17584)
Piotr Wilkin (ilintar) [Thu, 4 Dec 2025 21:19:51 +0000 (22:19 +0100)]
Add support for CUMSUM and TRI for CUDA. (llama/17584)

* Add support for CUMSUM and TRI for CUDA.

* Minor optimizations.

* Correct warp_prefix_inclusive_sum in float2 variant to return float2

* Optimize TRI

* Whitespace

* Fix strides.

* Implement double loop

* Whitespace

* Fix HIP compilation bugs

* Optimizations + big case performance tests

* Implement using CUB with fallback to custom kernel

* Remove error message.

* Fixes from code review

* Comment out CPU-unsupported F16/BF16 cases to fix CI

* Fine, you win :P

* Fix last cast, use NO_DEVICE_CODE and GGML_UNUSED_VARS

* Vary warp-size based on physical warp size

* Add GGML_UNUSED_VARS in tri as well

* Use constexpr and call prefix_inclusive with warp_size template param

* Update ggml/src/ggml-cuda/cumsum.cu

Co-authored-by: Johannes Gäßler <redacted>
* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Change to tid % warp_size

* Fix strides; hardcode mask; add ggml_lane_mask_t

* Missing renames, remove unused get_warp_mask(), explicit calls to ggml_cuda_info()

* Too hasty...

---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago metal: TRI, FILL, EXPM1, SOFTPLUS (llama/16623)
Gabe Goodhart [Thu, 4 Dec 2025 17:12:19 +0000 (10:12 -0700)]
metal: TRI, FILL, EXPM1, SOFTPLUS (llama/16623)

* feat(wip): Port initial TRI impl from previous work

The kernel does not work and is not optimized, but the
code compiles and runs, so this will be the starting point
now that the core op has been merged.

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove argument for constant val override

This was added in the original draft, but later removed. With this, the
kernel now passes tests.

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* feat: Move the ttype conditional to templating to avoid conditional in kernel

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* fix: Type fixes

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Georgi Gerganov <redacted>
* feat: Add softplus for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add EXPM1 for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add FILL for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Branchless version of tri using _ggml_vec_tri_cmp as a mask

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove unused arguments

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use select instead of branch for softplus non-vec

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago ggml-cpu : remove asserts always evaluating to false (llama/17728)
Alberto Cabrera Pérez [Thu, 4 Dec 2025 12:16:38 +0000 (12:16 +0000)]
ggml-cpu : remove asserts always evaluating to false (llama/17728)

6 weeks ago metal : use params per pipeline instance (llama/17739)
Georgi Gerganov [Thu, 4 Dec 2025 08:34:11 +0000 (10:34 +0200)]
metal : use params per pipeline instance (llama/17739)

6 weeks ago build : move _WIN32_WINNT definition to headers (llama/17736)
Adrien Gallouët [Thu, 4 Dec 2025 06:04:02 +0000 (07:04 +0100)]
build : move _WIN32_WINNT definition to headers (llama/17736)

Previously, cmake was forcing `_WIN32_WINNT=0x0A00` for MinGW builds, which
caused "macro redefined" warnings with toolchains that already define the version.

This also removes the `GGML_WIN_VER` variable as it is no longer needed.
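
A minimal sketch of the header-side guard this implies (the value follows the
old cmake default; the exact header it lives in is an assumption):

```c++
// Define only when the toolchain has not already chosen a Windows version,
// which avoids the "macro redefined" warning described above.
#if defined(_WIN32) && !defined(_WIN32_WINNT)
#    define _WIN32_WINNT 0x0A00 // Windows 10
#endif
```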

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago ggml-cpu: remove duplicate conditional check 'iid' (llama/17650)
Herman Semenoff [Wed, 3 Dec 2025 21:03:19 +0000 (00:03 +0300)]
ggml-cpu: remove duplicate conditional check 'iid' (llama/17650)

6 weeks ago CUDA: generalized (mma) FA, add Volta support (llama/17505)
Johannes Gäßler [Wed, 3 Dec 2025 15:57:05 +0000 (16:57 +0100)]
CUDA: generalized (mma) FA, add Volta support (llama/17505)

* CUDA: generalized (mma) FA, add Volta support

* use struct for MMA FA kernel config

---------

Co-authored-by: Aman Gupta <aman>
6 weeks ago metal : fix data race in pipeline library (llama/17731)
Georgi Gerganov [Wed, 3 Dec 2025 12:03:40 +0000 (14:03 +0200)]
metal : fix data race in pipeline library (llama/17731)

6 weeks ago ggml webgpu: add support for emscripten builds (llama/17184)
Reese Levine [Wed, 3 Dec 2025 09:25:34 +0000 (01:25 -0800)]
ggml webgpu: add support for emscripten builds (llama/17184)

* Faster tensors (llama/8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings

* Wasm (llama/9)

* webgpu : fix build on emscripten

* more debugging stuff

* test-backend-ops: force single thread on wasm

* fix single-thread case for init_tensor_uniform

* use jspi

* add pthread

* test: remember to set n_thread for cpu backend

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Intermediate state

* Fast working f16/f32 vec4

* Working float fast mul mat

* Clean up naming of mul_mat to match logical model, start work on q mul_mat

* Setup for subgroup matrix mat mul

* Basic working subgroup matrix

* Working subgroup matrix tiling

* Handle weirder sg matrix sizes (but still % sg matrix size)

* Working start to gemv

* working f16 accumulation with shared memory staging

* Print out available subgroup matrix configurations

* Vectorize dst stores for sg matrix shader

* Gemv working scalar

* Minor set_rows optimization (llama/4)

* updated optimization, fixed errors

* non vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Comment on dawn toggles

* Working subgroup matrix code for (semi)generic sizes

* Remove some comments

* Cleanup code

* Update dawn version and move to portable subgroup size

* Try to fix new dawn release

* Update subgroup size comment

* Only check for subgroup matrix configs if they are supported

* Add toggles for subgroup matrix/f16 support on nvidia+vulkan

* Make row/col naming consistent

* Refactor shared memory loading

* Move sg matrix stores to correct file

* Working q4_0

* Formatting

* Work with emscripten builds

* Fix test-backend-ops emscripten for f16/quantized types

* Use emscripten memory64 to support get_memory

* Add build flags and try ci

---------

Co-authored-by: Xuan Son Nguyen <redacted>
* Remove extra whitespace

* Move wasm single-thread logic out of test-backend-ops for cpu backend

* Disable multiple threads for emscripten single-thread builds in ggml_graph_plan

* Fix .gitignore

* Add memory64 option and remove unneeded macros for setting threads to 1

---------

Co-authored-by: Xuan Son Nguyen <redacted>
6 weeks ago vulkan: Reduce temporary memory usage for TOP_K (llama/17623)
Jeff Bolz [Tue, 2 Dec 2025 18:22:04 +0000 (12:22 -0600)]
vulkan: Reduce temporary memory usage for TOP_K (llama/17623)

- Compute row size for the temp buffer based on the output of the first pass.
- Update shader addressing math to use the output row size
- Pass the output row size as "ncols_output"; what used to be "ncols_output" is now "k"

For the common case of K=40 and src0=(200000,1,1,1), this reduces the temporary buffer
from about 3.2MB to 500KB.
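
A back-of-the-envelope check, assuming each temporary element is a 16-byte
value/index record and a first pass whose workgroups each keep their top K=40
out of 256 sorted elements: before, 200000 * 16 B = 3.2 MB; after,
ceil(200000/256) * 40 * 16 B ≈ 500 KB.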

6 weeks ago cmake : add utf8 compilation options for msvc (llama/17682)
xiaobing318 [Tue, 2 Dec 2025 17:50:57 +0000 (01:50 +0800)]
cmake : add utf8 compilation options for msvc (llama/17682)

6 weeks ago ggml : use svcntb() for SVE vector length detection (llama/17474)
Adrien Gallouët [Tue, 2 Dec 2025 16:21:11 +0000 (17:21 +0100)]
ggml : use svcntb() for SVE vector length detection (llama/17474)

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago CANN: Disable Ger operator of OUT_PROD on 310p device (llama/17563)
TianHao324 [Tue, 2 Dec 2025 12:35:23 +0000 (20:35 +0800)]
CANN: Disable Ger operator of OUT_PROD on 310p device (llama/17563)

6 weeks ago ggml : remove redundant n_copies check when setting input/output (llama/17612)
Daniel Bevenius [Tue, 2 Dec 2025 11:52:45 +0000 (12:52 +0100)]
ggml : remove redundant n_copies check when setting input/output (llama/17612)

This commit removes a redundant check for sched->n_copies > 1 when
setting input and output flags on tensor copies in
ggml_backend_sched_split_graph.

The motivation for this change is to clarify the code as the outer if
statement already performs this check.

6 weeks ago ggml : add fallback definition for HWCAP2_SVE2 (llama/17683)
Adrien Gallouët [Tue, 2 Dec 2025 08:41:26 +0000 (09:41 +0100)]
ggml : add fallback definition for HWCAP2_SVE2 (llama/17683)

This aligns with the other HWCAP2 feature flags.

See #17528
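
A hedged sketch of such a fallback on Linux/arm64 (the bit position mirrors
the kernel's HWCAP2 layout, but treat the value here as an assumption):

```c++
#include <sys/auxv.h>

// Older libc headers may lack the flag; define it so runtime detection
// via getauxval() still compiles.
#ifndef HWCAP2_SVE2
#define HWCAP2_SVE2 (1 << 1)
#endif

static bool cpu_has_sve2(void) {
    return (getauxval(AT_HWCAP2) & HWCAP2_SVE2) != 0;
}
```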

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago ggml-cuda: reorder only relevant nodes (llama/17639)
Aman Gupta [Tue, 2 Dec 2025 04:36:31 +0000 (12:36 +0800)]
ggml-cuda: reorder only relevant nodes (llama/17639)

6 weeks ago enhance argsort for UT (llama/17573)
Neo Zhang Jianyu [Tue, 2 Dec 2025 00:56:46 +0000 (08:56 +0800)]
enhance argsort for UT (llama/17573)

Co-authored-by: Neo Zhang <redacted>
6 weeks ago metal : add FA head size 48 (llama/17619)
Georgi Gerganov [Mon, 1 Dec 2025 10:49:53 +0000 (12:49 +0200)]
metal : add FA head size 48 (llama/17619)

6 weeks ago ggml : extend the GGML_SCHED_NO_REALLOC debug logic of the scheduler (llama/17617)
Georgi Gerganov [Mon, 1 Dec 2025 10:49:33 +0000 (12:49 +0200)]
ggml : extend the GGML_SCHED_NO_REALLOC debug logic of the scheduler (llama/17617)

6 weeks ago llama-graph: avoid expand_forward for fusion (llama/17633)
Aman Gupta [Mon, 1 Dec 2025 09:12:48 +0000 (17:12 +0800)]
llama-graph: avoid expand_forward for fusion (llama/17633)

6 weeks ago model: LFM2-VL fixes (llama/17577)
Tarek Dakhran [Sun, 30 Nov 2025 20:57:31 +0000 (21:57 +0100)]
model: LFM2-VL fixes (llama/17577)

* Adjust to pytorch

* Add antialiasing upscale

* Increase number of patches to 1024

* Handle default marker insertion for LFM2

* Switch to flag

* Reformat

* Cuda implementation of antialias kernel

* Change placement in ops.cpp

* consistent float literals

* Pad only for LFM2

* Address PR feedback

* Rollback default marker placement changes

* Fallback to CPU implementation for antialias implementation of upscale

6 weeks ago ggml: fix: macOS build with `-DGGML_BACKEND_DL=ON` (llama/17581)
Gilad S. [Sun, 30 Nov 2025 02:00:59 +0000 (04:00 +0200)]
ggml: fix: macOS build with `-DGGML_BACKEND_DL=ON` (llama/17581)

6 weeks ago CUDA: add stream-based concurrency (llama/16991)
Aman Gupta [Sun, 30 Nov 2025 00:17:55 +0000 (08:17 +0800)]
CUDA: add stream-based concurrency (llama/16991)

* CUDA: add stream-based concurrency

* HIP: fix hipStreamWaitEvent define and nodiscard warnings

* ggml-cuda: fix fusion inside stream

* ggml-cuda: fix bug w.r.t first stream launch

* ggml-cuda: format

* ggml-cuda: improve assert message

* ggml-cuda: use lambda instead of duplicating code

* ggml-cuda: add some more comments

* ggml-cuda: add more detailed comments about concurrency

* ggml-cuda: rename + remove unused var

* ggml-cuda: fix condition for stream launch

* ggml-cuda: address review comments, add destructor

* common.cuh: add is_valid for concurrent events

* common.cuh: make comment better

* update comment

Co-authored-by: Johannes Gäßler <redacted>
* update comment

Co-authored-by: Johannes Gäßler <redacted>
* common.cuh: fix lower_bound condition + remove join_node data from write_ranges

* ggml-cuda: fix overlap condition + shadowing parameter

---------

Co-authored-by: Carl Philipp Klemm <redacted>
Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago cuda : add error checking for cudaMemcpyAsync in argsort (llama/17599)
Mahekk Shaikh [Sun, 30 Nov 2025 00:16:28 +0000 (19:16 -0500)]
cuda : add error checking for cudaMemcpyAsync in argsort (llama/17599)

* cuda : add error checking for cudaMemcpyAsync in argsort (llama/12836)

* fix indentation

6 weeks ago vulkan : fix FA mask load with bounds check (coopmat2) (llama/17606)
Acly [Sun, 30 Nov 2025 00:03:21 +0000 (01:03 +0100)]
vulkan : fix FA mask load with bounds check (coopmat2) (llama/17606)

6 weeks ago sycl : support to malloc memory on device more than 4GB, update the doc and script...
Neo Zhang [Sat, 29 Nov 2025 12:59:44 +0000 (20:59 +0800)]
sycl : support to malloc memory on device more than 4GB, update the doc and script (llama/17566)

Co-authored-by: Neo Zhang Jianyu <redacted>
6 weeks ago ggml: replace hwcap with riscv_hwprobe for RVV detection (llama/17567)
ixgbe [Sat, 29 Nov 2025 12:56:31 +0000 (20:56 +0800)]
ggml: replace hwcap with riscv_hwprobe for RVV detection (llama/17567)

Signed-off-by: Wang Yang <redacted>
6 weeks ago Vulkan: MMVQ Integer Dot K-Quant and MUL_MAT_ID support (llama/16900)
Ruben Ortlam [Sat, 29 Nov 2025 08:37:22 +0000 (09:37 +0100)]
Vulkan: MMVQ Integer Dot K-Quant and MUL_MAT_ID support (llama/16900)

* vulkan: split mul_mmq_funcs for mul_mat_vecq use

* add mxfp4 mmvq

* add q2_k mmvq

* add q3_k mmvq

* add q4_k and q5_k mmvq

* add q6_k mmvq

* handle 4x4 quants per mmvq thread

* enable MUL_MAT_ID mmvq support

* enable subgroup optimizations for mul_mat_vec_id shaders

* device tuning

* request prealloc_y sync after quantization

* fix indentation

* fix llvmpipe test failures

* fix mul_mat_id mmvq condition

* fix unused variable warning

6 weeks ago vulkan: improve topk perf for large k, fix overflow in unit tests (llama/17582)
Jeff Bolz [Sat, 29 Nov 2025 07:39:57 +0000 (01:39 -0600)]
vulkan: improve topk perf for large k, fix overflow in unit tests (llama/17582)

6 weeks ago ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sche...
Diego Devesa [Fri, 28 Nov 2025 15:33:23 +0000 (07:33 -0800)]
ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched (llama/17276)

* ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched
Enabled in ggml-ci for testing.

* llama : update worst-case graph for unified cache

* ci : disable op offload in some tests

* fix spelling

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago enable fp16/fast_fp16/bf16_mma on PH1 (llama/17551)
R0CKSTAR [Fri, 28 Nov 2025 13:08:29 +0000 (21:08 +0800)]
enable fp16/fast_fp16/bf16_mma on PH1 (llama/17551)

* [MUSA] enable fp16/fast_fp16/bf16_mma on PH1

Signed-off-by: Xiaodong Ye <redacted>
* Update ggml/src/ggml-cuda/fattn-vec.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Update ggml/src/ggml-cuda/fattn-vec.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Update ggml/src/ggml-cuda/fattn-tile.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago ggml-cuda: add stricter checking for fusion (llama/17568)
Aman Gupta [Fri, 28 Nov 2025 12:34:51 +0000 (20:34 +0800)]
ggml-cuda: add stricter checking for fusion (llama/17568)

* ggml-cuda: make conditions for fusion more explicit

* ggml-cuda: remove size check as std::equal already does it

6 weeks ago model : Qwen3 Next (llama/16095)
Piotr Wilkin (ilintar) [Fri, 28 Nov 2025 11:02:56 +0000 (12:02 +0100)]
model : Qwen3 Next (llama/16095)

* Qwen3 Next - cleaned up version

* Whitespaces and stuff

* Correct minor errors

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Misc. fixes.

* Clean up code, add missing hybrid qualifier

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Whitespace

* Proper tensors for cb calls

* Use llama-graph.h vertical alignment

* BROKEN: chunking

* Set new tensors as inputs.

* Proper chunk logic

* It's the circle of life...

* More shenanigans for n_seq > 1

* Nail in the coffin?

* Fix Windows build

* Eh, one fails on Windows, the other fails on Mac... just use general capture.

* quant : cleanup

* model : cleanup

* qwen3 : cleanup

* cont : cleanup

* cont : cleanup

* ggml : revert change

* qwen3 : cleanup

* cont : cleanup

* Readd cmath

* qwen3 : fix typo

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Usual suspects

* fix my bad suggestion

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago CUDA: no FP16 arithmetic for vector FA kernel (llama/17558)
Johannes Gäßler [Fri, 28 Nov 2025 09:29:09 +0000 (10:29 +0100)]
CUDA: no FP16 arithmetic for vector FA kernel (llama/17558)

6 weeks ago vulkan: Implement GGML_OP_TRI (llama/17503)
Jeff Bolz [Fri, 28 Nov 2025 09:07:29 +0000 (03:07 -0600)]
vulkan: Implement GGML_OP_TRI (llama/17503)

* vulkan: Implement GGML_OP_TRI

* check types match

6 weeks ago rpc : cache and reuse compute graphs (llama/15405)
Radoslav Gerganov [Fri, 28 Nov 2025 08:33:51 +0000 (10:33 +0200)]
rpc : cache and reuse compute graphs (llama/15405)

Store the last computed graph and reuse it when possible.
Also do not return a response from GRAPH_COMPUTE and assume it always
completes successfully. If this is not the case, the server closes
the connection. This saves us a network round trip to the server.
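
A minimal sketch of this fire-and-forget flow (the helper and function names
are stand-ins, not the rpc backend's actual API):

```c++
#include <cstdint>
#include <unistd.h>
#include <vector>

// Stand-in transport helper: write the whole buffer to the socket.
static void send_all(int fd, const void * data, size_t size) {
    const char * p = static_cast<const char *>(data);
    while (size > 0) {
        ssize_t n = write(fd, p, size);
        if (n <= 0) return; // connection was closed by the server
        p    += n;
        size -= (size_t) n;
    }
}

// Fire-and-forget GRAPH_COMPUTE: no reply is awaited, saving a round trip.
// A server-side failure closes the connection and surfaces on the next I/O.
static void rpc_graph_compute(int fd, const std::vector<uint8_t> & graph_msg) {
    send_all(fd, graph_msg.data(), graph_msg.size());
}
```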

6 weeks ago HIP: enable mul_mat_f for RDNA4 (llama/17437)
yulo [Fri, 28 Nov 2025 07:24:30 +0000 (15:24 +0800)]
HIP: enable mul_mat_f for RDNA4 (llama/17437)

* enable mmf for rdna4

* move some mmvf to mmf

* revert lds128 for wmma loading

* Revert "revert lds128 for wmma loading"

This reverts commit db9ae8b6b4738a5def5b393caa1611d52133e9b5.

* Revert "enable mmf for rdna4"

This reverts commit 698c9f24187b990e35c3b73a8067e5387e6ddbd4.

* Revert "move some mmvf to mmf"

This reverts commit 99b92bd6653cc8593607f641e44606391691792f.

* enable mul_mat for rdna4

---------

Co-authored-by: zhang hui <redacted>
6 weeks ago SOLVE_TRI CUDA kernel for small matrices (llama/17457)
Piotr Wilkin (ilintar) [Fri, 28 Nov 2025 04:15:32 +0000 (05:15 +0100)]
SOLVE_TRI CUDA kernel for small matrices (llama/17457)

6 weeks ago refactor pad_reflect_1d to make the UT case pass (llama/17204)
Neo Zhang Jianyu [Fri, 28 Nov 2025 00:50:56 +0000 (08:50 +0800)]
refactor pad_reflect_1d to make the UT case pass (llama/17204)

Co-authored-by: Zhang Jianyu <redacted>
6 weeks ago vulkan: Implement SOLVE_TRI (llama/17486)
Jeff Bolz [Thu, 27 Nov 2025 14:48:00 +0000 (08:48 -0600)]
vulkan: Implement SOLVE_TRI (llama/17486)

* vulkan: Implement SOLVE_TRI

* load B matrix through shared memory

* use FLOAT_TYPE

6 weeks ago cuda : fix UMA detection on discrete GPUs. (llama/17537)
matt23654 [Thu, 27 Nov 2025 11:35:35 +0000 (11:35 +0000)]
cuda : fix UMA detection on discrete GPUs. (llama/17537)

6 weeks ago ggml-cpu: aarm64: q4_K repack gemm and gemv implementations (dotprod only) (llama...
Alberto Cabrera Pérez [Thu, 27 Nov 2025 11:25:14 +0000 (11:25 +0000)]
ggml-cpu: aarm64: q4_K repack gemm and gemv implementations (dotprod only) (llama/17494)

* Enabled q4_K_4x8 path

* Fixed generic Q4_K 8x4 implementation

* wip: dotprod gemm

* Working arm q4_K dotprod gemm

Signed-off-by: Alberto Cabrera <redacted>
* Undo acc rename

Signed-off-by: Alberto Cabrera <redacted>
* Q4_K arm dotprod gemm

Signed-off-by: Alberto Cabrera <redacted>
* Fix: q4_qs reinterpret from uint to int

Signed-off-by: Alberto Cabrera <redacted>
* Removed comments

* Fixed macro guards

* Fixed unused vars in generic implementation

* Fixed unused vars in 8x4 repack

* Fixed unused vars in generic implementation, unneeded comment

* Missing arch fallback for x86

* minor : style

---------

Signed-off-by: Alberto Cabrera <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago vulkan : move contiguous checks to device_supports_op (llama/17490)
Acly [Thu, 27 Nov 2025 05:54:19 +0000 (06:54 +0100)]
vulkan : move contiguous checks to device_supports_op (llama/17490)

* vulkan : remove op_supports_incontiguous and add missing constraints in device_supports_op

* im2col: remove constraints on src0 (kernel input)

6 weeks ago vulkan: use a fixed 1KB buffer for the add_rms_fusion opt (llama/17514)
Jeff Bolz [Thu, 27 Nov 2025 05:32:30 +0000 (23:32 -0600)]
vulkan: use a fixed 1KB buffer for the add_rms_fusion opt (llama/17514)

6 weeks ago opencl: add sqr, sqrt, mean and ssm_conv (llama/17476)
lhez [Wed, 26 Nov 2025 21:29:58 +0000 (13:29 -0800)]
opencl: add sqr, sqrt, mean and ssm_conv (llama/17476)

* opencl: add sqr

* opencl: add sqrt

* opencl: add mean

* opencl: add ssm_conv

* opencl: add missing cl_khr_fp16

* opencl: do sqrt in f32 then convert to f16 for better precision

6 weeks ago Fix chunks being too small with small matrix sizes (llama/17526)
Alberto Cabrera Pérez [Wed, 26 Nov 2025 21:14:54 +0000 (21:14 +0000)]
Fix chunks being too small with small matrix sizes (llama/17526)

6 weeks ago vulkan: allow graph_optimize for prompt processing workloads (llama/17475)
Jeff Bolz [Wed, 26 Nov 2025 15:46:33 +0000 (09:46 -0600)]
vulkan: allow graph_optimize for prompt processing workloads (llama/17475)

6 weeks ago vulkan: Implement top-k (llama/17418)
Jeff Bolz [Wed, 26 Nov 2025 15:45:43 +0000 (09:45 -0600)]
vulkan: Implement top-k (llama/17418)

* vulkan: Implement top-k

Each pass launches workgroups that each sort 2^N elements (where N is usually 7-10)
and discards all but the top K. Repeat until only K are left. And there's a fast
path when K==1 to just find the max value rather than sorting.
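
As a CPU reference, a hedged sketch of that multi-pass reduction (the chunk
size of 256 is illustrative, not the shader's actual 2^N choice):

```c++
#include <algorithm>
#include <functional>
#include <vector>

// Repeatedly sort fixed-size chunks ("workgroups") and keep each chunk's top K
// until only K elements survive.
std::vector<float> top_k(std::vector<float> vals, size_t K, size_t chunk = 256) {
    while (vals.size() > K) {
        std::vector<float> next;
        for (size_t i = 0; i < vals.size(); i += chunk) {
            const size_t end = std::min(i + chunk, vals.size());
            std::sort(vals.begin() + i, vals.begin() + end, std::greater<float>());
            next.insert(next.end(), vals.begin() + i,
                        vals.begin() + i + std::min(K, end - i));
        }
        if (next.size() == vals.size()) break; // chunks can no longer shrink
        vals = std::move(next);
    }
    std::sort(vals.begin(), vals.end(), std::greater<float>());
    vals.resize(std::min(K, vals.size()));
    return vals;
}
```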

* fix pipeline selection

* vulkan: Add N-ary search algorithm for topk

* microoptimizations

6 weeks ago ggml-cpu : add RISC-V Zvfh impl for ggml_vec_mad_f16 (llama/17448)
xctan [Wed, 26 Nov 2025 13:33:05 +0000 (21:33 +0800)]
ggml-cpu : add RISC-V Zvfh impl for ggml_vec_mad_f16 (llama/17448)

* ggml-cpu : add RISC-V Zvfh impl for ggml_vec_mad_f16

* ggml-cpu : dedup scalar impl

* Update ggml/src/ggml-cpu/vec.h

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago ggml : fix ARM feature verification (llama/17519)
Adrien Gallouët [Wed, 26 Nov 2025 13:14:41 +0000 (14:14 +0100)]
ggml : fix ARM feature verification (llama/17519)

On arm64 with `cmake` version 3.31.6, the final feature verification fails:

    -- ARM detected flags: -mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs
    -- Performing Test GGML_MACHINE_SUPPORTS_dotprod
    -- Performing Test GGML_MACHINE_SUPPORTS_dotprod - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_i8mm
    -- Performing Test GGML_MACHINE_SUPPORTS_i8mm - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_sve
    -- Performing Test GGML_MACHINE_SUPPORTS_sve - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_sme
    -- Performing Test GGML_MACHINE_SUPPORTS_sme - Failed
    -- Performing Test GGML_MACHINE_SUPPORTS_nosme
    -- Performing Test GGML_MACHINE_SUPPORTS_nosme - Success
    -- Checking for ARM features using flags:
    --   -U__ARM_FEATURE_SME
    --   -mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs+dotprod+i8mm+sve+nosme
    -- Performing Test HAVE_DOTPROD
    -- Performing Test HAVE_DOTPROD - Failed
    -- Performing Test HAVE_SVE
    -- Performing Test HAVE_SVE - Failed
    -- Performing Test HAVE_MATMUL_INT8
    -- Performing Test HAVE_MATMUL_INT8 - Failed
    -- Performing Test HAVE_FMA
    -- Performing Test HAVE_FMA - Success
    -- Performing Test HAVE_FP16_VECTOR_ARITHMETIC
    -- Performing Test HAVE_FP16_VECTOR_ARITHMETIC - Failed
    -- Performing Test HAVE_SME
    -- Performing Test HAVE_SME - Failed
    -- Adding CPU backend variant ggml-cpu: -U__ARM_FEATURE_SME;-mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs+dotprod+i8mm+sve+nosme

We need to explicitly replace `;` with spaces in the list to make
`CMAKE_REQUIRED_FLAGS` work correctly.

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago HIP: Patch failed testcase in WMMA-MMQ kernels for RDNA 4 (llama/17502)
Jiacheng (Jason) Chen [Wed, 26 Nov 2025 10:18:48 +0000 (05:18 -0500)]
HIP: Patch failed testcase in WMMA-MMQ kernels for RDNA 4 (llama/17502)

* patch failed test case MUL_MAT(type_a=q4_0,type_b=f32,m=576,n=512,k=576,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1) for enabling WMMA on RDNA4

* Quick clean up on mma.cuh to add ggml_cuda_memcpy_1 back in for half2 and bfloat162

6 weeks ago CANN: Add MROPE and IMROPE support (llama/17401)
hipudding [Wed, 26 Nov 2025 08:44:19 +0000 (16:44 +0800)]
CANN: Add MROPE and IMROPE support (llama/17401)

* CANN: ROPE supports both MROPE and IMROPE.

1. Optimize the caching logic of rope_cache_init.
2. Add support for mRoPE and i-mRoPE.

Note that on Ascend 910B devices, it is necessary to disable FA
in CLIP and disable NZ-format conversion. These two issues are
still under investigation.

* Resolve review comments

6 weeks ago vulkan: Implement GGML_OP_CUMSUM (llama/17479)
Jeff Bolz [Wed, 26 Nov 2025 06:08:10 +0000 (00:08 -0600)]
vulkan: Implement GGML_OP_CUMSUM (llama/17479)

6 weeks ago ggml : add ggml_top_k (llama/17365)
Georgi Gerganov [Tue, 25 Nov 2025 13:31:43 +0000 (15:31 +0200)]
ggml : add ggml_top_k (llama/17365)

* ggml : add ggml_top_k

* cont : add ggml_argsort_top_k

* metal : add top_k support

* ggml : cleanup

* tests : add virtual err() function for test_case

* ggml : add comments

6 weeks ago CANN: supports out_prod operator for F32 and F16 (llama/17406)
TianHao324 [Tue, 25 Nov 2025 09:39:06 +0000 (17:39 +0800)]
CANN: supports out_prod operator for F32 and F16 (llama/17406)

Co-authored-by: tianhao <redacted>
6 weeks ago vulkan: Use fewer rows for scalar FA when HS is not a multiple of 16 (llama/17455)
Jeff Bolz [Tue, 25 Nov 2025 06:11:27 +0000 (00:11 -0600)]
vulkan: Use fewer rows for scalar FA when HS is not a multiple of 16 (llama/17455)

6 weeks ago vulkan: more FA details in vk_perf_logger (llama/17443)
Jeff Bolz [Mon, 24 Nov 2025 21:25:24 +0000 (15:25 -0600)]
vulkan: more FA details in vk_perf_logger (llama/17443)

6 weeks ago HIP: WMMA-MMQ kernels for RDNA 4 (llama/17156)
Jiacheng (Jason) Chen [Mon, 24 Nov 2025 19:00:10 +0000 (14:00 -0500)]
HIP: WMMA-MMQ kernels for RDNA 4 (llama/17156)

* first commit naive test to enable mmq for RDNA4

* adding appropriate WMMA instructions

* git rebase on top of master: fixing the correctness of the mat mul operations, updating layout mappings for RDNA4

* clean up merge conflicts

* add comments and code clean up

* PR clean up, addressed comments

* enable MMQ fallback on RDNA4

* addressed comments: add guards in load generic, separate wmma branch for use_mmq function

* Revert build-xcframework.sh

* Formatting: remove trailing whitespace

* revert CMake files

* clean up after rebase: remove duplicated change, revert cmake files

* clean up after rebase: revert changes from build-xcframework.sh

* clean up: remove extra space line in mma.cuh

* Revert "clean up: remove extra space line in mma.cuh"

This reverts commit b39ed57c4529906466bd0bc7c2a86e08fc2f8bee.

6 weeks ago ggml-cpu: arm64: q4_K repack gemm and gemv implementations (i8mm) (llama/16739)
Alberto Cabrera Pérez [Mon, 24 Nov 2025 11:08:11 +0000 (11:08 +0000)]
ggml-cpu: arm64: q4_K repack gemm and gemv implementations (i8mm) (llama/16739)

* Enabled q4_K_8x8_q8_K path on ARM

* wip: I8mm qs multiplication, pending bias

* cpu : arm : REPACK gemm q4_K8x8 implementation

Signed-off-by: Alberto Cabrera <redacted>
* Guard gemm with proper features, improved superblock scale and min calc

Signed-off-by: Alberto Cabrera <redacted>
* cpu: arm: Implemented REPACK gemv for Q4_K

Signed-off-by: Alberto Cabrera <redacted>
* Removed completed TODO

* Fixed missing guards when selecting optimal repack type for Q4_K

Signed-off-by: Alberto Cabrera <redacted>
* Fixed macro guard for gemv

* Fixed wrong comment in GEMV

* Fixed warning for unused variable

* vdotq_s32 -> ggml_vdotq_s32

Signed-off-by: Alberto Cabrera <redacted>
* Clang-format issues

* Apply suggestions from code review

Co-authored-by: Diego Devesa <redacted>
* Removed unnecessary GGML_UNUSED

* Fixed guards in q4_k gemm and gemv (repack)

---------

Signed-off-by: Alberto Cabrera <redacted>
Co-authored-by: Diego Devesa <redacted>
6 weeks ago ggml: add RISC-V cpu-feats (llama/17461)
ixgbe [Mon, 24 Nov 2025 11:07:14 +0000 (19:07 +0800)]
ggml: add RISC-V cpu-feats (llama/17461)

* ggml: add RISC-V cpu-feats

Signed-off-by: Wang Yang <redacted>
* fix comment[1]

---------

Signed-off-by: Wang Yang <redacted>
6 weeks ago hexagon: add support for ROPE_NEOX (llama/17458)
Max Krasnyansky [Mon, 24 Nov 2025 02:55:56 +0000 (18:55 -0800)]
hexagon: add support for ROPE_NEOX (llama/17458)

6 weeks ago CANN: Define `cann_graph_update_required` before macro (llama/17434)
Raul Torres [Mon, 24 Nov 2025 02:02:52 +0000 (02:02 +0000)]
CANN: Define `cann_graph_update_required` before macro (llama/17434)

**Description of the problem**

`cann_graph_update_required` is redundantly defined and
initialized as `false` inside two mutually exclusive macro branches.

**Proposed solution**

Define it right before the macro so that it can serve both
branches.
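
A hedged sketch of the proposed layout (the macro name and branch bodies are
illustrative stand-ins for the CANN backend's actual code):

```c++
// Single definition, visible to both mutually exclusive branches below.
bool cann_graph_update_required = false;

#ifdef USE_ACL_GRAPH
// graph mode: decide whether the captured graph must be refreshed, e.g.
// cann_graph_update_required = !graph_matches(cached, current);
#else
// eager mode: the flag simply stays false
#endif
```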

6 weeks ago ggml-hexagon: Initial Hexagon v68/v69 support (llama/17394)
M. Mediouni [Mon, 24 Nov 2025 00:54:49 +0000 (01:54 +0100)]
ggml-hexagon: Initial Hexagon v68/v69 support (llama/17394)

* ggml-hexagon: fix build error with GCC

Add stdexcept include to fix GCC build errors

Signed-off-by: Mohamed Mediouni <redacted>
* ggml-hexagon: check VTCM acquire failures

Signed-off-by: Mohamed Mediouni <redacted>
* ggml-hexagon: disable destination bypass on targets older than v73

v68 errors out if bypass is enabled when the VTCM is the destination.

At least on v68 this made things actually work... not a proper fix though, so something to look at later...

Signed-off-by: Mohamed Mediouni <redacted>
* ggml-hexagon: add initial v68/v69 support

v68 is the Hexagon revision notably used on the Snapdragon 8cx
Gen 3 and the QCM6490.

Also add support for v69.

8MB isn't a supported page size, so relax the requested page size constraint
for HAP_compute_res_attr_set_vtcm_param_v2 to optimal.

Signed-off-by: Mohamed Mediouni <redacted>
---------

Signed-off-by: Mohamed Mediouni <redacted>
6 weeks ago ggml-hexagon: add `hex_supported_buffer` for better buffer supported check (llama...
nullname [Sun, 23 Nov 2025 22:26:36 +0000 (06:26 +0800)]
ggml-hexagon: add `hex_supported_buffer` for better buffer supported check (llama/17212)

* hexagon: add buffer support checks for hexagon sessions

* refactor: simplify buffer support checks in hexagon operations

* hexagon: update buffer support checks to use tensor structure

* refactor: streamline buffer initialization for DSP queue in hexagon operations

* refactor: simplify buffer initialization in DSP queue for hexagon operations

* refactor: optimize hex_supported_buffer function by fold expression

* wip

* refactor: simplify dspqueue_buffers_init function and its usage in hexagon operations

* fix: improve nan handling at hvx_vec_fast_sigmoid_fp32_guard

* refactor: optimize hvx_vec_inverse_fp32_guard for better nan handling

* refactor: update hvx_vec_fast_sigmoid_fp32_guard to use adjusted exponent limits

* refactor: modify hvx_vec_fast_sigmoid_fp32_guard to accept parameters for improved flexibility

* refactor: update hvx_vec_exp_fp32_guard to accept max_exp and inf parameters to save some instructions

* refactor: move hvx_vec_inverse_fp32_guard implementation to hvx-inverse.c for better perf

6 weeks ago cuda : support non-contiguous i32 to i32 copy (llama/17326)
Sigbjørn Skjæret [Sun, 23 Nov 2025 10:13:34 +0000 (11:13 +0100)]
cuda : support non-contiguous i32 to i32 copy (llama/17326)

* support non-contiguous i32 to i32 copy

* add tests

* rename cpy_flt to cpy_scalar and reindent params

6 weeks ago vulkan: remove a couple unnecessary switches (llama/17419)
Jeff Bolz [Sun, 23 Nov 2025 05:29:40 +0000 (23:29 -0600)]
vulkan: remove a couple unnecessary switches (llama/17419)

6 weeks ago Revive MUL_MAT_ID to perf testing (llama/17397)
Masato Nakasaka [Sat, 22 Nov 2025 09:55:43 +0000 (18:55 +0900)]
Revive MUL_MAT_ID to perf testing (llama/17397)

6 weeks ago HIP: RDNA4 tensor core support for MMF (llama/17077)
yulo [Fri, 21 Nov 2025 23:03:24 +0000 (07:03 +0800)]
HIP: RDNA4 tensor core support for MMF (llama/17077)

* mmf for rdna4

* align the padding for rdna4

* forbid mul_mat_f for rdna4

* fix as comment

* remove device kernels

* add constexpr for early return

* update based on review comment

* change based on the review comment

* pass compile error

* keep code consistency

---------

Co-authored-by: zhang hui <redacted>
6 weeks ago opencl: refine condition for kqv mm (llama/17392)
lhez [Fri, 21 Nov 2025 22:34:48 +0000 (14:34 -0800)]
opencl: refine condition for kqv mm (llama/17392)

6 weeks ago vulkan: disable async for older Intel devices (llama/17369)
Jeff Bolz [Fri, 21 Nov 2025 08:58:17 +0000 (02:58 -0600)]
vulkan: disable async for older Intel devices (llama/17369)

* vulkan: disable async for older Intel devices

* update detection logic

* use name string for detection

6 weeks ago CANN: Refactor `evaluate_and_capture_cann_graph` (llama/17333)
Raul Torres [Fri, 21 Nov 2025 08:23:29 +0000 (08:23 +0000)]
CANN: Refactor `evaluate_and_capture_cann_graph` (llama/17333)

* CANN: Refactor `evaluate_and_capture_cann_graph`

**Description of the problem**

* `matched_graph` is obtained even if graph mode is disabled.
* End of graph capture and graph replay are unnecessarily placed in different `if` blocks.

**Proposed solution**

* Obtain `matched_graph` only if graph mode is enabled.
* Place end of graph capture and graph replay inside the same `if` block.
* Unify graph related comments.

* Remove trailing whitespace

6 weeks ago ggml-hexagon: fix swiglu failure at `test-backend-ops` (llama/17344)
nullname [Thu, 20 Nov 2025 23:45:05 +0000 (07:45 +0800)]
ggml-hexagon: fix swiglu failure at `test-backend-ops` (llama/17344)

* refactor: use hvx_vec_exp_fp32_guard_inf for overflow handling in hvx_exp_f32

* feat: add fast sigmoid function with overflow guard for fp32

* refactor: replace hvx_vec_inverse_fp32 with hvx_vec_inverse_fp32_guard_inf for improved overflow handling

* feat: enhance hvx_add_scalar_f32 with overflow handling using infinity guard

* wip

* add HVX_Vector_Alias

wip

* wip

* fix: improve handling of src1 tensor in glu_swiglu_fp32_per_thread function

* fix nc

* wip

* wip

* handle nan at inverse

* wip

* fix neg

* wip

* rename

* fix hvx_vec_inverse_fp32_guard_inf to handle infinity and NaN cases correctly

* wip

* fix hvx_vec_inverse_fp32_guard_inf to handle NaN cases correctly

* wip

* wip

* wip

* fix output sign

6 weeks ago ggml : Fix transposed SOLVE_TRI result (llama/17323)
Piotr Wilkin (ilintar) [Thu, 20 Nov 2025 10:58:21 +0000 (11:58 +0100)]
ggml : Fix transposed SOLVE_TRI result (llama/17323)

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago DGX Spark: UMA support (llama/17368)
Scott Fudally [Thu, 20 Nov 2025 10:32:02 +0000 (02:32 -0800)]
DGX Spark: UMA support (llama/17368)

* DGX Spark: UMA support

* Updates from PR feedback

* More PR feedback cleanup

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Georgi Gerganov <redacted>
* Remove trailing whitespace

* Update ggml/src/ggml-cuda/ggml-cuda.cu

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago ggml : remove useless and error-prone variadic macros (llama/17399)
Adrien Gallouët [Thu, 20 Nov 2025 10:18:27 +0000 (11:18 +0100)]
ggml : remove useless and error-prone variadic macros (llama/17399)

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago kleidiai: fix zero-size array declaration (llama/17240)
sudhiarm [Thu, 20 Nov 2025 09:45:49 +0000 (09:45 +0000)]
kleidiai: fix zero-size array declaration (llama/17240)

6 weeks ago ggml-cpu: add RISC-V RVV (Zvfh) optimization for FP16 vector scaling (llama/17314)
ixgbe [Thu, 20 Nov 2025 06:09:18 +0000 (14:09 +0800)]
ggml-cpu: add RISC-V RVV (Zvfh) optimization for FP16 vector scaling (llama/17314)

* ggml-cpu: add RISC-V RVV (Zvfh) optimization for FP16 vector scaling

Signed-off-by: Wang Yang <redacted>
* fix comment

* fix comment 2

---------

Signed-off-by: Wang Yang <redacted>
6 weeks ago vulkan: implement ADD1, ARANGE, FILL, SOFTPLUS, STEP, ROUND, CEIL, FLOOR, TRUNC ...
Giuseppe Scrivano [Wed, 19 Nov 2025 16:29:45 +0000 (17:29 +0100)]
vulkan: implement ADD1, ARANGE, FILL, SOFTPLUS, STEP, ROUND, CEIL, FLOOR, TRUNC (llama/17319)

* vulkan: initialize array

* vulkan: implement ADD1

* vulkan: implement ARANGE

* vulkan: implement FILL

* vulkan: implement SOFTPLUS

* vulkan: implement STEP

* vulkan: implement ROUND

* vulkan: implement CEIL

* vulkan: implement FLOOR

* vulkan: implement TRUNC

* docs: update Vulkan ops

Signed-off-by: Giuseppe Scrivano <redacted>
6 weeks ago vulkan: support larger argsort (llama/17313)
Jeff Bolz [Wed, 19 Nov 2025 16:25:50 +0000 (10:25 -0600)]
vulkan: support larger argsort (llama/17313)

* vulkan: support larger argsort

This is an extension of the original bitonic sorting shader that puts the
temporary values in global memory; when more than 1024 threads are needed
it runs multiple workgroups and synchronizes through a pipeline barrier.

To improve the memory access pattern, a copy of the float value is kept with
the index value. I've applied this same change to the original shared memory
version of the shader, which is still used when ncols <= 1024.
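
As a CPU reference for the pairing trick, a hedged sketch (the layout is
illustrative; the shader performs the same exchanges across threads):

```c++
#include <utility>
#include <vector>

// The value travels next to its index, so a compare-swap reads one location
// instead of chasing the index back into the source row.
struct sort_elem { float val; int idx; };

// Reference bitonic network over n = 2^k elements, which the shader maps onto
// workgroups with pipeline barriers between global-memory passes.
void bitonic_sort(std::vector<sort_elem> & a) {
    const size_t n = a.size(); // must be a power of two
    for (size_t k = 2; k <= n; k <<= 1) {
        for (size_t j = k >> 1; j > 0; j >>= 1) {
            for (size_t i = 0; i < n; ++i) {
                const size_t l = i ^ j;
                if (l > i) {
                    const bool up = (i & k) == 0; // sort direction of this block
                    if ((a[i].val > a[l].val) == up) std::swap(a[i], a[l]);
                }
            }
        }
    }
}
```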

* Reduce the number of shader variants. Use smaller workgroups when doing a single pass, for a modest perf boost

* reduce loop overhead

* run multiple cols per invocation, to reduce barrier overhead

6 weeks ago vulkan: Add copy_transpose shader (llama/17371)
Jeff Bolz [Wed, 19 Nov 2025 15:50:43 +0000 (09:50 -0600)]
vulkan: Add copy_transpose shader (llama/17371)

6 weeks ago cuda: fix rope fusion for gemma3 (llama/17378)
Aman Gupta [Wed, 19 Nov 2025 10:25:05 +0000 (18:25 +0800)]
cuda: fix rope fusion for gemma3 (llama/17378)

6 weeks ago Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition (llama/17332)
Piotr Wilkin (ilintar) [Wed, 19 Nov 2025 09:36:33 +0000 (10:36 +0100)]
Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition (llama/17332)

* Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition

* Argh.

* Making CISC happy ;)

* Integrate CONT tests

* Use loopy loop

* Skip new tests for (B)F16 for now.

6 weeks ago vulkan: force full subgroups for flash attention to fix intel subgroup crash (llama...
Ruben Ortlam [Wed, 19 Nov 2025 07:46:26 +0000 (08:46 +0100)]
vulkan: force full subgroups for flash attention to fix intel subgroup crash (llama/17356)

6 weeks ago ggml-cpu: Don't pass -mpowerpc64 when -mcpu already implies it (llama/17308)
Jeremy Rand [Wed, 19 Nov 2025 06:19:00 +0000 (06:19 +0000)]
ggml-cpu: Don't pass -mpowerpc64 when -mcpu already implies it (llama/17308)

6 weeks ago CANN: fix acl_tensor_ptr usage in ASCEND_310P ROPE (llama/17347)
Chenguang Li [Tue, 18 Nov 2025 08:41:52 +0000 (16:41 +0800)]
CANN: fix acl_tensor_ptr usage in ASCEND_310P ROPE (llama/17347)

* cann: fix acl_tensor_ptr usage in ASCEND_310P ROPE implementation

Fix compilation errors in the ASCEND_310P-specific ROPE operation code
by adding .get() calls when passing acl_tensor_ptr smart pointers to
functions expecting raw aclTensor* pointers.

This fixes the code that was missed in the previous refactoring commit
(8981848) which changed ggml_cann_create_tensor() return type from
aclTensor* to acl_tensor_ptr.
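
A minimal sketch of the pattern (the types and op function are stand-ins,
assuming acl_tensor_ptr behaves like a standard smart pointer):

```c++
#include <memory>

struct aclTensor;                                   // opaque ACL handle (stand-in)
void acl_rope_compute(aclTensor * t) { (void) t; }  // hypothetical raw-pointer API

using acl_tensor_ptr = std::shared_ptr<aclTensor>;

void example(const acl_tensor_ptr & src) {
    // The smart pointer does not convert implicitly; .get() bridges to the
    // raw aclTensor* that the ACL functions expect.
    acl_rope_compute(src.get());
}
```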

* cann: format code

6 weeks ago vulkan: support noncontig i32 copy (llama/17328)
Jeff Bolz [Tue, 18 Nov 2025 06:41:24 +0000 (00:41 -0600)]
vulkan: support noncontig i32 copy (llama/17328)

6 weeks ago vulkan: add log RTE support to fix Nvidia CI (llama/17320)
Ruben Ortlam [Mon, 17 Nov 2025 20:37:49 +0000 (21:37 +0100)]
vulkan: add log RTE support to fix Nvidia CI (llama/17320)

* vulkan: add log RTE support to fix Nvidia CI

* actually use the rte shader

6 weeks ago cmake : fix ARM feature verification (llama/17170)
Adrien Gallouët [Mon, 17 Nov 2025 20:37:29 +0000 (21:37 +0100)]
cmake : fix ARM feature verification (llama/17170)

* cmake : fix ARM feature verification

Use check_cxx_source_compiles to prevent conflicts with
the existing GGML_NATIVE detection code.

Signed-off-by: Adrien Gallouët <redacted>
* cmake : unset __ARM_FEATURE when feature is disabled

Signed-off-by: Adrien Gallouët <redacted>
* cmake : fix scope, this is really a macro

Signed-off-by: Adrien Gallouët <redacted>
* arm_neon.h is useless

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago ggml : add missing AVX512 feature checks (llama/17270)
Adrien Gallouët [Mon, 17 Nov 2025 11:12:00 +0000 (12:12 +0100)]
ggml : add missing AVX512 feature checks (llama/17270)

_mm512_cvtepu8_epi16        requires  __AVX512BW__
_mm512_srli_epi16           requires  __AVX512BW__
__builtin_ia32_inserti32x8  requires  __AVX512DQ__
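
A hedged sketch of the guard pattern the fix implies (the helper name is
illustrative; the intrinsic/feature pairing follows the list above):

```c++
#include <immintrin.h>

#if defined(__AVX512F__) && defined(__AVX512BW__)
// _mm512_cvtepu8_epi16 needs AVX512BW, so checking __AVX512F__ alone is not enough.
static inline __m512i widen_u8_to_u16(__m256i bytes) {
    return _mm512_cvtepu8_epi16(bytes);
}
#endif
```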

Signed-off-by: Adrien Gallouët <redacted>
2 months ago ggml : remove dirty flag from version string (#1391)
Daniel Bevenius [Mon, 24 Nov 2025 11:51:50 +0000 (12:51 +0100)]
ggml : remove dirty flag from version string (#1391)

This commit removes the "-dirty" suffix from the GGML version string.

The motivation for this change is to ensure that the version string
works with different ways of checking out ggml and using it in projects.
By removing the dirty flag from the version string, we avoid potential
artifacts like shared libraries getting a -dirty suffix in their names.

Instead, if the project is built from a dirty git state, the dirty flag
will be appended to the commit hash in the GGML_BUILD_COMMIT variable.
This will enable users to still identify that the build was made
from a modified/dirty state even though the version might match a "real"
version.

For example, the commit can be printed as follows:
```c++
    printf("commit: %s\n", ggml_commit());
```
Which would print the following for a dirty build:
```console
commit: 781baf2a-dirty
```

Refs: https://github.com/ggml-org/ggml/pull/1363#issuecomment-3569691546

2 months ago sync : whisper.cpp
Georgi Gerganov [Thu, 20 Nov 2025 12:07:49 +0000 (14:07 +0200)]
sync : whisper.cpp

2 months ago metal : fix compile on macos 11 (whisper/3533)
YangLe [Thu, 20 Nov 2025 11:54:54 +0000 (19:54 +0800)]
metal : fix compile on macos 11 (whisper/3533)

2 months ago sync : llama.cpp
Georgi Gerganov [Mon, 17 Nov 2025 10:05:16 +0000 (12:05 +0200)]
sync : llama.cpp

2 months ago metal : support I32 -> I32 copy (llama/17317)
Georgi Gerganov [Mon, 17 Nov 2025 09:52:00 +0000 (11:52 +0200)]
metal : support I32 -> I32 copy (llama/17317)

2 months ago metal : faster argsort (llama/17315)
Georgi Gerganov [Mon, 17 Nov 2025 09:51:48 +0000 (11:51 +0200)]
metal : faster argsort (llama/17315)

* metal : faster argsort

* cont : keep data in registers