git.djapps.eu Git - pkg/ggml/sources/ggml/log
Raul Torres [Tue, 10 Feb 2026 06:19:30 +0000 (06:19 +0000)]
CANN: Remove unnecessary wrapper for `ggml_backend_buft_is_cann` (llama/18968)
hipudding [Tue, 10 Feb 2026 06:18:59 +0000 (14:18 +0800)]
CANN: implement quantized MUL_MAT_ID for MoE models (llama/19228)
Implement ggml_cann_mul_mat_id_quant function to support quantized matrix
multiplication for Mixture of Experts (MoE) architectures on CANN backend.
Key features:
- Support Q4_0 and Q8_0 quantized weight formats
- Use IndexSelect to dynamically route expert-specific weights based on indices
- Leverage WeightQuantBatchMatmulV2 for efficient quantized computation
- Handle automatic F16 type conversion for hardware compatibility
- Support both per-expert and broadcast input modes
Implementation details:
- Extract expert weights and scales using CANN IndexSelect operation
- Process each batch and expert combination independently
- Create proper tensor views with correct stride for matmul operations
- Automatic input/output type casting to/from F16 as needed
Testing: All test cases passed for supported types (F32, F16, Q4_0, Q8_0).
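The routing step described above can be sketched in plain Python. This is a minimal, hypothetical illustration of the MUL_MAT_ID idea (gather each token's expert weights by index, then run an independent mat-vec per token), not the CANN implementation; all names here are made up.

```python
# Hypothetical sketch of MUL_MAT_ID-style expert routing: index-select the
# weight matrix for each token's chosen expert, then do one mat-vec per token.

def mul_mat_id(experts, ids, x):
    """experts: list of weight matrices (rows x cols),
    ids: selected expert index per token, x: one input vector per token."""
    out = []
    for token, eid in enumerate(ids):
        w = experts[eid]                      # index-select the expert's weights
        v = x[token]
        out.append([sum(w[r][c] * v[c] for c in range(len(v)))
                    for r in range(len(w))])  # per-token mat-vec
    return out

experts = [[[1, 0], [0, 1]],                  # expert 0: identity
           [[2, 0], [0, 2]]]                  # expert 1: scale by 2
print(mul_mat_id(experts, [0, 1], [[3, 4], [3, 4]]))  # → [[3, 4], [6, 8]]
```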
Georgi Gerganov [Tue, 10 Feb 2026 06:07:16 +0000 (08:07 +0200)]
cuda : extend GGML_OP_PAD to work with non-cont src0 (llama/19429)
* cuda : extend GGML_OP_PAD to work with non-cont src0
* tests : add permuted pad
Oliver Simons [Sun, 8 Feb 2026 13:12:51 +0000 (14:12 +0100)]
CUDA: Fix non-contig rope (llama/19338)
* Rename variables + fix rope_neox
Seems the memory layout is shared with Vulkan, so we can port the fix from
https://github.com/ggml-org/llama.cpp/pull/19299
* Fix rope_multi
* Fix rope_vision
* Fix rope_norm
* Rename ne* to ne0* for consistent variable naming
* cont : consistent stride names
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sat, 7 Feb 2026 08:36:51 +0000 (10:36 +0200)]
sync : llama.cpp
Georgi Gerganov [Sat, 7 Feb 2026 08:35:56 +0000 (10:35 +0200)]
metal : consolidate bin kernels (llama/19390)
* metal : refactor bin kernels
* cont
* cont : fix cv
Georgi Gerganov [Sat, 7 Feb 2026 05:37:15 +0000 (07:37 +0200)]
metal : fix event synchronization in cpy_tensor_async (llama/19402)
Abhijit Ramesh [Fri, 6 Feb 2026 18:33:30 +0000 (10:33 -0800)]
ggml-webgpu: JIT compile binary operators and handle binding overlaps (llama/19310)
* ggml webgpu: port binary operators to use pre-wgsl
* Add binary.wgsl: unified shader with conditionals for all 4 ops
* Add gen_binary_shaders.cpp: build tool for using pre_wgsl preprocessor
* Remove bin_op.tmpl.wgsl and binary.wgsl (Python template)
* Update CMake to generate binary operator shaders at build time
* ggml-webgpu: migrate binary ops to JIT compilation with overlap handling
* port binary operators from AOT to pre-wgsl JIT compilation
* add src1=dst overlap handling for binary ops
* use compile-time workgroup size defines instead of runtime overrides
* ggml-webgpu: complete overlap handling for binary ops
* add support for inplace & overlap case in binding setup
* restructure conditional logic to handle all overlap cases
* ensure all buffer bindings are correctly assigned for edge cases
* ggml-webgpu: remove unused binary overlap cases
Remove src0==src1 binary overlap case that never occurs in practice.
* keep INPLACE (src0==dst), OVERLAP (src1==dst), DEFAULT
* remove unused src0==src1 and all-same variant
* refactor wgsl to eliminate duplication
Georgi Gerganov [Sat, 7 Feb 2026 05:38:05 +0000 (07:38 +0200)]
sync : llama.cpp
Nechama Krashinski [Fri, 6 Feb 2026 15:13:44 +0000 (17:13 +0200)]
sycl: add F16 support for GGML_OP_CEIL (llama/19306)
* Fix SYCL CEIL operator
* sycl: implement GGML_OP_CEIL
Jeff Bolz [Fri, 6 Feb 2026 14:50:30 +0000 (08:50 -0600)]
tests: reduce number of FA test permutations (llama/19381)
Only test non-F16 for head sizes 64 and 72 (one a multiple of QK, one not).
Jeff Bolz [Fri, 6 Feb 2026 08:15:13 +0000 (02:15 -0600)]
vulkan: For coopmat2 FA, use fp16 accumulators for the final result (llama/19376)
The CPU and CUDA backends use fp16 for the VKQ accumulator type; this change
does the same for Vulkan. This helps particularly with large head sizes, which
are very register-limited.
I tried this for the coopmat1 path and it slowed down a bit. I didn't try for
scalar.
I applied the softmax bias that the cuda backend uses to avoid overflow,
although I was not able to reproduce the original bug without it.
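The "softmax bias" mentioned above is, as far as the log describes it, the standard max-subtraction trick: keeping every exponential at or below 1 so an fp16 accumulator cannot overflow. A minimal sketch of that idea (not the actual kernel):

```python
import math

# Sketch of the max-subtraction ("bias") trick: subtracting the row max keeps
# every exp() in (0, 1], so a low-precision accumulator cannot overflow.

def stable_softmax(scores):
    m = max(scores)                              # per-row maximum
    exps = [math.exp(s - m) for s in scores]     # all values in (0, 1]
    total = sum(exps)
    return [e / total for e in exps]

# Naive exp(1000.0) would overflow even in fp64; this stays finite.
print(stable_softmax([1000.0, 1001.0, 1002.0]))
```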
Jeff Bolz [Fri, 6 Feb 2026 07:49:58 +0000 (01:49 -0600)]
vulkan: make FA mask/softcap enables spec constants (llama/19309)
* vulkan: make FA mask/softcap enables spec constants
* don't specialize for sinks
* bump timeout a little bit
Georgi Gerganov [Fri, 6 Feb 2026 07:25:11 +0000 (09:25 +0200)]
metal : skip loading all-zero mask (llama/19337)
* metal : skip loading all-zero mask
* cont : minor
Georgi Gerganov [Fri, 6 Feb 2026 05:55:06 +0000 (07:55 +0200)]
cuda : cuda graphs now compare all node params (llama/19383)
Georgi Gerganov [Thu, 5 Feb 2026 17:07:22 +0000 (19:07 +0200)]
metal : adaptive CPU/GPU interleave based on number of nodes (llama/19369)
Jeff Bolz [Thu, 5 Feb 2026 15:26:38 +0000 (09:26 -0600)]
vulkan: Preprocess FA mask to detect all-neg-inf and all-zero. (llama/19281)
Write out a 2-bit code per block and avoid loading the mask when it
matches these two common cases.
Apply this optimization when the mask is relatively large (i.e. prompt
processing).
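The preprocessing pass above can be sketched as a per-block classifier. The concrete code values below are assumptions for illustration, not the shader's actual encoding:

```python
# Sketch of the mask-preprocessing idea: emit a tiny code per mask block so
# the FA kernel can skip loading blocks that are all -inf (fully masked) or
# all zero (mask is a no-op). Code values here are illustrative assumptions.
ALL_NEG_INF, ALL_ZERO, MIXED = 0, 1, 2

def classify_block(block):
    if all(v == float("-inf") for v in block):
        return ALL_NEG_INF      # whole block masked out: skip it entirely
    if all(v == 0.0 for v in block):
        return ALL_ZERO         # mask adds nothing: skip the load
    return MIXED                # must load and apply the mask

codes = [classify_block(b) for b in
         [[0.0, 0.0], [float("-inf")] * 2, [0.0, float("-inf")]]]
print(codes)  # → [1, 0, 2]
```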
Georgi Gerganov [Thu, 5 Feb 2026 08:08:45 +0000 (10:08 +0200)]
metal : add diag (llama/19330)
Oleksandr Kuvshynov [Thu, 5 Feb 2026 08:06:59 +0000 (03:06 -0500)]
vulkan: fix GPU deduplication logic. (llama/19222)
* vulkan: fix GPU deduplication logic.
As reported in https://github.com/ggml-org/llama.cpp/issues/19221, the
(same uuid, same driver) logic is problematic for Windows + Intel iGPU.
Let's just avoid filtering for MoltenVK, which is Apple-specific, and
keep the logic the same as before 88d23ad5: just dedup based on UUID.
Verified that macOS + 4x Vega still reports 4 GPUs with this version.
* vulkan: only skip dedup when both drivers are moltenVk
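The resulting rule (dedup on UUID, except when both duplicates are reported by MoltenVK) can be sketched like this. Field names are illustrative, not the actual Vulkan structures:

```python
# Sketch of the final dedup rule: drop a device whose UUID was already seen,
# except when both the seen device and the candidate use MoltenVK, which can
# report the same UUID for distinct physical GPUs.

def dedup_devices(devices):
    kept = []
    for dev in devices:
        dup = next((k for k in kept if k["uuid"] == dev["uuid"]), None)
        if dup and not (dup["driver"] == "MoltenVK" == dev["driver"]):
            continue            # plain UUID duplicate: filter it out
        kept.append(dev)
    return kept

devs = [{"uuid": "A", "driver": "MoltenVK"},
        {"uuid": "A", "driver": "MoltenVK"},   # kept: both MoltenVK
        {"uuid": "B", "driver": "intel"},
        {"uuid": "B", "driver": "intel"}]      # dropped: ordinary duplicate
print(len(dedup_devices(devs)))  # → 3
```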
Jeff Bolz [Thu, 5 Feb 2026 07:48:33 +0000 (01:48 -0600)]
vulkan: Set k_load_shmem to false when K is too large (llama/19301)
Jeff Bolz [Thu, 5 Feb 2026 07:38:59 +0000 (01:38 -0600)]
vulkan: fix non-contig rope (llama/19299)
will-lms [Thu, 5 Feb 2026 06:05:09 +0000 (01:05 -0500)]
metal : add missing includes (llama/19348)
Georgi Gerganov [Wed, 4 Feb 2026 10:45:21 +0000 (12:45 +0200)]
tests : add non-cont, inplace rope tests (llama/19296)
* tests : add non-cont, inplace rope tests
* cont : exercise dim 3
Co-authored-by: Jeff Bolz <redacted>
* cont : more dim3 exercises
---------
Co-authored-by: Jeff Bolz <redacted>
Kevin Pouget [Wed, 4 Feb 2026 02:46:18 +0000 (03:46 +0100)]
ggml-virtgpu: make the code thread safe (llama/19204)
* ggml-virtgpu: regenerate_remoting.py: add the ability to deprecate a function
* ggml-virtgpu: deprecate buffer_type is_host remoting
not necessary
* ggml-virtgpu: stop using static vars as cache
The static init isn't thread safe.
* ggml-virtgpu: protect the use of the shared memory to transfer data
* ggml-virtgpu: make the remote calls thread-safe
* ggml-virtgpu: backend: don't continue if couldn't allocate the tensor memory
* ggml-virtgpu: add a cleanup function for consistency
* ggml-virtgpu: backend: don't crash if buft->iface.get_max_size is missing
* fix style and ordering
* Remove the static variable in apir_device_get_count
* ggml-virtgpu: improve the logging
* fix review minor formatting changes
Aman Gupta [Wed, 4 Feb 2026 01:43:29 +0000 (09:43 +0800)]
ggml-cpu: use LUT for converting e8->f32 scales on x86 (llama/19288)
* ggml-cpu: use LUT for converting e8->f32 scales on x86
* add dispatch based on macro
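Assuming the e8 scale is an e8m0-style byte (a pure power-of-two exponent with bias 127, as in MXFP4-style formats), the LUT idea can be sketched as precomputing all 256 possible conversions once. This is an illustrative sketch, not the ggml-cpu code:

```python
# Sketch of the LUT approach: precompute every possible e8 -> f32 conversion,
# then replace per-scale exponent math with a single indexed load.
# Assumption: the scale byte encodes 2**(e - 127) (e8m0-style).
E8_BIAS = 127
E8_TO_F32 = [2.0 ** (e - E8_BIAS) for e in range(256)]

def e8_to_f32(scale_byte):
    return E8_TO_F32[scale_byte]      # one table load, no ldexp/pow per scale

print(e8_to_f32(127), e8_to_f32(130))  # → 1.0 8.0
```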
Georgi Gerganov [Tue, 3 Feb 2026 21:43:14 +0000 (23:43 +0200)]
metal : add solve_tri (llama/19302)
Ruben Ortlam [Tue, 3 Feb 2026 16:37:32 +0000 (17:37 +0100)]
vulkan: disable coopmat1 fa on Nvidia Turing (llama/19290)
Aman Gupta [Tue, 3 Feb 2026 15:31:23 +0000 (23:31 +0800)]
CUDA: use mmvq for mul-mat-id for small batch sizes (llama/18958)
* CUDA: use mmvq for mul-mat-id for small batch sizes
* add mmvq too
* Fix perf issue on ampere. Use mmvf mm-id only for non-nvidia GPUs
* templatize multi_token_path
Georgi Gerganov [Tue, 3 Feb 2026 11:43:29 +0000 (13:43 +0200)]
metal : minor cleanup (llama/19251)
Oliver Simons [Tue, 3 Feb 2026 10:33:14 +0000 (11:33 +0100)]
CUDA: Fix loop unrolling for BW in mul_mat_q_stream_k_fixup (llama/19053)
By providing stride_* variables as size_t (i.e., 64-bit), the compiler can
correctly unroll the [two for-loops](https://github.com/ggml-org/llama.cpp/blob/557515be1e93ed8939dd8a7c7d08765fdbe8be31/ggml/src/ggml-cuda/mmq.cuh#L3789-L3816)
on BW. This gives some perf gain for the prefill/pp phase on BW, while not affecting
other SMs:
| GPU | Model | Test | t/s master | t/s osimons/fix_bw_mmq_fixup_kernel | Speedup |
|:--------------------------------------------------------|:----------------------|:-------|-------------:|--------------------------------------:|----------:|
| NVIDIA RTX 6000 Ada Generation | gpt-oss 20B MXFP4 MoE | pp8096 | 8404.05 | 8375.79 | 1.00 |
| NVIDIA RTX 6000 Ada Generation | llama 3B Q4_K_M | pp8096 | 16148.93 | 16019.60 | 0.99 |
| NVIDIA RTX 6000 Ada Generation | llama 8B Q4_0 | pp8096 | 8008.29 | 7978.80 | 1.00 |
| NVIDIA RTX 6000 Ada Generation | nemotron_h 9B BF16 | pp8096 | 4263.16 | 4248.53 | 1.00 |
| NVIDIA RTX 6000 Ada Generation | nemotron_h 9B Q4_K_M | pp8096 | 5165.11 | 5157.43 | 1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | gpt-oss 20B MXFP4 MoE | pp8096 | 12582.80 | 12758.37 | 1.01 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 3B Q4_K_M | pp8096 | 16879.10 | 17619.47 | 1.04 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 8B Q4_0 | pp8096 | 10649.90 | 10982.65 | 1.03 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B BF16 | pp8096 | 7717.73 | 7716.22 | 1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B Q4_K_M | pp8096 | 7301.90 | 7370.38 | 1.01 |
George [Tue, 3 Feb 2026 06:43:39 +0000 (08:43 +0200)]
ggml: added cleanups in ggml_quantize_free (llama/19278)
Add missing cleanup calls in ggml_quantize_free for the IQ2_S and IQ1_M quantization types, and for IQ3XS with 512 blocks.
Gaurav Garg [Tue, 3 Feb 2026 06:41:02 +0000 (12:11 +0530)]
cuda : revert CUDA_SCALE_LAUNCH_QUEUES override until investigated (llama/19227)
Hangs were reported on Jetson Orin AGX when setting CUDA_SCALE_LAUNCH_QUEUES=4x. Revert the previous PR (#19042) and update the documentation to suggest setting CUDA_SCALE_LAUNCH_QUEUES=4x for faster throughput on multi-GPU systems.
lhez [Mon, 2 Feb 2026 23:54:43 +0000 (15:54 -0800)]
opencl: refactor some ops, concat, repeat, tanh and scale (llama/19226)
* opencl: refactor concat
* opencl: refactor repeat
* opencl: refactor tanh
* opencl: enable fp16 for tanh
* opencl: refactor scale
* opencl: fix unused variables
Aman Gupta [Mon, 2 Feb 2026 17:19:55 +0000 (01:19 +0800)]
ggml-cpu: FA split across kv for faster TG (llama/19209)
* ggml-cpu: split across kv for faster TG
* simplify sinks application
* add ref impl
Neo Zhang [Mon, 2 Feb 2026 13:06:21 +0000 (21:06 +0800)]
Remove support for Nvidia & AMD GPUs, because the oneAPI plugin for Nvidia & AMD GPUs is unavailable: its download/installation channels no longer work. (llama/19246)
Users cannot build the software for Nvidia & AMD GPUs anymore.
Also remove oneMath, since it is only used in the NV and AMD code paths.
Tamar [Mon, 2 Feb 2026 13:05:51 +0000 (15:05 +0200)]
sycl: implement GGML_OP_TOP_K (llama/19242)
Georgi Gerganov [Mon, 2 Feb 2026 12:29:44 +0000 (14:29 +0200)]
metal : support virtual devices (llama/18919)
* metal : support virtual devices
* cont : manage buffer type context memory
* metal : add events
* cont : implement cpy_tensor_async
Johannes Gäßler [Mon, 2 Feb 2026 09:00:05 +0000 (10:00 +0100)]
ggml-backend: fix async set/get fallback sync (llama/19179)
Christian Kastner [Mon, 2 Feb 2026 06:38:55 +0000 (07:38 +0100)]
docs : Minor cleanups (llama/19252)
* Update old URLs to github.com/ggml-org/
* Bump copyrights
Nikhil Jain [Mon, 2 Feb 2026 02:47:29 +0000 (18:47 -0800)]
Remove pipeline cache mutexes (llama/19195)
* Remove mutex for pipeline caches, since they are now per-thread.
* Add comment
* Run clang-format
* Cleanup
* Run CI again
* Run CI once more
* Run clang-format
Max Krasnyansky [Sun, 1 Feb 2026 22:13:38 +0000 (14:13 -0800)]
Bump cmake max version (needed for Windows on Snapdragon builds) (llama/19188)
* Bump max cmake version (needed for Windows on Snapdragon builds)
* cmake: move max version setting into ggml/CMakeLists
nullname [Sat, 31 Jan 2026 05:14:20 +0000 (13:14 +0800)]
ggml-hexagon: flash-attention and reduce-sum optimizations (llama/19141)
* wip
* ggml-hexagon: add vectorized dot product function for FP32 and FP16 accumulation
* ggml-hexagon: optimize dot product functions for FP16 and FP32 with new vectorized implementations
* wip
* ggml-hexagon: optimize hvx_vec_dump_f32_n and hvx_vec_reduce_sum_qf32x2 functions for improved performance
* ggml-hexagon: refactor dot product functions to use a common loading function for improved readability
* optimize vector dot product functions to use unified reduction for improved performance
* wip
* ggml-hexagon: add vectorized dot product function for FP32 and FP16 accumulation
* ggml-hexagon: optimize dot product functions for FP16 and FP32 with new vectorized implementations
* wip
* ggml-hexagon: optimize hvx_vec_dump_f32_n and hvx_vec_reduce_sum_qf32x2 functions for improved performance
* ggml-hexagon: refactor dot product functions to use a common loading function for improved readability
* optimize vector dot product functions to use unified reduction for improved performance
* hexagon: optimize reduce-sum for v75+
* hexagon: always keep row_sums in sf/fp32
* ggml-hexagon: enhance directory checks for HEXAGON_SDK_ROOT and HEXAGON_TOOLS_ROOT
* fix compiling error after rebase
---------
Co-authored-by: Max Krasnyansky <redacted>
shaofeiqi [Fri, 30 Jan 2026 18:19:27 +0000 (10:19 -0800)]
opencl: add optimized q8_0 mm kernel for adreno (llama/18871)
* Add Q8_0 OpenCL kernel
Co-authored-by: yunjie <redacted>
* opencl: fix build for non-adreno
* opencl: refactor q8_0
* opencl: enforce subgroup size of 64 for adreno for q8_0
* For A750 and older generations, subgroup size can be 64 or 128.
This kernel assumes subgroup size 64.
* opencl: suppress warning when adreno kernels are disabled
---------
Co-authored-by: yunjie <redacted>
Co-authored-by: Li He <redacted>
Simon Redman [Fri, 30 Jan 2026 16:27:16 +0000 (11:27 -0500)]
Correctly fetch q8_1 quantize pipeline in test as needed by 8a3519b (llama/19194)
Georgi Gerganov [Fri, 30 Jan 2026 11:52:57 +0000 (13:52 +0200)]
tests : add GQA=20 FA test (llama/19095)
Georgi Gerganov [Sat, 7 Feb 2026 08:33:58 +0000 (10:33 +0200)]
ci : remove "Release" word from the title of the release
Georgi Gerganov [Sat, 7 Feb 2026 07:58:02 +0000 (09:58 +0200)]
ggml : bump version to 0.9.6 (#1423)
Georgi Gerganov [Fri, 30 Jan 2026 14:29:51 +0000 (16:29 +0200)]
cmake : remove unused file (#1419)
Georgi Gerganov [Fri, 30 Jan 2026 14:25:41 +0000 (16:25 +0200)]
sync : whisper.cpp
Georgi Gerganov [Fri, 30 Jan 2026 13:56:15 +0000 (15:56 +0200)]
cuda : fix compile warnings (whisper/0)
Georgi Gerganov [Fri, 30 Jan 2026 08:35:15 +0000 (10:35 +0200)]
sync : llama.cpp
bssrdf [Fri, 30 Jan 2026 04:57:52 +0000 (23:57 -0500)]
add tensor type checking as part of cuda graph properties (llama/19186)
s8322 [Fri, 30 Jan 2026 04:01:38 +0000 (06:01 +0200)]
sycl: implement GGML_UNARY_OP_SOFTPLUS (llama/19114)
* sycl: add softplus unary op implementation
* sycl: add softplus unary op implementation
* docs(ops): mark SYCL SOFTPLUS as supported
* docs: update SYCL status for SOFTPLUS
RachelMantel [Fri, 30 Jan 2026 04:00:49 +0000 (06:00 +0200)]
sycl: implement GGML_OP_TRI (llama/19089)
* sycl: implement GGML_OP_TRI
* docs: update ops.md for SYCL TRI
* docs: regenerate ops.md
* docs: update SYCL support for GGML_OP_TRI
Zheyuan Chen [Thu, 29 Jan 2026 22:05:30 +0000 (14:05 -0800)]
ggml-webgpu: improve flash attention performance by software pipelining (llama/19151)
* webgpu : pipeline flash_attn Q/K loads in WGSL
* ggml-webgpu: unroll Q*K accumulation inner loop
* ggml-webgpu: vectorization
* ggml-webgpu: unrolling
* ggml-webgpu: remove redundant unrolling
* ggml-webgpu: restore the config
* ggml-webgpu: remove redundant comments
* ggml-webgpu: formatting
* ggml-webgpu: formatting and remove vectorization
* ggml-webgpu: remove unnecessary constants
* ggml-webgpu: change QKV buffer to read_write to pass validation
* ggml-webgpu: add explanation for the additional bracket around Q K accumulate
* Indentation and for -> if for tail
* Kick off CI on wgsl only commits
---------
Co-authored-by: Reese Levine <redacted>
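The software-pipelining pattern named in the title (start fetching the next Q/K tile while computing on the current one, so memory latency overlaps with math) can be sketched sequentially. This is an illustration of the pattern only; on the GPU the load and compute genuinely overlap:

```python
# Sketch of software pipelining / double buffering: issue the load of the
# next tile before consuming the current one. Here the overlap is simulated
# in a sequential loop; the structure is what matters.

def pipelined_sum(tiles, load, compute):
    acc = 0
    current = load(tiles[0])          # prologue: load the first tile
    for i in range(len(tiles)):
        nxt = load(tiles[i + 1]) if i + 1 < len(tiles) else None  # prefetch
        acc += compute(current)       # on a GPU this overlaps with the fetch
        current = nxt
    return acc

tiles = [[1, 2], [3, 4], [5, 6]]
print(pipelined_sum(tiles, load=list, compute=sum))  # → 21
```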
Todor Boinovski [Thu, 29 Jan 2026 20:33:21 +0000 (12:33 -0800)]
hexagon: enable offloading to Hexagon on Windows on Snapdragon (llama/19150)
* hexagon: updates to enable offloading to HTP on WoS
* Update windows.md
* Update windows.md
* hexagon: enable -O3 optimizations
* hexagon: move all _WINDOWS conditional compilation to _WIN32
* hexagon: updates to enable offloading to HTP on WoS
* hexagon: use run-time vs load-time dynamic linking for cdsp driver interface
* refactor htp-drv
* hexagon: add run-bench.ps1 script
* hexagon: htdrv refactor
* hexagon: unify Android and Windows build readmes
* hexagon: update README.md
* hexagon: refactor htpdrv
* hexagon: drv refactor
* hexagon: more drv refactor
* hexagon: fixes for android builds
* hexagon: factor out dl into ggml-backend-dl
* hexagon: add run-tool.ps1 script
* hexagon: merge htp-utils in htp-drv and remove unused code
* wos: no need for getopt_custom.h
* wos: add missing CR in htpdrv
* hexagon: ndev enforcement applies only to Android devices
* hexagon: add support for generating and signing .cat file
* hexagon: add .inf file
* hexagon: working auto-signing and improved windows builds
* hexagon: further improve skel build
* hexagon: add rough WoS guide
* hexagon: updated windows guide
* hexagon: improve cmake handling of certs and logging
* hexagon: improve windows setup/build doc
* hexagon: more windows readme updates
* hexagon: windows readme updates
* hexagon: windows readme updates
* hexagon: windows readme updates
* hexagon: windows readme updates
* Update windows.md
* Update windows.md
* snapdragon: rename docs/backend/hexagon to docs/backends/snapdragon
Also added a PowerShell script to simplify build env setup.
* hexagon: remove trailing whitespace and move cmake requirement to user-presets
* hexagon: fix CMakeUserPresets path in workflow yaml
* hexagon: introduce local version of libdl.h
* hexagon: fix src1 reuse logic
gpt-oss needs a bigger lookahead window.
The check for src[1] itself being quantized was wrong.
---------
Co-authored-by: Max Krasnyansky <redacted>
Georgi Gerganov [Thu, 29 Jan 2026 16:45:30 +0000 (18:45 +0200)]
cuda : fix nkvo, offload and cuda graph node properties matching (llama/19165)
* cuda : fix nkvo
* cont : more robust cuda graph node property matching
* cont : restore pre-leafs implementation
* cont : comments + static_assert
yulo [Thu, 29 Jan 2026 10:10:53 +0000 (18:10 +0800)]
HIP: add mmf for CDNA (llama/18896)
* refactor mmf rows_per_block
* speed up compile
* pass cdna compile
* fix cuda error
* clean up mmf
* f32 mmf
* clean float mma
* fix mmf error
* faster mmf
* extend tile k
* fix compile error
* Revert "extend tile k"
This reverts commit 4d2ef3d483932659801a59a5af0b6b48f6ffd5c7.
* fix smem overflow
* speed up compiling mmf
* speed up compile for hip
* 512 block for cdna
* config pad size
* fix as comment
* update select logic
* move some code to cuh
* fix as comment
* correct cdna3 config
---------
Co-authored-by: zhang hui <redacted>
Vishal Singh [Thu, 29 Jan 2026 04:28:57 +0000 (09:58 +0530)]
ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (llama/19159)
Aman Gupta [Thu, 29 Jan 2026 02:31:28 +0000 (10:31 +0800)]
CUDA: refactor topk-moe to enable more models (GLM 4.7, Nemotron etc.) (llama/19126)
Neo Zhang [Thu, 29 Jan 2026 01:20:22 +0000 (09:20 +0800)]
sycl: fix norm kernels: l2_norm, group_norm, rms_norm by remove assert to support more cases (llama/19154)
Co-authored-by: Neo Zhang Jianyu <redacted>
Ruben Ortlam [Wed, 28 Jan 2026 17:52:45 +0000 (18:52 +0100)]
Vulkan Flash Attention Coopmat1 Refactor (llama/19075)
* vulkan: use coopmat for flash attention p*v matrix multiplication
* fix P loading issue
* fix barrier position
* remove reduction that is no longer needed
* move max thread reduction into loop
* remove osh padding
* add bounds checks and padding
* remove unused code
* fix shmem sizes, loop duration and accesses
* don't overwrite Qf, add new shared psh buffer instead
* add missing bounds checks
* use subgroup reductions
* optimize
* move bounds check, reduce barriers
* support other Bc values and other subgroup sizes
* remove D_split
* replace Of register array with shared memory Ofsh array
* parallelize HSV across the rowgroups
* go back to Of in registers, not shmem
* vectorize sfsh
* don't store entire K tile in shmem
* fixes
* load large k tiles to shmem on Nvidia
* adapt shared memory host check function to shader changes
* remove Bc 32 case
* remove unused variable
* fix missing mask reduction tmspsh barrier
* fix mask bounds check
* fix rowmax f16 under/overflow to inf
* fix flash_attn_cm2 BLOCK_SIZE preprocessor directives
Patryk Kaminski [Wed, 28 Jan 2026 15:33:54 +0000 (16:33 +0100)]
ggml-sycl: remove unused syclcompat header (llama/19140)
The syclcompat/math.hpp header is not used anymore. The change that introduced it was successfully reverted (https://github.com/ggml-org/llama.cpp/pull/17826).
This include path will become obsolete and be dropped in oneAPI 2026.0, effectively breaking ggml-sycl builds.
Oleksandr Kuvshynov [Wed, 28 Jan 2026 11:35:54 +0000 (06:35 -0500)]
vulkan: handle device dedup on MacOS + Vega II Duo cards (llama/19058)
Deduplication here relied on the fact that Vulkan would return a unique
UUID for different physical GPUs. At the moment that is not always the case.
On a Mac Pro 2019 running macOS with 2 Vega II Duo cards (so 4 GPUs total),
MoltenVK assigns the same UUID to pairs of GPUs unless they
are connected with Infinity Fabric.
See more details here: KhronosGroup/MoltenVK#2683.
The right way is to fix this in MoltenVK, but until that happens,
llama.cpp would only recognize 2 of the 4 GPUs in such a configuration.
The deduplication logic here is changed to only filter GPUs if the UUID is
the same but the driver is different.
Kevin Pouget [Wed, 28 Jan 2026 09:49:40 +0000 (10:49 +0100)]
ggml: new backend for Virglrenderer API Remoting acceleration (v2) (llama/18718)
Alberto Cabrera Pérez [Wed, 28 Jan 2026 07:15:56 +0000 (07:15 +0000)]
ggml-cpu: arm64: Q4_K scale unroll and vectorization (llama/19108)
Georgi Gerganov [Wed, 28 Jan 2026 07:15:27 +0000 (09:15 +0200)]
cuda : fix "V is K view" check for non-unified KV cache (llama/19145)
Georgi Gerganov [Wed, 28 Jan 2026 07:15:11 +0000 (09:15 +0200)]
CUDA: tune GLM 4.7 Flash FA kernel selection logic (DGX Spark) (llama/19142)
Nikhil Jain [Wed, 28 Jan 2026 04:53:36 +0000 (20:53 -0800)]
ggml webgpu: Split shared state (webgpu_context) into global state and per-thread state (llama/18976)
* Squashed commit of the following:
commit b3c6bf4b0450d8d452b934df27a0fb7cb53cd755
Author: Abhijit Ramesh <redacted>
Date: Mon Dec 1 18:29:00 2025 -0800
ggml webgpu: fix xielu parameter passing (llama/11)
The XIELU operation was incorrectly using static_cast to convert
float parameters to uint32_t, which converted numeric values instead
of preserving IEEE 754 bit patterns. This caused incorrect values
to be interpreted by the GPU shader.
* Use reinterpret_cast to preserve float bit patterns when passing
through uint32_t params buffer
* Update WGSL shader parameter types from u32 to f32
* Re-enable XIELU support (was disabled due to numerical issues)
Fixes NMSE test failures for XIELU operation on WebGPU backend.
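The bit-pattern issue described above can be shown in a few lines. This sketch contrasts reinterpreting a float's IEEE 754 bits (the fix) with converting its numeric value (what the buggy static_cast path effectively did):

```python
import struct

# Reinterpret vs numeric conversion: packing the float and unpacking it as a
# u32 preserves the IEEE 754 bit pattern; int(x) converts the value instead.

def float_bits(x):
    return struct.unpack("<I", struct.pack("<f", x))[0]  # reinterpret bits

def numeric_cast(x):
    return int(x)  # what static_cast<uint32_t>(x) effectively computes

print(hex(float_bits(1.0)), numeric_cast(1.0))  # → 0x3f800000 1
```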
commit 5ca9b5e49ea7cddc9ab7c8b43a11a9c76a4dff4a
Author: neha-ha <redacted>
Date: Tue Nov 18 12:17:00 2025 -0800
Refactored pipelines and workgroup calculations (llama/10)
* refactored pipelines
* refactored workgroup calculation
* removed commented out block of prior maps
* Clean up ceiling division pattern
---------
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
Author: James Contini <redacted>
Date: Wed Oct 29 23:13:06 2025 -0700
formatted embed wgsl and ggml-webgpu.cpp
commit e1f6baea31645e5d96ad53664acae856f74b96f4
Author: James Contini <redacted>
Date: Wed Oct 29 23:08:37 2025 -0700
implemented REPL_Template support and removed bug in unary operators kernel
commit 8c70b8fece445cdc9a8c660dbddbf201e52da2bb
Author: James Contini <redacted>
Date: Wed Oct 15 16:14:20 2025 -0700
responded and dealt with PR comments
commit f9282c660c10dec4487d434549bdb707a9cd9f37
Author: James Contini <redacted>
Date: Sun Oct 12 13:41:41 2025 -0700
removed unnecessary checking if node->src[1] exists for unary operators
commit 4cf28d7dec41c29186d66152735b244c5699f9dc
Author: James Contini <redacted>
Date: Sun Oct 12 13:32:45 2025 -0700
All operators (including xielu) working
commit 74c6add1761a59d2c2ff60b60e8ad3c8300f6d3e
Author: James Contini <redacted>
Date: Fri Oct 10 13:16:48 2025 -0700
fixed autoconfig
commit 362749910be4f0120c8ffb21ceddeb7d2c088e51
Author: James Contini <redacted>
Date: Fri Oct 10 13:10:46 2025 -0700
removed vestigial files
commit cb0858333785757804c5104e59c4981843207c16
Author: James Contini <redacted>
Date: Fri Oct 10 12:59:32 2025 -0700
abides by editor-config
commit 5360e2852a4b51197d7d67d0a5d42e908b02d7ed
Author: James Contini <redacted>
Date: Fri Oct 10 12:45:57 2025 -0700
rms_norm double declaration bug atoned
commit 7b09baa4aa53711be5a126043670cc182c78bfcd
Merge: 8a6ec843 74b8fc17
Author: James Contini <redacted>
Date: Fri Oct 10 11:50:03 2025 -0700
resolving merge conflicts
commit 8a6ec843a50ab82f8cef59b4558eb63f318ba02d
Author: James Contini <redacted>
Date: Wed Oct 8 18:06:47 2025 -0700
unary operators pass ggml tests
commit c3ae38278a2db236adc5912c9140e4f0d63f2c19
Author: James Contini <redacted>
Date: Wed Oct 1 16:22:40 2025 -0700
neg passes backend test
commit aa1c9b2f8877a405470ca56709c42a1fd43713de
Author: James Contini <redacted>
Date: Tue Sep 30 23:55:27 2025 -0700
neg f16xf32xip builds and runs; haven't actually run a model that uses the neg kernel yet though
Co-authored-by: James Contini <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Abhijit Ramesh <redacted>
* Remove extra code and format
* Add ops documentation (finally)
* ggml webgpu: add SOFTPLUS unary operator
Implements SOFTPLUS (log(1 + exp(x))) with f16/f32 support. Uses f32
precision for intermediate calculations to prevent f16 overflow.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* Follow Vulkan backend numerical stability pattern
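The overflow-safe evaluation described above (compute log(1 + exp(x)) in a form whose intermediate exp() stays bounded) can be sketched as follows. This is the standard stable formulation, not the WGSL shader itself:

```python
import math

# Numerically stable softplus: split on the sign of x so that the exp()
# argument is never positive, keeping the intermediate value <= 1.

def softplus(x):
    if x > 0:
        return x + math.log1p(math.exp(-x))   # exp(-x) <= 1 for x > 0
    return math.log1p(math.exp(x))            # exp(x) <= 1 for x <= 0

print(softplus(0.0), softplus(100.0))  # → 0.6931471805599453 100.0
```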
* ggml webgpu: add EXPM1 unary operator
Implements EXPM1 (exp(x) - 1) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add FLOOR unary operator
Implements FLOOR (rounds down to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add CEIL unary operator
Implements CEIL (rounds up to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add ROUND unary operator
Implements ROUND (rounds to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add TRUNC unary operator
Implements TRUNC (truncates towards zero) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* docs : update WebGPU support for unary operators (FLOOR, CEIL, ROUND, TRUNC, EXPM1, SOFTPLUS)
* Updates to webgpu get_memory
* Move shared state (webgpu_context) and device creation out of registration context, device context, and buffer context, and move into backend context
* Small cleanup
* Move Instance, Device, Adapter, Device creation, and capabilities to global state while moving Queue, pipelines, and buffers to per-thread state.
* Cleanups
* More cleanup
* Move staging_buf mutex to global context
* Resolve merge
* Resolve merge
* Resolve merge
* Clean up merge errors, delete forward declaration, and run clang-format
* Rename device_init to backend_init
* Move webgpu_context to backend_context
* Move buffer context members into global context and refactor function calls
* Run clang-format
* Remove comments
* Move parameter buffers to per-thread, add single memset_tensor param buf
* Fix CI compilation issue
* Fix builds for emscripten not supporting subgroups
* cleanup
* cleanup
---------
Co-authored-by: Reese Levine <redacted>
Vishal Singh [Tue, 27 Jan 2026 22:21:36 +0000 (03:51 +0530)]
ggml-zendnn : update ZenDNN git tag to main branch (llama/19133)
Johannes Gäßler [Tue, 27 Jan 2026 13:28:56 +0000 (14:28 +0100)]
CUDA: tune GLM 4.7 Flash FA kernel selection logic (llama/19097)
Alberto Cabrera Pérez [Tue, 27 Jan 2026 09:08:10 +0000 (09:08 +0000)]
ggml-cpu: aarm64: q6_K repack gemm and gemv (and generic) implementations (i8mm) #18860 (llama/18888)
* Boilerplate for q6_K repack
* q6_K repack to q6_Kx8 implementation
Signed-off-by: Alberto Cabrera <redacted>
* q6_K generic gemv and gemm
* wip, gemm_q6_K 8x8
* Still WIP: loading of q8s, q6h and q6l
* first working version of q6_K gemm
* Moved q6 loads outside of sb block, Unrolled inner loop
* Replaced modulo with mask
* First implementation of GEMV
* ggml_vdotq_s32 -> vdotq_s32
* Reduce width of accumulators in q6_K gemv
* Bsums instead of calc bias. Preload scales to use vget_lane. Unroll.
* Reuse scales in GEMM (same GEMV opt)
* Added todos for bsum and different qh repack
* Arch fallback
* VSLIQ for merging qh and ql
* Removed TODO, already tested
* Apply suggestions
Co-authored-by: Georgi Gerganov <redacted>
* Removed unused import
---------
Signed-off-by: Alberto Cabrera <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Gaurav Garg [Tue, 27 Jan 2026 06:52:44 +0000 (06:52 +0000)]
Reduce CPU-side stalls due to the CUDA command buffer being full (llama/19042)
* [CUDA] Reduce CPU-side stalls due to the CUDA command buffer being full
With pipeline parallelism, during prompt processing, the CPU-side CUDA command buffer gets full, stalling the CPU. Due to this, enough work doesn't get submitted to the GPU, causing bubbles in the GPU timeline.
Fix this by setting the CUDA environment variable CUDA_SCALE_LAUNCH_QUEUES to 4x to increase the command buffer size.
* Set the env variable in the CUDA backend registry allocation
* Add link to PR in code comment
* Remove warning logs and update documentation
shalinib-ibm [Tue, 27 Jan 2026 03:52:34 +0000 (09:22 +0530)]
ggml-cpu: Enable FP16 MMA kernels on PPC (llama/19060)
lhez [Fri, 30 Jan 2026 08:34:38 +0000 (10:34 +0200)]
opencl: add flattened q6_K mv (llama/19054)
Georgi Gerganov [Fri, 30 Jan 2026 08:33:57 +0000 (10:33 +0200)]
sync : llama.cpp
Johannes Gäßler [Mon, 26 Jan 2026 22:24:58 +0000 (23:24 +0100)]
CUDA: fix padding of GQA to power of 2 in FA (llama/19115)
Johannes Gäßler [Sun, 25 Jan 2026 20:19:47 +0000 (21:19 +0100)]
CUDA: faster FA for GQA > 1 but not power of 2 (llama/19092)
ccbinn [Sun, 25 Jan 2026 18:07:19 +0000 (02:07 +0800)]
metal : fix recommendedMaxWorkingSetSize availability on legacy iOS/macOS (llama/19088)
Co-authored-by: chenbin11 <redacted>
Aman Gupta [Sun, 25 Jan 2026 15:25:58 +0000 (23:25 +0800)]
ggml-cpu: Use tiled FA for prompt-processing (llama/19012)
* ggml-cpu: Use tiled FA for prompt-processing
FA performance on CPU is poor at long contexts because it essentially uses a vector kernel. This PR adds a tiled FA path for prompt processing. Tile sizes were tuned on an AMD EPYC single-socket 64-core machine.
* fix out of bounds for mask
* skip rows that are fully masked
* skip tile if mask is inf
* store mask in worksize
* check inf tile earlier
Georgi Gerganov [Sun, 25 Jan 2026 13:48:56 +0000 (15:48 +0200)]
kv-cache : support V-less cache (llama/19067)
* kv-cache : support V-less cache
* cuda : better check for V_is_K_view
* cuda : improve V_is_K_view check
* graph : add comments
* hparams : refactor
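The "V_is_K_view" bullets above hinge on ggml's view mechanism: a view records its source tensor and shares its data pointer, and with MLA the cache can treat V as exactly such a view of K instead of storing V separately. A simplified stand-in for that check (toy_tensor is not the real ggml_tensor layout, and the real check also accounts for view offsets):

```cpp
#include <cassert>

// Minimal model of a tensor that may be a view of another tensor.
struct toy_tensor {
    const void*       data;     // backing buffer
    const toy_tensor* view_src; // non-null when this tensor is a view
};

bool v_is_k_view(const toy_tensor& k, const toy_tensor& v) {
    return v.view_src == &k && v.data == k.data;
}

// Demo: a V-less cache where V simply aliases K's storage.
bool demo_v_less_cache() {
    static char buf[16];
    toy_tensor k{buf, nullptr};
    toy_tensor v{buf, &k}; // V is a view of K
    return v_is_k_view(k, v);
}
```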
Johannes Gäßler [Sat, 24 Jan 2026 09:09:36 +0000 (10:09 +0100)]
CUDA: re-use MLA K data for V in MMA FA (llama/19057)
Aman Gupta [Sat, 24 Jan 2026 06:25:20 +0000 (14:25 +0800)]
ggml-cuda: enable cuda-graphs for `n-cpu-moe` (llama/18934)
* ggml-cuda: add split-wise cuda graph
* add n-cpu-moe compare_llama_bench.py
* fix hip/musa builds
nullname [Sat, 24 Jan 2026 06:02:07 +0000 (14:02 +0800)]
ggml-hexagon: flash-attn opt (llama/19025)
* optimize flash attention kernel by improving score computation and online softmax update
* wip
* Refactor online softmax update in flash attention kernel for improved performance
* Optimize flash attention kernel by replacing float array with HVX_Vector for score computation
* wip
Neo Zhang [Fri, 23 Jan 2026 12:54:10 +0000 (20:54 +0800)]
use malloc to support both iGPU and dGPU at the same time (llama/18992)
* use malloc to support both iGPU and dGPU at the same time
* support windows
---------
Co-authored-by: Neo Zhang Jianyu <redacted>
Alberto Cabrera Pérez [Fri, 23 Jan 2026 07:55:08 +0000 (07:55 +0000)]
ggml-cpu: aarm64: q5_K repack gemm and gemv (and generic) implementations (i8mm) (llama/18860)
* Boilerplate for q5_Kx8 REPACK on ARM and fallback
Signed-off-by: Alberto Cabrera <redacted>
* Implements make_block_q5_Kx8 by extending make_block_q4_Kx8
Signed-off-by: Alberto Cabrera <redacted>
* q5_K repack gemm and gemv generics
* Gemm and Gemv ARM implementations (i8mm)
* Improved qh manipulation based on the non-repack vec_dot implementation
* Full unroll
* Apply Q5_K Gemv vand and vshl optimizations to gemm. Improve comments.
Signed-off-by: Alberto Cabrera <redacted>
* Fix wrong fallback definitions of Q5_K
Signed-off-by: Alberto Cabrera <redacted>
* Fixed comments. Reverted unnecessary formatting
Signed-off-by: Alberto Cabrera <redacted>
* Fixed typo in generic definitions
* Replace AND + shift with shift-insert. Better op interleaving.
* Vectorize + unroll the block scales
* Apply gemm optimizations to gemv
* Improve bias calculation
---------
Signed-off-by: Alberto Cabrera <redacted>
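The "shift-insert" bullet above refers to merging the split q5_K bits: a 5-bit weight is stored as 4 low bits (ql) plus 1 high bit (qh), and on NEON the AND + shift pair used to place the high bit can be replaced by a single shift-and-insert (VSLI) instruction. A scalar model of the two merges (the bit layout here is simplified for illustration; the real kernels operate on whole vectors):

```cpp
#include <cassert>
#include <cstdint>

// Naive merge: mask the high bit, shift it into place, OR with the low bits.
uint8_t merge_naive(uint8_t ql, uint8_t qh_bit) {
    return (uint8_t)((ql & 0x0F) | ((qh_bit & 1) << 4));
}

// VSLI-style merge: shift qh left and insert into ql, preserving ql's low
// bits. On NEON this is one instruction; the scalar form only models the
// semantics, not the instruction saving.
uint8_t merge_shift_insert(uint8_t ql, uint8_t qh_bit) {
    return (uint8_t)(((qh_bit << 4) & 0x10) | (ql & 0x0F));
}
```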
Georgi Gerganov [Thu, 22 Jan 2026 20:09:01 +0000 (22:09 +0200)]
mla : make the V tensor a view of K (llama/18986)
* mla : pass V as a view of K to the FA op
* cuda : adjust mla logic to new layout
* kv-cache : fix rope shift
* tests : remove comment
* cuda : fix reusable_cutoff
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Thu, 22 Jan 2026 19:39:25 +0000 (20:39 +0100)]
CUDA: fix alignment check for FA (llama/19023)
lhez [Thu, 22 Jan 2026 18:29:25 +0000 (10:29 -0800)]
opencl: enable the general fp mm for non-cont input and as a fallback for specialized kqv kernel for adreno (llama/18970)
* opencl: add `copy_to_contiguous` and utilize mm kernels
* opencl: only copy to cont for f32 and f16 tensors
* opencl: use cont mm for fallback when dst is large
* opencl: use nb local to copy-to-cont
* opencl: use local offset as well
Aman Gupta [Thu, 22 Jan 2026 10:51:53 +0000 (18:51 +0800)]
CUDA: add gqa_ratio 4 for GLM 4.7 flash (llama/18953)
shaofeiqi [Thu, 22 Jan 2026 06:05:54 +0000 (22:05 -0800)]
opencl: add TRI op support (llama/18979)
Aleksei Nikiforov [Thu, 22 Jan 2026 00:16:21 +0000 (01:16 +0100)]
ggml-zdnn : mark zDNN buffers as non-host (llama/18967)
While the buffers reside in host memory, an additional transformation is needed before they can be used with zDNN.
Fixes #18848
Jeff Bolz [Wed, 21 Jan 2026 17:01:40 +0000 (11:01 -0600)]
vulkan: Remove transfer_ctx, do everything in compute_ctx. (llama/18945)
* vulkan: Remove transfer_ctx, do everything in compute_ctx.
We had a bug where a set_tensor_async (using transfer_ctx) didn't get
submitted before the graph_compute (using compute_ctx) that came after
it. To avoid this sort of issue, just do everything in compute_ctx.
Remove transfer_cmd_pool, which was already unused.
* fix crash with perf logger
Jeff Bolz [Wed, 21 Jan 2026 16:43:43 +0000 (10:43 -0600)]
vulkan: support flash attention GQA/split_k with small batches (llama/18938)
Masato Nakasaka [Wed, 21 Jan 2026 16:13:43 +0000 (01:13 +0900)]
Revert "vulkan: force full subgroups for flash attention to fix intel subgroup crash (#17356)" (llama/18831)
This reverts commit 980b7cd17e055c8c587f79ffda7eb4fddf405566.
Jeff Bolz [Wed, 21 Jan 2026 15:22:02 +0000 (09:22 -0600)]
vulkan: Use mul_mat_vec_id for small values of n (llama/18918)
Change ggml_vk_mul_mat_vec_id_q_f16 to loop over the batch dimension and
update the indexing calculations in get_offsets.
Mat-vec is faster than mat-mat for small values of n. We don't get the same
reuse of the weights as in the non-ID path, but with this the cost is linear
in n rather than n>1 being far slower than n==1.
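The commit above loops a mat-vec kernel over the batch dimension so that the cost of n columns is n times the cost of one, instead of n > 1 falling off a performance cliff. A plain C++ stand-in for that loop structure (row-major A, column-major B; not the Vulkan kernel):

```cpp
#include <cassert>
#include <vector>

// Compute C = A * B by running one mat-vec per output column.
// A is rows x cols (row-major), B is cols x n (column-major),
// C is rows x n (column-major).
std::vector<float> mat_mul_by_vec_loop(const std::vector<float>& A,
                                       const std::vector<float>& B,
                                       int rows, int cols, int n) {
    std::vector<float> C(rows * n, 0.0f);
    for (int j = 0; j < n; ++j)          // one mat-vec per column of B
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                C[j * rows + r] += A[r * cols + c] * B[j * cols + c];
    return C;
}
```

As the commit notes, this forgoes weight reuse across columns, which is why it only wins for small n.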
Oliver Simons [Wed, 21 Jan 2026 01:34:29 +0000 (02:34 +0100)]
CUDA: Fix builds for older CCCL versions by ifdefing strided_iterator (llama/18964)
* CUDA: Fix builds for older CCCL versions by ifdefing strided_iterator
Strided iterator was added in [CCCL 3.1](https://github.com/NVIDIA/cccl/releases/tag/v3.1.0), which is packaged into [CTK 13.1](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#id5)
* Unindent as per code review request
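The fix above gates the newer CCCL feature behind a version check so older toolkits still compile, with a hand-written fallback otherwise. A self-contained sketch of that pattern using a stand-in macro (real code would test CCCL's version macros from <cuda/version> the same way; the stride math here is only illustrative):

```cpp
#include <cassert>

#define FAKE_CCCL_VERSION 3001000 // pretend CCCL 3.1.0, illustrative only

int strided_element(const int* data, int stride, int i) {
#if FAKE_CCCL_VERSION >= 3001000
    // new path: with CCCL >= 3.1 this would dereference a strided_iterator
    return data[i * stride];
#else
    // fallback for older CCCL: identical math, written out by hand
    return *(data + i * stride);
#endif
}

int demo_stride() {
    static const int data[6] = {10, 11, 12, 13, 14, 15};
    return strided_element(data, 2, 2); // walks 10, 12, 14
}
```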
Oliver Simons [Tue, 20 Jan 2026 12:11:01 +0000 (13:11 +0100)]
CUDA: Replace init_offsets kernel with iterators in cub-based argsort (llama/18930)
* CUDA: Replace `init_offsets` with iterators in argsort
This is a QOL improvement: the offsets are now generated by an iterator on the fly, saving the cost of materializing them with a kernel
* Remove unnecessary include from top-k.cu
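The iterator replacement above mirrors what a CUB counting/transform iterator does: instead of launching a kernel to fill an offsets array [0, n), the algorithm is handed an iterator that computes each offset on dereference. A minimal host-side stand-in for the idea (not the CUB types themselves):

```cpp
#include <cassert>
#include <cstddef>

// Tiny counting iterator: dereferencing yields the current index, so the
// sequence 0, 1, 2, ... never has to exist in memory.
struct counting_iterator {
    size_t i;
    size_t operator*() const { return i; }
    counting_iterator& operator++() { ++i; return *this; }
};

// Consume the first n "offsets" without ever storing them.
size_t sum_offsets(size_t n) {
    size_t acc = 0;
    counting_iterator it{0};
    for (size_t k = 0; k < n; ++k, ++it)
        acc += *it;
    return acc;
}
```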
Adrien Gallouët [Tue, 20 Jan 2026 10:42:49 +0000 (11:42 +0100)]
ggml : cleanup path_str() (llama/18928)
- Remove pragmas as `std::codecvt_utf8` is not used.
- Avoid implicit `strlen()`.
Signed-off-by: Adrien Gallouët <redacted>
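The "avoid implicit strlen()" cleanup above is the general rule that when a buffer's length is already known, it should be passed to the std::string constructor rather than letting the (const char*) overload rescan the buffer. A small illustration (function name is made up; the length-taking constructor also handles embedded NULs, which strlen would truncate at):

```cpp
#include <cassert>
#include <string>

// Construct a string from a buffer of known length: no strlen() walk,
// and embedded '\0' bytes are preserved.
std::string to_string_known_len(const char* data, size_t len) {
    return std::string(data, len);
}
```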
Georgi Gerganov [Tue, 20 Jan 2026 10:21:28 +0000 (12:21 +0200)]
metal : enable FA for MLA heads (llama/18950)