Jeff Bolz [Mon, 12 Jan 2026 11:32:13 +0000 (05:32 -0600)]
vulkan: Use VK_EXT_shader_64bit_indexing to handle large mat_mul(_id) (llama/18678)
This fixes incoherent output in Llama-4-Maverick-17B-128E-PAB-Q8_0, which
has a mul_mat_id with an A matrix that's Q8_0 8192 x 5120 x 128.
This should work when the number of blocks in the A matrix is less than 2^32
(for mul_mat_vec or mul_mm_cm2); for mul_mm the limit is likely around
2^32 * LOAD_VEC_A elements.
- Divide batch_stride by QUANT_K earlier, so the block index calculation works in 32b.
- Each vk_pipeline_struct has a linked list of pipelines, which allows it to handle
variants. So far this change adds a single use case for this: compiling with the
e64BitIndexingEXT flag.
- Use the 64b indexing variant when the A matrix is larger than maxStorageBufferRange.
64-bit indexing has some cost - around 3-5% in MoE models, so it's worth the effort
to avoid enabling it unconditionally.
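A minimal sketch of the variant-selection idea, with purely illustrative names
(pipeline_variant, select_variant) rather than the actual ggml-vulkan structures:

```cpp
#include <cstdint>

// Illustrative only: pick the 64-bit-indexing variant of a pipeline solely when
// the A matrix cannot be addressed within maxStorageBufferRange.
struct pipeline_variant {
    bool               use_64bit_indexing; // compiled with the 64-bit indexing flag
    pipeline_variant * next;               // linked list of variants, as described above
};

static pipeline_variant * select_variant(pipeline_variant * head,
                                         uint64_t a_matrix_bytes,
                                         uint64_t max_storage_buffer_range) {
    const bool need_64bit = a_matrix_bytes > max_storage_buffer_range;
    for (pipeline_variant * p = head; p != nullptr; p = p->next) {
        if (p->use_64bit_indexing == need_64bit) {
            return p;
        }
    }
    return head; // fall back to the base variant
}
```

This keeps the 3-5% cost of 64-bit indexing limited to matrices that actually need it.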
Doctor Shotgun [Thu, 8 Jan 2026 09:03:21 +0000 (01:03 -0800)]
ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH (llama/18535)
* ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH
* makes the min_batch_size for triggering op offload configurable via env var, defaulting to the prior hardcoded value of 32
* ggml: read GGML_OP_OFFLOAD_MIN_BATCH once and store to dev ctx
* cann: forward declaration of device context struct
* cann: move offload op check after device context declaration
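A hedged sketch of the environment-variable handling, assuming a plain getenv/atoi
read with the prior default of 32; per the change notes, the real code reads the
value once and stores it in the device context rather than in a function-local static:

```cpp
#include <cstdlib>

// Sketch: read GGML_OP_OFFLOAD_MIN_BATCH once, defaulting to the previously
// hardcoded value of 32, so the lookup is not repeated per graph evaluation.
static int op_offload_min_batch() {
    static const int value = [] {
        const char * env = std::getenv("GGML_OP_OFFLOAD_MIN_BATCH");
        return env ? std::atoi(env) : 32;
    }();
    return value;
}

// Usage: offload an op to the device only when the batch is large enough, e.g.
// bool offload = batch_size >= op_offload_min_batch();
```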
hipudding [Wed, 7 Jan 2026 08:11:31 +0000 (16:11 +0800)]
CANN: Fix rename for get_env (llama/18652)
In #18624, get_env in ggml-cann was renamed to get_env_as_lowercase
to accurately reflect the function’s behavior and reduce the chance
of misuse. However, the update missed renaming call sites in other
files. This commit fixes that oversight.
Max Krasnyansky [Wed, 7 Jan 2026 01:38:29 +0000 (17:38 -0800)]
Hexagon add support for f16/f32 flash attention, scale, set-rows and improve f16/32 matmul (llama/18611)
* hexagon: improve fp16 matmul and add fp32/fp16 flash-attention
* hexagon: add support for set-rows fp32 -> fp16 with i32/i64 row-idx
* hexagon: add support for SCALE fp32
* hexagon: replace scalar fp32 -> fp16 copy with HVX
* hexagon: optimize flash_attn_ext with aligned VTCM buffers and DMA
- Implements double-buffered DMA prefetching for K, V, and Mask tensors (a generic sketch of this pattern follows the change list).
- Ensures K and V rows in VTCM are padded to 128 bytes to support aligned HVX operations.
- Correctly synchronizes DMA transfers to prevent race conditions.
- Uses `FLASH_ATTN_BLOCK_SIZE` of 128 for efficient chunking.
* hexagon: use aligned mad_f16
* hexagon: flash_attn: more aligned ops
* hexagon: optimize scale_f32 hvx helpers
* hexagon: unroll fa loops
* hexagon: remove unused set-rows log
* hexagon: flash_attn_ext add support for DMAing Q
- Update `op_flash_attn_ext` to include Q row size in scratchpad allocation.
- Pad Q row size to 128 bytes for alignment.
- Implement DMA transfer for Q tensor in `flash_attn_ext_f16_thread`.
- Update dot product computations to use VTCM-buffered Q data.
* hexagon: fix handling of NaNs in HVX dot products
* hexagon: clean up scratchpad allocation in flash_attn
* hexagon: improve fp16/fp32 matmul
- Introduced `vec_dot_f16_f16` and `vec_dot_f16_f16_rx2` kernels using efficient HVX dot product intrinsics.
- Added `quantize_fp32_f16` to copy/convert weights from DDR to VTCM
- Updated `op_matmul` to use the optimized path when VTCM capacity allows and broadcasting requirements are compatible.
- Implemented fallback logic to the original implementation for complex broadcasting scenarios.
* hexagon: fix HVX_ARCH check
* hexagon: matmul cleanup and fp16 fixes
Use aligned vec_dot_f16 for 2d matmuls and unaligned version for 4d.
* hexagon: fix fp16 x fp16 matmuls and some minor refactoring
* hexagon: add support for GET_ROWS f32 -> f32
Also optimize SET_ROWS threading a bit when we have just a few rows to process.
* hexagon: optimize set-rows threading
* hexagon: update adb/run-bench.sh to properly support experimental and verbose options
* hexagon: flash_attn: use aligned vectors for dot products
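A generic sketch of the double-buffered prefetch pattern referenced above. The real
code uses Hexagon user DMA into 128-byte-padded VTCM rows with explicit DMA waits;
here plain memcpy stands in for the transfers, and all names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr size_t BLOCK_ROWS = 128; // FLASH_ATTN_BLOCK_SIZE, per the change list

// Process a tensor block by block while the next block is staged into the other
// half of a double buffer (memcpy is a stand-in for the asynchronous DMA).
static void process_blocks(const uint8_t * src, uint8_t * vtcm[2],
                           size_t row_bytes, size_t n_blocks) {
    const size_t block_bytes = BLOCK_ROWS * row_bytes;
    std::memcpy(vtcm[0], src, block_bytes); // prefetch block 0
    for (size_t b = 0; b < n_blocks; ++b) {
        const size_t cur = b & 1;
        if (b + 1 < n_blocks) {
            // start staging the next block into the other buffer
            std::memcpy(vtcm[cur ^ 1], src + (b + 1) * block_bytes, block_bytes);
        }
        // ... compute on vtcm[cur]; in the real code this overlaps with the DMA,
        // and a DMA wait precedes the next iteration to avoid races ...
    }
}
```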
nwyin [Mon, 5 Jan 2026 19:38:57 +0000 (13:38 -0600)]
ggml webgpu: add CEIL operation support (llama/18605)
* ggml-webgpu: add CEIL operation support
Add support for the CEIL unary operation in the WebGPU backend:
- Add CEIL_FUNC shader template in unary_op.wgsl
- Add 4 shader variants (f32, f16, inplace versions)
- Initialize CEIL pipelines in ggml-webgpu.cpp
- Register CEIL in supports_op function
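A rough sketch of the supports_op registration pattern, using public ggml accessors;
the actual ggml-webgpu function has a different signature and many more cases, and the
GGML_UNARY_OP_CEIL enum value and the type handling here are assumptions:

```cpp
#include "ggml.h"

// Sketch: report CEIL as supported only for the data types that have shader variants.
static bool webgpu_supports_unary(const struct ggml_tensor * op) {
    if (op->op != GGML_OP_UNARY) {
        return false;
    }
    switch (ggml_get_unary_op(op)) {
        case GGML_UNARY_OP_CEIL:
            // f32 and f16 variants exist (plus their inplace versions)
            return op->type == GGML_TYPE_F32 || op->type == GGML_TYPE_F16;
        default:
            return false;
    }
}
```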
Chenguang Li [Mon, 5 Jan 2026 07:38:18 +0000 (15:38 +0800)]
CANN: add operator fusion support for ADD + RMS_NORM (llama/17512)
This commit implements operator fusion for ADD + RMS_NORM operations
in the CANN backend to reduce memory access overhead and improve
performance. The fusion is controlled by the GGML_CANN_OPERATOR_FUSION
environment variable (default: false).
Changes:
- Implement ggml_cann_op_add_rms_norm_fused() using ACLNN AddRmsNorm
- Add ggml_cann_can_fuse() to check fusion eligibility
- Integrate fusion logic into computation graph evaluation
- Add test cases for ADD + RMS_NORM fusion
- Update documentation with new environment variable
The fusion combines ADD and RMS_NORM into a single kernel call,
which is more efficient than executing them separately.
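A rough sketch of the fusion-eligibility idea using public ggml graph accessors; the
real ggml_cann_can_fuse performs additional checks (types, shapes, use counts) and has
a different signature:

```cpp
#include "ggml.h"

// Sketch: an ADD followed immediately by an RMS_NORM that consumes the ADD's
// output is a candidate for the fused AddRmsNorm kernel.
static bool can_fuse_add_rms_norm(struct ggml_cgraph * gf, int i) {
    if (i + 1 >= ggml_graph_n_nodes(gf)) {
        return false;
    }
    struct ggml_tensor * add  = ggml_graph_node(gf, i);
    struct ggml_tensor * norm = ggml_graph_node(gf, i + 1);
    return add->op  == GGML_OP_ADD      &&
           norm->op == GGML_OP_RMS_NORM &&
           norm->src[0] == add;
}
```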
Daniel Bevenius [Sun, 4 Jan 2026 20:22:16 +0000 (21:22 +0100)]
sampling : add support for backend sampling (llama/17004)
* sampling : add support for backend sampling
This commit adds support for performing sampling operations on the
backend (e.g. GPU) as part of the model computation graph.
The motivation for this feature is to enable sampling to be performed
directly on the backend as part of the computation graph being executed,
allowing for some or all of the sampling to be done on the backend.
For example, the backend sampler chain might select/sample a token
directly, in which case only the sampled token needs to be transferred
from device memory to host memory.
It is also possible for the backend samplers to perform filtering of
the logits, or compute and filter the probability distribution, in
which case only the filtered logits or probabilities need to be
transferred back to system memory for further processing by CPU
samplers.
Currently, backend sampling works in a similar manner to pooling: it is a
function called by build_graph, and the sampler operations become part of the
model's computation graph (a minimal sketch follows at the end of this entry).
* llama-cli : add backend sampler configuration
* server : add backend sampling options/configuration
* webui : add backend sampling options
* ggml : add initial cumsum implementation for CUDA
* sampling : enable all backend sampler tests
This commit enables all existing backend sampler tests in the
test-backend-sampler. Previously, some tests were disabled because
there were missing ggml operation implementations.
* graph : do not include llama-model.h
* sampling : always expose sampled_ids
This commit precomputes and caches the full-vocab token id list in
llama_context's constructor, so llama_get_backend_sampled_token_ids_ith
always returns a valid pointer.
The motivation for this is that it allows both common/sampling.cpp
and src/llama-sampling.cpp to simplify their logic.
Not all backend samplers that process logits need to set sampled_tokens_id,
since they may not change the order of the logits: for example, the temperature
sampler only scales the logits, and the logit bias sampler only adds a bias to
specific token ids; neither changes the order. In these cases there is no
device-to-host copy of the sampled token ids, and this is the use case where
having the precomputed list is useful.
* sampling : ensure at most one output token per seq
This commit adds a check in the batch allocator to ensure that when
backend sampling is enabled, at most one output token is specified per
sequence.
* CUDA: Optimize argsort for gpu-based token sampling
Argsort is currently used for top-k. We optimize argsort in two ways:
1. Use `DeviceRadixSort` for the single-row/sequence case to parallelize it
across the SMs.
2. Use `DeviceSegmentedSort` for the multi-row/sequence case, as this is the
correct entry point (it chooses between different execution paths, including
`DeviceSegmentedRadixSort`, and picks the best one according to its heuristics).
https://nvidia.github.io/cccl/cub/api/structcub_1_1DeviceSegmentedSort.html#overview
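An illustrative sketch of the core idea of this entry, i.e. sampling expressed as
ggml graph ops. The actual implementation builds a configurable sampler chain during
graph construction; this only shows temperature scaling plus a greedy argmax, using
public ggml ops:

```cpp
#include "ggml.h"

// Sketch: a "backend sampler" as graph nodes. Only the i32 token ids produced by
// the argmax need to be copied from device to host after the graph runs.
static struct ggml_tensor * build_greedy_sample(struct ggml_context * ctx,
                                                struct ggml_tensor  * logits, // [n_vocab, n_outputs]
                                                float                 temp) {
    struct ggml_tensor * scaled = ggml_scale(ctx, logits, 1.0f / temp); // temperature
    struct ggml_tensor * probs  = ggml_soft_max(ctx, scaled);           // probabilities
    (void) probs; // a filtering sampler could return (filtered) probs instead
    return ggml_argmax(ctx, scaled); // one sampled token id per output row
}
```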
Jeff Bolz [Fri, 2 Jan 2026 21:32:30 +0000 (15:32 -0600)]
vulkan: Optimize GGML_OP_CUMSUM (llama/18417)
* vulkan: Optimize GGML_OP_CUMSUM
There are two paths: the preexisting one, which does a whole row per workgroup
in a single shader, and a new one that splits each row into multiple blocks and
does two passes. The first pass computes partials within a block; the second adds
the block partials to compute the final result. The multipass shader is used
when there are a small number of large rows.
In the whole-row shader, handle multiple elements per invocation.
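A CPU-side illustration of the two-pass structure (not the shader itself): pass one
scans within each block and records a per-block total, pass two adds the prefix of
those totals to every element. The block size and function name are illustrative:

```cpp
#include <cstddef>
#include <vector>

static void cumsum_two_pass(std::vector<float> & row, size_t block) {
    const size_t n_blocks = (row.size() + block - 1) / block;
    std::vector<float> block_sum(n_blocks, 0.0f);

    // pass 1: inclusive scan inside each block (one workgroup per block in the shader)
    for (size_t b = 0; b < n_blocks; ++b) {
        float acc = 0.0f;
        for (size_t i = b * block; i < row.size() && i < (b + 1) * block; ++i) {
            acc += row[i];
            row[i] = acc;
        }
        block_sum[b] = acc;
    }

    // pass 2: add the running total of all preceding blocks to each element
    float carry = 0.0f;
    for (size_t b = 0; b < n_blocks; ++b) {
        for (size_t i = b * block; i < row.size() && i < (b + 1) * block; ++i) {
            row[i] += carry;
        }
        carry += block_sum[b];
    }
}
```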
Jeff Bolz [Thu, 1 Jan 2026 07:58:27 +0000 (01:58 -0600)]
vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron (llama/18295)
* vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron
Also handle GGML_OP_SCALE at the end (nemotron, deepseek2).
Fewer pipeline variants and spec constants, just use push constants.
In test_topk_moe, change exp_probs_b to be 1D, matching real networks.
Update test-backend-ops and ggml-backend to allow verifying multiple outputs
in a fusion test (topk_moe has two outputs). Previously only the final node
was verified.
* change test_topk_moe to allow results in arbitrary order
Rahul Sathe [Wed, 31 Dec 2025 01:08:44 +0000 (06:38 +0530)]
Work around broken IntelSYCLConfig.cmake in Intel oneAPI 2025.x (llama/18345)
* cmake: work around broken IntelSYCLConfig.cmake in oneAPI 2025.x
* [AI] sycl: auto-detect and skip incompatible IntelSYCL package
Automatically detect compiler versions with incompatible IntelSYCL
CMake configuration files and fall back to manual SYCL flags instead
of requiring users to set options manually.
Fixes build failures with oneAPI 2025.x where IntelSYCLConfig.cmake
has SYCL_FEATURE_TEST_EXTRACT invocation errors.
* refactor: improve SYCL provider handling and error messages in CMake configuration
* refactor: enhance SYCL provider validation and error handling in CMake configuration
* ggml-sycl: wrap find_package(IntelSYCL) to prevent build crashes
Jeff Bolz [Fri, 26 Dec 2025 17:15:50 +0000 (11:15 -0600)]
vulkan: optimize decodeFuncB in coopmat2 mul_mat_id shader (llama/18349)
* vulkan: Use BK=32 for coopmat2 mul_mat_id
* vulkan: optimize decodeFuncB in coopmat2 mul_mat_id shader
Disable robustness, remove the OOB check in decodeFuncB, and initialize the
row_ids to zero to avoid OOB access.
Don't slice/offset the B matrix by ic * BN only to adjust the coordinate back down
to the range [0, BN) in decodeFuncB. Instead, slice with a row offset of zero and
remove the '& (BN - 1)'. This allows the compiler to common (i.e. combine) some of
the shared memory loads.