git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
uvos [Wed, 11 Mar 2026 05:04:32 +0000 (06:04 +0100)]
cuda/hip: fix loop unrolling in ssm-conv (#20369)
Pascal [Wed, 11 Mar 2026 04:31:33 +0000 (05:31 +0100)]
Fix agentic mcp image single model (#20339)
* webui: fix MCP image attachments dropped during the agentic loop in single-model mode
* chore: update webui build output
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:03:53 +0000 (00:03 -0300)]
vendor : update cpp-httplib to 0.37.0 (#20207)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:01:56 +0000 (00:01 -0300)]
vendor : update miniaudio to 0.11.25 (#20209)
Neo Zhang [Wed, 11 Mar 2026 01:53:34 +0000 (09:53 +0800)]
fix op rope, add rope_back (#20293)
Neo Zhang [Wed, 11 Mar 2026 01:53:05 +0000 (09:53 +0800)]
fix for failed UT case: ACC, L2_NORM, UPSCALE, fused_glu, unary (#20283)
Vinicios Lugli [Tue, 10 Mar 2026 22:40:14 +0000 (19:40 -0300)]
model : qwen3vl reranker text support (#20332)
* model : fix qwen3vl reranker support
* Remove CLS_OUT
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
ddh0 [Tue, 10 Mar 2026 19:43:29 +0000 (14:43 -0500)]
llama-quant : correct `n_attention_wv` usage (#20357)
* llama-quant : correct `n_attention_wv` usage
In #19770, I introduced a regression in the way the
`quantize_state_impl` counter values were initialized. I was
incrementing and using `n_attention_wv` in the same loop, when it should
have been fixed by the time we're deciding tensor types in
`llama_tensor_get_type_impl` (for `use_more_bits`).
I never observed a difference in any of [my tests](https://github.com/ggml-org/llama.cpp/pull/19770#issuecomment-4000424712)
- it was only after @bartowski kindly pointed this out that I realized
it was incorrect. (Thanks!)
* simplify
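The corrected ordering amounts to a two-pass structure: finish accumulating the counter, then make decisions against the final total. A generic sketch with hypothetical names (this is not the `quantize_state_impl` code):

```cpp
// Generic sketch of the bug class described above (illustrative, not
// llama.cpp code): a counter that feeds a decision must be fully
// accumulated before the decision loop, not read mid-accumulation.
#include <cstddef>
#include <vector>

// Decide which items get "more bits" based on the TOTAL count of kind 0.
std::vector<bool> decide_more_bits(const std::vector<int> & kinds) {
    int n_kind0 = 0;
    for (int k : kinds) {                       // pass 1: finish counting
        if (k == 0) n_kind0++;
    }
    std::vector<bool> more_bits(kinds.size());
    for (size_t i = 0; i < kinds.size(); ++i) { // pass 2: decide
        // the buggy version consulted a partially-accumulated counter here
        more_bits[i] = (kinds[i] == 0 && n_kind0 > 2);
    }
    return more_bits;
}
```

In the single-pass version, early items would see a smaller count than late items, which is exactly the kind of silent inconsistency the PR describes.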
Georgi Gerganov [Tue, 10 Mar 2026 19:36:57 +0000 (21:36 +0200)]
ggml : bump RPC version (#20330)
Reese Levine [Tue, 10 Mar 2026 16:14:27 +0000 (09:14 -0700)]
ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (#20173)
* K quant speedup (#20)
* Basic JIT compilation for mul_mat, get_rows, and scale (#17)
* scale jit working
* preliminary working jit for getrows and mulmat, needs refining
* simplified mul_mat preprocessing switch statement
* get_rows fixes, mul_mat refinement
* formatted + last edits
* removed some extraneous prints
* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish
* small fix
* some changes, working
* get_rows and mul_mat jit fixed and working
* Update formatting
* formatting
* Add header
---------
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Start work on all-encompassing shader library
* refactor argmax, set_rows
* Refactor all but flashattention, mat mul
* no gibberish, all k quants added, merged
* vec memory fix
* q6_k matching metal on my machine, tests passing
* Set tile size for q6_k separately
* Separate out fast shaders
---------
Co-authored-by: neha-ha <redacted>
* Move towards writeBuffer for params
* Move away from multiple buffers for set_rows errors, remove host buffer for parameter buffers, minor cleanups
* Remove extra file
* Formatting
---------
Co-authored-by: neha-ha <redacted>
Piotr Wilkin (ilintar) [Tue, 10 Mar 2026 14:21:51 +0000 (15:21 +0100)]
Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)
Ray Xu [Tue, 10 Mar 2026 13:38:18 +0000 (21:38 +0800)]
examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)
* Fix logic for retrieving schema items in `json_schema_to_grammar.py`
If `schema['items']` is `{}` and `'prefixItems' not in schema`, then since `{}` is falsy, the original code raises an error.
I think if `schema['items']` is `{}`, then items should just be `{}`
* Apply suggestion from @CISC
Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tests for arrays with empty items
Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
a3894281 [Tue, 10 Mar 2026 13:31:24 +0000 (15:31 +0200)]
docs: update CPU backend ops to mark POOL_1D as supported (#20304)
Georgi Gerganov [Tue, 10 Mar 2026 13:00:08 +0000 (15:00 +0200)]
models : fix assert in mamba2 (cont) (#20335)
* models : fix assert in mamba2 (cont)
* cont : add n_group mod
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Georgi Gerganov [Tue, 10 Mar 2026 12:28:23 +0000 (14:28 +0200)]
server : make 2 checkpoints near the end of the prompt (#20288)
* server : make 2 checkpoints near the end of the prompt
* cont : adjust checkpoints
Sigbjørn Skjæret [Tue, 10 Mar 2026 10:40:26 +0000 (11:40 +0100)]
common : fix incorrect uses of stoul (#20313)
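The pitfalls behind `stoul` fixes like this one can be sketched as follows; `parse_u32_strict` is a hypothetical wrapper, not the code from #20313:

```cpp
// Hypothetical sketch (not the code from #20313): why naive std::stoul
// calls are error-prone, and a stricter wrapper.
#include <cstdint>
#include <stdexcept>
#include <string>

// std::stoul("-1") does NOT throw: it negates modulo 2^N, yielding a huge
// value. It also stops at the first non-digit ("12abc" -> 12). This wrapper
// rejects both cases and range-checks against uint32_t.
uint32_t parse_u32_strict(const std::string & s) {
    if (s.empty() || s[0] == '-' || s[0] == '+') {
        throw std::invalid_argument("expected unsigned integer: " + s);
    }
    size_t pos = 0;
    unsigned long v = std::stoul(s, &pos, 10);
    if (pos != s.size()) {
        throw std::invalid_argument("trailing characters in: " + s);
    }
    if (v > UINT32_MAX) {
        throw std::out_of_range("value too large: " + s);
    }
    return (uint32_t) v;
}
```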
Charles Xu [Tue, 10 Mar 2026 07:25:25 +0000 (08:25 +0100)]
kleidiai : support for concurrent sme and neon kernel execution (#20070)
Taimur Ahmad [Tue, 10 Mar 2026 06:49:52 +0000 (11:49 +0500)]
ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)
* ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0
Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv repacking for iq4_nl
* ggml-cpu: add generic impl for iq4_nl gemm/gemv
* ggml-cpu: add rvv repacking for q8_0
* ggml-cpu: refactor; add rvv repacking for q4_0, q4_K
* ggml-cpu: refactor; add rvv repacking for q2_K
Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: refactor rvv repack
---------
Co-authored-by: Rehan Qasim <redacted>
Julian Pscheid [Tue, 10 Mar 2026 06:32:24 +0000 (23:32 -0700)]
metal: handle command buffer failures gracefully in synchronize (#20306)
Replace GGML_ABORT("fatal error") in ggml_metal_synchronize() with
error flag + return. This aligns synchronize error handling with
graph_compute, which already returns GGML_STATUS_FAILED for the same
condition.
When a command buffer fails (e.g., iOS GPU access revocation during
backgrounding, macOS eGPU disconnect, OOM), the backend enters an
error state instead of killing the host process. Subsequent
graph_compute calls return GGML_STATUS_FAILED immediately. Recovery
requires recreating the backend.
Failed extra command buffers are properly released on the error path
to avoid Metal object leaks.
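A minimal sketch of the sticky error-flag pattern this commit describes; `fake_backend` and the status values are illustrative stand-ins, not llama.cpp's Metal backend API:

```cpp
// Minimal sketch of the "sticky error flag" pattern described above.
// Names (fake_backend, status values) are illustrative, not llama.cpp's API.
#include <cstdio>

enum status { STATUS_SUCCESS, STATUS_FAILED };

struct fake_backend {
    bool in_error_state = false;

    // synchronize: on a command-buffer failure, record the error and
    // return instead of aborting the whole process.
    status synchronize(bool cmd_buf_failed) {
        if (in_error_state) return STATUS_FAILED;
        if (cmd_buf_failed) {
            fprintf(stderr, "command buffer failed; entering error state\n");
            in_error_state = true;  // sticky: only recreating the backend clears it
            return STATUS_FAILED;
        }
        return STATUS_SUCCESS;
    }

    // subsequent compute calls fail fast instead of touching a dead device
    status graph_compute() {
        return in_error_state ? STATUS_FAILED : STATUS_SUCCESS;
    }
};
```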
ddh0 [Tue, 10 Mar 2026 06:16:05 +0000 (01:16 -0500)]
llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770)
* quantize : imatrix-fail early + code cleanup
* fix manual override printing
it's in the preliminary loop now, so needs to be on its own line
* revert header changes per ggerganov
* remove old #includes
* clarify naming
rename `tensor_quantization` to `tensor_typo_option` to describe its functionality
* fix per barto
Aldehir Rojas [Mon, 9 Mar 2026 23:29:21 +0000 (18:29 -0500)]
common: consolidate PEG string parsers (#20263)
* common : consolidate PEG string parsers
* cont : fix json_string_content()
Xuan-Son Nguyen [Mon, 9 Mar 2026 22:42:24 +0000 (23:42 +0100)]
model: fix step3.5 n_rot (#20318)
Xuan-Son Nguyen [Mon, 9 Mar 2026 21:22:39 +0000 (22:22 +0100)]
llama: dynamic head_dim and n_rot for SWA (#20301)
* llama: dynamic head_dim and n_rot for SWA
* also add gguf_writer wrappers
* fix build
* build_rope_shift arg reorder
Evan Huus [Mon, 9 Mar 2026 16:47:54 +0000 (12:47 -0400)]
server: Parse port numbers from MCP server URLs in CORS proxy (#20208)
* Parse port numbers from MCP server URLs
* Pass scheme to http proxy for determining whether to use SSL
* Fix download on non-standard port and re-add port to logging
* add test
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Paul Flynn [Mon, 9 Mar 2026 14:48:12 +0000 (10:48 -0400)]
metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (#20250)
Enable mul_mv_ext small-batch kernels (BS 2-8) for BF16, Q2_K,
and Q3_K quantization types. These types previously fell through
to the slower single-row mul_mv path.
BF16 uses the float4 dequantize path (like F16). Q2_K and Q3_K
use the float4x4 K-quant path (like Q4_K/Q5_K/Q6_K).
Co-authored-by: Claude Opus 4.6 <redacted>
Georgi Gerganov [Mon, 9 Mar 2026 14:47:06 +0000 (16:47 +0200)]
server : fix checkpoints n_tokens calculation (#20287)
Georgi Gerganov [Mon, 9 Mar 2026 14:45:11 +0000 (16:45 +0200)]
metal : add upscale (#20284)
Georgi Gerganov [Mon, 9 Mar 2026 14:44:25 +0000 (16:44 +0200)]
server : warn swa-full is not supported for non-SWA models (#20291)
Georgi Gerganov [Mon, 9 Mar 2026 14:43:38 +0000 (16:43 +0200)]
server : fix off-by-1 in server_tokens::size_up_to_pos() (#20279)
* server : fix off-by-1 in server_tokens::size_up_to_pos()
* cont : fix typo [no ci]
Piotr Wilkin (ilintar) [Mon, 9 Mar 2026 13:25:11 +0000 (14:25 +0100)]
common: map developer role to system (#20215)
* Map developer role to system
* Simplify
Georgi Gerganov [Mon, 9 Mar 2026 11:15:15 +0000 (13:15 +0200)]
models : fix assert in mamba2 graph (#20270)
Georgi Gerganov [Mon, 9 Mar 2026 08:33:12 +0000 (10:33 +0200)]
server : add kill switch when server is stuck (#20277)
Aman Gupta [Mon, 9 Mar 2026 08:15:36 +0000 (16:15 +0800)]
ggml-cuda: disable gdn for musa (#20278)
ddh0 [Mon, 9 Mar 2026 07:28:41 +0000 (02:28 -0500)]
llama-quant : left-align tensor names in output (#20117)
Aman Gupta [Mon, 9 Mar 2026 07:05:34 +0000 (15:05 +0800)]
contributing: limit open PRs for new contributors to 1 (#20036)
Bertay Eren [Mon, 9 Mar 2026 06:24:16 +0000 (09:24 +0300)]
ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (#20219)
Ruben Ortlam [Mon, 9 Mar 2026 06:23:45 +0000 (07:23 +0100)]
vulkan: skip zero size tensors in backend copies (#20233)
Michael Huang [Mon, 9 Mar 2026 04:45:43 +0000 (21:45 -0700)]
cuda : display total and free VRAM capacity during device initialization (#20185)
Aaron Teo [Mon, 9 Mar 2026 01:05:44 +0000 (09:05 +0800)]
llama-bench: introduce `-hf` and `-hff` flags & use `--mmap 1` by default (#20211)
Piotr Wilkin (ilintar) [Mon, 9 Mar 2026 00:11:22 +0000 (01:11 +0100)]
PEG parser for LFM2 (#20251)
* PEG parser for LFM2
* Simplify using python_value()
Georgi Gerganov [Sun, 8 Mar 2026 20:16:46 +0000 (22:16 +0200)]
server : do not create checkpoints right after mtmd chunks (#20232)
Sigbjørn Skjæret [Sun, 8 Mar 2026 17:58:28 +0000 (18:58 +0100)]
graph : remove redundant scale_w parameter (#20235)
Aldehir Rojas [Sun, 8 Mar 2026 16:17:02 +0000 (11:17 -0500)]
common : gracefully handle incomplete output (#20191)
* common : handle incomplete UTF-8 at end of input in PEG parser
* cont : if reached end prematurely, emit needs_more_input to propagate partial output
* cont: refactor peg parse context to add lenient flag
* cont : remove partial flag, keep lenient flag
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 16:15:49 +0000 (17:15 +0100)]
Fix compile bug (#20203)
* Fix compile bug
* Update common/chat-auto-parser-helpers.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 16:14:43 +0000 (17:14 +0100)]
Fix structured outputs (#20223)
* Fix structured outputs
* Update common/chat-auto-parser-generator.cpp
Co-authored-by: Aldehir Rojas <redacted>
---------
Co-authored-by: Aldehir Rojas <redacted>
GiantPrince [Sun, 8 Mar 2026 11:38:17 +0000 (07:38 -0400)]
ggml-vulkan: Add ELU op support (#20183)
* ggml-Vulkan: add ELU support
* ggml-Vulkan: remove extra spaces and variables
* ggml-Vulkan: fix format issue
* ggml-Vulkan: fix format issue
* fix whitespace issue
* Update Vulkan.csv and ops.md
Jeff Bolz [Sun, 8 Mar 2026 11:33:48 +0000 (06:33 -0500)]
vulkan: Fix data races in coopmat1 mul_mat(_id) (#20084)
* vulkan: Fix data races in coopmat1 mul_mat(_id)
Add barriers between coopmat store and regular loads. We sort of got away with
this because it was the same subgroup accessing the values, but it's still a
race and may not work.
* switch to subgroup control barriers
Johannes Gäßler [Sun, 8 Mar 2026 11:30:21 +0000 (12:30 +0100)]
llama: end-to-end tests (#19802)
* tests: add end-to-end tests per model architecture
* fixup for rebase
* fix use-after-free in llama-model-loader.cpp
* fix CI
* fix WebGPU
* fix CI
* disable CI for macOS-latest-cmake-arm64
* use expert_weights_scale only if != 0.0f
* comments
Christopher Maher [Sun, 8 Mar 2026 10:42:28 +0000 (03:42 -0700)]
readme : update infra list (#20212)
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 10:33:03 +0000 (11:33 +0100)]
Revert to OAI-compatible args (#20213)
* Revert to OAI-compatible args
* Apply workaround::func_args_not_string
decahedron1 [Sun, 8 Mar 2026 09:08:57 +0000 (04:08 -0500)]
server : correct index on finish in OAI completion streams (#20226)
Neo Zhang [Sun, 8 Mar 2026 04:00:07 +0000 (12:00 +0800)]
[SYCL] support Flash Attention for fp32/fp16/Q4/Q5/Q8 (#20190)
* support flash-attention for fp32/fp16/Q4/Q5/Q8
* rm warning
* update for JIT
Aman Gupta [Sat, 7 Mar 2026 07:41:10 +0000 (15:41 +0800)]
ggml: add GATED_DELTA_NET op (#19504)
* ggml: add GATED_DELTA_NET op
* remove the transpose
* add KDA
* add qwen35 dense
* llama : check for fused gated delta net backend support
---------
Co-authored-by: Georgi Gerganov <redacted>
lhez [Sat, 7 Mar 2026 02:03:05 +0000 (18:03 -0800)]
opencl: add l2_norm (#20160)
Piotr Wilkin (ilintar) [Sat, 7 Mar 2026 00:55:33 +0000 (01:55 +0100)]
Autoparser: True streaming (#20177)
* Relax atomicity constraint for nicer, more pleasant True Streaming parsing
* Whitespace
* Remove redundant atomics
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 21:34:15 +0000 (22:34 +0100)]
Autoparser: add optional argument reshuffle capability (#20171)
* Allow reshuffled arguments in tagged argument parser format tool calls.
* Remove shuffle just keep the optional parsers in any order
* Remove unnecessary import
Bartowski [Fri, 6 Mar 2026 21:06:56 +0000 (16:06 -0500)]
quants : Add memsets and other fixes for IQ quants (#19861)
* Add memsets and other fixes for IQ quants
* Make memset unconditional, change Laux back to L
* Move another memset
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 20:25:41 +0000 (21:25 +0100)]
Add @pwilkin to CODEOWNERS for autoparser code (#20174)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 20:01:00 +0000 (21:01 +0100)]
Autoparser - complete refactoring of parser architecture (#18675)
* Autoparser - full single commit squish
* Final pre-merge changes: minor fixes, Kimi 2.5 model parser
Todor Boinovski [Fri, 6 Mar 2026 17:59:26 +0000 (09:59 -0800)]
hexagon: add f32 ssm_conv op (#20122)
* hexagon: add ssm_conv op
* hexagon: hvx kernel is functional
* hexagon: improvements to ssm-conv hvx kernel
* hexagon: added dma to ssm-conv hvx kernel
* hexagon: ssm-conv dynamically compute gather scratchpad
* hex-ssm-conv: add local context and fix various issues (spad indexing, etc)
---------
Co-authored-by: Max Krasnyansky <redacted>
Tom Vaucourt [Fri, 6 Mar 2026 16:41:12 +0000 (17:41 +0100)]
server : preserve anthropic thinking blocks in conversion (#20120)
* server : preserve anthropic thinking blocks in conversion (#20090)
* server : add tests for anthropic thinking block conversion
---------
Co-authored-by: root <redacted>
Max Krasnyansky [Fri, 6 Mar 2026 16:32:40 +0000 (08:32 -0800)]
cpu: skip redundant ROPE cache updates (#20149)
Aman Gupta [Fri, 6 Mar 2026 16:05:43 +0000 (00:05 +0800)]
ggml-cuda: add mem check for fusion (#19916)
* ggml-cuda: add mem check for fusion
* Replace NaNs with -FLT_MAX
* fix typo
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Aaron Teo [Fri, 6 Mar 2026 15:24:38 +0000 (23:24 +0800)]
ggml: update comments for backends which have no memory to report (#20157)
Signed-off-by: Aaron Teo <redacted>
shalinib-ibm [Fri, 6 Mar 2026 15:22:39 +0000 (20:52 +0530)]
ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)
This patch addresses an Internal Compiler Error (segmentation fault)
observed with GCC 15 by replacing the intrinsic + cast with a cast on the
data first, followed by the intrinsic call. This bypasses the buggy
compiler path while maintaining identical instruction selection.
Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that both the original
code and this fix generate the identical Power10 prefixed load instruction:
`plxv 40, 2(14)`
This ensures zero performance regression while unblocking builds on
newer toolchains.
Reproduced on:
- Alpine Linux + GCC 15.2.0-r2
- RHEL 9 + GCC 15.1.1 (gcc-toolset-15)
Signed-off-by: Shalini Salomi Bodapati <redacted>
Aman Gupta [Fri, 6 Mar 2026 15:09:59 +0000 (23:09 +0800)]
CUDA: use shared mem for ssm_conv (#20128)
* CUDA: use shared mem for ssm_conv
* fuse silu + ssm_conv
* fuse unary + mul
* enable for fp16
* formatting
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Tim Neumann [Fri, 6 Mar 2026 13:05:52 +0000 (14:05 +0100)]
context: ignore zero scale LoRAs when checking sameness (#20166)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 10:39:26 +0000 (11:39 +0100)]
Checkpoint every n tokens: squash (#20087)
Aleksander Grygier [Fri, 6 Mar 2026 09:00:39 +0000 (10:00 +0100)]
webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (#18655)
Johannes Gäßler [Fri, 6 Mar 2026 08:12:49 +0000 (09:12 +0100)]
ggml-cpu: fix data race for debug asserts (#20148)
Georgi Gerganov [Fri, 6 Mar 2026 06:46:51 +0000 (08:46 +0200)]
kv-cache : fix M-RoPE checkpoints (#20132)
Roj234 [Fri, 6 Mar 2026 05:41:11 +0000 (13:41 +0800)]
cli : Don't clear system prompt when using '/clear' (#20067)
* Enhance /clear command to include system prompt
Add system prompt to messages when clearing chat history.
* Use lambda
lhez [Fri, 6 Mar 2026 05:16:39 +0000 (21:16 -0800)]
opencl: add neg, exp and diag (#20127)
* opencl: add `neg`
* opencl: add `exp`
* opencl: add `diag`
YardenTal44 [Fri, 6 Mar 2026 02:29:13 +0000 (04:29 +0200)]
hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)
* hexagon: add fp16 support for binary ops: add,sub,mul,div
* hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)
* hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad
* snapdragon: fix readme link
---------
Co-authored-by: Max Krasnyansky <redacted>
ymcki [Thu, 5 Mar 2026 15:01:23 +0000 (23:01 +0800)]
models : kda chunk size = 16 (#19827)
* models : add llm_build_delta_net_base
* cont : keep qwen35 and qwen35moe graphs intact
* cont : add comments [no ci]
* add kimi linear to delta-net-base
* removed unnecessary ggml_cont from g_exp_t
* removed ggml_cont from g_diff_exp_t. moved ggml_cont for o to kimi-linear.cpp
* removed unnecessary diag mask
* cont : simplify
* cont : avoid graph splits
* scale q after mul instead of beginning
* scale q after mul instead of beginning
* identical ppl
* cont : fix scale and decay mask
* minor : remove TODO
* block implementation for kda
* remove space at the end of line 101
* concat+pad
* pad+binary row concat
* chunk size 16 for kda
* removed minor differences to master
---------
Co-authored-by: Georgi Gerganov <redacted>
Andreas Kieslinger [Thu, 5 Mar 2026 11:53:21 +0000 (12:53 +0100)]
CUDA: Improve performance via less synchronizations between token (#17795)
* Adds CPU-to-CUDA copy capability to
ggml_backend_cuda_cpy_tensor_async()
* Adds function to relax sync requirements between input copies on
supported backends (CUDA for now)
* Exchanges synchronous copy with async copy function.
* Adds macro guards to allow compilation in non-CUDA builds
* Reworked backend detection in ggml-backend.cpp to avoid linking
conflicts
* Relax requirement of checks in async CUDA copies from backend and buffer type to just buffer type, to avoid linking issues
* Minor cleanup
* Makes opt-in to relax use of explicit syncs more general. Backends like
vulkan which require a synchronization between HtoD copies and graph
execution could also adopt this change now.
* Reintroduces stricter check for CPU->CUDA backend async copy via
GGML_DEVICE_TYPE_CPU.
* Corrects initialization of ggml_backend_sync_mode in
ggml_backend_sched_split initialization
* Simplifies synchronizations to adhere to `saaasg` pattern.
* Apply suggestion from @ggerganov (src->buffer to buf_src)
Co-authored-by: Georgi Gerganov <redacted>
* Apply suggestion from @ggerganov (src->buffer to buf_src) v2
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Eric Zhang [Thu, 5 Mar 2026 11:47:14 +0000 (19:47 +0800)]
model : update Qwen3.5 model type detection (#20126)
* model : fix Qwen3.5 model type detection
* Update src/llama-model.cpp
whoops, my bad
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:47:28 +0000 (10:47 +0100)]
cli : add command and file auto-completion (#19985)
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:30:02 +0000 (10:30 +0100)]
convert : register Qwen 3.5 ForCausalLM for text only (#20119)
Aleksander Grygier [Thu, 5 Mar 2026 07:52:22 +0000 (08:52 +0100)]
webui: Improvements for Models Selector UI (#20066)
Marcel Petrick [Thu, 5 Mar 2026 07:50:21 +0000 (08:50 +0100)]
chore : correct typos [no ci] (#20041)
* fix(docs): correct typos found during code review
Non-functional changes only:
- Fixed minor spelling mistakes in comments
- Corrected typos in user-facing strings
- No variables, logic, or functional code was modified.
Signed-off-by: Marcel Petrick <redacted>
* Update docs/backend/CANN.md
Co-authored-by: Aaron Teo <redacted>
* Revert "Auxiliary commit to revert individual files from 846d1c301281178efbc6ce6060ad34c1ebe45af8"
This reverts commit 02fcf0c7db661d5ff3eff96b2b2db9fdb7213256.
* Update tests/test-backend-ops.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-backend-ops.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Signed-off-by: Marcel Petrick <redacted>
Co-authored-by: Aaron Teo <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Max Krasnyansky [Thu, 5 Mar 2026 05:55:29 +0000 (21:55 -0800)]
hexagon: Flash Attention optimizations (dma, mpyacc, multi-row) and MatMul updates (#20118)
* ggml-hexagon: enhance hvx_dot_f16_f16_aa_rx4 for improved performance by expanding vector handling and optimizing accumulation
# Conflicts:
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c
* ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx4 and enhance hvx_vec_reduce_sum_f32x4 for improved performance and reduced complexity
* ggml-hexagon: add hvx_dot_f16_f16_aa_rx32 for enhanced vector processing in flash attention
* optimize hvx_dot_f16_f16_aa_rx4 and hvx_dot_f16_f16_aa_rx32 by removing unused scale parameter and improving vector accumulation
* ggml-hexagon: refactor hvx_dot_f16_f16_aa_rx4 for improved readability and return HVX_Vector for better integration
* ggml-hexagon: initialize sums variable in hvx_dot_f16_f16_aa_rx32 for clarity
* ggml-hexagon: fix compiling error
* fix hvx_dot_f16_f16_aa_rx4 to handle leftover elements correctly using masking
* refactor hvx_dot_f16_f16_aa_rx4 to accept vector and leftover element counts as parameters for improved clarity and flexibility
* wip
* fa: instrumentation and dma reordering
* hex-fa: use block-size 64 to improve DMA pipelining
* hex-fa: optimize vec-dot for v79 and above
* hex-fa: use block size 64
* hex-fa: avoid scalar fp32->fp16 conversions
* hex-fa: simplify dot_f16 functions using optimized vec_mpyacc
* hex-fa: rewrite mad_f32_f16 using hvx_vec_mpyacc
* hex-mm: use mpyacc in matmul dot functions
---------
Co-authored-by: chraac <redacted>
lhez [Thu, 5 Mar 2026 05:32:26 +0000 (21:32 -0800)]
opencl: add `SET`, support i32 for `CPY`, minor refactor for cpy (#20101)
Todor Boinovski [Wed, 4 Mar 2026 23:04:59 +0000 (15:04 -0800)]
hexagon: add llama-completion runner script (#20095)
Nikhil Jain [Wed, 4 Mar 2026 19:54:55 +0000 (11:54 -0800)]
[WebGPU] Fix wait logic for inflight jobs (#20096)
* Enable tmate debugging for investigating thread safety issue
* Refactor wait and submit to operate on vector<wgpu::FutureWaitInfo>, and fix wait to delete only the future that is completed.
* Cleanup
* Remove clear change and run clang-format
* Cleanup
Masashi Yoshimura [Wed, 4 Mar 2026 19:19:00 +0000 (04:19 +0900)]
Add concat op to webgpu. (#20068)
Sigbjørn Skjæret [Wed, 4 Mar 2026 13:18:04 +0000 (14:18 +0100)]
tools : add missing clocale include in mtmd-cli [no ci] (#20107)
Johannes Gäßler [Wed, 4 Mar 2026 11:04:31 +0000 (12:04 +0100)]
ggml: fix ggml_is_contiguous_n for ne == 1 (#20092)
Adrien Gallouët [Wed, 4 Mar 2026 10:57:09 +0000 (11:57 +0100)]
ggml : use a simple std::thread in AMX without OpenMP (#20074)
Disabling OpenMP generally provides better inference performance (at
least in my testing) but the loading becomes slightly slower.
Benchmark results for `convert_B_packed_format()`:
Before this commit:
N K | No OpenMP OpenMP | Diff | Speedup
------------------------------------------------------------
512 2880 | 640.9us 263.5us | -58.9% | 0.41x
2880 4096 | 2.55ms 261.7us | -89.8% | 0.10x
201088 2880 | 256.44ms 21.61ms | -91.6% | 0.08x
------------------------------------------------------------
Total: 325.43ms vs 31.05ms
After:
N K | No OpenMP OpenMP | Diff | Speedup
------------------------------------------------------------
512 2880 | 1.49ms 263.5us | -82.3% | 0.18x
2880 4096 | 1.55ms 261.7us | -83.1% | 0.17x
201088 2880 | 24.03ms 21.61ms | -10.1% | 0.90x
------------------------------------------------------------
Total: 78.97ms vs 31.05ms
Tested with unsloth/gpt-oss-20b-GGUF:Q4_K_M.
Signed-off-by: Adrien Gallouët <redacted>
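The OpenMP-free path can be sketched as a plain `std::thread` fan-out over contiguous chunks; `parallel_for` below is a generic illustration, not the actual `convert_B_packed_format()` loop:

```cpp
// Generic sketch of the std::thread fan-out the commit describes, applied
// to an arbitrary workload; not the actual convert_B_packed_format() code.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// run fn(i) for i in [0, n) across up to n_threads threads, in contiguous chunks
template <typename F>
void parallel_for(int64_t n, unsigned n_threads, F fn) {
    n_threads = std::max(1u, std::min<unsigned>(n_threads, (unsigned) n));
    const int64_t chunk = (n + n_threads - 1) / n_threads;  // ceil(n / n_threads)
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        const int64_t lo = t * chunk;
        const int64_t hi = std::min(n, lo + chunk);
        if (lo >= hi) break;
        workers.emplace_back([=] { for (int64_t i = lo; i < hi; ++i) fn(i); });
    }
    for (auto & w : workers) w.join();
}
```

The trade-off the commit measures follows from this shape: threads are created and joined per call, so small workloads pay more overhead than a persistent OpenMP pool, while large workloads amortize it.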
ddh0 [Wed, 4 Mar 2026 08:53:38 +0000 (02:53 -0600)]
impl : use 6 digits for tensor dims (#20094)
Many models have vocabulary sizes, and thus tensor shapes, with more
than 5 digits (e.g. Gemma 3's vocab size is 262,208).
I already fixed this for `llama_format_tensor_shape` but missed it for
`llama_format_tensor_shape` until now. Oops.
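The effect of the field width can be sketched with a hypothetical `fmt_dim` helper (not the actual `llama_format_tensor_shape` code):

```cpp
// Why field width matters: a 5-wide printf field breaks column alignment
// once a dimension (e.g. a 262208-entry vocab) needs 6 digits.
// fmt_dim is a hypothetical helper for illustration.
#include <cinttypes>
#include <cstdio>
#include <string>

std::string fmt_dim(int64_t d, int width) {
    char buf[64];
    snprintf(buf, sizeof(buf), "%*" PRId64, width, d);
    return buf;
}
```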
SamareshSingh [Wed, 4 Mar 2026 08:30:40 +0000 (02:30 -0600)]
Fix locale-dependent float printing in GGUF metadata (#17331)
* Set C locale for consistent float formatting across all binaries.
* Add C locale setting to all tools binaries
Add std::setlocale(LC_NUMERIC, "C") to all 16 binaries in the tools/
directory to ensure consistent floating-point formatting.
* Apply suggestion from @JohannesGaessler
---------
Co-authored-by: Johannes Gäßler <redacted>
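A minimal sketch of the approach, assuming only that printf-family output follows `LC_NUMERIC` (the `fmt_float_c_locale` helper is hypothetical):

```cpp
// Sketch of the fix: pin LC_NUMERIC to "C" so printf-family float output
// always uses '.' as the decimal separator, regardless of the user's locale.
// fmt_float_c_locale is an illustrative helper, not llama.cpp code.
#include <clocale>
#include <cstdio>
#include <string>

std::string fmt_float_c_locale(double v) {
    // In a comma-decimal locale (e.g. de_DE), "%g" would print "3,14",
    // corrupting text metadata; pinning LC_NUMERIC avoids that.
    std::setlocale(LC_NUMERIC, "C");
    char buf[64];
    snprintf(buf, sizeof(buf), "%g", v);
    return buf;
}
```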
standby24x7 [Wed, 4 Mar 2026 05:44:49 +0000 (14:44 +0900)]
completion : Fix a typo in warning message (#20082)
resuse -> reuse
Mickael Desgranges [Tue, 3 Mar 2026 13:50:00 +0000 (14:50 +0100)]
docs: Fix intel documentation link (#20040)
Charles Xu [Tue, 3 Mar 2026 09:40:26 +0000 (10:40 +0100)]
kleidiai : add sme fp16 compute path for q4_0 gemm on aarch64 (#20043)
shaofeiqi [Tue, 3 Mar 2026 03:49:41 +0000 (19:49 -0800)]
opencl: add optimized q4_1 mm kernel for adreno (#19840)
* Add Q4_1 OpenCL Kernels
* opencl: refactor transpose
* opencl: format
* opencl: refactor q4_1 unpack
* opencl: move `ggml_cl_mul_mat_q4_1_f32_adreno`
* opencl: refactor `ggml_cl_mul_mat_q4_1_f32_adreno` and kernels
* opencl: rename kernel files and kernels
* opencl: fix build for non adreno
* opencl: move code around and format
---------
Co-authored-by: Li He <redacted>
Abhijit Ramesh [Tue, 3 Mar 2026 03:35:11 +0000 (19:35 -0800)]
ggml webgpu: fix workgroup dispatch limit for large batch sizes (#19965)
* ggml-webgpu: fix workgroup dispatch limit for large batch sizes
WebGPU limits dispatches to 65535 workgroups per dimension. Large MUL_MAT
operations with batch sizes exceeding this limit would fail.
* add compute_2d_workgroups() helper to split total workgroup ID across
X/Y dimensions
* update mul_mat_reg_tile.wgsl to reconstruct linear workgroup ID from 2D
dispatch
* update mul_mat_subgroup_matrix.wgsl to reconstruct linear workgroup ID
from 2D dispatch
* update mul_mat.wgsl to compute global index from 2D workgroup
coordinates
* refactor all three mul_mat dispatch paths to use the shared helper
* ggml-webgpu: add bounds checking for over-dispatched workgroups
2D workgroup dispatch can over-dispatch when total workgroups don't
divide evenly into the 65535 per-dimension limit. Extra workgroups
would compute invalid batch indices, causing memory corruption.
* add batch_idx bound check to mul_mat_reg_tile.wgsl and
mul_mat_subgroup_matrix.wgsl to prevent over-dispatched workgroups
from accessing invalid memory
* fixes test failures with large batch sizes (e.g., bs=[128, 1024])
* ggml-webgpu: add back TODO for splitting large sizes into batches
* Optimize 2d workgroup provisioning
* Set some parameters that increase speed
---------
Co-authored-by: Reese Levine <redacted>
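The split described in the bullets above can be sketched as follows; `compute_2d_workgroups` reuses the helper name from the commit message, but the body is an assumption for illustration, not the actual ggml-webgpu code:

```cpp
// Sketch of the 2D split described above: WebGPU caps each dispatch
// dimension at 65535 workgroups, so a linear count n is factored into
// (x, y) with x * y >= n; the shader then reconstructs the linear id and
// discards ids >= n (the bounds check that prevents over-dispatch).
// The body is illustrative, not the actual ggml-webgpu implementation.
#include <cstdint>
#include <utility>

const uint32_t WG_DIM_MAX = 65535;

std::pair<uint32_t, uint32_t> compute_2d_workgroups(uint32_t n) {
    if (n <= WG_DIM_MAX) return { n, 1 };
    uint32_t y = (n + WG_DIM_MAX - 1) / WG_DIM_MAX;  // ceil(n / 65535)
    uint32_t x = (n + y - 1) / y;                    // ceil(n / y) <= 65535
    return { x, y };
}
```

Because `x * y` can slightly exceed `n`, the extra workgroups would compute invalid batch indices; that is why the follow-up bullets add the `batch_idx` bound check in the shaders.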
Nikhil Jain [Mon, 2 Mar 2026 18:23:34 +0000 (10:23 -0800)]
ggml webgpu: Clean up per-thread parameter buffer pool and job submission logic (#19772)
* Allow webgpu_buf_pool to resize if needed; remove inflight_threads, replacing it with num_kernels for submission
* Run clang-format
* Keep track of num batched kernels that have not been submitted yet
* Run clang-format
* Increase buf pool max size
* Increase param buf pool init size
* Remove webgpu buf pool resizing
* Merge with master
* Add buffer pool growth
* Move buffer pool growth outside of lock
* Reduce max pool size to 32
* Run clang-format
* Only resize param buf pool
Masashi Yoshimura [Mon, 2 Mar 2026 15:59:53 +0000 (00:59 +0900)]
ggml-webgpu: Support non-contiguous `src0` and overlapping `src0/src1` in binary ops (#19850)
* ggml-webgpu: Add binary op support for overlapping and non-contiguous.
* Add newline to binary.wgsl
* Append the test of binary op for src overlapping to test_bin_bcast.
* Remove unnecessary newline.
Ruben Ortlam [Mon, 2 Mar 2026 14:58:25 +0000 (15:58 +0100)]
vulkan: tune MMVQ for Intel Windows (#19988)
Adrien Gallouët [Mon, 2 Mar 2026 14:40:49 +0000 (15:40 +0100)]
scripts : improve get-wikitext-2.sh (#19952)
* scripts : improve get-wikitext-2.sh
Switch to sh, add curl fallback, and avoid redundant downloads
Signed-off-by: Adrien Gallouët <redacted>
* fix indent
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Signed-off-by: Adrien Gallouët <redacted>