git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
8 weeks ago support permuted, remove check s0/s10 (#19889)
Neo Zhang [Thu, 26 Feb 2026 02:27:20 +0000 (10:27 +0800)]
support permuted, remove check s0/s10 (#19889)

Co-authored-by: Neo Zhang Jianyu <redacted>
8 weeks ago vulkan: check for memory overlap before doing fusion (#19768)
Jeff Bolz [Wed, 25 Feb 2026 17:25:38 +0000 (11:25 -0600)]
vulkan: check for memory overlap before doing fusion (#19768)

* vulkan: check for memory overlap before doing fusion

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

* address feedback

8 weeks ago common : add more aliases for sampler CLI params (#19797)
ddh0 [Wed, 25 Feb 2026 15:34:25 +0000 (09:34 -0600)]
common : add more aliases for sampler CLI params (#19797)

* common : add more aliases for sampler CLI params

8 weeks ago ci : update the ROCm/HIP toolchain versions [no ci] (#19891)
Slobodan Josic [Wed, 25 Feb 2026 14:54:49 +0000 (15:54 +0100)]
ci : update the ROCm/HIP toolchain versions [no ci] (#19891)

* [HIP] Update ROCm build container to rocm/dev-ubuntu-22.04:7.2 and HIP_SDK to 26.Q1

* revert container version

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
8 weeks ago server : enable multi-modal prompt caching (#19877)
Georgi Gerganov [Wed, 25 Feb 2026 13:15:42 +0000 (15:15 +0200)]
server : enable multi-modal prompt caching (#19877)

8 weeks ago server : support multi-modal context checkpoints (#19849)
Georgi Gerganov [Wed, 25 Feb 2026 13:14:27 +0000 (15:14 +0200)]
server : support multi-modal context checkpoints (#19849)

* Modify llama-memory-hybrid-iswa.cpp

* Modify llama-memory-recurrent.cpp

* Modify server-common.cpp

* Modify server-common.h

* Modify server-context.cpp

* Modify server-task.h

* Added comment to llama-memory-hybrid-iswa.cpp

* Remove comment from server-context.cpp

* Stylistic fix server-context.cpp

* Fix an issue when seqrm isn't called in server-context.cpp

* cont : alternative impl

* cont : cleanup

* cont : n_tokens -> int64_t

---------

Co-authored-by: timkhronos <redacted>
8 weeks ago scripts: update corpus of compare-logprobs (#19326)
Xuan-Son Nguyen [Wed, 25 Feb 2026 11:57:34 +0000 (12:57 +0100)]
scripts: update corpus of compare-logprobs (#19326)

* scripts: update corpus of compare-logprobs

* fix

8 weeks ago ci : update Windows ROCm build to 26.Q1 [no ci] (#19810)
Mario Limonciello [Wed, 25 Feb 2026 11:30:19 +0000 (05:30 -0600)]
ci : update Windows ROCm build to 26.Q1 [no ci] (#19810)

* Update build command to build llama-* tools not just ggml-hip
* Update rocWMMA headers to 7.2
* Add GFX1150 target
* Correct library paths for AMD libraries in 26.Q1

8 weeks ago gguf : fix ftell/fseek for Windows (#19870)
Aldehir Rojas [Wed, 25 Feb 2026 04:58:11 +0000 (22:58 -0600)]
gguf : fix ftell/fseek for Windows (#19870)

8 weeks ago models : fix graph splits (#19866)
Georgi Gerganov [Tue, 24 Feb 2026 22:01:13 +0000 (00:01 +0200)]
models : fix graph splits (#19866)

2 months ago server: fix query params lost when proxying requests in multi-model router mode (#19854)
Pascal [Tue, 24 Feb 2026 20:46:06 +0000 (21:46 +0100)]
server: fix query params lost when proxying requests in multi-model router mode (#19854)

* server: fix query params lost when proxying requests in multi-model router mode

* server: re-encode query params using httplib::encode_query_component in proxy

2 months ago ggml/gguf : prevent integer overflows (#19856)
Georgi Gerganov [Tue, 24 Feb 2026 18:17:11 +0000 (20:17 +0200)]
ggml/gguf : prevent integer overflows (#19856)

* gguf : prevent integer overflow for ggml_context mem size

* ggml : fix int overflows in ggml_new_object()

* gguf : prevent string exhaustion

* gguf : prevent array elements exhaustion

* ggml : fix negative tensor type oob

* py : assert that alignment is non-zero power of 2

* ggml : check int overflow in ggml_new_tensor_impl and ggml_new_object

* gguf-py : error on duplicate keys when reading

* py : restore tensor_fields

* enforce proper alignment in add_custom_alignment

* gguf : better name

* gguf : fix ctx size for no_alloc == true

* gguf : minor print fix

* ggml : print values when overflow

* ggml : remove deprecated ggml_type_sizef()

* ggml : relax ggml_type asserts to debug-only

* gguf : add mem_size overflow test

* gguf : add file size check for arrays

* ggml : relax asserts for ggml_get_type_traits()

* flake8 fix

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
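
As a rough illustration of the kind of guard these overflow fixes add before size computations (a minimal sketch with a hypothetical helper name, not the actual ggml/gguf code):

```cpp
#include <cstddef>
#include <limits>

// Hypothetical helper: multiply two sizes, reporting overflow instead of wrapping.
// The gguf/ggml fixes above add this kind of check before computing tensor and
// context byte sizes from untrusted file metadata.
static bool checked_mul(size_t a, size_t b, size_t & out) {
    if (a != 0 && b > std::numeric_limits<size_t>::max() / a) {
        return false; // multiplication would overflow size_t
    }
    out = a * b;
    return true;
}

// usage sketch:
//   size_t nbytes;
//   if (!checked_mul(n_elements, type_size, nbytes)) { /* reject the file */ }
```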
2 months ago model : update label for LFM2-24B-A2B (#19848)
Tarek Dakhran [Tue, 24 Feb 2026 13:27:42 +0000 (14:27 +0100)]
model : update label for LFM2-24B-A2B (#19848)

* model : Update label for LFM2-24B-A2B

```
❯ build/bin/llama-bench -m /data/playground/checkpoints/LFM2-24B-A2B-Preview-Q4_0.gguf,/data/playground/checkpoints/LFM2-8B-A1B-Q4_0.gguf -p 1 -n 0
| model                          |       size |     params | backend    | threads |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2moe 24B.A2B Q4_0           |  12.54 GiB |    23.84 B | CPU        |      10 |             pp1 |         30.35 ± 2.49 |
| lfm2moe 8B.A1B Q4_0            |   4.41 GiB |     8.34 B | CPU        |      10 |             pp1 |         49.24 ± 1.93 |
```

* Remove extra line

2 months ago server : support max_completion_tokens request property (#19831)
Radoslav Gerganov [Tue, 24 Feb 2026 08:30:00 +0000 (10:30 +0200)]
server : support max_completion_tokens request property (#19831)

"max_tokens" is deprectated in favor of "max_completion_tokens" which
sets the upper bound for reasoning+output token.

Closes: #13700
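
A minimal sketch of reading the new field with a fallback to the deprecated
name (illustrative only; helper and parameter names in the real server code
may differ):

```cpp
#include <nlohmann/json.hpp>

// Prefer "max_completion_tokens"; fall back to the deprecated "max_tokens".
static int get_n_predict(const nlohmann::json & body, int def) {
    if (body.contains("max_completion_tokens")) {
        return body.at("max_completion_tokens").get<int>();
    }
    if (body.contains("max_tokens")) {
        return body.at("max_tokens").get<int>();
    }
    return def;
}
```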
2 months ago Vulkan Scalar Flash Attention Refactor (#19625)
Ruben Ortlam [Tue, 24 Feb 2026 07:35:48 +0000 (08:35 +0100)]
Vulkan Scalar Flash Attention Refactor (#19625)

* vulkan: allow using fp16 in scalar flash attention shader

* split rows inside of subgroups for faster synchronization

* use row_split when Br >= 4, change reductions to use shared memory if row_split == 1

* use f32 scalar FA if f16 is not supported by device

* fix amd workgroup size issue

* optimize masksh use

* add medium rows FA shader Br size

* fixes

* add padding to mask shmem buffer

* cache q values into registers for KQ

* fuse lf accumulation, pf and v accumulation into a loop

* stage K loads through shmem

* stage V loads through shmem

* only stage through shmem on Nvidia

* default to Bc 32

* also stage V through shmem when this is done for K

* dynamic subgroups for intel

* use vectorized stores

* use float_type for dequantize4 functions

* use smaller scalar rows size for smaller rows count

* relax flash attention split_k condition to allow non-gqa use

* use minimal subgroup size on Intel

* fix shmem support function

* fix rebase issues

* fixes

* Bc 4 for scalar FA is not a valid configuration

* Use wave32 on AMD RDNA for scalar FA

* add Intel shader core count lookup-table

* fix regressions

* device tuning

* tmpsh size fix

* fix editorconfig

* refactor fa tuning logic into a single place

* fix gqa opt logic

* fix block_rows with small n_rows

* amd tuning

* fix hsk=72/80 issue

* tuning

* allow condition skipping for column check

* use float16 for Of if available

* address feedback

* fix bad RDNA performance on head size <= 128 by limiting occupancy

* allow printing pipeline stats

* cleanup and fixes

* limit occupancy for GCN for small batch FA with large HSK

* disable f16 FA for GCN AMD GPUs on the proprietary driver

2 months ago vulkan: fix coopmat1 without bf16 support (#19793)
Jeff Bolz [Tue, 24 Feb 2026 06:48:32 +0000 (00:48 -0600)]
vulkan: fix coopmat1 without bf16 support (#19793)

2 months ago vulkan: fix data race in mul_mat_id shader (#19790)
Jeff Bolz [Tue, 24 Feb 2026 06:43:12 +0000 (00:43 -0600)]
vulkan: fix data race in mul_mat_id shader (#19790)

2 months ago hexagon refactor all Ops to use local context struct (#19819)
Max Krasnyansky [Tue, 24 Feb 2026 00:32:14 +0000 (16:32 -0800)]
hexagon refactor all Ops to use local context struct (#19819)

* hexagon: refactor set/get/sum-rows ops to use local context

* hexagon: refactor ROPE and Softmax Ops to use local context

Improves performance a bit by precomputing values and saving them in the context.

* hexagon: refactor activation ops to use local context struct

* hexagon: refactor unary ops to use local context struct and DMA/VTCM

* hexagon: use aligned hvx_scale function

* hexagon: remove unused fields from op_context

* hexagon: rewrite ROPE to use DMA and VTCM scratchpad

* hex-rope: keep N rows in scratchpad (instead of just two)

* hex-rope: introduce rowidx cache

* hex-rope: remove unused fields

* hex-rope: rewrite dma prefetch logic to allow for multi-row fetch/compute

also removes the need for fastdiv.

* hex-rope: minor formatting

* hex-rope: use indices and unroll the loops

* hex-rope: more updates to cleanup rope-block handling

* hexagon: cleanup supported type/dims checks

* hexagon: all reduce funcs replicated across lanes

There is no need to explicitly replicate the first value.

* snapdragon: update adb and windows scripts to use ubatch-size 256

Updated Ops support handles larger ubatches.

2 months ago feat: Add code blocks full height setting to parameter sync service (#19835)
Aleksander Grygier [Mon, 23 Feb 2026 21:30:13 +0000 (22:30 +0100)]
feat: Add code blocks full height setting to parameter sync service (#19835)

2 months ago vendor : update cpp-httplib to 0.34.0 (#19830)
Adrien Gallouët [Mon, 23 Feb 2026 20:05:48 +0000 (21:05 +0100)]
vendor : update cpp-httplib to 0.34.0 (#19830)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago tests : fix typos in comments in test-backend-sampler [no ci] (#19824)
Daniel Bevenius [Mon, 23 Feb 2026 16:12:02 +0000 (17:12 +0100)]
tests : fix typos in comments in test-backend-sampler [no ci] (#19824)

* tests : fix typos in comments in test-backend-sampler [no ci]

2 months ago webui: Add setting to have full height Code Blocks in Chat Messages (#19829)
Aleksander Grygier [Mon, 23 Feb 2026 13:16:50 +0000 (14:16 +0100)]
webui: Add setting to have full height Code Blocks in Chat Messages (#19829)

2 months ago model-conversion : merge inspect-org-model.py with tensor-info.py (#19823)
Daniel Bevenius [Mon, 23 Feb 2026 13:15:16 +0000 (14:15 +0100)]
model-conversion : merge inspect-org-model.py with tensor-info.py (#19823)

This commit replaces/merges the inspect-org-model.py script with the
contents of the tensor-info.py script. The merged script has also been
updated to print tensor sizes, which was the only thing tensor-info.py
did not do before.

The motivation for this is that tensor-info.py does not load the tensor
weights, which can be time consuming for larger models. Also, now that
both scripts do almost the same thing, it makes sense to maintain one
script rather than two.

2 months ago ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod) (#19356)
Alberto Cabrera Pérez [Mon, 23 Feb 2026 12:42:52 +0000 (12:42 +0000)]
ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod) (#19356)

* Generic GEMV and boilerplate for q5_K dotprod
* Generic GEMM and boilerplate for q5_K dotprod
* ARM64 q5_K dotprod GEMM
* ARM64 q5_K dotprod GEMV

2 months ago llama : remove write/read of output ids/logits/embeddings (#18862)
Daniel Bevenius [Mon, 23 Feb 2026 06:04:30 +0000 (07:04 +0100)]
llama : remove write/read of output ids/logits/embeddings (#18862)

* llama : remove write/read of output ids/logits/embeddings

This commit removes the write/read of output ids, logits and
embeddings from the llama context state.

Refs: https://github.com/ggml-org/llama.cpp/pull/18862#issuecomment-3756330941

* completion : add replaying of session state

This commit updates the session handling in the completion tool to handle
the fact that logits are no longer stored in the session file. Instead, we
need to replay the last token to get the logits for sampling.

* common : add common_prompt_batch_decode function

This commit adds a new function which is responsible for decoding a prompt
and optionally handling the saving of session data.

* update save-state.cpp to use llama_state_load_file

This commit updates the save-load-state example to utilize the new
llama_state_load_file function for loading the model state from a file.
And it also replays the last token after loading since this state is now
stored before the last token is processed.

* examples : set n_seq_max = 2 for ctx3

This commit updates the save-load-state example to set the n_seq_max
parameter to 2 when initializing the ctx3 context.

The motivation for this change is that with n_parallel/n_seq_max set to 1
the context only supports one sequence, but the test later tries to
use a second sequence, which results in the following error:
```console
main : loaded state with 4 tokens
main : seq 0 copied, 225760 bytes
main : kv cache cleared
find_slot: seq_id=1 >= n_seq_max=1 Try using a bigger --parallel value
state_read_meta: failed to find available cells in kv cache
```
This seems to only happen for recurrent/hybrid models.
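
A minimal sketch of the replay step described above, using the public
llama.h session API (assumed usage, not the exact code from the example):

```cpp
#include <vector>
#include "llama.h"

// Assumed usage: after loading a session, logits are no longer part of the
// saved state (and the state was written before the last token was decoded),
// so replay the last token to regenerate logits for sampling.
static bool load_session_and_replay(llama_context * ctx, const char * path, size_t n_ctx) {
    std::vector<llama_token> tokens(n_ctx);
    size_t n_loaded = 0;

    if (!llama_state_load_file(ctx, path, tokens.data(), tokens.size(), &n_loaded) || n_loaded == 0) {
        return false;
    }

    llama_token last = tokens[n_loaded - 1];
    return llama_decode(ctx, llama_batch_get_one(&last, 1)) == 0;
}
```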

2 months ago cli : provide model with text filename (#19783)
Sigbjørn Skjæret [Sun, 22 Feb 2026 21:33:49 +0000 (22:33 +0100)]
cli : provide model with text filename (#19783)

2 months ago jinja: correct stats for tojson and string filters (#19785)
Xuan-Son Nguyen [Sun, 22 Feb 2026 20:08:23 +0000 (21:08 +0100)]
jinja: correct stats for tojson and string filters (#19785)

2 months ago common : fix improper trimming in XML parser on complete message (#19805)
Aldehir Rojas [Sun, 22 Feb 2026 16:34:54 +0000 (10:34 -0600)]
common : fix improper trimming in XML parser on complete message (#19805)

Co-authored-by: Jules LEIDELINGER <redacted>
2 months ago Fix wrong cli-argument in documentation (#19804)
Kilian Krampf [Sun, 22 Feb 2026 15:26:33 +0000 (16:26 +0100)]
Fix wrong cli-argument in documentation (#19804)

2 months ago model : add Kanana-2 model support (#19803)
HelloKS [Sun, 22 Feb 2026 15:15:02 +0000 (00:15 +0900)]
model : add Kanana-2 model support (#19803)

* model: Add Kanana-2 model support

* lint: adjust spacing

2 months ago ci : fix rocm archive name [no ci] (#19808)
Sigbjørn Skjæret [Sun, 22 Feb 2026 15:14:37 +0000 (16:14 +0100)]
ci : fix rocm archive name [no ci] (#19808)

2 months ago server : merge contiguous Responses input items into a single assistant message (#19773)
Aldehir Rojas [Sun, 22 Feb 2026 13:11:31 +0000 (07:11 -0600)]
server : merge contiguous Responses input items into a single assistant message (#19773)

* server : merge contiguous input items into a single assistant message

* cont : simplify tool call msg

* cont : reduce and combine content

* cont : fix merging content items

2 months ago ci : fix rocm release path [no ci] (#19784)
Sigbjørn Skjæret [Sun, 22 Feb 2026 07:07:46 +0000 (08:07 +0100)]
ci : fix rocm release path [no ci] (#19784)

2 months ago Update ROCm docker container to 7.2 release (#19418)
Mario Limonciello [Sat, 21 Feb 2026 20:53:39 +0000 (14:53 -0600)]
Update ROCm docker container to 7.2 release (#19418)

Also update architectures

2 months ago Add a build target to generate ROCm artifacts using ROCm 7.2 (#19433)
Mario Limonciello [Sat, 21 Feb 2026 18:56:26 +0000 (12:56 -0600)]
Add a build target to generate ROCm artifacts using ROCm 7.2 (#19433)

This builds the following targets:
 * gfx1151
 * gfx1150
 * gfx1200
 * gfx1201
 * gfx1100
 * gfx1101
 * gfx1030
 * gfx908
 * gfx90a
 * gfx942

2 months ago vendor : update cpp-httplib to 0.33.1 (#19778)
Adrien Gallouët [Sat, 21 Feb 2026 18:12:31 +0000 (19:12 +0100)]
vendor : update cpp-httplib to 0.33.1 (#19778)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago Improve CUDA graph capture (#19754)
Gaurav Garg [Sat, 21 Feb 2026 09:39:36 +0000 (15:09 +0530)]
Improve CUDA graph capture (#19754)

* Improve CUDA graph capture

Currently, CUDA graphs are eagerly enabled on the first call to ggml_backend_cuda_graph_compute. If the graph properties keep changing (4+ consecutive updates), the graph is permanently disabled. This is suboptimal because:

- The first call always incurs CUDA graph capture overhead even if the graph is unstable
- Once permanently disabled, CUDA graphs never re-enable even after the graph stabilizes (e.g., switching from prompt processing to decode)

The new approach delays CUDA graph activation until warmup completes: the same cgraph must be called at least twice with matching properties before CUDA graph capture begins. This avoids wasted capture overhead on volatile graphs and allows graphs to become eligible once they stabilize.
This also fixes issues such as https://github.com/ggml-org/llama.cpp/discussions/19708

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* Remove EM dashes

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: Aman Gupta <redacted>
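
A rough sketch of the delayed-activation ("warmup") gating described above,
with hypothetical field names rather than the actual ggml-cuda state:

```cpp
#include <cstddef>

// Hypothetical per-context bookkeeping for CUDA graph gating.
struct graph_gate {
    size_t last_key     = 0;     // fingerprint of the previous cgraph's properties
    int    stable_count = 0;     // consecutive launches with matching properties
    bool   use_graph    = false; // whether capture/replay is allowed
};

// Called once per compute call. Capture is only enabled after the same graph
// properties have been seen at least twice in a row, so volatile graphs
// (e.g. prompt processing) never pay the capture overhead.
static void update_gate(graph_gate & g, size_t key) {
    if (key == g.last_key) {
        g.stable_count++;
    } else {
        g.stable_count = 0;
    }
    g.last_key  = key;
    g.use_graph = g.stable_count >= 1; // second matching launch enables graphs
}
```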
2 months ago fix: UI single model selection in router mode (#19767)
crsawyer [Sat, 21 Feb 2026 08:28:39 +0000 (02:28 -0600)]
fix: UI single model selection in router mode (#19767)

2 months ago hexagon : fix build release (#19444) (#19587)
Mengsheng Wu [Sat, 21 Feb 2026 00:40:00 +0000 (16:40 -0800)]
hexagon : fix build release (#19444) (#19587)

2 months ago common : merge qwen3-coder and nemotron nano 3 parsers (#19765)
Aldehir Rojas [Fri, 20 Feb 2026 22:22:22 +0000 (16:22 -0600)]
common : merge qwen3-coder and nemotron nano 3 parsers (#19765)

* common : migrate qwen3-coder to PEG parsing variant

* cont : add JSON parameter test

2 months ago ggml-cpu: add RVV vec dot kernels for quantization types (#18784)
Taimur Ahmad [Fri, 20 Feb 2026 11:30:07 +0000 (16:30 +0500)]
ggml-cpu: add RVV vec dot kernels for quantization types (#18784)

* ggml-cpu: add rvv vec_dot for iq2_s

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv vec_dot for iq3_s

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv vec_dot for tq1_0, tq2_0

Co-authored-by: Rehan Qasim <redacted>
ggml-cpu: add rvv vec_dot for tq1_0, tq2_0

* ggml-cpu: add rvv vec_dot for iq1_s, iq1_m

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add vlen switch for rvv vec_dot

---------

Co-authored-by: Rehan Qasim <redacted>
2 months ago quantize : add --dry-run option (#19526)
ddh0 [Fri, 20 Feb 2026 08:20:16 +0000 (02:20 -0600)]
quantize : add --dry-run option (#19526)

* clean slate for branch

* use 6 characters for tensor dims

* add --dry-run to llama-quantize

* use 6 characters for tensor dims (cont.)

* no need to re-calculate ggml_nbytes for tensor

* fix indent

* show model and quant BPW when quant completes

* add example to --help

* new function `tensor_requires_imatrix`, add courtesy warning about imatrix

* missing __func__, move imatrix flag set

* logic error

* fixup tensor_requires_imatrix

* add missing `GGML_TYPE`s

* simplify and rename `tensor_type_requires_imatrix`

* simplify for style

* add back Q2_K edge case for imatrix

* guard ftype imatrix warning

* comment ref #12557

* remove per @compilade

* remove unused `params` parameter

* move `bool dry_run` per GG

* move `bool dry_run` per GG

* Update src/llama-quant.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-quant.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-quant.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago test: mul_mat tests with huge batch size (#19519)
Jeff Bolz [Fri, 20 Feb 2026 02:08:25 +0000 (18:08 -0800)]
test: mul_mat tests with huge batch size (#19519)

2 months ago WebUI hide models in router mode (#19374)
crsawyer [Thu, 19 Feb 2026 21:53:42 +0000 (15:53 -0600)]
WebUI hide models in router mode (#19374)

2 months ago common : fix Step-3.5-Flash format detection and thinking support (#19635)
Jesse Posner [Thu, 19 Feb 2026 21:40:52 +0000 (13:40 -0800)]
common : fix Step-3.5-Flash format detection and thinking support (#19635)

* common : fix Step-3.5-Flash format detection and thinking support

Step-3.5-Flash uses the same XML-style tool call format as Qwen3-Coder
(<tool_call><function=...><parameter=...>) but its Jinja template lacks
the bare <function> and plural <parameters> markers that the detection
logic previously required. This caused it to fall through to Hermes 2
Pro, which doesn't call func_args_not_string(), so arguments stayed as
JSON strings and templates using arguments|items crashed.

Additionally, the Qwen3-Coder-XML format handler had no thinking support.
Models like Step-3.5-Flash that unconditionally emit <think> in their
generation prompt need the same thinking_forced_open handling that
Nemotron v3 and Hermes 2 Pro already have, otherwise reasoning_content
is never separated from content in API responses.

Changes:
- Relax Qwen3-Coder XML detection to only require the 3 shared markers
- Tighten Nemotron v3 branch to also require bare <function> and plural
  <parameters>, preventing Step-3.5-Flash from being misrouted via <think>
- Add thinking_forced_open support to Qwen3-Coder-XML init function
- Add <think>/</think> to preserved tokens
- Fix build_grammar_xml_tool_call to handle thinking_forced_open in the
  grammar root rule, allowing </think> before tool calls
- Add Step-3.5-Flash chat template and format detection test

Builds on: https://github.com/ggml-org/llama.cpp/pull/19283

* chat : route Step-3.5-Flash to Nemotron v3 PEG parser, add tests

Step-3.5-Flash uses the same XML tool call format as Qwen3-Coder and
Nemotron 3 Nano (<tool_call>/<function=...>/<parameter=...>) but with
unconditional <think> output. Route it to the Nemotron v3 PEG parser
for streaming and schema-aware parameter parsing.

Detection: templates with <think> + XML tool tags use Nemotron v3 PEG
parser; templates without <think> (Qwen3-Coder) use GBNF grammar.

Tests cover: basic messages, tool calls with/without thinking content,
parallel tool calls, code string parameters, optional </parameter>
closing tags, and JSON schema response format.

* chat : remove dead thinking code from qwen3_coder_xml

Remove thinking handling code that became unreachable after routing
Step-3.5-Flash to the Nemotron v3 PEG parser. Qwen3-Coder has no
<think> in its template, so the thinking_forced_open logic, preserved
tokens, and grammar prefix were dead paths.

2 months ago common : fix gpt-oss Jinja error when assistant message has both content and thinking with tool calls (#19704)
abhijitb11 [Thu, 19 Feb 2026 20:59:20 +0000 (12:59 -0800)]
common : fix gpt-oss Jinja error when assistant message has both content and thinking with tool calls (#19704)

2 months ago ggml-webgpu: Add unary op (SQR, SQRT, SIN, COS) support. (#19700)
Masashi Yoshimura [Thu, 19 Feb 2026 16:18:30 +0000 (01:18 +0900)]
ggml-webgpu: Add unary op (SQR, SQRT, SIN, COS) support. (#19700)

* ggml-webgpu: Add unary op (SQR, SQRT, SIN, COS) support.

* Fix to cast the src value to f32 before sin/cos computing.

2 months ago model: Add PaddleOCR-VL model support (#18825)
megemini [Thu, 19 Feb 2026 16:05:25 +0000 (00:05 +0800)]
model: Add PaddleOCR-VL model support (#18825)

* support PaddleOCR-VL

* clip: update PaddleOCR model loader parameters to prevent OOM during warmup

* [update] add paddleocr vl text model instead of ernie4.5

* [update] restore change of minicpmv

* [update] format

* [update] format

* [update] positions and patch merge permute

* [update] mtmd_decode_use_mrope for paddleocr

* [update] image min/max pixels

* [update] remove set_limit_image_tokens

* update: preprocess without padding

* clean up

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago vulkan: fix MMQ shader push constants and multi-dispatch (#19732)
Ruben Ortlam [Thu, 19 Feb 2026 13:59:16 +0000 (14:59 +0100)]
vulkan: fix MMQ shader push constants and multi-dispatch (#19732)

2 months ago models : fix qwen3.5 beta/gate shapes (#19730)
Georgi Gerganov [Thu, 19 Feb 2026 13:19:53 +0000 (15:19 +0200)]
models : fix qwen3.5 beta/gate shapes (#19730)

* models : fix qwen3.5 beta/gate shapes

* cont : avoid extra reshapes

2 months ago mtmd: build_attn modified, flash_attn on/off via ctx_params (#19729)
Saba Fallah [Thu, 19 Feb 2026 12:50:29 +0000 (13:50 +0100)]
mtmd: build_attn modified, flash_attn on/off via ctx_params (#19729)

2 months ago model : add JAIS-2 architecture support (#19488)
3 a l i [Thu, 19 Feb 2026 12:30:17 +0000 (16:30 +0400)]
model : add JAIS-2 architecture support (#19488)

* model: add JAIS-2 architecture support

Add support for the JAIS-2 family of Arabic-English bilingual models
from Inception AI (https://huggingface.co/inceptionai/Jais-2-8B-Chat).

Architecture characteristics:
- LayerNorm (not RMSNorm) with biases
- ReLU² (ReLU squared) activation function
- Separate Q/K/V projections with biases
- Simple MLP without gate projection (up -> act -> down)
- RoPE positional embeddings
- GPT-2 BPE tokenizer

Supported model sizes:
- Jais-2-8B (32 layers, 26 heads, 3328 hidden)
- Jais-2-70B (68 layers, 56 heads, 7168 hidden)

Tested with quantizations: BF16, Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_K_M, Q4_0, Q3_K_M, Q2_K

Note: JAIS-2 requires F32 precision accumulators for numerical stability
and uses standard attention (not flash attention) on CUDA backends.

* fix: run convert_hf_to_gguf_update.py for jais-2 tokenizer hash

* fix: use NEOX RoPE type for JAIS2

* fix: remove Q/K permutation (NEOX RoPE doesn't need it)

* fix: enable flash attention for JAIS2 (fixed by #19115)

* fix: add dedicated JAIS2 pre-tokenizer type and control vector support

- Add LLAMA_VOCAB_PRE_TYPE_JAIS2 with cascading whitespace regex
- Include original regex from tokenizer.json as comment
- Add build_cvec call for control vector support

* no longer necessary to override set_vocab

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago CUDA: fix kernel selection logic for tile FA (#19686)
Johannes Gäßler [Thu, 19 Feb 2026 11:42:58 +0000 (12:42 +0100)]
CUDA: fix kernel selection logic for tile FA (#19686)

* CUDA: fix kernel selection logic for tile FA

* add comment

2 months ago mtmd : chat : Fix extra \n between text and media marker (#19595)
Tarek Dakhran [Thu, 19 Feb 2026 11:18:57 +0000 (12:18 +0100)]
mtmd : chat : Fix extra \n between text and media marker (#19595)

* mtmd : chat : Fix extra \n between text and media marker

Thanks to @tugot17 for detecting and reporting the issue.

For vision models (e.g. LFM2.5-VL-1.6B and Qwen/Qwen3-VL-4B-Instruct) `llama-mtmd-cli` produces output identical to the HF implementation.

However, `llama-server` doesn't. I traced it down to an extra newline
inserted after `<__media__>`.

This happens in `to_json_oaicompat`, which treats media markers as text
and joins all parts with a `\n` separator.

This PR introduces a new type `media_marker` and uses it for media markers.
Extra logic is added to prevent insertion of newlines before and after
media markers.

With this change the number of input tokens is identical to the HF
implementation and, as a result, the output is also identical.

I explored other ways to address the issue
* remove completely `\n` between text parts in `to_json_oaicompat`
* merge text messages in server-common.cpp before sending them to `to_json_oaicompat`

Please propose alternative ways of fixing this issue.

* Refactor to use explicit per-type ifs

* Update common/chat.cpp

Co-authored-by: Piotr Wilkin (ilintar) <redacted>
* Update common_chat_templates_apply_legacy

---------

Co-authored-by: Piotr Wilkin (ilintar) <redacted>
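
A minimal sketch of the joining rule described above (illustrative types and
names, not the actual server code):

```cpp
#include <string>
#include <vector>

// Illustrative: each prompt part is either plain text or a media marker.
struct content_part {
    std::string text;
    bool is_media_marker = false;
};

// Join text parts with "\n", but never insert a newline directly before or
// after a media marker, so the marker does not add extra input tokens.
static std::string join_parts(const std::vector<content_part> & parts) {
    std::string out;
    for (size_t i = 0; i < parts.size(); ++i) {
        if (i > 0 && !parts[i].is_media_marker && !parts[i - 1].is_media_marker) {
            out += "\n";
        }
        out += parts[i].text;
    }
    return out;
}
```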
2 months ago webui: Fix Attachments not being included in completion request (#19731)
Aleksander Grygier [Thu, 19 Feb 2026 09:27:38 +0000 (10:27 +0100)]
webui: Fix Attachments not being included in completion request (#19731)

* fix: Add missing argument

* chore: update webui build output

2 months ago model : add tokenizer from LFM2.5-Audio-1.5B (#19687)
Tarek Dakhran [Thu, 19 Feb 2026 08:54:48 +0000 (09:54 +0100)]
model : add tokenizer from LFM2.5-Audio-1.5B (#19687)

* model : Add tokenizer from LFM2.5-Audio-1.5B

[LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B) introduced a lightweight audio tokenizer.

The tokenizer is based on the LFM2 architecture and acts as an "embedding" model with
different input `n_embd` and output `n_embd_out`.

To be used in https://github.com/ggml-org/llama.cpp/pull/18641.

To convert use

```shell
python3 convert_hf_to_gguf.py /path/to/LFM2.5-Audio-1.5B/audio_detokenizer
```

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Formatting

* Rework check for attention layers

* Add LFM2 SWA model support

* Address PR feedback

* Set vocab to none

* Move helper function definitions to cpp file

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago llama : use output_resolve_row() in get_logits_ith/get_embeddings_ith (#19663)
Daniel Bevenius [Thu, 19 Feb 2026 08:48:08 +0000 (09:48 +0100)]
llama : use output_resolve_row() in get_logits_ith/get_embeddings_ith (#19663)

This commit updates get_logits_ith() and get_embeddings_ith() to use
output_resolve_row() to resolve the batch index to the output row index.

The motivation for this is to remove some code duplication between these
functions.

2 months ago model : full modern bert support (#18330)
Ryan Mangeno [Thu, 19 Feb 2026 07:52:21 +0000 (02:52 -0500)]
model : full modern bert support (#18330)

* full modern bert support

* added gelu op in rank pooling for modern bert

* still working on stuff, added mean calculation before classifier head

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* first layer is dense, as per modern bert research paper

* Update src/llama-graph.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* fixed set input for mean pooling to check if pooling type is ranking since modern bert does mean & rank

* Update src/llama-graph.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago llamafile: powerpc: add FP16 MMA path for Q4/Q8 matmul (#19709)
shalinib-ibm [Thu, 19 Feb 2026 06:28:53 +0000 (11:58 +0530)]
llamafile: powerpc: add FP16 MMA path for Q4/Q8 matmul (#19709)

Avoid xvi8ger4pp signed→unsigned bias correction by dequantizing Q4/Q8
inputs to FP16 and using FP16×FP16→FP32 MMA. This removes
post-processing overhead and improves performance.

Performance Impact:
1.5 ~ 2x improvement in PP_Speed for Q4 and Q8 Models,
measured with llama-bench and llama-batched-bench.
Q8 Model: granite-4.0-h-micro-Q8_0.gguf (from huggingface)
Q4 Model: Meta-Llama3-8b Q4 model (generated with llama-quantize from
f32 model)

llama-bench Q8 Model Results:
 model                                  size       params   backend      threads              test  Base t/s Patch t/s
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10               pp8           64.48 ± 4.72           73.99 ± 0.27
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10              pp16           80.11 ± 0.32          112.53 ± 0.40
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10              pp32           89.10 ± 0.27          152.95 ± 0.68
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10              pp64           93.65 ± 0.25          187.83 ± 0.83
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10             pp128           99.93 ± 0.02          201.32 ± 0.11
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10             pp256          102.32 ± 0.40          208.32 ± 0.41
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10             pp512          103.42 ± 0.40          209.98 ± 0.14
 granitehybrid 3B Q8_0              3.16 GiB       3.19 B   CPU               10             tg128           20.35 ± 0.01           19.57 ± 0.01

llama-bench Q4 Model Results:
 model                                  size       params   backend      threads              test                Base    t/s                 Patch   t/s
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10               pp8           34.77 ± 0.10           41.23 ± 0.08
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10              pp16           40.81 ± 0.04           64.55 ± 0.15
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10              pp32           44.65 ± 0.05           90.84 ± 0.22
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10              pp64           47.49 ± 0.03          114.39 ± 0.11
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10             pp128           49.29 ± 0.24          120.13 ± 0.19
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10             pp256           49.77 ± 0.23          121.51 ± 0.11
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10             pp512           49.89 ± 0.23          117.52 ± 0.10
 llama 8B Q4_0                      4.33 GiB       8.03 B   CPU               10             tg128           13.40 ± 0.01           13.37 ± 0.00

Llama perplexity Results:

Model                      Base Final PPL Estimate    Patch Final PPL Estimate
granite-4.0-h-micro-Q8_0   1.3862 +/- 0.04424         1.3868 +/- 0.04432
Meta-Llama3-8b Q4          1.3801 +/- 0.04116         1.3803 +/- 0.04116

Signed-off-by: Shalini.Salomi.Bodapati <redacted>
2 months ago models : dedup qwen35 graphs (#19660)
Georgi Gerganov [Thu, 19 Feb 2026 06:17:49 +0000 (08:17 +0200)]
models : dedup qwen35 graphs (#19660)

* models : dedup qwen35 graphs

* cont : add missing sigmoid

2 months ago models : dedup Kimi Linear delta net implementation (#19668)
ymcki [Thu, 19 Feb 2026 06:15:17 +0000 (14:15 +0800)]
models : dedup Kimi Linear delta net implementation (#19668)

* models : add llm_build_delta_net_base

* cont : keep qwen35 and qwen35moe graphs intact

* cont : add comments [no ci]

* add kimi linear to delta-net-base

* removed unnecessary ggml_cont from g_exp_t

* removed ggml_cont from g_diff_exp_t. moved ggml_cont for o to kimi-linear.cpp

* removed unnecessary diag mask

* cont : simplify

* cont : avoid graph splits

* scale q after mul instead of beginning

* scale q after mul instead of beginning

* identical ppl

* cont : fix scale and decay mask

* minor : remove TODO

---------

Co-authored-by: Georgi Gerganov <redacted>
2 months ago Add Jinja support for "indent" string filter (#19529)
Piotr Wilkin (ilintar) [Wed, 18 Feb 2026 23:25:52 +0000 (00:25 +0100)]
Add Jinja support for "indent" string filter (#19529)

* Add partial Jinja support for "indent" string filter

* Fully implement indent

* Add tests for all width variants.

* Update tests/test-jinja.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix getline ignoring trailing newlines

* Update common/jinja/value.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix first indent condition

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago ggml webgpu: Fix bug in dispatching large matrix-vector multiplication (#19535)
Reese Levine [Wed, 18 Feb 2026 23:06:29 +0000 (16:06 -0700)]
ggml webgpu: Fix bug in dispatching large matrix-vector multiplication (#19535)

* Fix bug in dispatching large matrix-vector multiplication

2 months ago server: save generated text for the /slots endpoint (for LLAMA_SERVER_SLOTS_DEBUG=1) (#19622)
matteo [Wed, 18 Feb 2026 17:53:37 +0000 (18:53 +0100)]
server: save generated text for the /slots endpoint (for LLAMA_SERVER_SLOTS_DEBUG=1) (#19622)

* save generated text for the /slots endpoint

* update debug_generated_text only when LLAMA_SERVER_SLOTS_DEBUG > 0

* Apply suggestions from code review

---------

Co-authored-by: Matteo <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
2 months ago model: support GLM-OCR (#19677)
Xuan-Son Nguyen [Wed, 18 Feb 2026 16:51:40 +0000 (17:51 +0100)]
model: support GLM-OCR (#19677)

* model: support GLM-OCR

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago docs: Fix broken links for preparing models in Backends (#19684)
Maciej Lisowski [Wed, 18 Feb 2026 15:50:23 +0000 (16:50 +0100)]
docs: Fix broken links for preparing models in Backends (#19684)

2 months ago ggml webgpu: shader library organization (#19530)
Reese Levine [Wed, 18 Feb 2026 14:51:02 +0000 (07:51 -0700)]
ggml webgpu: shader library organization (#19530)

* Basic JIT compilation for mul_mat, get_rows, and scale (#17)

* scale jit working

* preliminary working jit for getrows and mulmat, needs refining

* simplified mul_mat preprocessing switch statement

* get_rows fixes, mul_mat refinement

* formatted + last edits

* removed some extraneous prints

* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish

* small fix

* some changes, working

* get_rows and mul_mat jit fixed and working

* Update formatting

* formatting

* Add header

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Start work on all-encompassing shader library

* refactor argmax, set_rows

* Refactor all but flashattention, mat mul

* flashattention and matrix multiplication moved to new format

* clean up preprocessing

* Formatting

* remove duplicate constants

* Split large shaders into multiple static strings

---------

Co-authored-by: neha-ha <redacted>
2 months ago Pre-MCP UI and architecture cleanup (#19689)
Aleksander Grygier [Wed, 18 Feb 2026 11:02:02 +0000 (12:02 +0100)]
Pre-MCP UI and architecture cleanup (#19689)

2 months ago vulkan: split mul_mat into multiple dispatches to avoid overflow (#19509)
Jeff Bolz [Wed, 18 Feb 2026 09:47:10 +0000 (01:47 -0800)]
vulkan: split mul_mat into multiple dispatches to avoid overflow (#19509)

* vulkan: split mul_mat into multiple dispatches to avoid overflow

The batch dimensions can be greater than the max workgroup count limit,
in which case we need to split into multiple dispatches and pass the base
index through a push constant.

Fall back for the less common p021 and nc variants.

* address feedback
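
A minimal sketch of the splitting arithmetic (illustrative only; in the real
shader path the base index is passed to each dispatch via a push constant):

```cpp
#include <algorithm>
#include <cstdint>

// Split a batch dimension that exceeds the device's maximum workgroup count
// into several dispatches, each starting at its own base index.
static void dispatch_in_chunks(uint32_t n_batch, uint32_t max_wg_count) {
    for (uint32_t base = 0; base < n_batch; base += max_wg_count) {
        const uint32_t count = std::min(max_wg_count, n_batch - base);
        // record push constant { base }, then dispatch `count` workgroups
        (void)count;
    }
}
```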

2 months ago common : make small string helpers as inline functions (#19693)
Adrien Gallouët [Wed, 18 Feb 2026 07:03:01 +0000 (08:03 +0100)]
common : make small string helpers as inline functions (#19693)

Also use string_view when it makes sense and fix some corner cases.

Signed-off-by: Adrien Gallouët <redacted>
2 months ago opencl: refactor expm1 and softplus (#19404)
shaofeiqi [Tue, 17 Feb 2026 22:47:18 +0000 (14:47 -0800)]
opencl: refactor expm1 and softplus (#19404)

* opencl: refactor expm1

* opencl: refactor softplus

* opencl: use h for half literals

---------

Co-authored-by: Li He <redacted>
2 months ago opencl: optimize mean and sum_row kernels (#19614)
shaofeiqi [Tue, 17 Feb 2026 21:56:09 +0000 (13:56 -0800)]
opencl: optimize mean and sum_row kernels (#19614)

* opencl: optimize mean and sum_row kernels

* opencl: add comment for max subgroups

* opencl: format

---------

Co-authored-by: Li He <redacted>
2 months ago model-conversion : add option to print tensor values (#19692)
Daniel Bevenius [Tue, 17 Feb 2026 19:43:22 +0000 (20:43 +0100)]
model-conversion : add option to print tensor values (#19692)

This commit updates the tensor-info.py script to support the option to
print the first N values of a tensor when displaying its information.

The motivation for this is that it can be useful to inspect some actual
values in addition to the shapes of the tensors.

2 months ago Pre-MCP UI and architecture cleanup (#19685)
Aleksander Grygier [Tue, 17 Feb 2026 12:47:45 +0000 (13:47 +0100)]
Pre-MCP UI and architecture cleanup (#19685)

* webui: extract non-MCP changes from mcp-mvp review split

* webui: extract additional pre-MCP UI and architecture cleanup

* chore: update webui build output

2 months ago ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609)
Talha Can Havadar [Tue, 17 Feb 2026 11:22:46 +0000 (12:22 +0100)]
ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609)

When LTO is enabled in the build environment, it forces all builds to use
LTO. But the feature detection logic is fragile, causing Illegal
instruction errors with LTO. This disables LTO for the feature
detection code to prevent cross-module optimization from inlining
architecture-specific instructions into the score function. Without this,
LTO can cause SIGILL when loading backends on older CPUs (e.g., loading
the power10 backend on power9 crashes before the feature check runs).

2 months ago cuda : enable CUDA graphs for MMID 1 <= BS <= 4 (#19645)
Georgi Gerganov [Tue, 17 Feb 2026 10:31:49 +0000 (12:31 +0200)]
cuda : enable CUDA graphs for MMID 1 <= BS <= 4 (#19645)

* cuda : enable CUDA graphs for MMID BS <= 4

* cont : add stream capture check

Co-authored-by: Oliver Simons <redacted>
* cont : add MMVQ_MMID_MAX_BATCH_SIZE

---------

Co-authored-by: Oliver Simons <redacted>
2 months ago model-conversion : make printing of config values optional (#19681)
Daniel Bevenius [Tue, 17 Feb 2026 09:46:53 +0000 (10:46 +0100)]
model-conversion : make printing of config values optional (#19681)

* model-conversion : make printing of config values optional

This commit updates run-org-model.py to make the printing of model
configuration values optional.

The motivation for this change is that not all models have these
configuration values defined, and those that do not will error when
running this script. With these changes we only print the values if they
exist, or a default value otherwise.

We could optionally just remove them, but it can be useful to see these
values when running the original model.

2 months ago ci : bump komac version (#19682)
Sigbjørn Skjæret [Tue, 17 Feb 2026 08:30:31 +0000 (09:30 +0100)]
ci : bump komac version (#19682)

2 months ago build : link ws2_32 as PUBLIC on Windows (#19666)
Adrien Gallouët [Tue, 17 Feb 2026 07:37:07 +0000 (08:37 +0100)]
build : link ws2_32 as PUBLIC on Windows (#19666)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago build : cleanup library linking logic (#19665)
Adrien Gallouët [Tue, 17 Feb 2026 07:36:45 +0000 (08:36 +0100)]
build : cleanup library linking logic (#19665)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago convert : add JoyAI-LLM-Flash (#19651)
DAN™ [Mon, 16 Feb 2026 21:49:57 +0000 (16:49 -0500)]
convert : add JoyAI-LLM-Flash (#19651)

* convert_hf_to_gguf: add JoyAI-LLM-Flash tokenizer hash mapping to deepseek-v3

* llama-vocab: create a new pre-tokenizer name for joyai-llm.

* add missing vocab type section

* Update convert_hf_to_gguf_update.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago perplexity: add proper batching (#19661)
AesSedai [Mon, 16 Feb 2026 16:44:44 +0000 (08:44 -0800)]
perplexity: add proper batching (#19661)

2 months ago common : inline functions (#18639)
Ivan Chikish [Mon, 16 Feb 2026 15:52:24 +0000 (18:52 +0300)]
common : inline functions (#18639)

2 months ago ggml : make `ggml_is_view` as API (#19539)
Judd [Mon, 16 Feb 2026 15:43:34 +0000 (23:43 +0800)]
ggml : make `ggml_is_view` as API (#19539)

* make `ggml_is_view` as API

* introduce `ggml_aux_is_view` as inline version for internal use.

* change `ggml_aux_is_view` to  `ggml_impl_is_view`

2 months ago model: Add support for Tiny Aya Models (#19611)
Saurabh Dash [Mon, 16 Feb 2026 15:28:46 +0000 (10:28 -0500)]
model: Add support for Tiny Aya Models (#19611)

* changes for tiny aya

* changes to hash

* changes to vocab

* fix some tokenizer regex edge cases

* update comment

* add some comments for regex

* Apply suggestion from @ngxson

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
2 months ago build : rework llama_option_depr to handle LLAMA_CURL (#19658)
Adrien Gallouët [Mon, 16 Feb 2026 15:06:48 +0000 (16:06 +0100)]
build : rework llama_option_depr to handle LLAMA_CURL (#19658)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (#19591)
Mario Limonciello [Mon, 16 Feb 2026 13:46:08 +0000 (07:46 -0600)]
Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (#19591)

Avoids issues with ROCm 6.4.4.

Closes: https://github.com/ggml-org/llama.cpp/issues/19580
Fixes: 6845f7f87 ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)")
Signed-off-by: Mario Limonciello (AMD) <redacted>
2 months ago models : deduplicate delta-net graphs for Qwen family (#19597)
Georgi Gerganov [Mon, 16 Feb 2026 12:35:04 +0000 (14:35 +0200)]
models : deduplicate delta-net graphs for Qwen family (#19597)

* models : add llm_build_delta_net_base

* cont : keep qwen35 and qwen35moe graphs intact

* cont : add comments

2 months ago graph : fix KQ mask, lora, cvec reuse checks (#19644)
Georgi Gerganov [Mon, 16 Feb 2026 07:21:11 +0000 (09:21 +0200)]
graph : fix KQ mask, lora, cvec reuse checks (#19644)

* graph : fix KQ mask reuse condition

* cont : dedup KQ mask build and can_reuse

* cont : fix build

* graph : fix adapter check for reuse

2 months ago ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (#19132)
abhijain1204fujitsu [Mon, 16 Feb 2026 06:38:43 +0000 (12:08 +0530)]
ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel  (#19132)

* Updated repack.cpp

* Updated repack.cpp

* Updated repack.cpp

* Added if condition to support only vector length 256.

* Changed the format, removed comments and a duplicate variable

* If SVE 256 is not present, the generic function was used for the computation, which slowed performance.

So code was added to fall back to the NEON path when SVE 256 is not present.

* Code format change suggestion

---------

Co-authored-by: Vithule, Prashant <redacted>
2 months ago sync : ggml (upstream/0.0.8067)
Georgi Gerganov [Sun, 15 Feb 2026 20:23:13 +0000 (22:23 +0200)]
sync : ggml

2 months ago ggml : bump version to 0.9.7 (ggml/1425)
Georgi Gerganov [Sun, 15 Feb 2026 20:21:04 +0000 (22:21 +0200)]
ggml : bump version to 0.9.7 (ggml/1425)

2 months ago ggml : bump version to 0.9.6 (ggml/1423)
Georgi Gerganov [Sat, 7 Feb 2026 07:58:02 +0000 (09:58 +0200)]
ggml : bump version to 0.9.6 (ggml/1423)

2 months ago cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (#19624)
David Friehs [Sun, 15 Feb 2026 17:08:42 +0000 (18:08 +0100)]
cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (#19624)

* cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization

- load all 8 int8 for a grid position in one load
- calculate signs via popcnt instead of fetching from ksigns table
- broadcast signs to drop individual shift/mask

* cuda: iq2xxs: simplify sum scaling

express `(sum * scale + sum / 2) / 4` as `(sum * (scale * 2 + 1)) / 8`
express `((aux32 >> 28) * 2 + 1)` as `(aux32 >> 27 | 1)`

saves 3 registers for mul_mat_vec_q (152 -> 149) according to nsight
AFAICT no overflow can occur here as iq2xxs values are far too small

* uint -> uint32_t

error: identifier "uint" is undefined
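
For the scale rewrite above: with `s` the sum and `c` the scale, the two forms
agree in exact arithmetic, since `(s*c + s/2) / 4 = (2*s*c + s) / 8 = s*(2*c + 1) / 8`.
Likewise `aux32 >> 27 | 1` equals `(aux32 >> 28) * 2 + 1`, because `aux32 >> 27`
is twice `aux32 >> 28` plus the old bit 27, and the OR forces that low bit to 1.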

2 months ago docs: update s390x build docs (#19643)
Aaron Teo [Sun, 15 Feb 2026 16:33:34 +0000 (00:33 +0800)]
docs: update s390x build docs (#19643)

2 months ago build : remove LLAMA_HTTPLIB option (#19623)
Adrien Gallouët [Sun, 15 Feb 2026 14:38:50 +0000 (15:38 +0100)]
build : remove LLAMA_HTTPLIB option (#19623)

This option was introduced as a workaround because cpp-httplib could not
build on visionOS. Since it has been fixed and now compiles on all platforms,
we can remove it and simplify many things.

Signed-off-by: Adrien Gallouët <redacted>
2 months ago cmake : check if KleidiAI API has been fetched (#19640)
Daniel Bevenius [Sun, 15 Feb 2026 12:59:38 +0000 (13:59 +0100)]
cmake : check if KleidiAI API has been fetched (#19640)

This commit addresses a build issue with the KleidiAI backend when
building multiple CPU backends. Commit
3a00c98584e42a20675b6569d81beadb282b0952 ("cmake : fix KleidiAI install
target failure with EXCLUDE_FROM_ALL") introduced a change where
FetchContent_Populate is called instead of FetchContent_MakeAvailable,
where the latter does handle this case (it is idempotent but
FetchContent_Populate is not).

I missed this during my review and I should not have committed without
verifying the CI failure, sorry about that.

2 months ago context : fix output reorder with backend sampling (#19638)
Georgi Gerganov [Sun, 15 Feb 2026 12:57:40 +0000 (14:57 +0200)]
context : fix output reorder with backend sampling (#19638)

2 months ago ggml : avoid UB in gemm ukernel (#19642)
Georgi Gerganov [Sun, 15 Feb 2026 12:56:35 +0000 (14:56 +0200)]
ggml : avoid UB in gemm ukernel (#19642)

2 months ago ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399)
Aaron Teo [Sun, 15 Feb 2026 10:20:35 +0000 (18:20 +0800)]
ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399)