git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
7 weeks ago ggml-cuda: add mem check for fusion (#19916)
Aman Gupta [Fri, 6 Mar 2026 16:05:43 +0000 (00:05 +0800)]
ggml-cuda: add mem check for fusion (#19916)

* ggml-cuda: add mem check for fusion

* Replace NaNs with -FLT_MAX

* fix typo

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago ggml: update comments for backends which have no memory to report (#20157)
Aaron Teo [Fri, 6 Mar 2026 15:24:38 +0000 (23:24 +0800)]
ggml: update comments for backends which have no memory to report (#20157)

Signed-off-by: Aaron Teo <redacted>
7 weeks ago ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)
shalinib-ibm [Fri, 6 Mar 2026 15:22:39 +0000 (20:52 +0530)]
ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)

This patch addresses an Internal Compiler Error (Segmentation fault)
observed with gcc 15 by replacing the intrinsic + cast with a cast on
the data first, followed by the intrinsic call. This bypasses the
buggy compiler path while maintaining identical instruction selection.

Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that both the original
code and this fix generate the identical Power10 prefixed load instruction:
    `plxv 40, 2(14)`

This ensures zero performance regression while unblocking builds on
newer toolchains.

Reproduced on:
- Alpine Linux + GCC 15.2.0-r2
- RHEL 9  + GCC 15.1.1 (gcc-toolset-15)

Signed-off-by: Shalini Salomi Bodapati <redacted>
7 weeks ago CUDA: use shared mem for ssm_conv (#20128)
Aman Gupta [Fri, 6 Mar 2026 15:09:59 +0000 (23:09 +0800)]
CUDA: use shared mem for ssm_conv (#20128)

* CUDA: use shared mem for ssm_conv

* fuse silu + ssm_conv

* fuse unary + mul

* enable for fp16

* formatting

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago context: ignore zero scale LoRAs when checking sameness (#20166)
Tim Neumann [Fri, 6 Mar 2026 13:05:52 +0000 (14:05 +0100)]
context: ignore zero scale LoRAs when checking sameness (#20166)

7 weeks ago Checkpoint every n tokens: squash (#20087)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 10:39:26 +0000 (11:39 +0100)]
Checkpoint every n tokens: squash (#20087)

7 weeks ago webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (...
Aleksander Grygier [Fri, 6 Mar 2026 09:00:39 +0000 (10:00 +0100)]
webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (#18655)

7 weeks ago ggml-cpu: fix data race for debug asserts (#20148)
Johannes Gäßler [Fri, 6 Mar 2026 08:12:49 +0000 (09:12 +0100)]
ggml-cpu: fix data race for debug asserts (#20148)

7 weeks ago kv-cache : fix M-RoPE checkpoints (#20132)
Georgi Gerganov [Fri, 6 Mar 2026 06:46:51 +0000 (08:46 +0200)]
kv-cache : fix M-RoPE checkpoints (#20132)

7 weeks ago cli : Don't clear system prompt when using '/clear' (#20067)
Roj234 [Fri, 6 Mar 2026 05:41:11 +0000 (13:41 +0800)]
cli : Don't clear system prompt when using '/clear' (#20067)

* Enhance /clear command to include system prompt

Add system prompt to messages when clearing chat history.

* Use lambda

7 weeks ago opencl: add neg, exp and diag (#20127)
lhez [Fri, 6 Mar 2026 05:16:39 +0000 (21:16 -0800)]
opencl: add neg, exp and diag (#20127)

* opencl: add `neg`

* opencl: add `exp`

* opencl: add `diag`

7 weeks ago hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)
YardenTal44 [Fri, 6 Mar 2026 02:29:13 +0000 (04:29 +0200)]
hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)

* hexagon: add fp16 support for binary ops: add,sub,mul,div

* hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)

* hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad

* snapdragon: fix readme link

---------

Co-authored-by: Max Krasnyansky <redacted>
7 weeks ago models : kda chunk size = 16 (#19827)
ymcki [Thu, 5 Mar 2026 15:01:23 +0000 (23:01 +0800)]
models : kda chunk size = 16 (#19827)

* models : add llm_build_delta_net_base

* cont : keep qwen35 and qwen35moe graphs intact

* cont : add comments [no ci]

* add kimi linear to delta-net-base

* removed unnecessary ggml_cont from g_exp_t

* removed ggml_cont from g_diff_exp_t. moved ggml_cont for o to kimi-linear.cpp

* removed unnecessary diag mask

* cont : simplify

* cont : avoid graph splits

* scale q after mul instead of beginning

* scale q after mul instead of beginning

* identical ppl

* cont : fix scale and decay mask

* minor : remove TODO

* block implementation for kda

* remove space at the end of line 101

* concat+pad

* pad+binary row concat

* chunk size 16 for kda

* removed minor differences to master

---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago CUDA: Improve performance via less synchronizations between token (#17795)
Andreas Kieslinger [Thu, 5 Mar 2026 11:53:21 +0000 (12:53 +0100)]
CUDA:  Improve performance via less synchronizations between token (#17795)

* Adds CPU-to-CUDA copy capability to
ggml_backend_cuda_cpy_tensor_async()
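
For orientation, a minimal sketch of the underlying mechanism (generic CUDA runtime usage, not the actual ggml_backend_cuda_cpy_tensor_async() implementation; the sizes and names are made up for the example):

```cpp
// Illustrative sketch only (not the actual ggml-cuda code): an asynchronous
// host-to-device copy on a dedicated stream, the basic mechanism for
// overlapping input uploads with other work instead of blocking on each copy.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t n = 1 << 20;   // example element count, not taken from the PR

    float * host = nullptr;
    float * dev  = nullptr;
    cudaMallocHost((void **) &host, n * sizeof(float));   // pinned memory, needed for truly async copies
    cudaMalloc    ((void **) &dev,  n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Enqueue the copy without blocking the CPU; kernels queued later on the
    // same stream are ordered after it, so no per-copy synchronization is needed.
    cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice, stream);

    // ... enqueue compute kernels on `stream` here ...

    cudaStreamSynchronize(stream);   // synchronize once at the end
    cudaFree(dev);
    cudaFreeHost(host);
    cudaStreamDestroy(stream);
    printf("done\n");
    return 0;
}
```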

* Adds function to relax sync requirements between input copies on
supported backends (CUDA for now)

* Exchanges synchronous copy with async copy function.

* Adds macro guards to allow compilation in non-CUDA builds

* Reworked backend detection in ggml-backend.cpp to avoid linking
conflicts

* Relax requirement of checks in async CUDA copies from backend and buffer type to just buffer type, to avoid linking issues

* Minor cleanup

* Makes opt-in to relax use of explicit syncs more general. Backends like
vulkan which require a synchronization between HtoD copies and graph
execution could also adopt this change now.

* Reintroduces stricter check for CPU->CUDA backend async copy via
GGML_DEVICE_TYPE_CPU.

* Corrects initialization of ggml_backend_sync_mode in
ggml_backend_sched_split initialization

* Simplifies synchronizations to adhere to `saaasg` pattern.

* Apply suggestion from @ggerganov (src->buffer to buf_src)

Co-authored-by: Georgi Gerganov <redacted>
* Apply suggestion from @ggerganov (src->buffer to buf_src) v2

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago model : update Qwen3.5 model type detection (#20126)
Eric Zhang [Thu, 5 Mar 2026 11:47:14 +0000 (19:47 +0800)]
model : update Qwen3.5 model type detection (#20126)

* model : fix Qwen3.5 model type detection

* Update src/llama-model.cpp

whoops, my bad

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
7 weeks ago cli : add command and file auto-completion (#19985)
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:47:28 +0000 (10:47 +0100)]
cli : add command and file auto-completion (#19985)

7 weeks ago convert : register Qwen 3.5 ForCausalLM for text only (#20119)
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:30:02 +0000 (10:30 +0100)]
convert : register Qwen 3.5 ForCausalLM for text only (#20119)

7 weeks ago webui: Improvements for Models Selector UI (#20066)
Aleksander Grygier [Thu, 5 Mar 2026 07:52:22 +0000 (08:52 +0100)]
webui: Improvements for Models Selector UI (#20066)

7 weeks ago chore : correct typos [no ci] (#20041)
Marcel Petrick [Thu, 5 Mar 2026 07:50:21 +0000 (08:50 +0100)]
chore : correct typos [no ci] (#20041)

* fix(docs): correct typos found during code review

Non-functional changes only:
- Fixed minor spelling mistakes in comments
- Corrected typos in user-facing strings
- No variables, logic, or functional code was modified.

Signed-off-by: Marcel Petrick <redacted>
* Update docs/backend/CANN.md

Co-authored-by: Aaron Teo <redacted>
* Revert "Auxiliary commit to revert individual files from 846d1c301281178efbc6ce6060ad34c1ebe45af8"

This reverts commit 02fcf0c7db661d5ff3eff96b2b2db9fdb7213256.

* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Signed-off-by: Marcel Petrick <redacted>
Co-authored-by: Aaron Teo <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
7 weeks ago hexagon: Flash Attention optimizations (dma, mpyacc, multi-row) and MatMul updates...
Max Krasnyansky [Thu, 5 Mar 2026 05:55:29 +0000 (21:55 -0800)]
hexagon: Flash Attention optimizations (dma, mpyacc, multi-row) and MatMul updates (#20118)

* ggml-hexagon: enhance hvx_dot_f16_f16_aa_rx4 for improved performance by expanding vector handling and optimizing accumulation

# Conflicts:
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx4 and enhance hvx_vec_reduce_sum_f32x4 for improved performance and reduced complexity

* ggml-hexagon: add hvx_dot_f16_f16_aa_rx32 for enhanced vector processing in flash attention

# Conflicts:
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* optimize hvx_dot_f16_f16_aa_rx4 and hvx_dot_f16_f16_aa_rx32 by removing unused scale parameter and improving vector accumulation

# Conflicts:
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: refactor hvx_dot_f16_f16_aa_rx4 for improved readability and return HVX_Vector for better integration

# Conflicts:
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: initialize sums variable in hvx_dot_f16_f16_aa_rx32 for clarity

* ggml-hexagon: fix compiling error

* fix hvx_dot_f16_f16_aa_rx4 to handle leftover elements correctly using masking

* refactor hvx_dot_f16_f16_aa_rx4 to accept vector and leftover element counts as parameters for improved clarity and flexibility

* wip

* fa: instrumentation and dma reordering

* hex-fa: use block-size 64 to improve DMA pipelining

* hex-fa: optimize vec-dot for v79 and above

* hex-fa: use block size 64

* hex-fa: avoid scalar fp32->fp16 conversions

* hex-fa: simplify dot_f16 functions using optimized vec_mpyacc

* hex-fa: rewrite mad_f32_f16 using hvx_vec_mpyacc

* hex-mm: use mpyacc in matmul dot functions

---------

Co-authored-by: chraac <redacted>
7 weeks ago opencl: add `SET`, support i32 for `CPY`, minor refactor for cpy (#20101)
lhez [Thu, 5 Mar 2026 05:32:26 +0000 (21:32 -0800)]
opencl: add `SET`, support i32 for `CPY`, minor refactor for cpy (#20101)

7 weeks ago hexagon: add llama-completion runner script (#20095)
Todor Boinovski [Wed, 4 Mar 2026 23:04:59 +0000 (15:04 -0800)]
hexagon: add llama-completion runner script (#20095)

7 weeks ago [WebGPU] Fix wait logic for inflight jobs (#20096)
Nikhil Jain [Wed, 4 Mar 2026 19:54:55 +0000 (11:54 -0800)]
[WebGPU] Fix wait logic for inflight jobs (#20096)

* Enable tmate debugging for investigating thread safety issue

* Refactor wait and submit to operate on vector<wgpu::FutureWaitInfo>, and fix wait to delete only the future that is completed.

* Cleanup

* Remove clear change and run clang-format

* Cleanup

7 weeks ago Add concat op to webgpu. (#20068)
Masashi Yoshimura [Wed, 4 Mar 2026 19:19:00 +0000 (04:19 +0900)]
Add concat op to webgpu. (#20068)

7 weeks ago tools : add missing clocale include in mtmd-cli [no ci] (#20107)
Sigbjørn Skjæret [Wed, 4 Mar 2026 13:18:04 +0000 (14:18 +0100)]
tools : add missing clocale include in mtmd-cli [no ci] (#20107)

7 weeks ago ggml: fix ggml_is_contiguous_n for ne == 1 (#20092)
Johannes Gäßler [Wed, 4 Mar 2026 11:04:31 +0000 (12:04 +0100)]
ggml: fix ggml_is_contiguous_n for ne == 1 (#20092)

7 weeks ago ggml : use a simple std::thread in AMX without OpenMP (#20074)
Adrien Gallouët [Wed, 4 Mar 2026 10:57:09 +0000 (11:57 +0100)]
ggml : use a simple std::thread in AMX without OpenMP (#20074)

Disabling OpenMP generally provides better inference performance (at
least in my testing) but the loading becomes slightly slower.

Benchmark results for `convert_B_packed_format()`:

Before this commit:

         N      K |  No OpenMP     OpenMP |    Diff |  Speedup
    ------------------------------------------------------------
       512   2880 |    640.9us    263.5us |  -58.9% |    0.41x
      2880   4096 |     2.55ms    261.7us |  -89.8% |    0.10x
    201088   2880 |   256.44ms    21.61ms |  -91.6% |    0.08x
    ------------------------------------------------------------

    Total: 325.43ms vs 31.05ms

After:

         N      K |  No OpenMP     OpenMP |    Diff |  Speedup
    ------------------------------------------------------------
       512   2880 |     1.49ms    263.5us |  -82.3% |    0.18x
      2880   4096 |     1.55ms    261.7us |  -83.1% |    0.17x
    201088   2880 |    24.03ms    21.61ms |  -10.1% |    0.90x
    ------------------------------------------------------------

    Total: 78.97ms vs 31.05ms

Tested with unsloth/gpt-oss-20b-GGUF:Q4_K_M.
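
For illustration, a minimal sketch of the std::thread-based work splitting this change implies; the function and chunking below are hypothetical stand-ins for the real convert_B_packed_format() loop:

```cpp
// Minimal sketch: split n_rows of packing work across plain std::thread workers,
// with no OpenMP dependency. Function names and chunking are illustrative only.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

void pack_rows(int64_t row_begin, int64_t row_end) {
    // hypothetical per-row packing work
    (void) row_begin; (void) row_end;
}

void pack_parallel(int64_t n_rows) {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    const int64_t  chunk     = (n_rows + n_threads - 1) / n_threads;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        const int64_t begin = t * chunk;
        const int64_t end   = std::min<int64_t>(n_rows, begin + chunk);
        if (begin >= end) {
            break;
        }
        workers.emplace_back(pack_rows, begin, end);
    }
    for (auto & w : workers) {
        w.join();
    }
}
```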

Signed-off-by: Adrien Gallouët <redacted>
7 weeks ago impl : use 6 digits for tensor dims (#20094)
ddh0 [Wed, 4 Mar 2026 08:53:38 +0000 (02:53 -0600)]
impl : use 6 digits for tensor dims (#20094)

Many models have vocabulary sizes, and thus tensor shapes, with more
than 5 digits (ex: Gemma 3's vocab size is 262,208).

I already fixed this for `llama_format_tensor_shape` but missed it for
`llama_format_tensor_shape` until now. Oops.

7 weeks ago Fix locale-dependent float printing in GGUF metadata (#17331)
SamareshSingh [Wed, 4 Mar 2026 08:30:40 +0000 (02:30 -0600)]
Fix locale-dependent float printing in GGUF metadata (#17331)

* Set C locale for consistent float formatting across all binaries.

* Add C locale setting to all tools binaries

Add std::setlocale(LC_NUMERIC, "C") to all 16 binaries in the tools/
directory to ensure consistent floating-point formatting.
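
The change itself is a one-liner per binary; a minimal sketch of the pattern (the surrounding main() is a placeholder):

```cpp
// Force the "C" numeric locale so floats in GGUF metadata always print with
// '.' as the decimal separator, regardless of the user's system locale.
#include <clocale>
#include <cstdio>

int main() {
    std::setlocale(LC_NUMERIC, "C");

    printf("%f\n", 3.14159);   // prints "3.141590", never "3,141590"
    return 0;
}
```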

* Apply suggestion from @JohannesGaessler

---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago completion : Fix a typo in warning message (#20082)
standby24x7 [Wed, 4 Mar 2026 05:44:49 +0000 (14:44 +0900)]
completion : Fix a typo in warning message (#20082)

resuse -> reuse

7 weeks ago docs: Fix intel documentation link (#20040)
Mickael Desgranges [Tue, 3 Mar 2026 13:50:00 +0000 (14:50 +0100)]
docs: Fix intel documentation link (#20040)

7 weeks ago kleidiai : add sme fp16 compute path for q4_0 gemm on aarch64 (#20043)
Charles Xu [Tue, 3 Mar 2026 09:40:26 +0000 (10:40 +0100)]
kleidiai : add sme fp16 compute path for q4_0 gemm on aarch64 (#20043)

7 weeks ago opencl: add optimized q4_1 mm kernel for adreno (#19840)
shaofeiqi [Tue, 3 Mar 2026 03:49:41 +0000 (19:49 -0800)]
opencl: add optimized q4_1 mm kernel for adreno (#19840)

* Add Q4_1 OpenCL Kernels

* opencl: refactor transpose

* opencl: format

* opencl: refactor q4_1 unpack

* opencl: move `ggml_cl_mul_mat_q4_1_f32_adreno`

* opencl: refactor `ggml_cl_mul_mat_q4_1_f32_adreno` and kernels

* opencl: rename kernel files and kernels

* opencl: fix build for non adreno

* opencl: move code around and format

---------

Co-authored-by: Li He <redacted>
7 weeks ago ggml webgpu: fix workgroup dispatch limit for large batch sizes (#19965)
Abhijit Ramesh [Tue, 3 Mar 2026 03:35:11 +0000 (19:35 -0800)]
ggml webgpu: fix workgroup dispatch limit for large batch sizes (#19965)

* ggml-webgpu: fix workgroup dispatch limit for large batch sizes

WebGPU limits the number of workgroups per dispatch dimension to 65535. Large MUL_MAT
operations with batch sizes exceeding this limit would fail.

* add compute_2d_workgroups() helper to split total workgroup ID across
X/Y dimensions

* update mul_mat_reg_tile.wgsl to reconstruct linear workgroup ID from 2D
   dispatch

* update mul_mat_subgroup_matrix.wgsl to reconstruct linear workgroup ID
  from 2D dispatch

* update mul_mat.wgsl to compute global index from 2D workgroup
  coordinates

* refactor all three mul_mat dispatch paths to use the shared helper

* ggml-webgpu: add bounds checking for over-dispatched workgroups

2D workgroup dispatch can over-dispatch when total workgroups don't
divide evenly into the 65535 per-dimension limit. Extra workgroups
would compute invalid batch indices, causing memory corruption.
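
A rough host-side sketch of the idea, with hypothetical names (not the actual compute_2d_workgroups() helper): split a linear workgroup count that may exceed 65535 into an X/Y dispatch, and let the shader rebuild the linear ID and discard the over-dispatched tail.

```cpp
// Illustrative sketch: split a 1D workgroup count across two dispatch
// dimensions, since WebGPU caps each dimension at 65535 workgroups.
#include <cstdint>
#include <utility>

constexpr uint32_t WG_LIMIT = 65535;

// Returns (wg_x, wg_y) with wg_x <= WG_LIMIT and wg_x * wg_y >= total.
std::pair<uint32_t, uint32_t> split_workgroups(uint32_t total) {
    if (total <= WG_LIMIT) {
        return { total, 1 };
    }
    const uint32_t wg_y = (total + WG_LIMIT - 1) / WG_LIMIT;   // ceil division
    const uint32_t wg_x = (total + wg_y - 1) / wg_y;
    return { wg_x, wg_y };
}

// In the shader, the linear ID is rebuilt roughly as
//   linear = workgroup_id.y * wg_x + workgroup_id.x;
// and any workgroup with linear >= total must return early, because the 2D
// grid can be slightly larger than the requested count (the over-dispatch above).
```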

* add batch_idx bound check to mul_mat_reg_tile.wgsl and
mul_mat_subgroup_matrix.wgsl to prevent over-dispatched workgroups
from accessing invalid memory

* fixes test failures with large batch sizes (e.g., bs=[128, 1024])

* ggml-webgpu: add back TODO for splitting large sizes into batches

* Optimize 2d workgroup provisioning

* Set some parameters that increase speed

---------

Co-authored-by: Reese Levine <redacted>
7 weeks ago ggml webgpu: Clean up per-thread parameter buffer pool and job submission logic ...
Nikhil Jain [Mon, 2 Mar 2026 18:23:34 +0000 (10:23 -0800)]
ggml webgpu: Clean up per-thread parameter buffer pool and job submission logic (#19772)

* Allow webgpu_buf_pool to resize if needed, remove inflight_threads, and replace inflight_threads with num_kernels for submission

* Run clang-format

* Keep track of num batched kernels that have not been submitted yet

* Run clang-format

* Increase buf pool max size

* Increase param buf pool init size

* Remove webgpu buf pool resizing

* Merge with master

* Add buffer pool growth

* Move buffer pool growth outside of lock

* Reduce max pool size to 32

* Run clang-format

* Only resize param buf pool

7 weeks ago ggml-webgpu: Support non-contiguous `src0` and overlapping `src0/src1` in binary...
Masashi Yoshimura [Mon, 2 Mar 2026 15:59:53 +0000 (00:59 +0900)]
ggml-webgpu: Support non-contiguous `src0` and overlapping `src0/src1` in binary ops (#19850)

* ggml-webgpu: Add binary op support for overlapping and non-contiguous.

* Add newline to binary.wgsl

* Append the test of binary op for src overlapping  to test_bin_bcast.

* Remove unnecessary newline.

7 weeks ago vulkan: tune MMVQ for Intel Windows (#19988)
Ruben Ortlam [Mon, 2 Mar 2026 14:58:25 +0000 (15:58 +0100)]
vulkan: tune MMVQ for Intel Windows (#19988)

7 weeks ago scripts : improve get-wikitext-2.sh (#19952)
Adrien Gallouët [Mon, 2 Mar 2026 14:40:49 +0000 (15:40 +0100)]
scripts : improve get-wikitext-2.sh (#19952)

* scripts : improve get-wikitext-2.sh

Switch to sh, add curl fallback, and avoid redundant downloads

Signed-off-by: Adrien Gallouët <redacted>
* fix indent

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
Signed-off-by: Adrien Gallouët <redacted>
7 weeks ago ggml-cpu: optimise s390x multiply extend instructions (#20032)
Aaron Teo [Mon, 2 Mar 2026 08:23:56 +0000 (16:23 +0800)]
ggml-cpu: optimise s390x multiply extend instructions (#20032)

8 weeks ago vulkan: improve partial offloading performance on AMD (#19976)
Ruben Ortlam [Sun, 1 Mar 2026 16:32:14 +0000 (17:32 +0100)]
vulkan: improve partial offloading performance on AMD (#19976)

* vulkan: fix and enable cpy_tensor_async function

* use transfer_queue for async transfers on AMD, synchronize with timeline semaphore

* update offload_op logic

* fix missing transfer submission

* disable async transfer queue on AMD GCN

* revert op batch size change

* fix cpy_tensor_async checks

8 weeks ago cuda: cap grid.y at 65535 in non-contiguous dequantize/convert kernels (#19999)
oobabooga [Sun, 1 Mar 2026 05:40:22 +0000 (02:40 -0300)]
cuda: cap grid.y at 65535 in non-contiguous dequantize/convert kernels (#19999)

8 weeks ago vendors : update miniaudio library to 0.11.24 (#19914)
Dmitry Atamanov [Sat, 28 Feb 2026 15:10:01 +0000 (20:10 +0500)]
vendors : update miniaudio library to 0.11.24 (#19914)

8 weeks ago vendor : update cpp-httplib to 0.35.0 (#19969)
Adrien Gallouët [Sat, 28 Feb 2026 12:53:56 +0000 (13:53 +0100)]
vendor : update cpp-httplib to 0.35.0 (#19969)

Signed-off-by: Adrien Gallouët <redacted>
8 weeks ago tests : model metadata loading from huggingface (#19796)
Bartowski [Sat, 28 Feb 2026 09:44:38 +0000 (04:44 -0500)]
tests : model metadata loading from huggingface (#19796)

* Add model metadata loading from huggingface for use with other tests

* Add incremental chunking instead of full redownload, fix caching issue and add warning when it fails

* Add support for split models, load metadata from each individual split file, also avoid mmproj

* Code cleanup, revert incremental downloading

* Only compile when cpp-httplib has SSL support

* Fix formatting

8 weeks ago CUDA: add CDNA3 MFMA support for flash attention MMA kernel (#19806)
Jayant Lohia [Fri, 27 Feb 2026 18:37:26 +0000 (00:07 +0530)]
CUDA: add CDNA3 MFMA support for flash attention MMA kernel (#19806)

* CUDA: add CDNA3 MFMA support for flash attention MMA kernel

Add MI300X (gfx942) MFMA tensor core flash attention using
v_mfma_f32_16x16x16_f16 (FP16 in, FP32 accumulate).

- Add FATTN_WARP_SIZE=64 for CDNA wavefront64
- Add CDNA config for head sizes 64, 80, 96, 112, 128
- Add FP16 MFMA intrinsic path in mma.cuh
- Add manual V transpose load for MFMA register layout
- Route CDNA to MMA for prompt processing, VEC for token generation
- Fix Q loading and combine stride granularity for non-power-of-2 heads

Benchmarks (Qwen2.5-1.5B Q4_K_M, MI300X):
  pp512  +7%,  pp1024 +13%,  pp2048 +23%,  pp4096 +39%
  tg128  -10% (FA overhead, VEC used for both)

All 2480 flash attention tests pass.

Ref: https://github.com/ggml-org/llama.cpp/issues/17917

* address review: replace FATTN_WARP_SIZE with constexpr, improve dispatch

- Replace #define FATTN_WARP_SIZE with constexpr int warp_size =
  ggml_cuda_get_physical_warp_size() in each device function
- Use ne[1]*gqa_ratio threshold for MMA vs tile dispatch. Benchmarked
  crossover on MI300X @ d32768 with power-of-2 GQA models:
    hsk=64  (Llama 1B, gqa=4): MMA wins at eff >= 128 (+11%)
    hsk=128 (Llama 3B, gqa=4): MMA wins at eff >= 128 (+4%)
  Unified threshold: eff_nq >= 128 for all head sizes.
- Remove VEC fallback; small batches fall through to tile kernel
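
For clarity, a schematic of the dispatch decision described above, assuming the stated eff_nq >= 128 crossover; the names are illustrative rather than the exact fattn.cu code:

```cpp
// Illustrative kernel selection for CDNA: use the MFMA/MMA path for large
// effective query counts, otherwise fall through to the tile kernel.
#include <cstdint>

enum class fattn_kernel { MMA, TILE };

fattn_kernel select_fattn_kernel_cdna(int64_t ne1, int64_t gqa_ratio) {
    const int64_t eff_nq = ne1 * gqa_ratio;   // effective number of query rows
    if (eff_nq >= 128) {
        return fattn_kernel::MMA;             // prompt processing: MFMA wins
    }
    return fattn_kernel::TILE;                // small batches: tile kernel
}
```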

* Update ggml/src/ggml-cuda/fattn.cu

* use ggml_cuda_info().devices warp_size instead of hardcoded check

---------

Co-authored-by: Johannes Gäßler <redacted>
8 weeks ago server: Add pragma once to server-context.h (#19944)
Roj234 [Fri, 27 Feb 2026 17:28:36 +0000 (01:28 +0800)]
server: Add pragma once to server-context.h (#19944)

8 weeks ago server: Mirroring /v1/responses to /responses to match /v1/chat/completions pattern...
Sami Kama [Fri, 27 Feb 2026 16:44:42 +0000 (08:44 -0800)]
server: Mirroring /v1/responses to /responses to match /v1/chat/completions pattern (#19873)

8 weeks ago ci : use ubuntu-latest for gguf-publish workflow (#19951)
Daniel Bevenius [Fri, 27 Feb 2026 13:42:24 +0000 (14:42 +0100)]
ci : use ubuntu-latest for gguf-publish workflow (#19951)

This commit changes the runner for the gguf-publish workflow from
ubuntu-slim back to ubuntu-latest, which was updated in Commit
142cbe2ac68978e5dec3a2e19c1b64ef1c5740b1 ("ci : use new 1vCPU runner for
lightweight jobs (#19107)").

The motivation for this is that the action used in the workflow depends
on the docker daemon, which does not seem to be available in the
ubuntu-slim runner. This is currently causing an error in the workflow
and preventing the gguf-publish workflow from running successfully.
Today was the first time since the original change (I think) that the
publish task has been run, which may be why the issue was not noticed
before.

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/22481900566

8 weeks ago ggml-cpu: add repack for mxfp4 (#19738)
Aman Gupta [Fri, 27 Feb 2026 10:15:09 +0000 (18:15 +0800)]
ggml-cpu: add repack for mxfp4 (#19738)

8 weeks ago gguf-py : dump version to 0.18.0 (#19950) gguf-v0.18.0
Daniel Bevenius [Fri, 27 Feb 2026 10:02:53 +0000 (11:02 +0100)]
gguf-py : dump version to 0.18.0 (#19950)

This commit updates the gguf-py package version to 0.18.0 in preparation
for a new release to PyPI.

Refs: https://github.com/ggml-org/llama.cpp/discussions/19948

8 weeks ago server : support multiple model aliases via comma-separated --alias (#19926)
Pascal [Fri, 27 Feb 2026 06:05:23 +0000 (07:05 +0100)]
server : support multiple model aliases via comma-separated --alias (#19926)

* server : support multiple model aliases via comma-separated --alias

* server : update --alias description and regenerate docs

* server : multiple model aliases and tags

- address review feedback from ngxson
- --alias accepts comma-separated values (std::set, no duplicates)
- --tags for informational metadata (not used for routing)
- aliases resolve transparently in router via get_meta/has_model
- /v1/models exposes aliases and tags fields
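
A minimal sketch of the comma-separated parsing this implies, assuming a std::set as noted above (not the server's actual argument-parsing code):

```cpp
// Illustrative only: split "a,b,c" into a std::set so duplicate aliases collapse.
#include <set>
#include <sstream>
#include <string>

std::set<std::string> parse_aliases(const std::string & arg) {
    std::set<std::string> aliases;
    std::stringstream ss(arg);
    std::string item;
    while (std::getline(ss, item, ',')) {
        if (!item.empty()) {
            aliases.insert(item);
        }
    }
    return aliases;
}

// parse_aliases("gpt-4,gpt-4o,gpt-4") -> { "gpt-4", "gpt-4o" }
```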

* regenerate docs

* nits

* server : use first alias as model_name for backward compat

address review feedback from ngxson

* server : add single-model test for aliases and tags

8 weeks ago tests : enable test-chat out of tree build (#19558)
Jan Patrick Lehr [Fri, 27 Feb 2026 04:37:54 +0000 (05:37 +0100)]
tests :  enable test-chat out of tree build (#19558)

The binary relies on model files that it tries to find. However, when
configuring the build directory to be parallel to the source tree those
heuristics fail.

This sets the working directory for the test executable to be the
source tree, which resolves this issue.

8 weeks ago replace the magic number 768 by max work group size to support iGPU (#19920)
Neo Zhang [Fri, 27 Feb 2026 01:26:07 +0000 (09:26 +0800)]
replace the magic number 768 by max work group size to support iGPU (#19920)

Co-authored-by: Neo Zhang Jianyu <redacted>
8 weeks ago ggml-zendnn: update code for latest ZenDNN API (#19923)
Vishal Singh [Fri, 27 Feb 2026 00:43:41 +0000 (06:13 +0530)]
ggml-zendnn: update code for latest ZenDNN API (#19923)

- adapt ggml-zendnn.cpp to the new lowoha::matmul interface
- update the ZenDNN git tag in CMake to the latest release (ZenDNN‑2026‑WW08)
- add static lib support in CMake

8 weeks ago ggml : fix AMX and add batched support (#19925)
Adrien Gallouët [Thu, 26 Feb 2026 20:39:11 +0000 (21:39 +0100)]
ggml : fix AMX and add batched support (#19925)

llama-perplexity -hf ggml-org/Qwen3-0.6B-GGUF:Q4_0 -f wikitext-2-raw/wiki.test.raw -c 2048 -b 2048 --chunks 2

before this commit:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 2.31 seconds per pass - ETA 0.07 minutes
[1]17.3868,[2]22.2199,
Final estimate: PPL = 22.2199 +/- 1.59692

llama_perf_context_print:        load time =     878.56 ms
llama_perf_context_print: prompt eval time =    2037.82 ms /  4096 tokens (    0.50 ms per token,  2009.99 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    6403.17 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - CPU_REPACK         |                  288 =   288 +       0 +       0                |
llama_memory_breakdown_print: |   - AMX                |                   31 =    31 +       0 +       0                |
```

after this commit:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 1.98 seconds per pass - ETA 0.05 minutes
[1]17.2005,[2]21.8220,
Final estimate: PPL = 21.8220 +/- 1.56485

llama_perf_context_print:        load time =     719.23 ms
llama_perf_context_print: prompt eval time =    1676.23 ms /  4096 tokens (    0.41 ms per token,  2443.58 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    4258.74 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - AMX                |                  319 =   319 +       0 +       0                |
```
(no more CPU_REPACK)

after this commit, disabling amx:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 2.34 seconds per pass - ETA 0.07 minutes
[1]17.2005,[2]21.8220,
Final estimate: PPL = 21.8220 +/- 1.56485

llama_perf_context_print:        load time =     841.91 ms
llama_perf_context_print: prompt eval time =    2057.28 ms /  4096 tokens (    0.50 ms per token,  1990.98 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    6454.51 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - CPU_REPACK         |                  319 =   319 +       0 +       0                |
```
=> same perplexity.

Signed-off-by: Adrien Gallouët <redacted>
8 weeks ago vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (#19921)
Ruben Ortlam [Thu, 26 Feb 2026 18:11:04 +0000 (19:11 +0100)]
vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (#19921)

8 weeks ago mtmd : fix padding of n_tokens (#19930)
Georgi Gerganov [Thu, 26 Feb 2026 16:39:49 +0000 (18:39 +0200)]
mtmd : fix padding of n_tokens (#19930)

8 weeks ago server : fix ctx checkpoint restore logic (#19924)
Georgi Gerganov [Thu, 26 Feb 2026 16:20:16 +0000 (18:20 +0200)]
server : fix ctx checkpoint restore logic (#19924)

8 weeks ago kv-cache : fix can_shift() check to take into account M-RoPE (#19928)
Georgi Gerganov [Thu, 26 Feb 2026 16:08:54 +0000 (18:08 +0200)]
kv-cache : fix can_shift() check to take into account M-RoPE (#19928)

8 weeks ago llama: Add option to merge gate and exp weights (#19139)
Aman Gupta [Thu, 26 Feb 2026 13:01:08 +0000 (21:01 +0800)]
llama: Add option to merge gate and exp weights (#19139)

* llama: Add option to merge gate and exp weights

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* update constants.py

* add gate_up for the all MoE models

* convert: simplify merge tensor condition

* update constants.py

* reduce number of models, add create_tensor_gate_up helper

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
8 weeks ago ggml-virtgpu: improve the reliability of the code (#19846)
Kevin Pouget [Thu, 26 Feb 2026 12:00:57 +0000 (13:00 +0100)]
ggml-virtgpu: improve the reliability of the code (#19846)

* ggml-virtgpu-backend: validate the consistency of the received objects

This patch adds consistency checks in the
ggml-virtgpu-backend (running on the host side) to ensure that the
data received from the guest is consistent (valid pointers, valid
sizes and offsets).
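
A sketch of the kind of bounds check this implies on the host side; the names are hypothetical and the real checks live in the backend's dispatch code:

```cpp
// Illustrative guest-request validation: reject null pointers and any
// offset/size pair that overflows or falls outside the shared buffer.
#include <cstddef>
#include <cstdint>

bool request_is_valid(const void * base, size_t buf_size, uint64_t offset, uint64_t size) {
    if (base == nullptr) {
        return false;
    }
    if (offset > buf_size) {
        return false;
    }
    if (size > buf_size - offset) {   // written this way to avoid overflow in offset + size
        return false;
    }
    return true;
}
```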

* ggml-virtgpu-backend: add fallback/skips for optional ggml backend methods

```
  1. bck->iface.synchronize(bck)
  2. buft->iface.get_alloc_size(buft, op)
  3. buft->iface.get_max_size(buft)
```

these three methods are optional in the GGML interface. `get_max_size`
was already properly defaulted, but `backend synchronize` and `buft
get_alloc_size` would have segfaulted the backend if not implemented.
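
The usual shape of such a fallback, sketched here with a hypothetical interface struct rather than the real ggml one: treat the optional method as a nullable function pointer and substitute a default when it is missing.

```cpp
// Illustrative pattern: an optional interface method stored as a nullable
// function pointer, with a sensible default when the backend does not set it.
#include <cstddef>

struct buffer_type_iface {
    // optional: may be nullptr when the backend has nothing special to report
    size_t (*get_alloc_size)(const void * tensor);
};

size_t get_alloc_size_or_default(const buffer_type_iface & iface, const void * tensor, size_t default_size) {
    if (iface.get_alloc_size != nullptr) {
        return iface.get_alloc_size(tensor);
    }
    return default_size;   // e.g. the tensor's plain byte size
}
```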

* ggml-virtgpu-backend: fix log format missing argument

* ggml-virtgpu-backend: improve the abort message

* ggml-virtgpu-backend: more safety checks

* ggml-virtgpu-backend: new error code

* ggml-virtgpu-backend: initialize all the error codes

* ggml-virtgpu: add a missing comment generated by the code generator

* ggml-virtgpu: add the '[virtgpu]' prefix to the device/buffer names

* ggml-virtgpu: apir_device_buffer_from_ptr: improve the error message

* ggml-virtgpu: shared: make it match the latest api_remoting.h of Virglrenderer APIR

(still unmerged)

* ggml-virtgpu: update the code generator to have dispatch_command_name in a host/guest shared file

* ggml-virtgpu: REMOTE_CALL: fail if the backend returns an error

* docs/backend/VirtGPU.md: indicate that the RAM+VRAM size is limited to 64 GB with libkrun

* ggml-virtgpu: turn off clang-format header ordering for some of the files

Compilation breaks when ordered alphabetically.

* ggml-virtgpu: clang-format

* ggml-virtgpu/backend/shared/api_remoting: better comments for the APIR return codes

8 weeks ago server: fix load-on-startup not respected in ini file (#19897)
drrros [Thu, 26 Feb 2026 11:32:31 +0000 (14:32 +0300)]
server: fix load-on-startup not respected in ini file (#19897)

Co-authored-by: Roman Marchenko <redacted>
8 weeks ago jinja : correct default size for string slices (#19913)
Eric Zhang [Thu, 26 Feb 2026 11:28:09 +0000 (19:28 +0800)]
jinja : correct default size for string slices (#19913)

8 weeks ago model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826)
Maximilian Werk [Thu, 26 Feb 2026 11:14:09 +0000 (12:14 +0100)]
model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826)

* WIP: Add EuroBERT support with autoformatting changes

This commit includes:
- EuroBERT model implementation for GGUF conversion
- C++ backend support for EuroBERT architecture
- Unintended autoformatting changes to Python files

Saving before reverting formatting-only changes.

* feat: add back eos assert when not last token pooling

* feat: removed duplicated code and cleanup

* feat: removed not working architectures and unnecessary check

* fix: typo

* fix: dynamic pooling config

* feat: added an example model for eurobert

* feat: proper llama-vocab implementation for jina-v5

* fix: removed unnecessary comments

8 weeks ago gguf : avoid too many file size calls (#19919)
Georgi Gerganov [Thu, 26 Feb 2026 10:46:32 +0000 (12:46 +0200)]
gguf : avoid too many file size calls (#19919)

8 weeks ago server : fix typo in server README.md (#19900)
yggdrasil75 [Thu, 26 Feb 2026 10:26:16 +0000 (05:26 -0500)]
server : fix typo in server README.md (#19900)

fix typo

8 weeks ago support permuted, remove check s0/s10 (#19889)
Neo Zhang [Thu, 26 Feb 2026 02:27:20 +0000 (10:27 +0800)]
support permuted, remove check s0/s10 (#19889)

Co-authored-by: Neo Zhang Jianyu <redacted>
8 weeks ago vulkan: check for memory overlap before doing fusion (#19768)
Jeff Bolz [Wed, 25 Feb 2026 17:25:38 +0000 (11:25 -0600)]
vulkan: check for memory overlap before doing fusion (#19768)

* vulkan: check for memory overlap before doing fusion

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

* address feedback

8 weeks ago common : add more aliases for sampler CLI params (#19797)
ddh0 [Wed, 25 Feb 2026 15:34:25 +0000 (09:34 -0600)]
common : add more aliases for sampler CLI params (#19797)

* common : add more aliases for sampler CLI params

8 weeks ago ci : update the ROCm/HIP toolchain versions [no ci] (#19891)
Slobodan Josic [Wed, 25 Feb 2026 14:54:49 +0000 (15:54 +0100)]
ci : update the ROCm/HIP toolchain versions [no ci] (#19891)

* [HIP] Update ROCm build container to rocm/dev-ubuntu-22.04:7.2 and HIP_SDK to 26.Q1

* revert container version

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
8 weeks ago server : enable multi-modal prompt caching (#19877)
Georgi Gerganov [Wed, 25 Feb 2026 13:15:42 +0000 (15:15 +0200)]
server : enable multi-modal prompt caching (#19877)

8 weeks ago server : support multi-modal context checkpoints (#19849)
Georgi Gerganov [Wed, 25 Feb 2026 13:14:27 +0000 (15:14 +0200)]
server : support multi-modal context checkpoints (#19849)

* Modify llama-memory-hybrid-iswa.cpp

* Modify llama-memory-recurrent.cpp

* Modify server-common.cpp

* Modify server-common.h

* Modify server-context.cpp

* Modify server-task.h

* Added comment to llama-memory-hybrid-iswa.cpp

* Remove comment from server-context.cpp

* Stylistic fix server-context.cpp

* Fix an issue when seqrm isn't called in server-context.cpp

* cont : alternative impl

* cont : cleanup

* cont : n_tokens -> int64_t

---------

Co-authored-by: timkhronos <redacted>
8 weeks ago scripts: update corpus of compare-logprobs (#19326)
Xuan-Son Nguyen [Wed, 25 Feb 2026 11:57:34 +0000 (12:57 +0100)]
scripts: update corpus of compare-logprobs (#19326)

* scripts: update corpus of compare-logprobs

* fix

8 weeks ago ci : update Windows ROCm build to 26.Q1 [no ci] (#19810)
Mario Limonciello [Wed, 25 Feb 2026 11:30:19 +0000 (05:30 -0600)]
ci : update Windows ROCm build to 26.Q1 [no ci] (#19810)

* Update build command to build llama-* tools not just ggml-hip
* Update rocWMMA headers to 7.2
* Add GFX1150 target
* Correct library paths for AMD libraries in 26.Q1

8 weeks ago gguf : fix ftell/fseek for Windows (#19870)
Aldehir Rojas [Wed, 25 Feb 2026 04:58:11 +0000 (22:58 -0600)]
gguf : fix ftell/fseek for Windows (#19870)

2 months ago models : fix graph splits (#19866)
Georgi Gerganov [Tue, 24 Feb 2026 22:01:13 +0000 (00:01 +0200)]
models : fix graph splits (#19866)

2 months ago server: fix query params lost when proxying requests in multi-model router mode ...
Pascal [Tue, 24 Feb 2026 20:46:06 +0000 (21:46 +0100)]
server: fix query params lost when proxying requests in multi-model router mode (#19854)

* server: fix query params lost when proxying requests in multi-model router mode

* server: re-encode query params using httplib::encode_query_component in proxy

2 months ago ggml/gguf : prevent integer overflows (#19856)
Georgi Gerganov [Tue, 24 Feb 2026 18:17:11 +0000 (20:17 +0200)]
ggml/gguf : prevent integer overflows (#19856)

* gguf : prevent integer overflow for ggml_context mem size

* ggml : fix int overflows in ggml_new_object()

* gguf : prevent string exhaustion

* gguf : prevent array elements exhaustion

* ggml : fix negative tensor type oob

* py : assert that alignment is non-zero power of 2

* ggml : check int overflow in ggml_new_tensor_impl and ggml_new_object
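
For context, the flavor of checks these items describe, sketched with hypothetical helpers rather than the actual ggml code:

```cpp
// Illustrative overflow and alignment checks of the kind added in this PR.
#include <cstddef>
#include <cstdint>

// true if a + b would overflow size_t
bool add_overflows(size_t a, size_t b) {
    return a > SIZE_MAX - b;
}

// true if a * b would overflow size_t
bool mul_overflows(size_t a, size_t b) {
    return b != 0 && a > SIZE_MAX / b;
}

// an alignment value must be a non-zero power of two
bool alignment_is_valid(size_t align) {
    return align != 0 && (align & (align - 1)) == 0;
}
```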

* gguf-py : error on duplicate keys when reading

* py : restore tensor_fields

* enforce proper alignment in add_custom_alignment

* gguf : better name

* gguf : fix ctx size for no_alloc == true

* gguf : minor print fix

* ggml : print values when overflow

* ggml : remove deprecated ggml_type_sizef()

* ggml : relax ggml_type asserts to debug-only

* gguf : add mem_size overflow test

* gguf : add file size check for arrays

* ggml : relax asserts for ggml_get_type_traits()

* flake8 fix

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago model : update label for LFM2-24B-A2B (#19848)
Tarek Dakhran [Tue, 24 Feb 2026 13:27:42 +0000 (14:27 +0100)]
model : update label for LFM2-24B-A2B (#19848)

* model : Update label for LFM2-24B-A2B

```
❯ build/bin/llama-bench -m /data/playground/checkpoints/LFM2-24B-A2B-Preview-Q4_0.gguf,/data/playground/checkpoints/LFM2-8B-A1B-Q4_0.gguf -p 1 -n 0
| model                          |       size |     params | backend    | threads |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2moe 24B.A2B Q4_0           |  12.54 GiB |    23.84 B | CPU        |      10 |             pp1 |         30.35 ± 2.49 |
| lfm2moe 8B.A1B Q4_0            |   4.41 GiB |     8.34 B | CPU        |      10 |             pp1 |         49.24 ± 1.93 |
```

* Remove extra line

2 months ago server : support max_completion_tokens request property (#19831)
Radoslav Gerganov [Tue, 24 Feb 2026 08:30:00 +0000 (10:30 +0200)]
server : support max_completion_tokens request property (#19831)

"max_tokens" is deprecated in favor of "max_completion_tokens", which
sets the upper bound for reasoning+output tokens.
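
In practice this is a matter of preferring the new field and falling back to the deprecated one; a hedged sketch using nlohmann::json (the field names follow the OpenAI API, the helper itself is illustrative):

```cpp
// Illustrative request handling: prefer "max_completion_tokens", fall back to
// the deprecated "max_tokens", and use -1 (no limit) when neither is present.
#include <cstdint>
#include <nlohmann/json.hpp>

int32_t get_max_output_tokens(const nlohmann::json & body) {
    if (body.contains("max_completion_tokens")) {
        return body.at("max_completion_tokens").get<int32_t>();
    }
    return body.value("max_tokens", -1);
}
```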

Closes: #13700
2 months ago Vulkan Scalar Flash Attention Refactor (#19625)
Ruben Ortlam [Tue, 24 Feb 2026 07:35:48 +0000 (08:35 +0100)]
Vulkan Scalar Flash Attention Refactor (#19625)

* vulkan: allow using fp16 in scalar flash attention shader

* split rows inside of subgroups for faster synchronization

* use row_split when Br >= 4, change reductions to use shared memory if row_split == 1

* use f32 scalar FA if f16 is not supported by device

* fix amd workgroup size issue

* optimize masksh use

* add medium rows FA shader Br size

* fixes

* add padding to mask shmem buffer

* cache q values into registers for KQ

* fuse lf accumulation, pf and v accumulation into a loop

* stage K loads through shmem

* stage V loads through shmem

* only stage through shmem on Nvidia

* default to Bc 32

* also stage V through shmem when this is done for K

* dynamic subgroups for intel

* use vectorized stores

* use float_type for dequantize4 functions

* use smaller scalar rows size for smaller rows count

* relax flash attention split_k condition to allow non-gqa use

* use minimal subgroup size on Intel

* fix shmem support function

* fix rebase issues

* fixes

* Bc 4 for scalar FA is not a valid configuration

* Use wave32 on AMD RDNA for scalar FA

* add Intel shader core count lookup-table

* fix regressions

* device tuning

* tmpsh size fix

* fix editorconfig

* refactor fa tuning logic into a single place

* fix gqa opt logic

* fix block_rows with small n_rows

* amd tuning

* fix hsk=72/80 issue

* tuning

* allow condition skipping for column check

* use float16 for Of if available

* address feedback

* fix bad RDNA performance on head size <= 128 by limiting occupancy

* allow printing pipeline stats

* cleanup and fixes

* limit occupancy for GCN for small batch FA with large HSK

* disable f16 FA for GCN AMD GPUs on the proprietary driver

2 months ago vulkan: fix coopmat1 without bf16 support (#19793)
Jeff Bolz [Tue, 24 Feb 2026 06:48:32 +0000 (00:48 -0600)]
vulkan: fix coopmat1 without bf16 support (#19793)

2 months ago vulkan: fix data race in mul_mat_id shader (#19790)
Jeff Bolz [Tue, 24 Feb 2026 06:43:12 +0000 (00:43 -0600)]
vulkan: fix data race in mul_mat_id shader (#19790)

2 months ago hexagon refactor all Ops to use local context struct (#19819)
Max Krasnyansky [Tue, 24 Feb 2026 00:32:14 +0000 (16:32 -0800)]
hexagon refactor all Ops to use local context struct (#19819)

* hexagon: refactor set/get/sum-rows ops to use local context

* hexagon: refactor ROPE and Softmax Ops to use local context

Improves performance a bit by precomputing things and saving in the context.
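
The shape of the refactor, as a hedged sketch with hypothetical field names: hoist values that are constant across rows into a per-op context struct once, instead of recomputing them in the inner loop.

```cpp
// Illustrative per-op context: precompute loop-invariant values once and pass
// the struct to the row workers instead of rederiving them for every row.
#include <cstdint>

struct rope_op_ctx {
    int64_t n_rows;
    int64_t n_dims;
    float   freq_base;
    float   inv_ndims;   // precomputed once: saves a division in the hot loop
};

rope_op_ctx make_rope_ctx(int64_t n_rows, int64_t n_dims, float freq_base) {
    rope_op_ctx ctx;
    ctx.n_rows    = n_rows;
    ctx.n_dims    = n_dims;
    ctx.freq_base = freq_base;
    ctx.inv_ndims = n_dims > 0 ? 1.0f / (float) n_dims : 0.0f;
    return ctx;
}

void rope_row(const rope_op_ctx & ctx, int64_t row /*, src/dst pointers */) {
    // per-row work reads ctx.inv_ndims etc. without recomputing them
    (void) ctx; (void) row;
}
```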

* hexagon: refactor activation ops to use local context struct

* hexagon: refactor unary ops to use local context struct and DMA/VTCM

* hexagon: use aligned hvx_scale function

* hexagon: remove unused fields from op_context

* hexagon: rewrite ROPE to use DMA and VTCM scratchpad

* hex-rope: keep N rows in scratchpad (instead of just two)

* hex-rope: introduce rowidx cache

* hex-rope: remove unused fields

* hex-rope: rewrite dma prefetch logic to allow for multi-row fetch/compute

also removes the need for fastdiv.

* hex-rope: minor formatting

* hex-rope: use indices and unroll the loops

* hex-rope: more updates to cleanup rope-block handling

* hexagon: cleanup supported type/dims checks

* hexagon: all reduce funcs replicated across lanes

There is no need to explicitly replicate the first value.

* snapdragon: update adb and windows scripts to use ubatch-size 256

Updated Ops support handles larger ubatches.

2 months ago feat: Add code blocks full height setting to parameter sync service (#19835)
Aleksander Grygier [Mon, 23 Feb 2026 21:30:13 +0000 (22:30 +0100)]
feat: Add code blocks full height setting to parameter sync service (#19835)

2 months ago vendor : update cpp-httplib to 0.34.0 (#19830)
Adrien Gallouët [Mon, 23 Feb 2026 20:05:48 +0000 (21:05 +0100)]
vendor : update cpp-httplib to 0.34.0 (#19830)

Signed-off-by: Adrien Gallouët <redacted>
2 months ago tests : fix typos in comments in test-backend-sampler [no ci] (#19824)
Daniel Bevenius [Mon, 23 Feb 2026 16:12:02 +0000 (17:12 +0100)]
tests : fix typos in comments in test-backend-sampler [no ci] (#19824)

* tests : fix typos in comments in test-backend-sampler [no ci]

2 months ago webui: Add setting to have full height Code Blocks in Chat Messages (#19829)
Aleksander Grygier [Mon, 23 Feb 2026 13:16:50 +0000 (14:16 +0100)]
webui: Add setting to have full height Code Blocks in Chat Messages (#19829)

2 months ago model-conversion : merge inspect-org-model.py with tensor-info.py (#19823)
Daniel Bevenius [Mon, 23 Feb 2026 13:15:16 +0000 (14:15 +0100)]
model-conversion : merge inspect-org-model.py with tensor-info.py (#19823)

This commit replaces/merges the inspect-org-model.py script with the
contents of the tensor-info.py script. The merged script has also been
updated to print tensor sizes, which was the only thing tensor-info.py
did not do before.

The motivation for this is that tensor-info.py does not load the tensor
weights, which can be time consuming for larger models. Also, now that
both do almost the same thing, it makes sense to maintain one script
rather than two.

2 months ago ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod...
Alberto Cabrera Pérez [Mon, 23 Feb 2026 12:42:52 +0000 (12:42 +0000)]
ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod) (#19356)

* Generic GEMV and boilerplate for q5_K dotprod
* Generic GEMM and boilerplate for q5_K dotprod
* ARM64 q5_K dotprod GEMM
* ARM64 q5_K dotprod GEMV

2 months ago llama : remove write/read of output ids/logits/embeddings (#18862)
Daniel Bevenius [Mon, 23 Feb 2026 06:04:30 +0000 (07:04 +0100)]
llama : remove write/read of output ids/logits/embeddings (#18862)

* llama : remove write/read of output ids/logits/embeddings

This commit removes the write/read of output ids, logits and
embeddings from the llama context state.

Refs: https://github.com/ggml-org/llama.cpp/pull/18862#issuecomment-3756330941

* completion : add replying of session state

This commit updates the session handling in the completion tool to account
for the fact that logits are no longer stored in the session file. Instead, we
need to replay the last token to get the logits for sampling.
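
Roughly what that replay looks like with the public API, as a sketch that assumes the session file stores the prompt tokens but no logits (error handling and capacity management simplified):

```cpp
// Illustrative session restore: load the saved state, then decode the last
// saved token again so the context has fresh logits to sample from.
#include "llama.h"

#include <vector>

bool restore_and_replay(llama_context * ctx, const char * path, std::vector<llama_token> & tokens) {
    tokens.resize(4096);   // capacity for the saved prompt tokens (example value)
    size_t n_tokens = 0;
    if (!llama_state_load_file(ctx, path, tokens.data(), tokens.size(), &n_tokens)) {
        return false;
    }
    tokens.resize(n_tokens);
    if (tokens.empty()) {
        return false;
    }
    // replay the last token to repopulate the logits used for sampling
    llama_batch batch = llama_batch_get_one(&tokens.back(), 1);
    return llama_decode(ctx, batch) == 0;
}
```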

* common : add common_prompt_batch_decode function

This commit adds a new function which is responsible for decoding the prompt
and optionally handling the saving of session data.

* update save-state.cpp to use llama_state_load_file

This commit updates the save-load-state example to utilize the new
llama_state_load_file function for loading the model state from a file.
It also replays the last token after loading, since this state is now
stored before the last token is processed.

* examples : set n_seq_max = 2 for ctx3

This commit updates the save-load-state example to set the n_seq_max
parameter to 2 when initializing the ctx3 context.

The motivation for this change is that with n_parallel/n_seq_max set to 1
the context only supports one sequence, but the test later tries to
use a second sequence, which results in the following error:
```console
main : loaded state with 4 tokens
main : seq 0 copied, 225760 bytes
main : kv cache cleared
find_slot: seq_id=1 >= n_seq_max=1 Try using a bigger --parallel value
state_read_meta: failed to find available cells in kv cache
```
This seems to only happen for recurrent/hybrid models.

2 months ago cli : provide model with text filename (#19783)
Sigbjørn Skjæret [Sun, 22 Feb 2026 21:33:49 +0000 (22:33 +0100)]
cli : provide model with text filename (#19783)

2 months ago jinja: correct stats for tojson and string filters (#19785)
Xuan-Son Nguyen [Sun, 22 Feb 2026 20:08:23 +0000 (21:08 +0100)]
jinja: correct stats for tojson and string filters (#19785)

2 months ago common : fix improper trimming in XML parser on complete message (#19805)
Aldehir Rojas [Sun, 22 Feb 2026 16:34:54 +0000 (10:34 -0600)]
common : fix improper trimming in XML parser on complete message (#19805)

Co-authored-by: Jules LEIDELINGER <redacted>
2 months ago Fix wrong cli-argument in documentation (#19804)
Kilian Krampf [Sun, 22 Feb 2026 15:26:33 +0000 (16:26 +0100)]
Fix wrong cli-argument in documentation (#19804)

2 months ago model : add Kanana-2 model support (#19803)
HelloKS [Sun, 22 Feb 2026 15:15:02 +0000 (00:15 +0900)]
model : add Kanana-2 model support (#19803)

* model: Add Kanana-2 model support

* lint: adjust spacing

2 months ago ci : fix rocm archive name [no ci] (#19808)
Sigbjørn Skjæret [Sun, 22 Feb 2026 15:14:37 +0000 (16:14 +0100)]
ci : fix rocm archive name [no ci] (#19808)

2 months ago server : merge contiguous Responses input items into a single assistant message ...
Aldehir Rojas [Sun, 22 Feb 2026 13:11:31 +0000 (07:11 -0600)]
server : merge contiguous Responses input items into a single assistant message (#19773)

* server : merge contiguous input items into a single assistant message

* cont : simplify tool call msg

* cont : reduce and combine content

* cont : fix merging content items

2 months ago ci : fix rocm release path [no ci] (#19784)
Sigbjørn Skjæret [Sun, 22 Feb 2026 07:07:46 +0000 (08:07 +0100)]
ci : fix rocm release path [no ci] (#19784)

2 months ago Update ROCm docker container to 7.2 release (#19418)
Mario Limonciello [Sat, 21 Feb 2026 20:53:39 +0000 (14:53 -0600)]
Update ROCm docker container to 7.2 release (#19418)

Also update architectures