git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
3 weeks ago  ggml-cpu: FA add GEMM microkernel (#19422)
Aman Gupta [Sun, 15 Feb 2026 05:39:24 +0000 (11:09 +0530)]
ggml-cpu: FA add GEMM microkernel (#19422)

* ggml-cpu: FA add GEMM microkernel

* add guard for sizeless vector types

* fix case where DV % GGML_F32_EPR !=0

* move memset out of the loop

* move another memset out of the loop

* use RM=4 for arm

* simd_gemm: convert everything to int

* convert everything to size_t to avoid warnings

* fixup

* add pragma for ignoring aggressive loop optimizations

3 weeks ago  cmake : fix KleidiAI install target failure with EXCLUDE_FROM_ALL (#19581)
SamareshSingh [Sun, 15 Feb 2026 05:22:53 +0000 (23:22 -0600)]
cmake : fix KleidiAI install target failure with EXCLUDE_FROM_ALL (#19581)

* cmake: fix KleidiAI install target failure with EXCLUDE_FROM_ALL

Fixes bug #19501 by adding EXCLUDE_FROM_ALL to FetchContent_Declare. This properly excludes KleidiAI from both the build and install targets, preventing install failures when GGML_CPU_KLEIDIAI=ON is used.

The KleidiAI source files are still compiled into libggml-cpu.so, preserving all functionality.

* addressed code review comments

3 weeks ago  convert : ensure all models handle new experts count (#19621)
Sigbjørn Skjæret [Sat, 14 Feb 2026 21:22:32 +0000 (22:22 +0100)]
convert : ensure all models handle new experts count (#19621)

* ensure all models handle new experts count

* revert removal for PhiMoeModel, does not inherit from base

3 weeks ago  mtmd : Add Nemotron Nano 12B v2 VL support (#19547)
Anav Prasad [Sat, 14 Feb 2026 13:07:00 +0000 (05:07 -0800)]
mtmd : Add Nemotron Nano 12B v2 VL support (#19547)

* nemotron nano v2 vlm support added

* simplified code; addressed reviews

* pre-downsample position embeddings during GGUF conversion for fixed input size

3 weeks ago  models : optimize qwen3next graph (#19375)
Georgi Gerganov [Sat, 14 Feb 2026 10:57:36 +0000 (12:57 +0200)]
models : optimize qwen3next graph (#19375)

* models : optimizing qwen3next graph

* cont

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* cont : remove redundant q, g chunking

* minor

* minor

* avoid passing masks around

* avoid concats during chunking

* naming + shapes

* update names and use prefix to disable CUDA graphs

3 weeks ago  ggml : fix GGML_DEBUG with OpenMP (#19599)
Adrien Gallouët [Sat, 14 Feb 2026 10:22:57 +0000 (11:22 +0100)]
ggml : fix GGML_DEBUG with OpenMP (#19599)

last_graph is only available without OpenMP, but
ggml_graph_compute_thread() is called in both cases.

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  NetBSD build support (#19589)
iMil [Sat, 14 Feb 2026 08:47:01 +0000 (09:47 +0100)]
NetBSD build support (#19589)

3 weeks ago  webui: Architecture and UI improvements (#19596)
Aleksander Grygier [Sat, 14 Feb 2026 08:06:41 +0000 (09:06 +0100)]
webui: Architecture and UI improvements (#19596)

3 weeks ago  llama : update LoRA API. + fix excessive graph reserves (#19280)
agent-enemy-2 [Sat, 14 Feb 2026 08:06:27 +0000 (03:06 -0500)]
llama : update LoRA API. + fix excessive graph reserves (#19280)

* Refactoring to use new llama_put_adapter_loras

* cont : alternative lora API

---------

Co-authored-by: Jake Chavis <redacted>
Co-authored-by: Georgi Gerganov <redacted>
3 weeks ago  mmap: Fix Windows handle lifetime (#19598)
George [Sat, 14 Feb 2026 08:05:12 +0000 (10:05 +0200)]
mmap: Fix Windows handle lifetime (#19598)

* ggml: added cleanups in ggml_quantize_free
Add missing cleanup calls for IQ2_S, IQ1_M quantization types and IQ3XS with 512 blocks during quantization cleanup.

* mmap: Fix Windows handle lifetime
Move hMapping from local variable to member variable so it stays alive for the entire lifetime of the mapping.
The file mapping handle must remain valid until UnmapViewOfFile is called.
Fixes cleanup order in destructor.

* Update llama-mmap.cpp

* Update llama-mmap.cpp

Remove trailing whitespace from line 567

3 weeks ago  metal : fix ACC op (#19427)
Georgi Gerganov [Sat, 14 Feb 2026 07:54:03 +0000 (09:54 +0200)]
metal : fix ACC op (#19427)

3 weeks ago  scripts : use official split.py for cpp-httplib (#19588)
Adrien Gallouët [Sat, 14 Feb 2026 07:41:16 +0000 (08:41 +0100)]
scripts : use official split.py for cpp-httplib (#19588)

* scripts : use official split.py for cpp-httplib

Using the official script is safer and ensures the generated code aligns
with the library's standards.

Signed-off-by: Adrien Gallouët <redacted>
* Catch generic errors

Signed-off-by: Adrien Gallouët <redacted>
* Allow print()

Signed-off-by: Adrien Gallouët <redacted>
* Ensure robust cleanup

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  convert : store ffn_gate_inp_shexp as F32 (#19606)
Sigbjørn Skjæret [Sat, 14 Feb 2026 07:17:43 +0000 (08:17 +0100)]
convert : store ffn_gate_inp_shexp as F32 (#19606)

3 weeks ago  build : fix libtool call in build-xcframework.sh (#19605)
Adrien Gallouët [Sat, 14 Feb 2026 05:48:37 +0000 (06:48 +0100)]
build : fix libtool call in build-xcframework.sh (#19605)

Run libtool via xcrun like strip and dsymutil, to have proper tool resolution.

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  vulkan: support L2_NORM with contiguous rows (#19604)
Jeff Bolz [Sat, 14 Feb 2026 05:42:04 +0000 (21:42 -0800)]
vulkan: support L2_NORM with contiguous rows (#19604)

3 weeks ago  vulkan: support GGML_OP_SET (#19584)
Jeff Bolz [Sat, 14 Feb 2026 05:36:38 +0000 (21:36 -0800)]
vulkan: support GGML_OP_SET (#19584)

3 weeks ago  vulkan: Add vendor id for Qualcomm drivers (#19569)
Sophon [Sat, 14 Feb 2026 05:29:17 +0000 (13:29 +0800)]
vulkan: Add vendor id for Qualcomm drivers (#19569)

This commit allows Qualcomm native vulkan driver to be used on Windows
instead of Mesa Dozen.

3 weeks ago  hexagon: further optimizations and refactoring for flash attention (#19583)
Max Krasnyansky [Sat, 14 Feb 2026 00:27:30 +0000 (16:27 -0800)]
hexagon: further optimizations and refactoring for flash attention (#19583)

* ggml-hexagon: fa improvements

ggml-hexagon: optimize flash attention calculations with improved variable handling

ggml-hexagon: streamline flash attention operations by removing redundant checks for FP32

ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx2 by simplifying variable handling for unused elements

ggml-hexagon: optimize flash attention by changing slope vector type to F16

* hexfa: fixed test-backend-ops failures due to leftover element handling

* hexagon: refactor and optimize fa to use local context struct

* ggml-hexagon: optimize flash-attention using hvx_vec_expf

Use HVX for online softmax.

---------

Co-authored-by: chraac <redacted>
3 weeks ago  github : add missing backends to issue templates (#19603)
Mengsheng Wu [Fri, 13 Feb 2026 23:56:53 +0000 (15:56 -0800)]
github : add missing backends to issue templates (#19603)

3 weeks ago  vulkan: restore -inf check in FA shaders (#19582)
Jeff Bolz [Fri, 13 Feb 2026 19:35:29 +0000 (11:35 -0800)]
vulkan: restore -inf check in FA shaders (#19582)

3 weeks ago  common : update download code (#19573)
Adrien Gallouët [Fri, 13 Feb 2026 14:10:46 +0000 (15:10 +0100)]
common : update download code (#19573)

* common : remove legacy .json to .etag migration code

Signed-off-by: Adrien Gallouët <redacted>
* common : simplify common_download_file_single_online

This commit also forces a redownload if the file exists
but has no .etag file.

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  model: support GLM MoE DSA arch (NOTE: indexer is not yet supported) (#19460)
Xuan-Son Nguyen [Fri, 13 Feb 2026 13:56:53 +0000 (14:56 +0100)]
model: support GLM MoE DSA arch (NOTE: indexer is not yet supported) (#19460)

* model: support GLM MoE DSA arch

* working version

* pyright

* keep indexer tensors

* add indexer gguf params

* loaded now

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* update

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* minor fix and cleanup

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks ago  Fix wrong memcpy length for block_interleave == 4 (#19575)
Alberto Cabrera Pérez [Fri, 13 Feb 2026 12:32:14 +0000 (12:32 +0000)]
Fix wrong memcpy length for block_interleave == 4 (#19575)

3 weeks ago  fix vulkan ggml_acc only works in 3d but not 4d (#19426)
ymcki [Fri, 13 Feb 2026 12:31:37 +0000 (20:31 +0800)]
fix vulkan ggml_acc only works in 3d but not 4d (#19426)

* fix vulkan ggml_acc only works in 3d but not 4d

* removed clamp in test_acc_block

* use the correct stride and its test case

* cuda : fix "supports op" condition

* change src0 to src1 in ggml_vk_acc. Update acc.comp with jeffbolznv's suggestion except to keep the boundary check

* version without boundary check

* revert back to boundary check version

---------

Co-authored-by: Georgi Gerganov <redacted>
3 weeks ago  support --verbose-prompt (#19576)
Sigbjørn Skjæret [Fri, 13 Feb 2026 11:49:10 +0000 (12:49 +0100)]
support --verbose-prompt (#19576)

3 weeks ago  CUDA: loop over ne2*ne3 in case it overflows (#19538)
Aman Gupta [Fri, 13 Feb 2026 11:31:40 +0000 (17:01 +0530)]
CUDA: loop over ne2*ne3 in case it overflows (#19538)

* CUDA: loop over ne2*ne3 in case it overflows

* use fastdiv

3 weeks ago  webui: UI and routing fixes (#19586)
Aleksander Grygier [Fri, 13 Feb 2026 11:31:00 +0000 (12:31 +0100)]
webui: UI and routing fixes (#19586)

* chore: update webui build output

* chore: update webui build output

* fix: Scroll issues in DropdownMenuSearchable

* webui: fix redirect to root ignoring base path

* fix: Word wrapping

* fix: remove obsolete modality UI tests causing CI failures

- Remove VisionModality/AudioModality test stories
- Remove mockServerProps usage and imports
- Simplify Default test (remove dropdown interaction checks)
- Simplify FileAttachments test (remove mocks)

* feat: Improve formatting performance time

---------

Co-authored-by: Pascal <redacted>
3 weeks ago  CUDA: Do not mutate cgraph for fused ADDs (#19566)
Oliver Simons [Fri, 13 Feb 2026 09:37:55 +0000 (10:37 +0100)]
CUDA: Do not mutate cgraph for fused ADDs (#19566)

* Do not mutate cgraph for fused ADDs

1. We should try to minimize in-place changes to the incoming
   ggml_cgraph where possible (those should happen in graph_optimize)
2. Modifying in-place leads to an additional, unnecessary graph capture
   step as we store the properties before modifying the graph in-place
   in the cuda-backend

* Assert ggml_tensor is trivially copyable

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
3 weeks ago  docs : fix broken link and typo (#19560)
Pavan Shinde [Fri, 13 Feb 2026 08:38:09 +0000 (14:08 +0530)]
docs : fix broken link and typo (#19560)

3 weeks ago  model : Kimi Linear fix conv state update (#19531)
ymcki [Fri, 13 Feb 2026 08:10:18 +0000 (16:10 +0800)]
model : Kimi Linear fix conv state update (#19531)

* fix conv state update for llama-server parallel serving

---------

Co-authored-by: Piotr Wilkin (ilintar) <redacted>
3 weeks ago  llama : remove deprecated codecvt (#19565)
Adrien Gallouët [Fri, 13 Feb 2026 05:43:53 +0000 (06:43 +0100)]
llama : remove deprecated codecvt (#19565)

Using the same conversion function ensures a consistent matching between
the regex pattern and the text.

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  vendor : update BoringSSL to 0.20260211.0 (#19562)
Adrien Gallouët [Fri, 13 Feb 2026 05:43:26 +0000 (06:43 +0100)]
vendor : update BoringSSL to 0.20260211.0 (#19562)

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  memory : fix kv cache size for hybrid models (#19559)
Georgi Gerganov [Fri, 13 Feb 2026 05:36:24 +0000 (07:36 +0200)]
memory : fix kv cache size for hybrid models (#19559)

3 weeks ago  metal : improve concurrency (#19555)
Georgi Gerganov [Fri, 13 Feb 2026 05:35:57 +0000 (07:35 +0200)]
metal : improve concurrency (#19555)

3 weeks ago  metal : support GGML_OP_SET (#19548)
Georgi Gerganov [Fri, 13 Feb 2026 05:34:52 +0000 (07:34 +0200)]
metal : support GGML_OP_SET (#19548)

3 weeks ago  hexagon: fix typo in vtcm_needs_release (#19545)
Shupei Fan [Thu, 12 Feb 2026 23:07:49 +0000 (07:07 +0800)]
hexagon: fix typo in vtcm_needs_release (#19545)

3 weeks ago  opencl: add basic support for q4_1 (#19534)
lhez [Thu, 12 Feb 2026 22:52:37 +0000 (14:52 -0800)]
opencl: add basic support for q4_1 (#19534)

* opencl: add q4_1 mv

* opencl: clean up

* opencl: add flattened q4_1 mv

* opencl: clean up

* opencl: add basic q4_1 mm

* opencl: fix whitespace

* opencl: add general q4_0 mm

3 weeks ago  args : add -kvu to llama-parallel (#19577)
Georgi Gerganov [Thu, 12 Feb 2026 19:52:41 +0000 (21:52 +0200)]
args : add -kvu to llama-parallel (#19577)

3 weeks ago  webui: Add switcher to Chat Message UI to show raw LLM output (#19571)
Aleksander Grygier [Thu, 12 Feb 2026 18:55:51 +0000 (19:55 +0100)]
webui: Add switcher to Chat Message UI to show raw LLM output (#19571)

3 weeks ago  vendor : update cpp-httplib (#19537)
Adrien Gallouët [Thu, 12 Feb 2026 15:11:22 +0000 (16:11 +0100)]
vendor : update cpp-httplib (#19537)

Signed-off-by: Adrien Gallouët <redacted>
3 weeks ago  llama : update outdated comment in llama.h (#19428)
Christian Schmitz [Thu, 12 Feb 2026 14:52:57 +0000 (15:52 +0100)]
llama : update outdated comment in llama.h (#19428)

* Updated documentation

Model is no longer a parameter

* llama : fix trailing whitespace in comment

---------

Co-authored-by: Daniel Bevenius <redacted>
3 weeks ago  (webui) FEATURE: Enable adding or injecting System Message into chat (#19556)
Aleksander Grygier [Thu, 12 Feb 2026 12:56:08 +0000 (13:56 +0100)]
(webui) FEATURE: Enable adding or injecting System Message into chat (#19556)

* feat: Enable adding System Prompt per-chat

* fix: Save draft message in Chat Form when adding System Prompt from new chat view

* fix: Proper system message deletion logic

* chore: Formatting

* chore: update webui build output

3 weeks ago  scripts : add support for forks in pr2wt.sh (#19540)
Daniel Bevenius [Thu, 12 Feb 2026 12:14:28 +0000 (13:14 +0100)]
scripts : add support for forks in pr2wt.sh (#19540)

This commit adds support for using the pr2wt.sh (pull request to
workspace) script with forks of upstream llama.cpp.

3 weeks ago  (webui) REFACTOR: UI primitives and polish (#19551)
Aleksander Grygier [Thu, 12 Feb 2026 11:21:00 +0000 (12:21 +0100)]
(webui) REFACTOR: UI primitives and polish (#19551)

* webui: UI primitives and polish (non-MCP)

* chore: update webui build output

3 weeks ago  WebUI Architecture Cleanup (#19541)
Aleksander Grygier [Thu, 12 Feb 2026 10:22:27 +0000 (11:22 +0100)]
WebUI Architecture Cleanup (#19541)

* webui: architecture foundation (non-MCP core refactors)

* chore: update webui build output

3 weeks ago  metal : update sum_rows kernel to support float4 (#19524)
Georgi Gerganov [Thu, 12 Feb 2026 09:35:28 +0000 (11:35 +0200)]
metal : update sum_rows kernel to support float4 (#19524)

3 weeks ago  Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)
Mario Limonciello [Thu, 12 Feb 2026 08:38:35 +0000 (02:38 -0600)]
Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)

There is an upstream problem [1] with AMD's LLVM 22 fork and
rocWMMA 2.2.0 causing compilation issues on devices without
native fp16 support (CDNA devices).

The specialized types aren't resolved properly:
```
/opt/rocm/include/rocwmma/internal/mfma_impl.hpp:2549:37: error: ambiguous partial specializations of 'amdgcn_mfma<__half, __half, __half, 16, 16, 16>'
 2549 |             using ARegsT = typename Impl::ARegsT;
```

Add a workaround to explicitly declare the types and cast when
compiling with HIP and ROCWMMA_FATTN [2].  When this is actually
fixed upstream some guards can be used to detect and wrap the
version that has the fix to only apply when necessary.

Link: https://github.com/ROCm/rocm-libraries/issues/4398
Link: https://github.com/ggml-org/llama.cpp/issues/19269
Signed-off-by: Mario Limonciello <redacted>
3 weeks ago  server : fix typo in README.md for features list (#19510)
RichardScottOZ [Thu, 12 Feb 2026 07:56:25 +0000 (18:26 +1030)]
server : fix typo in README.md for features list (#19510)

extra l for full

3 weeks ago  docs : update path in snapdragon README.md (#19533)
TriDefender [Thu, 12 Feb 2026 07:13:51 +0000 (15:13 +0800)]
docs : update path in snapdragon README.md (#19533)

paths changed so original example didn't work

3 weeks ago  hexagon: further optimization and tuning of matmul and dot kernels (#19407)
Max Krasnyansky [Thu, 12 Feb 2026 07:04:27 +0000 (23:04 -0800)]
hexagon: further optimization and tuning of matmul and dot kernels (#19407)

* ggml-hexagon: implement 2x2 matmul kernel

* hexmm: implement vec_dot_rx2x2 for Q8_0 and MXFP4

* hexagon: fix editor config failures

* hexagon: refactor matmul ops to use context struct and remove wrappers

Also implement vec_dot_f16 2x2

* hexagon: refactor dyn quantizers to use mmctx

* hexagon: remove mm fastdiv from op_ctx

* hexagon: refactor matmul entry point to reduce code duplication

---------

Co-authored-by: Trivikram Reddy <redacted>
3 weeks ago  common : replace deprecated codecvt using parse_utf8_codepoint (#19517)
Adrien Gallouët [Thu, 12 Feb 2026 06:27:52 +0000 (07:27 +0100)]
common : replace deprecated codecvt using parse_utf8_codepoint (#19517)

Signed-off-by: Adrien Gallouët <redacted>
4 weeks ago  opencl: add general Q6_K mm and Q4_K mv (#19347)
lhez [Wed, 11 Feb 2026 18:33:13 +0000 (10:33 -0800)]
opencl: add general Q6_K mm and Q4_K mv (#19347)

* opencl: add general q6_k mm

* opencl: refine condition for q6_K mm

* opencl: add general q4_K mv

* opencl: fix whitespace

4 weeks ago  ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511)
Georgi Gerganov [Wed, 11 Feb 2026 16:58:43 +0000 (18:58 +0200)]
ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511)

* ggml : unary ops support non-cont src0

* metal : support F16 unary ops + fix ELU

4 weeks ago  common : remove unused token util functions (#19506)
Daniel Bevenius [Wed, 11 Feb 2026 16:41:35 +0000 (17:41 +0100)]
common : remove unused token util functions (#19506)

This commit removes two unused functions, `common_lcp` and `common_lcs`.
Their last usage was removed in
commit 33eff4024084d1f0c8441b79f7208a52fad79858 ("server : vision support
via libmtmd"); they are no longer used anywhere in the codebase.

4 weeks ago  model: Add Kimi-K2.5 support (#19170)
AesSedai [Wed, 11 Feb 2026 15:47:30 +0000 (07:47 -0800)]
model: Add Kimi-K2.5 support (#19170)

* Move dequant_model to after the text_config merge
Add new kimi-k2.5 keys to mtmd convert
Update V_MMPROJ tensor mapping for new mm_projector.proj keys
Update V_M_IMP_NORM for new mm_projector.pre_norm key

* Fix a couple of oversights

* Add image support for Kimi-K2.5

* Revert changes to KimiVLForConditionalGeneration

* Fix an assert crash

* Fix permute swapping w / h on accident

* Kimi-K2.5: Use merged QKV for vision

* Kimi-K2.5: pre-convert vision QK to use build_rope_2d

* Kimi-K2.5: support non-interleaved rope for vision

* Kimi-K2.5: fix min / max pixel

* Kimi-K2.5: remove v/o permutes, unnecessary

* Kimi-K2.5: update permute name to match

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Kimi-K2.5: replace build_rope_2d ggml_cont with ggml_view_3d pointers

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  build : fix case in dSYMs path for build-macos [no ci] (#19515)
Daniel Bevenius [Wed, 11 Feb 2026 13:02:29 +0000 (14:02 +0100)]
build : fix case in dSYMs path for build-macos [no ci] (#19515)

This commit fixes an incorrect dSYMs path where the 's' was uppercase
by mistake.

The motivation for fixing this is that it can cause issues on
case-sensitive operating systems.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3630

4 weeks ago  metal : extend l2_norm support for non-cont src0 (#19502)
Georgi Gerganov [Wed, 11 Feb 2026 12:53:19 +0000 (14:53 +0200)]
metal : extend l2_norm support for non-cont src0 (#19502)

4 weeks ago  docs: ban AI for issues and discussions [no CI] (#19512)
Johannes Gäßler [Wed, 11 Feb 2026 11:49:40 +0000 (12:49 +0100)]
docs: ban AI for issues and discussions [no CI] (#19512)

4 weeks ago  common : improve download error reporting (#19491)
Adrien Gallouët [Wed, 11 Feb 2026 08:27:55 +0000 (09:27 +0100)]
common : improve download error reporting (#19491)

Signed-off-by: Adrien Gallouët <redacted>
4 weeks ago  hexagon: Add ARGSORT, DIV, SQR, SQRT, SUM_ROWS, GEGLU (#19406)
Max Krasnyansky [Wed, 11 Feb 2026 07:21:12 +0000 (23:21 -0800)]
hexagon: Add ARGSORT, DIV, SQR, SQRT, SUM_ROWS, GEGLU (#19406)

* hexagon: add ARGSORT op

Co-authored-by: Yarden Tal <redacted>
* hexagon: argsort reject tensors with huge rows for now

* Adding support for DIV,SQR,SQRT,SUM_ROWS ops in hexagon backend

* hexagon : Add GEGLU op

* hexagon: fix editor config check

* hexagon: rewrite and optimize binary ops ADD/SUB/MUL/DIV/ADD_ID to use DMA

---------

Co-authored-by: Yarden Tal <redacted>
Co-authored-by: Manohara Hosakoppa Krishnamurthy <redacted>
4 weeks ago  llama : correct typos 'occured' and 'occurences' (#19414)
thecaptain789 [Wed, 11 Feb 2026 06:05:31 +0000 (06:05 +0000)]
llama : correct typos 'occured' and 'occurences' (#19414)

Co-authored-by: thecaptain789 <redacted>
4 weeks ago  model : fix wavtokenizer embedding notions (#19479)
Georgi Gerganov [Wed, 11 Feb 2026 05:52:20 +0000 (07:52 +0200)]
model : fix wavtokenizer embedding notions (#19479)

4 weeks ago  ggml : extend bin bcast for permuted src1 (#19484)
Georgi Gerganov [Wed, 11 Feb 2026 05:52:00 +0000 (07:52 +0200)]
ggml : extend bin bcast for permuted src1 (#19484)

* tests : extend bin bcast for permuted src1

* cont : extend bin support

* cont : s0 is always 1

* tests : simplify

4 weeks ago  metal : consolidate unary ops (#19490)
Georgi Gerganov [Wed, 11 Feb 2026 05:51:12 +0000 (07:51 +0200)]
metal : consolidate unary ops (#19490)

4 weeks ago  llama : refactor sampling_info to use buffer_view template (#19368)
Daniel Bevenius [Wed, 11 Feb 2026 04:38:13 +0000 (05:38 +0100)]
llama : refactor sampling_info to use buffer_view template (#19368)

* llama : refactor sampling_info to use buffer_view template

This commit updates the sampling_info struct in llama-context to use a
buffer_view template for the logits, probs, sampled tokens, and
candidates buffers.

The motivation for this is to simplify the code, improve type safety
and readability.

4 weeks ago  CUDA : Update CCCL-tag for 3.2 to final release from RC (#19486)
Oliver Simons [Tue, 10 Feb 2026 21:31:19 +0000 (22:31 +0100)]
CUDA : Update CCCL-tag for 3.2 to final release from RC (#19486)

CCCL 3.2 has been released since it was added to llama.cpp as part of
the backend-sampling PR, and it makes sense to update from RC to final
released version.

https://github.com/NVIDIA/cccl/releases/tag/v3.2.0

4 weeks ago  [WebGPU] Plug memory leaks and free resources on shutdown (#19315)
Nikhil Jain [Tue, 10 Feb 2026 16:04:00 +0000 (08:04 -0800)]
[WebGPU] Plug memory leaks and free resources on shutdown (#19315)

* Fix memory leaks in shader lib, backend, backend_context, buffer_context, and webgpu_buf_pool

* Free pools

* Cleanup

* More cleanup

* Run clang-format

* Fix arg-parser and tokenizer test errors that free an unallocated buffer

* Fix device lost callback to not print on device teardown

* Fix include and run clang-format

* remove unused unused

* Update binary ops

---------

Co-authored-by: Reese Levine <redacted>
4 weeks ago  models : support qwen3.5 series (#19468)
JJJYmmm [Tue, 10 Feb 2026 16:00:26 +0000 (00:00 +0800)]
models : support qwen3.5 series (#19468)

* support qwen3.5 series

* remove deepstack for now, and some code clean

* code clean

* add FULL_ATTENTION_INTERVAL metadata

* code clean

* reorder v heads for linear attention to avoid expensive interleaved repeat

4 weeks ago  test: fix IMROPE perf test case (#19465)
Xuan-Son Nguyen [Tue, 10 Feb 2026 13:37:50 +0000 (14:37 +0100)]
test: fix IMROPE perf test case (#19465)

4 weeks ago  ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (#19360)
Alberto Cabrera Pérez [Tue, 10 Feb 2026 10:47:45 +0000 (10:47 +0000)]
ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (#19360)

* First working version of GEMM and GEMV

* interleave loads and compute

* Clang-format

* Added missing fallback. Removed tested TODO.

* Swap M and N to be consistent with the repack template convention

4 weeks ago  ggml : use noexcept overload for is_regular_file in backend registration (#19452)
k4ss4n [Tue, 10 Feb 2026 09:57:48 +0000 (10:57 +0100)]
ggml : use noexcept overload for is_regular_file in backend registration (#19452)

using noexcept std::filesystem::directory_entry::is_regular_file
overload prevents abnormal termination upon throwing an error
(as caused by symlinks to non-existent folders on linux)

Resolves: #18560

4 weeks ago  convert : move experts permutation from Qwen2MoeModel to Qwen3VLMoeTextModel (#19445)
Piotr Wilkin (ilintar) [Tue, 10 Feb 2026 08:01:37 +0000 (09:01 +0100)]
convert : move experts permutation from Qwen2MoeModel to Qwen3VLMoeTextModel (#19445)

* Add special case for Qwen3VLMoe

* Fix down path, remove arrows and checkmarks

* ws

* Moved to Qwen3VL

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  tts : fix typos in README.md [no ci] (#19463)
Daniel Bevenius [Tue, 10 Feb 2026 06:30:41 +0000 (07:30 +0100)]
tts : fix typos in README.md [no ci] (#19463)

4 weeks ago  CANN: Remove unnecessary wrapper for `ggml_backend_buft_is_cann` (#18968)
Raul Torres [Tue, 10 Feb 2026 06:19:30 +0000 (06:19 +0000)]
CANN: Remove unnecessary wrapper for `ggml_backend_buft_is_cann` (#18968)

4 weeks ago  CANN: implement quantized MUL_MAT_ID for MoE models (#19228)
hipudding [Tue, 10 Feb 2026 06:18:59 +0000 (14:18 +0800)]
CANN: implement quantized MUL_MAT_ID for MoE models (#19228)

Implement ggml_cann_mul_mat_id_quant function to support quantized matrix
multiplication for Mixture of Experts (MoE) architectures on CANN backend.

Key features:
- Support Q4_0 and Q8_0 quantized weight formats
- Use IndexSelect to dynamically route expert-specific weights based on indices
- Leverage WeightQuantBatchMatmulV2 for efficient quantized computation
- Handle automatic F16 type conversion for hardware compatibility
- Support both per-expert and broadcast input modes

Implementation details:
- Extract expert weights and scales using CANN IndexSelect operation
- Process each batch and expert combination independently
- Create proper tensor views with correct stride for matmul operations
- Automatic input/output type casting to/from F16 as needed

Testing: All test cases passed for supported types (F32, F16, Q4_0, Q8_0).

4 weeks ago  cuda : extend GGML_OP_PAD to work with non-cont src0 (#19429)
Georgi Gerganov [Tue, 10 Feb 2026 06:07:16 +0000 (08:07 +0200)]
cuda : extend GGML_OP_PAD to work with non-cont src0 (#19429)

* cuda : extend GGML_OP_PAD to work with non-cont src0

* tests : add permuted pad

4 weeks ago  chat: fix case where template accepts type content only (#19419)
Xuan-Son Nguyen [Mon, 9 Feb 2026 21:14:12 +0000 (22:14 +0100)]
chat: fix case where template accepts type content only (#19419)

* chat: fix case where template accepts type content only

* rm stray log

* reuse render_message_to_json

4 weeks ago  mtmd: Implement tiling for LFM2-VL (#19454)
Tarek Dakhran [Mon, 9 Feb 2026 16:30:32 +0000 (17:30 +0100)]
mtmd: Implement tiling for LFM2-VL (#19454)

4 weeks ago  Server: log when converting requests to chat completions format (#19457)
손희준 [Mon, 9 Feb 2026 15:22:57 +0000 (00:22 +0900)]
Server: log when converting requests to chat completions format (#19457)

* Log converting requests

* Print as debug instead of info [no ci]

---------

Co-authored-by: openingnow <>
4 weeks ago  spec : remove check rate (#19377)
Sascha Rogmann [Mon, 9 Feb 2026 13:30:50 +0000 (14:30 +0100)]
spec : remove check rate (#19377)

* spec: remove parameter spec-ngram-check-rate

* spec : renamed statistics vars

* spec : add n_call_begin, n_call_accept

* spec : don't enable key-map-stats

4 weeks ago  ci : add metal server workflows (#19293)
Georgi Gerganov [Mon, 9 Feb 2026 13:09:30 +0000 (15:09 +0200)]
ci : add metal server workflows (#19293)

* ci : add metal server workflows

* cont : try fix python init

* cont : move to a separate workflow that runs only on master

* cont : fix num jobs

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  revert : "[Model] Qwen3.5 dense and MoE support (no vision) (#19435)" (#19453)
Georgi Gerganov [Mon, 9 Feb 2026 12:57:51 +0000 (14:57 +0200)]
revert : "[Model] Qwen3.5 dense and MoE support (no vision) (#19435)" (#19453)

This reverts commit 39bf692af1cba2a1072e4a42425611bf1ec2807d.

4 weeks ago  ggml-virtgpu: add backend documentation (#19354)
Kevin Pouget [Mon, 9 Feb 2026 12:15:42 +0000 (13:15 +0100)]
ggml-virtgpu: add backend documentation (#19354)

* ggml-virtgpu: add backend documentation

Assisted-by-AI: Claude Code

* CODEOWNERS: add /docs/backend/GGML-VirtGPU/ -> kpouget

* README: add the link to docs/backend/GGML-VirtGPU/ggml-virt.md

* docs/ggml-virt: add link to testing + configuration

* Revert "CODEOWNERS: add /docs/backend/GGML-VirtGPU/ -> kpouget"

This reverts commit 8ece8e72e24d305f308505c08ebb75804546374e.

* drop the ggml- prefix

* s/ggerganov/ggml-org

* Relocate VirtGPU.md

* reorganize the text

* turn the ASCII diagram into a Mermaid diagram

* README.md: update the link to the main doc

4 weeks ago  cmake : add variable to skip installing tests (#19370)
Hugo [Mon, 9 Feb 2026 06:12:02 +0000 (06:12 +0000)]
cmake : add variable to skip installing tests (#19370)

When packaging downstream, there's usually little point in installing
tests. The default behaviour remains the same.

4 weeks ago  [Model] Qwen3.5 dense and MoE support (no vision) (#19435)
Piotr Wilkin (ilintar) [Sun, 8 Feb 2026 23:24:08 +0000 (00:24 +0100)]
[Model] Qwen3.5 dense and MoE support (no vision) (#19435)

* Unified delta net handling

* Remove old methods.

* Refactor and optimize

* Adapt autoregressive version from @ymcki

* Change to decay mask approach

* Fix bad permute

* Qwen 3.5 support

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Further fixes

* Use inheritance, remove unneeded conts

* Not like this!

* Remove ggml.h explicit import

* Remove transformers, fix the views

* ACTUALLY fix views, make super calls explicit in conversion.

* Fix conversion again

* Remove extra ggml.h imports

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  CUDA: Fix non-contig rope (#19338)
Oliver Simons [Sun, 8 Feb 2026 13:12:51 +0000 (14:12 +0100)]
CUDA: Fix non-contig rope (#19338)

* Rename variables + fix rope_neox

Seems memory layout is shared with Vulkan so we can port fix from
https://github.com/ggml-org/llama.cpp/pull/19299

* Fix rope_multi

* Fix rope_vision

* Fix rope_norm

* Rename ne* to ne0* for consistent variable naming

* cont : consistent stride names

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks ago  rpc : update from common.cpp (#19400)
Adrien Gallouët [Sun, 8 Feb 2026 08:06:45 +0000 (09:06 +0100)]
rpc : update from common.cpp (#19400)

Signed-off-by: Adrien Gallouët <redacted>
4 weeks ago  server : improve context checkpoint logic (#19408)
Georgi Gerganov [Sun, 8 Feb 2026 07:40:04 +0000 (09:40 +0200)]
server : improve context checkpoint logic (#19408)

4 weeks ago  llama-quantize : cleanup `--help` output (#19317)
ddh0 [Sun, 8 Feb 2026 07:22:38 +0000 (01:22 -0600)]
llama-quantize : cleanup `--help` output (#19317)

* cleanup `llama-quantize --help` output

some much needed TLC

* remove future argument

oops, spoiler

* cleanup of cleanup

4 weeks ago ci : remove server job from webui and move slow test (#19424)
Sigbjørn Skjæret [Sun, 8 Feb 2026 00:20:00 +0000 (01:20 +0100)]
ci : remove server job from webui and move slow test (#19424)

* remove server job from webui and move slow test

* use pip-install option

4 weeks ago ci : use -j param correctly when building with sanitizers (#19411)
Georgi Gerganov [Sat, 7 Feb 2026 22:50:47 +0000 (00:50 +0200)]
ci : use -j param correctly when building with sanitizers (#19411)

* ci : use less jobs when building with sanitizers

* cont : fix nproc

* cont : fix the fix

* cont : simplify

4 weeks ago metal : consolidate bin kernels (#19390)
Georgi Gerganov [Sat, 7 Feb 2026 08:35:56 +0000 (10:35 +0200)]
metal : consolidate bin kernels (#19390)

* metal : refactor bin kernels

* cont

* cont : fix cv

4 weeks ago metal : fix event synchronization in cpy_tensor_async (#19402)
Georgi Gerganov [Sat, 7 Feb 2026 05:37:15 +0000 (07:37 +0200)]
metal : fix event synchronization in cpy_tensor_async (#19402)

4 weeks ago model : support Step3.5-Flash (#19283)
forforever73 [Fri, 6 Feb 2026 20:06:14 +0000 (04:06 +0800)]
model : support Step3.5-Flash (#19283)

* Support Step3.5-Flash

* fix: norm.weight + 1 (HF zero_centered=true)

* step35: simplify GGUF conversion + drop redundant rope KVs

* Address review feedback

* rename limits -> clamp

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
* rename swiglu limits -> swiglu clamp in LLM_KV

* avoid CI fail

* Apply suggestions from code review

* Apply suggestions from code review

* disabled KV shifting for LLM_ARCH_STEP35

* Apply suggestions from code review

* mistakenly removed cmath

* add model size && apply missed suggestion

* assert partial_rotary_factors

* fix CI errors

* load freq_base_swa

---------

Co-authored-by: lvyichen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago gguf-py : bump sentencepiece version (#19319)
Alex Trotta [Fri, 6 Feb 2026 20:05:19 +0000 (15:05 -0500)]
gguf-py : bump sentencepiece version (#19319)

* gguf-py: Bump sentencepiece version

A newer version has been out for a while that addresses the issues mentioned in https://github.com/ggml-org/llama.cpp/pull/14200. There is a long chain of reasons for this change, but the short version is that it lets people who use both `sentencepiece` and `gguf` take advantage of those fixes. On conda-forge, the version is currently pinned (since there is no notion of optional dependencies).

Regardless, I don't think this should be too controversial.

* review feedback

4 weeks ago ggml-webgpu: JIT compile binary operators and handle binding overlaps (#19310)
Abhijit Ramesh [Fri, 6 Feb 2026 18:33:30 +0000 (10:33 -0800)]
ggml-webgpu: JIT compile binary operators and handle binding overlaps (#19310)

* ggml webgpu: port binary operators to use pre-wgsl

* Add binary.wgsl: unified shader with conditionals for all 4 ops

* Add gen_binary_shaders.cpp: build tool for using pre_wgsl preprocessor

* Remove bin_op.tmpl.wgsl and binary.wgsl (Python template)

* Update CMake to generate binary operator shaders at build time

* ggml-webgpu: migrate binary ops to JIT compilation with overlap handling

* port binary operators from AOT to pre-wgsl JIT compilation

* add src1=dst overlap handling for binary ops

* use compile-time workgroup size defines instead of runtime overrides

* ggml-webgpu: complete overlap handling for binary ops

* add support for inplace & overlap case in binding setup

* restructure conditional logic to handle all overlap cases

* ensure all buffer bindings are correctly assigned for edge cases

* ggml-webgpu: remove unused binary overlap cases

Remove src0==src1 binary overlap case that never occurs in practice.

* keep INPLACE (src0==dst), OVERLAP (src1==dst), DEFAULT

* remove unused src0==src1 and all-same variant

* refactor wgsl to eliminate duplication

4 weeks ago sycl: add F16 support for GGML_OP_CEIL (#19306)
Nechama Krashinski [Fri, 6 Feb 2026 15:13:44 +0000 (17:13 +0200)]
sycl: add F16 support for GGML_OP_CEIL (#19306)

* Fix SYCL CEIL operator

* sycl: implement GGML_OP_CEIL

4 weeks ago tests: reduce number of FA test permutations (#19381)
Jeff Bolz [Fri, 6 Feb 2026 14:50:30 +0000 (08:50 -0600)]
tests: reduce number of FA test permutations (#19381)

Only test non-F16 for head sizes 64 and 72 (one a multiple of QK, one not).

4 weeks ago common : add common_speculative_is_compat() (#19270)
Georgi Gerganov [Fri, 6 Feb 2026 14:47:22 +0000 (16:47 +0200)]
common : add common_speculative_is_compat() (#19270)

* llama : add llama_memory_can_rm_suffix()

* Revert "llama : add llama_memory_can_rm_suffix()"

This reverts commit d30e59b62a15ef4266a6503e3f4eba770aec001b.

* spec : check if the target context is compatible for spec decoding

4 weeks ago unicode : MSVC regex fix (#19340)
Lasse Lauwerys [Fri, 6 Feb 2026 13:56:13 +0000 (14:56 +0100)]
unicode : MSVC regex fix (#19340)

* Fix model loading regex error

* Change comments

* Use const_iterator and remove specializations

---------

Co-authored-by: Alde Rojas <redacted>