git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Reese Levine [Wed, 18 Feb 2026 14:51:02 +0000 (07:51 -0700)]
ggml webgpu: shader library organization (#19530)
* Basic JIT compilation for mul_mat, get_rows, and scale (#17)
* scale jit working
* preliminary working jit for getrows and mulmat, needs refining
* simplified mul_mat preprocessing switch statement
* get_rows fixes, mul_mat refinement
* formatted + last edits
* removed some extraneous prints
* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish
* small fix
* some changes, working
* get_rows and mul_mat jit fixed and working
* Update formatting
* formatting
* Add header
---------
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Start work on all-encompassing shader library
* refactor argmax, set_rows
* Refactor all but flashattention, mat mul
* flashattention and matrix multiplication moved to new format
* clean up preprocessing
* Formatting
* remove duplicate constants
* Split large shaders into multiple static strings
---------
Co-authored-by: neha-ha <redacted>
Aleksander Grygier [Wed, 18 Feb 2026 11:02:02 +0000 (12:02 +0100)]
Pre-MCP UI and architecture cleanup (#19689)
Jeff Bolz [Wed, 18 Feb 2026 09:47:10 +0000 (01:47 -0800)]
vulkan: split mul_mat into multiple dispatches to avoid overflow (#19509)
* vulkan: split mul_mat into multiple dispatches to avoid overflow
The batch dimensions can be greater than the max workgroup count limit,
in which case we need to split into multiple dispatches and pass the base
index through a push constant.
Fall back for the less common p021 and nc variants.
* address feedback
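The splitting described above can be sketched as a small planning helper: when the batch dimension exceeds the per-dispatch workgroup limit, cover it with several dispatches, each carrying its base index (in Vulkan the base would travel via a push constant). This is an illustrative sketch, not the actual shader-side code; `MAX_WORKGROUPS` and `plan_dispatches` are hypothetical names.

```python
MAX_WORKGROUPS = 65535  # illustrative cap, not the real device limit

def plan_dispatches(total_groups: int, max_groups: int = MAX_WORKGROUPS):
    """Return (base_index, group_count) pairs covering total_groups.

    Each pair corresponds to one dispatch; base_index is what the real
    implementation would pass to the shader through a push constant.
    """
    plan = []
    base = 0
    while base < total_groups:
        count = min(max_groups, total_groups - base)
        plan.append((base, count))
        base += count
    return plan
```

For example, `plan_dispatches(150000)` yields three dispatches whose counts sum to 150000, the first two at the cap and the last covering the remainder.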
Adrien Gallouët [Wed, 18 Feb 2026 07:03:01 +0000 (08:03 +0100)]
common : make small string helpers as inline functions (#19693)
Also use string_view where it makes sense and fix some corner cases.
Signed-off-by: Adrien Gallouët <redacted>
shaofeiqi [Tue, 17 Feb 2026 22:47:18 +0000 (14:47 -0800)]
opencl: refactor expm1 and softplus (#19404)
* opencl: refactor expm1
* opencl: refactor softplus
* opencl: use h for half literals
---------
Co-authored-by: Li He <redacted>
shaofeiqi [Tue, 17 Feb 2026 21:56:09 +0000 (13:56 -0800)]
opencl: optimize mean and sum_row kernels (#19614)
* opencl: optimize mean and sum_row kernels
* opencl: add comment for max subgroups
* opencl: format
---------
Co-authored-by: Li He <redacted>
Daniel Bevenius [Tue, 17 Feb 2026 19:43:22 +0000 (20:43 +0100)]
model-conversion : add option to print tensor values (#19692)
This commit updates the tensor-info.py script to support the option to
print the first N values of a tensor when displaying its information.
The motivation for this is that it can be useful to inspect some actual
values in addition to the shapes of the tensors.
Aleksander Grygier [Tue, 17 Feb 2026 12:47:45 +0000 (13:47 +0100)]
Pre-MCP UI and architecture cleanup (#19685)
* webui: extract non-MCP changes from mcp-mvp review split
* webui: extract additional pre-MCP UI and architecture cleanup
* chore: update webui build output
Talha Can Havadar [Tue, 17 Feb 2026 11:22:46 +0000 (12:22 +0100)]
ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609)
When LTO is enabled in the build environment it forces all builds to have LTO
in place. But the feature detection logic is fragile, and LTO was causing
illegal instruction errors. This disables LTO for the feature
detection code to prevent cross-module optimization from inlining
architecture-specific instructions into the score function. Without this,
LTO can cause SIGILL when loading backends on older CPUs (e.g., loading the
power10 backend on power9 crashes before the feature check runs).
Georgi Gerganov [Tue, 17 Feb 2026 10:31:49 +0000 (12:31 +0200)]
cuda : enable CUDA graphs for MMID 1 <= BS <= 4 (#19645)
* cuda : enable CUDA graphs for MMID BS <= 4
* cont : add stream capture check
Co-authored-by: Oliver Simons <redacted>
* cont : add MMVQ_MMID_MAX_BATCH_SIZE
---------
Co-authored-by: Oliver Simons <redacted>
Daniel Bevenius [Tue, 17 Feb 2026 09:46:53 +0000 (10:46 +0100)]
model-conversion : make printing of config values optional (#19681)
* model-conversion : make printing of config values optional
This commit updates run-org-model.py to make the printing of model
configuration values optional.
The motivation for this change is that not all models have these
configuration values defined, and those that do not will error when
running this script. With these changes we only print the values if they
exist, falling back to a default value otherwise.
We could optionally just remove them but it can be useful to see these
values when running the original model.
Sigbjørn Skjæret [Tue, 17 Feb 2026 08:30:31 +0000 (09:30 +0100)]
ci : bump komac version (#19682)
Adrien Gallouët [Tue, 17 Feb 2026 07:37:07 +0000 (08:37 +0100)]
build : link ws2_32 as PUBLIC on Windows (#19666)
Signed-off-by: Adrien Gallouët <redacted>
Adrien Gallouët [Tue, 17 Feb 2026 07:36:45 +0000 (08:36 +0100)]
build : cleanup library linking logic (#19665)
Signed-off-by: Adrien Gallouët <redacted>
DAN™ [Mon, 16 Feb 2026 21:49:57 +0000 (16:49 -0500)]
convert : add JoyAI-LLM-Flash (#19651)
* convert_hf_to_gguf: add JoyAI-LLM-Flash tokenizer hash mapping to deepseek-v3
* llama-vocab: create a new pre-tokenizer name for joyai-llm.
* add missing vocab type section
* Update convert_hf_to_gguf_update.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
AesSedai [Mon, 16 Feb 2026 16:44:44 +0000 (08:44 -0800)]
perplexity: add proper batching (#19661)
Ivan Chikish [Mon, 16 Feb 2026 15:52:24 +0000 (18:52 +0300)]
common : inline functions (#18639)
Judd [Mon, 16 Feb 2026 15:43:34 +0000 (23:43 +0800)]
ggml : make `ggml_is_view` as API (#19539)
* make `ggml_is_view` as API
* introduce `ggml_aux_is_view` as inline version for internal use.
* change `ggml_aux_is_view` to `ggml_impl_is_view`
Saurabh Dash [Mon, 16 Feb 2026 15:28:46 +0000 (10:28 -0500)]
model: Add support for Tiny Aya Models (#19611)
* changes for tiny aya
* changes to hash
* changes to vocab
* fix some tokenizer regex edge cases
* update comment
* add some comments for regex
* Apply suggestion from @ngxson
---------
Co-authored-by: Xuan-Son Nguyen <redacted>
Adrien Gallouët [Mon, 16 Feb 2026 15:06:48 +0000 (16:06 +0100)]
build : rework llama_option_depr to handle LLAMA_CURL (#19658)
Signed-off-by: Adrien Gallouët <redacted>
Mario Limonciello [Mon, 16 Feb 2026 13:46:08 +0000 (07:46 -0600)]
Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (#19591)
Avoids issues with ROCm 6.4.4.
Closes: https://github.com/ggml-org/llama.cpp/issues/19580
Fixes: 6845f7f87 ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)")
Signed-off-by: Mario Limonciello (AMD) <redacted>
Georgi Gerganov [Mon, 16 Feb 2026 12:35:04 +0000 (14:35 +0200)]
models : deduplicate delta-net graphs for Qwen family (#19597)
* models : add llm_build_delta_net_base
* cont : keep qwen35 and qwen35moe graphs intact
* cont : add comments
Georgi Gerganov [Mon, 16 Feb 2026 07:21:11 +0000 (09:21 +0200)]
graph : fix KQ mask, lora, cvec reuse checks (#19644)
* graph : fix KQ mask reuse condition
* cont : dedup KQ mask build and can_reuse
* cont : fix build
* graph : fix adapter check for reuse
abhijain1204fujitsu [Mon, 16 Feb 2026 06:38:43 +0000 (12:08 +0530)]
ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (#19132)
* Updated repack.cpp
* Updated repack.cpp
* Updated repack.cpp
* Added if condition to support only vector length 256.
* Changed the format removed comments and duplicate variable
* If SVE 256 not present then was using generic function to compute, hence slowing the performance.
So added code if SVE 256 is not present then use NEON code.
* Code format change suggestion
---------
Co-authored-by: Vithule, Prashant <redacted>
Georgi Gerganov [Sun, 15 Feb 2026 20:23:13 +0000 (22:23 +0200)]
sync : ggml
Georgi Gerganov [Sun, 15 Feb 2026 20:21:04 +0000 (22:21 +0200)]
ggml : bump version to 0.9.7 (ggml/1425)
Georgi Gerganov [Sat, 7 Feb 2026 07:58:02 +0000 (09:58 +0200)]
ggml : bump version to 0.9.6 (ggml/1423)
David Friehs [Sun, 15 Feb 2026 17:08:42 +0000 (18:08 +0100)]
cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (#19624)
* cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization
- load all 8 int8 for a grid position in one load
- calculate signs via popcnt instead of fetching from ksigns table
- broadcast signs to drop individual shift/mask
* cuda: iq2xxs: simplify sum scaling
express `(sum * scale + sum / 2) / 4` as `(sum * (scale * 2 + 1)) / 8`
express `((aux32 >> 28) * 2 + 1)` as `(aux32 >> 27 | 1)`
saves 3 registers for mul_mat_vec_q (152 -> 149) according to nsight
AFAICT no overflow can occur here as iq2xxs values are far too small
* uint -> uint32_t
error: identifier "uint" is undefined
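The two rewrites in the commit message are pure integer identities, which a brute-force sweep can confirm. This is an illustrative check only, assuming non-negative sums (the commit notes iq2xxs sums are far too small to overflow, so truncating division behaves like floor here); `old_scale`/`new_scale` are hypothetical names.

```python
def old_scale(sum_, scale):
    # original expression: (sum * scale + sum / 2) / 4
    return (sum_ * scale + sum_ // 2) // 4

def new_scale(sum_, scale):
    # rewritten expression: (sum * (scale * 2 + 1)) / 8
    return (sum_ * (scale * 2 + 1)) // 8

# first identity holds for all non-negative sums and scales
for s in range(512):
    for sc in range(16):
        assert old_scale(s, sc) == new_scale(s, sc)

# second identity: (aux32 >> 28) * 2 + 1 == (aux32 >> 27) | 1 for uint32,
# because bit 27 is overwritten by the forced low bit either way
for aux32 in range(0, 1 << 32, 65537):  # sparse sweep of 32-bit values
    assert (aux32 >> 28) * 2 + 1 == (aux32 >> 27) | 1
```

The second rewrite works because `aux32 >> 27` equals `(aux32 >> 28) * 2` plus bit 27, and `| 1` forces the low bit to 1 regardless of what bit 27 was.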
Aaron Teo [Sun, 15 Feb 2026 16:33:34 +0000 (00:33 +0800)]
docs: update s390x build docs (#19643)
Adrien Gallouët [Sun, 15 Feb 2026 14:38:50 +0000 (15:38 +0100)]
build : remove LLAMA_HTTPLIB option (#19623)
This option was introduced as a workaround because cpp-httplib could not
build on visionOS. Since it has been fixed and now compiles on all platforms,
we can remove it and simplify many things.
Signed-off-by: Adrien Gallouët <redacted>
Daniel Bevenius [Sun, 15 Feb 2026 12:59:38 +0000 (13:59 +0100)]
cmake : check if KleidiAI API has been fetched (#19640)
This commit addresses a build issue with the KleidiAI backend when
building multiple CPU backends. Commit
3a00c98584e42a20675b6569d81beadb282b0952 ("cmake : fix KleidiAI install
target failure with EXCLUDE_FROM_ALL") introduced a change where
FetchContent_Populate is called instead of FetchContent_MakeAvailable;
the latter does handle this case (it is idempotent but
FetchContent_Populate is not).
I missed this during my review and I should not have committed without
verifying the CI failure, sorry about that.
Georgi Gerganov [Sun, 15 Feb 2026 12:57:40 +0000 (14:57 +0200)]
context : fix output reorder with backend sampling (#19638)
Georgi Gerganov [Sun, 15 Feb 2026 12:56:35 +0000 (14:56 +0200)]
ggml : avoid UB in gemm ukernel (#19642)
Aaron Teo [Sun, 15 Feb 2026 10:20:35 +0000 (18:20 +0800)]
ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399)
Aman Gupta [Sun, 15 Feb 2026 05:39:24 +0000 (11:09 +0530)]
ggml-cpu: FA add GEMM microkernel (#19422)
* ggml-cpu: FA add GEMM microkernel
* add guard for sizeless vector types
* fix case where DV % GGML_F32_EPR !=0
* move memset out of the loop
* move another memset out of the loop
* use RM=4 for arm
* simd_gemm: convert everything to int
* convert everything to size_t to avoid warnings
* fixup
* add pragma for ignoring aggressive loop optimizations
SamareshSingh [Sun, 15 Feb 2026 05:22:53 +0000 (23:22 -0600)]
cmake : fix KleidiAI install target failure with EXCLUDE_FROM_ALL (#19581)
* cmake: fix KleidiAI install target failure with EXCLUDE_FROM_ALL
Fix for the bug #19501 by adding EXCLUDE_FROM_ALL to FetchContent_Declare. This properly excludes KleidiAI from both build and install targets, preventing install failures when GGML_CPU_KLEIDIAI=ON is used.
The KleidiAI source files are still compiled into libggml-cpu.so, preserving all functionality.
* addressed code review comments
Sigbjørn Skjæret [Sat, 14 Feb 2026 21:22:32 +0000 (22:22 +0100)]
convert : ensure all models handle new experts count (#19621)
* ensure all models handle new experts count
* revert removal for PhiMoeModel, does not inherit from base
Anav Prasad [Sat, 14 Feb 2026 13:07:00 +0000 (05:07 -0800)]
mtmd : Add Nemotron Nano 12B v2 VL support (#19547)
* nemotron nano v2 vlm support added
* simplified code; addressed reviews
* pre-downsample position embeddings during GGUF conversion for fixed input size
Georgi Gerganov [Sat, 14 Feb 2026 10:57:36 +0000 (12:57 +0200)]
models : optimize qwen3next graph (#19375)
* models : optimizing qwen3next graph
* cont
* wip
* wip
* wip
* wip
* wip
* wip
* wip
* wip
* wip
* wip
* cont : remove redundant q, g chunking
* minor
* minor
* avoid passing masks around
* avoid concats during chunking
* naming + shapes
* update names and use prefix to disable CUDA graphs
Adrien Gallouët [Sat, 14 Feb 2026 10:22:57 +0000 (11:22 +0100)]
ggml : fix GGML_DEBUG with OpenMP (#19599)
last_graph is only available without OpenMP, but
ggml_graph_compute_thread() is called in both cases.
Signed-off-by: Adrien Gallouët <redacted>
iMil [Sat, 14 Feb 2026 08:47:01 +0000 (09:47 +0100)]
NetBSD build support (#19589)
Aleksander Grygier [Sat, 14 Feb 2026 08:06:41 +0000 (09:06 +0100)]
webui: Architecture and UI improvements (#19596)
agent-enemy-2 [Sat, 14 Feb 2026 08:06:27 +0000 (03:06 -0500)]
llama : update LoRA API. + fix excessive graph reserves (#19280)
* Refactoring to use new llama_put_adapter_loras
* cont : alternative lora API
---------
Co-authored-by: Jake Chavis <redacted>
Co-authored-by: Georgi Gerganov <redacted>
George [Sat, 14 Feb 2026 08:05:12 +0000 (10:05 +0200)]
mmap: Fix Windows handle lifetime (#19598)
* ggml: added cleanups in ggml_quantize_free
Add missing cleanup calls for IQ2_S, IQ1_M quantization types and IQ3XS with 512 blocks during quantization cleanup.
* mmap: Fix Windows handle lifetime
Move hMapping from local variable to member variable so it stays alive for the entire lifetime of the mapping.
The file mapping handle must remain valid until UnmapViewOfFile is called.
Fixes cleanup order in destructor.
* Update llama-mmap.cpp
* Update llama-mmap.cpp
Remove trailing whitespace from line 567
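The fix above is a classic resource-lifetime pattern: the mapping handle must be a member that outlives the mapped view, and teardown must unmap the view before closing the handle. A hedged sketch with stubs in place of the Win32 calls (the ordering, not the API, is the point; all names here are illustrative):

```python
events = []

class StubHandle:
    """Stands in for the Win32 file-mapping handle."""
    def close(self):
        events.append("CloseHandle")

class StubView:
    """Stands in for the mapped view of the file."""
    def unmap(self):
        events.append("UnmapViewOfFile")

class Mapping:
    def __init__(self):
        # member, not a local: the handle stays alive for the whole
        # lifetime of the mapping (the bug was letting it go out of scope)
        self._handle = StubHandle()
        self._view = StubView()

    def close(self):
        self._view.unmap()    # unmap the view first
        self._handle.close()  # only then release the mapping handle

m = Mapping()
m.close()
assert events == ["UnmapViewOfFile", "CloseHandle"]
```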
Georgi Gerganov [Sat, 14 Feb 2026 07:54:03 +0000 (09:54 +0200)]
metal : fix ACC op (#19427)
Adrien Gallouët [Sat, 14 Feb 2026 07:41:16 +0000 (08:41 +0100)]
scripts : use official split.py for cpp-httplib (#19588)
* scripts : use official split.py for cpp-httplib
Using the official script is safer and ensures the generated code aligns
with the library's standards.
Signed-off-by: Adrien Gallouët <redacted>
* Catch generic errors
Signed-off-by: Adrien Gallouët <redacted>
* Allow print()
Signed-off-by: Adrien Gallouët <redacted>
* Ensure robust cleanup
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Sigbjørn Skjæret [Sat, 14 Feb 2026 07:17:43 +0000 (08:17 +0100)]
convert : store ffn_gate_inp_shexp as F32 (#19606)
Adrien Gallouët [Sat, 14 Feb 2026 05:48:37 +0000 (06:48 +0100)]
build : fix libtool call in build-xcframework.sh (#19605)
Run libtool via xcrun like strip and dsymutil, to have proper tool resolution.
Signed-off-by: Adrien Gallouët <redacted>
Jeff Bolz [Sat, 14 Feb 2026 05:42:04 +0000 (21:42 -0800)]
vulkan: support L2_NORM with contiguous rows (#19604)
Jeff Bolz [Sat, 14 Feb 2026 05:36:38 +0000 (21:36 -0800)]
vulkan: support GGML_OP_SET (#19584)
Sophon [Sat, 14 Feb 2026 05:29:17 +0000 (13:29 +0800)]
vulkan: Add vendor id for Qualcomm drivers (#19569)
This commit allows the Qualcomm native Vulkan driver to be used on Windows
instead of Mesa Dozen.
Max Krasnyansky [Sat, 14 Feb 2026 00:27:30 +0000 (16:27 -0800)]
hexagon: further optimizations and refactoring for flash attention (#19583)
* ggml-hexagon: fa improvements
ggml-hexagon: optimize flash attention calculations with improved variable handling
ggml-hexagon: streamline flash attention operations by removing redundant checks for FP32
ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx2 by simplifying variable handling for unused elements
ggml-hexagon: optimize flash attention by changing slope vector type to F16
* hexfa: fixed test-backend-ops failures due to leftover element handling
* hexagon: refactor and optimize fa to use local context struct
* ggml-hexagon: optimize flash-attention using hvx_vec_expf
Use HVX for online softmax.
---------
Co-authored-by: chraac <redacted>
Mengsheng Wu [Fri, 13 Feb 2026 23:56:53 +0000 (15:56 -0800)]
github : add missing backends to issue templates (#19603)
Jeff Bolz [Fri, 13 Feb 2026 19:35:29 +0000 (11:35 -0800)]
vulkan: restore -inf check in FA shaders (#19582)
Adrien Gallouët [Fri, 13 Feb 2026 14:10:46 +0000 (15:10 +0100)]
common : update download code (#19573)
* common : remove legacy .json to .etag migration code
Signed-off-by: Adrien Gallouët <redacted>
* common : simplify common_download_file_single_online
This commit also forces a redownload if the file exists
but has no .etag file.
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Xuan-Son Nguyen [Fri, 13 Feb 2026 13:56:53 +0000 (14:56 +0100)]
model: support GLM MoE DSA arch (NOTE: indexer is not yet supported) (#19460)
* model: support GLM MoE DSA arch
* working version
* pyright
* keep indexer tensors
* add indexer gguf params
* loaded now
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* update
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* minor fix and cleanup
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Alberto Cabrera Pérez [Fri, 13 Feb 2026 12:32:14 +0000 (12:32 +0000)]
Fix wrong memcpy length for block_interleave == 4 (#19575)
ymcki [Fri, 13 Feb 2026 12:31:37 +0000 (20:31 +0800)]
fix vulkan ggml_acc only works in 3d but not 4d (#19426)
* fix vulkan ggml_acc only works in 3d but not 4d
* removed clamp in test_acc_block
* use the correct stride and its test case
* cuda : fix "supports op" condition
* change src0 to src1 in ggml_vk_acc. Update acc.comp with jeffbolznv's suggestion except to keep the boundary check
* version without boundary check
* revert back to boundary check version
---------
Co-authored-by: Georgi Gerganov <redacted>
Sigbjørn Skjæret [Fri, 13 Feb 2026 11:49:10 +0000 (12:49 +0100)]
support --verbose-prompt (#19576)
Aman Gupta [Fri, 13 Feb 2026 11:31:40 +0000 (17:01 +0530)]
CUDA: loop over ne2*ne3 in case it overflows (#19538)
* CUDA: loop over ne2*ne3 in case it overflows
* use fastdiv
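The overflow fix above is the familiar grid-stride pattern: when `ne2*ne3` exceeds the maximum grid dimension, launch a capped number of blocks and let each block loop over several `(i2, i3)` slices. A hedged sketch of the indexing (names and the `MAX_GRID` value are illustrative, not the CUDA code; the fastdiv mentioned in the commit replaces the divmod below with a cheaper multiply-shift):

```python
MAX_GRID = 65535  # illustrative grid-dimension cap

def slices_for_block(block_id: int, ne2: int, ne3: int):
    """Return the (i2, i3) slices handled by one block."""
    total = ne2 * ne3
    grid = min(total, MAX_GRID)  # number of blocks actually launched
    out = []
    for linear in range(block_id, total, grid):  # grid-stride loop
        i3, i2 = divmod(linear, ne2)  # fastdiv would accelerate this
        out.append((i2, i3))
    return out
```

Summed over all launched blocks, every slice is visited exactly once, whether or not `total` fits in one grid.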
Aleksander Grygier [Fri, 13 Feb 2026 11:31:00 +0000 (12:31 +0100)]
webui: UI and routing fixes (#19586)
* chore: update webui build output
* chore: update webui build output
* fix: Scroll issues in DropdownMenuSearchable
* webui: fix redirect to root ignoring base path
* fix: Word wrapping
* fix: remove obsolete modality UI tests causing CI failures
- Remove VisionModality/AudioModality test stories
- Remove mockServerProps usage and imports
- Simplify Default test (remove dropdown interaction checks)
- Simplify FileAttachments test (remove mocks)
* feat: Improve formatting performance time
---------
Co-authored-by: Pascal <redacted>
Oliver Simons [Fri, 13 Feb 2026 09:37:55 +0000 (10:37 +0100)]
CUDA: Do not mutate cgraph for fused ADDs (#19566)
* Do not mutate cgraph for fused ADDs
1. We should try to minimize in-place changes to the incoming
ggml_cgraph where possible (those should happen in graph_optimize)
2. Modifying in-place leads to an additional, unnecessary graph capture
step as we store the properties before modifying the graph in-place
in the cuda-backend
* Assert ggml_tensor is trivially copyable
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Co-authored-by: Aman Gupta <redacted>
---------
Co-authored-by: Aman Gupta <redacted>
Pavan Shinde [Fri, 13 Feb 2026 08:38:09 +0000 (14:08 +0530)]
docs : fix broken link and typo (#19560)
ymcki [Fri, 13 Feb 2026 08:10:18 +0000 (16:10 +0800)]
model : Kimi Linear fix conv state update (#19531)
* fix conv state update for llama-server parallel serving
---------
Co-authored-by: Piotr Wilkin (ilintar) <redacted>
Adrien Gallouët [Fri, 13 Feb 2026 05:43:53 +0000 (06:43 +0100)]
llama : remove deprecated codecvt (#19565)
Using the same conversion function ensures a consistent matching between
the regex pattern and the text.
Signed-off-by: Adrien Gallouët <redacted>
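A `parse_utf8_codepoint`-style helper decodes one codepoint from a byte buffer and reports how many bytes it consumed, so the regex matcher and the text walker advance in lockstep. This is an illustrative reimplementation under that assumption, not the actual llama.cpp function:

```python
def parse_utf8_codepoint(data: bytes, pos: int):
    """Decode one UTF-8 codepoint at pos; return (codepoint, bytes_consumed)."""
    b0 = data[pos]
    if b0 < 0x80:                      # ASCII fast path
        return b0, 1
    if b0 >> 5 == 0b110:               # 2-byte sequence
        n, cp = 2, b0 & 0x1F
    elif b0 >> 4 == 0b1110:            # 3-byte sequence
        n, cp = 3, b0 & 0x0F
    elif b0 >> 3 == 0b11110:           # 4-byte sequence
        n, cp = 4, b0 & 0x07
    else:
        raise ValueError("invalid UTF-8 lead byte")
    for b in data[pos + 1:pos + n]:
        if b >> 6 != 0b10:             # continuation bytes are 10xxxxxx
            raise ValueError("invalid UTF-8 continuation byte")
        cp = (cp << 6) | (b & 0x3F)
    return cp, n
```

For example, decoding the two bytes of "é" yields codepoint U+00E9 with 2 bytes consumed.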
Adrien Gallouët [Fri, 13 Feb 2026 05:43:26 +0000 (06:43 +0100)]
vendor : update BoringSSL to 0.20260211.0 (#19562)
Signed-off-by: Adrien Gallouët <redacted>
Georgi Gerganov [Fri, 13 Feb 2026 05:36:24 +0000 (07:36 +0200)]
memory : fix kv cache size for hybrid models (#19559)
Georgi Gerganov [Fri, 13 Feb 2026 05:35:57 +0000 (07:35 +0200)]
metal : improve concurrency (#19555)
Georgi Gerganov [Fri, 13 Feb 2026 05:34:52 +0000 (07:34 +0200)]
metal : support GGML_OP_SET (#19548)
Shupei Fan [Thu, 12 Feb 2026 23:07:49 +0000 (07:07 +0800)]
hexagon: fix typo in vtcm_needs_release (#19545)
lhez [Thu, 12 Feb 2026 22:52:37 +0000 (14:52 -0800)]
opencl: add basic support for q4_1 (#19534)
* opencl: add q4_1 mv
* opencl: clean up
* opencl: add flattened q4_1 mv
* opencl: clean up
* opencl: add basic q4_1 mm
* opencl: fix whitespace
* opencl: add general q4_0 mm
Georgi Gerganov [Thu, 12 Feb 2026 19:52:41 +0000 (21:52 +0200)]
args : add -kvu to llama-parallel (#19577)
Aleksander Grygier [Thu, 12 Feb 2026 18:55:51 +0000 (19:55 +0100)]
webui: Add switcher to Chat Message UI to show raw LLM output (#19571)
Adrien Gallouët [Thu, 12 Feb 2026 15:11:22 +0000 (16:11 +0100)]
vendor : update cpp-httplib (#19537)
Signed-off-by: Adrien Gallouët <redacted>
Christian Schmitz [Thu, 12 Feb 2026 14:52:57 +0000 (15:52 +0100)]
llama : update outdated comment in llama.h (#19428)
* Updated documentation
Model is no longer a parameter
* llama : fix trailing whitespace in comment
---------
Co-authored-by: Daniel Bevenius <redacted>
Aleksander Grygier [Thu, 12 Feb 2026 12:56:08 +0000 (13:56 +0100)]
(webui) FEATURE: Enable adding or injecting System Message into chat (#19556)
* feat: Enable adding System Prompt per-chat
* fix: Save draft message in Chat Form when adding System Prompt from new chat view
* fix: Proper system message deletion logic
* chore: Formatting
* chore: update webui build output
Daniel Bevenius [Thu, 12 Feb 2026 12:14:28 +0000 (13:14 +0100)]
scripts : add support for forks in pr2wt.sh (#19540)
This commit adds support for using the pr2wt.sh (pull request to
workspace) script with forks of upstream llama.cpp.
Aleksander Grygier [Thu, 12 Feb 2026 11:21:00 +0000 (12:21 +0100)]
(webui) REFACTOR: UI primitives and polish (#19551)
* webui: UI primitives and polish (non-MCP)
* chore: update webui build output
Aleksander Grygier [Thu, 12 Feb 2026 10:22:27 +0000 (11:22 +0100)]
WebUI Architecture Cleanup (#19541)
* webui: architecture foundation (non-MCP core refactors)
* chore: update webui build output
Georgi Gerganov [Thu, 12 Feb 2026 09:35:28 +0000 (11:35 +0200)]
metal : update sum_rows kernel to support float4 (#19524)
Mario Limonciello [Thu, 12 Feb 2026 08:38:35 +0000 (02:38 -0600)]
Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)
There is an upstream problem [1] with AMD's LLVM 22 fork and
rocWMMA 2.2.0 causing compilation issues on devices without
native fp16 support (CDNA devices).
The specialized types aren't resolved properly:
```
/opt/rocm/include/rocwmma/internal/mfma_impl.hpp:2549:37: error: ambiguous partial specializations of 'amdgcn_mfma<__half, __half, __half, 16, 16, 16>'
2549 | using ARegsT = typename Impl::ARegsT;
```
Add a workaround to explicitly declare the types and cast when
compiling with HIP and ROCWMMA_FATTN [2]. When this is actually
fixed upstream some guards can be used to detect and wrap the
version that has the fix to only apply when necessary.
Link: https://github.com/ROCm/rocm-libraries/issues/4398
Link: https://github.com/ggml-org/llama.cpp/issues/19269
Signed-off-by: Mario Limonciello <redacted>
RichardScottOZ [Thu, 12 Feb 2026 07:56:25 +0000 (18:26 +1030)]
server : fix typo in README.md for features list (#19510)
extra l for full
TriDefender [Thu, 12 Feb 2026 07:13:51 +0000 (15:13 +0800)]
docs : update path in snapdragon README.md (#19533)
paths changed, so the original example didn't work
Max Krasnyansky [Thu, 12 Feb 2026 07:04:27 +0000 (23:04 -0800)]
hexagon: further optimization and tuning of matmul and dot kernels (#19407)
* ggml-hexagon: implement 2x2 matmul kernel
* hexmm: implement vec_dot_rx2x2 for Q8_0 and MXFP4
* hexagon: fix editor config failures
* hexagon: refactor matmul ops to use context struct and remove wrappers
Also implement vec_dot_f16 2x2
* hexagon: refactor dyn quantizers to use mmctx
* hexagon: remove mm fastdiv from op_ctx
* hexagon: refactor matmul entry point to reduce code duplication
---------
Co-authored-by: Trivikram Reddy <redacted>
Adrien Gallouët [Thu, 12 Feb 2026 06:27:52 +0000 (07:27 +0100)]
common : replace deprecated codecvt using parse_utf8_codepoint (#19517)
Signed-off-by: Adrien Gallouët <redacted>
lhez [Wed, 11 Feb 2026 18:33:13 +0000 (10:33 -0800)]
opencl: add general Q6_K mm and Q4_K mv (#19347)
* opencl: add general q6_k mm
* opencl: refine condition for q6_K mm
* opencl: add general q4_K mv
* opencl: fix whitespace
Georgi Gerganov [Wed, 11 Feb 2026 16:58:43 +0000 (18:58 +0200)]
ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511)
* ggml : unary ops support non-cont src0
* metal : support F16 unary ops + fix ELU
Daniel Bevenius [Wed, 11 Feb 2026 16:41:35 +0000 (17:41 +0100)]
common : remove unused token util functions (#19506)
This commit removes two unused functions `common_lcp` and `common_lcs`.
The last usage of these functions was removed in
Commit
33eff4024084d1f0c8441b79f7208a52fad79858 ("server : vision support
via libmtmd") and are no longer used anywhere in the codebase.
AesSedai [Wed, 11 Feb 2026 15:47:30 +0000 (07:47 -0800)]
model: Add Kimi-K2.5 support (#19170)
* Move dequant_model to after the text_config merge
Add new kimi-k2.5 keys to mtmd convert
Update V_MMPROJ tensor mapping for new mm_projector.proj keys
Update V_M_IMP_NORM for new mm_projector.pre_norm key
* Fix a couple of oversights
* Add image support for Kimi-K2.5
* Revert changes to KimiVLForConditionalGeneration
* Fix an assert crash
* Fix permute swapping w / h by accident
* Kimi-K2.5: Use merged QKV for vision
* Kimi-K2.5: pre-convert vision QK to use build_rope_2d
* Kimi-K2.5: support non-interleaved rope for vision
* Kimi-K2.5: fix min / max pixel
* Kimi-K2.5: remove v/o permutes, unnecessary
* Kimi-K2.5: update permute name to match
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Kimi-K2.5: replace build_rope_2d ggml_cont with ggml_view_3d pointers
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Daniel Bevenius [Wed, 11 Feb 2026 13:02:29 +0000 (14:02 +0100)]
build : fix case in dSYMs path for build-macos [no ci] (#19515)
This commit updates an incorrect dSYMs path where the 's' was uppercased
by mistake.
The motivation for fixing this is that this can cause issues on
case-sensitive operating systems.
Refs: https://github.com/ggml-org/whisper.cpp/pull/3630
Georgi Gerganov [Wed, 11 Feb 2026 12:53:19 +0000 (14:53 +0200)]
metal : extend l2_norm support for non-cont src0 (#19502)
Johannes Gäßler [Wed, 11 Feb 2026 11:49:40 +0000 (12:49 +0100)]
docs: ban AI for issues and discussions [no CI] (#19512)
Adrien Gallouët [Wed, 11 Feb 2026 08:27:55 +0000 (09:27 +0100)]
common : improve download error reporting (#19491)
Signed-off-by: Adrien Gallouët <redacted>
Max Krasnyansky [Wed, 11 Feb 2026 07:21:12 +0000 (23:21 -0800)]
hexagon: Add ARGSORT, DIV, SQR, SQRT, SUM_ROWS, GEGLU (#19406)
* hexagon: add ARGSORT op
Co-authored-by: Yarden Tal <redacted>
* hexagon: argsort reject tensors with huge rows for now
* Adding support for DIV,SQR,SQRT,SUM_ROWS ops in hexagon backend
* hexagon : Add GEGLU op
* hexagon: fix editor config check
* hexagon: rewrite and optimize binary ops ADD/SUB/MUL/DIV/ADD_ID to use DMA
---------
Co-authored-by: Yarden Tal <redacted>
Co-authored-by: Manohara Hosakoppa Krishnamurthy <redacted>
thecaptain789 [Wed, 11 Feb 2026 06:05:31 +0000 (06:05 +0000)]
llama : correct typos 'occured' and 'occurences' (#19414)
Co-authored-by: thecaptain789 <redacted>
Georgi Gerganov [Wed, 11 Feb 2026 05:52:20 +0000 (07:52 +0200)]
model : fix wavtokenizer embedding notions (#19479)
Georgi Gerganov [Wed, 11 Feb 2026 05:52:00 +0000 (07:52 +0200)]
ggml : extend bin bcast for permuted src1 (#19484)
* tests : extend bin bcast for permuted src1
* cont : extend bin support
* cont : s0 is always 1
* tests : simplify
Georgi Gerganov [Wed, 11 Feb 2026 05:51:12 +0000 (07:51 +0200)]
metal : consolidate unary ops (#19490)
Daniel Bevenius [Wed, 11 Feb 2026 04:38:13 +0000 (05:38 +0100)]
llama : refactor sampling_info to use buffer_view template (#19368)
* llama : refactor sampling_info to use buffer_view template
This commit updates the sampling_info struct in llama-context to use a
buffer_view template for the logits, probs, sampled tokens, and
candidates buffers.
The motivation for this is to simplify the code, improve type safety
and readability.
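A `buffer_view`-style template is essentially a typed, bounds-checked (pointer, length) view over an existing buffer, owning nothing and copying nothing, similar in spirit to `std::span`. A hedged Python stand-in (the real C++ template is not shown in the log; all names here are illustrative):

```python
class BufferView:
    """Non-owning, bounds-checked view over a slice of a backing buffer."""

    def __init__(self, backing, offset: int, length: int):
        if offset < 0 or length < 0 or offset + length > len(backing):
            raise IndexError("view exceeds backing buffer")
        self._backing = backing  # no copy: just a reference plus bounds
        self._offset = offset
        self._length = length

    def __len__(self):
        return self._length

    def __getitem__(self, i: int):
        if not 0 <= i < self._length:
            raise IndexError("index out of view bounds")
        return self._backing[self._offset + i]
```

Wrapping the logits, probs, sampled-token, and candidate buffers this way turns silent out-of-bounds pointer arithmetic into an immediate, checkable error.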
Oliver Simons [Tue, 10 Feb 2026 21:31:19 +0000 (22:31 +0100)]
CUDA : Update CCCL-tag for 3.2 to final release from RC (#19486)
CCCL 3.2 has been released since it was added to llama.cpp as part of
the backend-sampling PR, and it makes sense to update from the RC to the
final released version.
https://github.com/NVIDIA/cccl/releases/tag/v3.2.0