git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Tarek Dakhran [Mon, 9 Feb 2026 16:30:32 +0000 (17:30 +0100)]
mtmd: Implement tiling for LFM2-VL (#19454)

손희준 [Mon, 9 Feb 2026 15:22:57 +0000 (00:22 +0900)]
Server: log when converting requests to chat completions format (#19457)

* Log converting requests

* Print as debug instead of info [no ci]

---------

Co-authored-by: openingnow <>
Sascha Rogmann [Mon, 9 Feb 2026 13:30:50 +0000 (14:30 +0100)]
spec : remove check rate (#19377)

* spec: remove parameter spec-ngram-check-rate

* spec : renamed statistics vars

* spec : add n_call_begin, n_call_accept

* spec : don't enable key-map-stats

Georgi Gerganov [Mon, 9 Feb 2026 13:09:30 +0000 (15:09 +0200)]
ci : add metal server workflows (#19293)

* ci : add metal server workflows

* cont : try fix python init

* cont : move to a separate workflow that runs only on master

* cont : fix num jobs

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Georgi Gerganov [Mon, 9 Feb 2026 12:57:51 +0000 (14:57 +0200)]
revert : "[Model] Qwen3.5 dense and MoE support (no vision) (#19435)" (#19453)

This reverts commit 39bf692af1cba2a1072e4a42425611bf1ec2807d.

Kevin Pouget [Mon, 9 Feb 2026 12:15:42 +0000 (13:15 +0100)]
ggml-virtgpu: add backend documentation (#19354)

* ggml-virtgpu: add backend documentation

Assisted-by-AI: Claude Code

* CODEOWNERS: add /docs/backend/GGML-VirtGPU/ -> kpouget

* README: add the link to docs/backend/GGML-VirtGPU/ggml-virt.md

* docs/ggml-virt: add link to testing + configuration

* Revert "CODEOWNERS: add /docs/backend/GGML-VirtGPU/ -> kpouget"

This reverts commit 8ece8e72e24d305f308505c08ebb75804546374e.

* drop the ggml- prefix

* s/ggerganov/ggml-org

* Relocate VirtGPU.md

* reorganize the text

* turn the ASCII diagram into a Mermaid diagram

* README.md: update the link to the main doc

Hugo [Mon, 9 Feb 2026 06:12:02 +0000 (06:12 +0000)]
cmake : add variable to skip installing tests (#19370)

When packaging downstream, there's usually little point in installing
tests. The default behaviour remains the same.

Piotr Wilkin (ilintar) [Sun, 8 Feb 2026 23:24:08 +0000 (00:24 +0100)]
[Model] Qwen3.5 dense and MoE support (no vision) (#19435)

* Unified delta net handling

* Remove old methods.

* Refactor and optimize

* Adapt autoregressive version from @ymcki

* Change to decay mask approach

* Fix bad permute

* Qwen 3.5 support

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Further fixes

* Use inheritance, remove unneeded conts

* Not like this!

* Remove ggml.h explicit import

* Remove transformers, fix the views

* ACTUALLY fix views, make super calls explicit in conversion.

* Fix conversion again

* Remove extra ggml.h imports

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Oliver Simons [Sun, 8 Feb 2026 13:12:51 +0000 (14:12 +0100)]
CUDA: Fix non-contig rope (#19338)

* Rename variables + fix rope_neox

It seems the memory layout is shared with Vulkan, so we can port the fix from
https://github.com/ggml-org/llama.cpp/pull/19299

* Fix rope_multi

* Fix rope_vision

* Fix rope_norm

* Rename ne* to ne0* for consistent variable naming

* cont : consistent stride names

---------

Co-authored-by: Georgi Gerganov <redacted>
Adrien Gallouët [Sun, 8 Feb 2026 08:06:45 +0000 (09:06 +0100)]
rpc : update from common.cpp (#19400)

Signed-off-by: Adrien Gallouët <redacted>
Georgi Gerganov [Sun, 8 Feb 2026 07:40:04 +0000 (09:40 +0200)]
server : improve context checkpoint logic (#19408)

ddh0 [Sun, 8 Feb 2026 07:22:38 +0000 (01:22 -0600)]
llama-quantize : cleanup `--help` output (#19317)

* cleanup `llama-quantize --help` output

some much needed TLC

* remove future argument

oops, spoiler

* cleanup of cleanup

Sigbjørn Skjæret [Sun, 8 Feb 2026 00:20:00 +0000 (01:20 +0100)]
ci : remove server job from webui and move slow test (#19424)

* remove server job from webui and move slow test

* use pip-install option

Georgi Gerganov [Sat, 7 Feb 2026 22:50:47 +0000 (00:50 +0200)]
ci : use -j param correctly when building with sanitizers (#19411)

* ci : use less jobs when building with sanitizers

* cont : fix nproc

* cont : fix the fix

* cont : simplify

Georgi Gerganov [Sat, 7 Feb 2026 08:35:56 +0000 (10:35 +0200)]
metal : consolidate bin kernels (#19390)

* metal : refactor bin kernels

* cont

* cont : fix cv

Georgi Gerganov [Sat, 7 Feb 2026 05:37:15 +0000 (07:37 +0200)]
metal : fix event synchronization in cpy_tensor_async (#19402)

forforever73 [Fri, 6 Feb 2026 20:06:14 +0000 (04:06 +0800)]
model : support Step3.5-Flash (#19283)

* Support Step3.5-Flash

* fix: norm.weight + 1 (HF zero_centered=true)

* step35: simplify GGUF conversion + drop redundant rope KVs

* Address review feedback

* rename limits -> clamp

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
* rename swiglu limits -> swiglu clamp in LLM_KV

* avoid CI fail

* Apply suggestions from code review

* Apply suggestions from code review

* disabled KV shifting for LLM_ARCH_STEP35

* Apply suggestions from code review

* mistakenly removed cmath

* add model size && apply missed suggestion

* assert partial_rotary_factors

* fix CI errors:

* load freq_base_swa

---------

Co-authored-by: lvyichen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Alex Trotta [Fri, 6 Feb 2026 20:05:19 +0000 (15:05 -0500)]
gguf-py : bump sentencepiece version (#19319)

* gguf-py: Bump sentencepiece version

There's a new version that's been out for a while that addresses the issues mentioned in https://github.com/ggml-org/llama.cpp/pull/14200. There's a long chain of reasons I would like this change, but the short version is that it allows people who use both `sentencepiece` and `gguf` to take advantage of these fixes. On conda-forge, currently, it locks the version (since there is no notion of optional dependencies).

Regardless, I don't think this should be too controversial.

* review feedback

Abhijit Ramesh [Fri, 6 Feb 2026 18:33:30 +0000 (10:33 -0800)]
ggml-webgpu: JIT compile binary operators and handle binding overlaps (#19310)

* ggml webgpu: port binary operators to use pre-wgsl

* Add binary.wgsl: unified shader with conditionals for all 4 ops

* Add gen_binary_shaders.cpp: build tool for using pre_wgsl preprocessor

* Remove bin_op.tmpl.wgsl and binary.wgsl (Python template)

* Update CMake to generate binary operator shaders at build time

* ggml-webgpu: migrate binary ops to JIT compilation with overlap handling

* port binary operators from AOT to pre-wgsl JIT compilation

* add src1=dst overlap handling for binary ops

* use compile-time workgroup size defines instead of runtime overrides

* ggml-webgpu: complete overlap handling for binary ops

* add support for inplace & overlap case in binding setup

* restructure conditional logic to handle all overlap cases

* ensure all buffer bindings are correctly assigned for edge cases

* ggml-webgpu: remove unused binary overlap cases

Remove src0==src1 binary overlap case that never occurs in practice.

* keep INPLACE (src0==dst), OVERLAP (src1==dst), DEFAULT

* remove unused src0==src1 and all-same variant

* refactor wgsl to eliminate duplication
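The three binding variants kept above can be sketched as a simple selection on buffer aliasing (a hypothetical Python illustration; the real logic lives in the WebGPU backend's binding setup and operates on buffer handles):

```python
# Hypothetical variant names mirroring the three kept cases:
# INPLACE (src0 == dst), OVERLAP (src1 == dst), DEFAULT (no aliasing).
INPLACE, OVERLAP, DEFAULT = "inplace", "overlap", "default"

def select_binary_variant(src0, src1, dst):
    """Pick the shader binding variant based on which source aliases dst.
    The ids stand in for buffer handles."""
    if src0 == dst:
        return INPLACE   # writing the result back into src0
    if src1 == dst:
        return OVERLAP   # dst overlaps the second operand
    return DEFAULT       # all three buffers are distinct
```

The removed src0 == src1 case never occurs in practice, which is why only these three variants survive.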

Nechama Krashinski [Fri, 6 Feb 2026 15:13:44 +0000 (17:13 +0200)]
sycl: add F16 support for GGML_OP_CEIL (#19306)

* Fix SYCL CEIL operator

* sycl: implement GGML_OP_CEIL

Jeff Bolz [Fri, 6 Feb 2026 14:50:30 +0000 (08:50 -0600)]
tests: reduce number of FA test permutations (#19381)

Only test non-F16 for head sizes 64 and 72 (one a multiple of QK, one not).

Georgi Gerganov [Fri, 6 Feb 2026 14:47:22 +0000 (16:47 +0200)]
common : add common_speculative_is_compat() (#19270)

* llama : add llama_memory_can_rm_suffix()

* Revert "llama : add llama_memory_can_rm_suffix()"

This reverts commit d30e59b62a15ef4266a6503e3f4eba770aec001b.

* spec : check if the target context is compatible for spec decoding

Lasse Lauwerys [Fri, 6 Feb 2026 13:56:13 +0000 (14:56 +0100)]
unicode : MSVC regex fix (#19340)

* Fix model loading regex error

* Change comments

* Use const_iterator and remove specializations

---------

Co-authored-by: Alde Rojas <redacted>
ymcki [Fri, 6 Feb 2026 10:39:58 +0000 (18:39 +0800)]
Kimi-Linear support (backend agnostic + MLA KV cache) (#18755)

* kimi linear model implementation

* kimi linear convert_hf_to_gguf

* kimi linear constants.py tensor_mapping.py

* Kimi Linear ggml.h

* kimi linear ggml-cpu

* Kimi Linear ggml-cuda

* Kimi Linear ggml.c

* kimi linear src/llama

* remove "const int64_t n_seq_tokens = q->ne[2];" to get rid of unused variable warning

* remove type mismatch warning

* read MoE params

* removed some hard coded code

* removed all hard code

* use DeepseekV2 tokenizer

* removed unnecessary internal methods called by the old set_vocab of KimiLinear

* rewrite get_vocab for KimiLinear. Removed all kda_scan code

* removed all traces of kda_scan

* reduce OP count by 1 due to removal of kda_scan

* Move KIMI_LINEAR to llm_arch_is_hybrid to enable KV cache

* set n_embd_head_k/v to ensure kv cache works

* don't quantize conv1d of Kimi Linear

* Kimi Linear backend agnostic

* removed LOG_INFO

* naive chunking form implemented

* fixed some comments

* add Kimi-K2 specific tokens to be recognized as EOG

* build_kda_autoregressive is implemented to replace build_kda_recurrent for faster inference. sync'd to b7682

* replaced Akk and Aqk with mul_mat and clamp

* no clamp version

* Moved Aqk computation out of the loop

* fixed typo and split wkv_b into wk_b and wv_b

* MLA KV cache support

* fix trailing spaces

* moved const llama_model & model; around to follow qwen3next format and see if it can pass the -Wunused-private-field error

* fix trailing whitespace

* removed trailing whitespaces in empty line + make sure indentation is multiple of 4

* try to make lint happy

* remove blank lines to make lint happy

* removed at least blank line containing white space

* fixed flake8 complaints locally

* return ggml_tensor * pair in kda_autoregressive and kda_chunking as in ngxson's Qwen3Next improvement

* removed Kimi-Linear specific change that causes failure at server-windows

* removed private: from kimi_linear to make build checks happy

* removed unnecessary ggml_cont before ggml_reshape

* created static function causal_conv1d to abstract similar code for q/k/v

* merged dt_bias to SSM_DT. Do -exp(log_A) in convert_hf_to_gguf.py.

* reverted to original

* fixed find_hparam calls. Fixed e_score_correction_bias to use bias instead of weight. Removed all ssm_conv bias terms.

* remove DT_B from constants.py. remove one comment line in llama-model.cpp

* new class llm_graph_input_mem_hybrid_k to get around the new MLA change. switch the concat order of ggml_concat calls in kimi-linear.cpp to accommodate MLA changes. Removed support for exp_probs_b.weight

* remove ssm_o_norm_b

* remove ssm_o_norm_b

* changed hparams.kda_head_dim to hparams.n_embd_head_kda. added TODO comment for class llama_graph_mem_hybrid_k

* removed all ggml_cont before ggml_reshape_4d

* Whitespace

* replaced all hparams.get with find_hparams

* added new names for n_experts, n_experts_used and score_func in TextModel and removed their code in KimiLinear in convert_hf_to_gguf.py. Removed unnecessary ggml_cont and GGML_ASSERT in kimi-linear.cpp

* use is_mla to switch between different mem_hybrid types

* fixed logical errors in convert_hf_to_gguf.py pointed out by CISC

* removed if else for required parameters kv_lora_rank and qk_rope_head_dim

* add back ggml_cont for Vcur

* minor changes

* removed extra line in llama-vocab.cpp. Added back the comment in llama-graph.cpp

* f16 gguf cannot run without context length

* made a mistake of adding back n_ctx parsing

---------

Co-authored-by: Piotr Wilkin (ilintar) <redacted>
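One of the conversion-time steps listed above (folding -exp(log_A) into the GGUF in convert_hf_to_gguf.py so the runtime graph doesn't recompute it) can be sketched as follows; the function name is hypothetical:

```python
import math

def convert_log_a(log_a):
    """Precompute A = -exp(log_A) at conversion time, so the decay
    coefficients are stored ready-to-use in the GGUF tensors."""
    return [-math.exp(x) for x in log_a]
```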
Jeff Bolz [Fri, 6 Feb 2026 08:15:13 +0000 (02:15 -0600)]
vulkan: For coopmat2 FA, use fp16 accumulators for the final result (#19376)

The CPU and CUDA backends use fp16 for the VKQ accumulator type; this change
does the same for Vulkan. This helps particularly with large head sizes, which
are very register-limited.

I tried this for the coopmat1 path and it slowed down a bit. I didn't try for
scalar.

I applied the softmax bias that the cuda backend uses to avoid overflow,
although I was not able to reproduce the original bug without it.

Jeff Bolz [Fri, 6 Feb 2026 07:49:58 +0000 (01:49 -0600)]
vulkan: make FA mask/softcap enables spec constants (#19309)

* vulkan: make FA mask/softcap enables spec constants

* don't specialize for sinks

* bump timeout a little bit

Georgi Gerganov [Fri, 6 Feb 2026 07:25:11 +0000 (09:25 +0200)]
metal : skip loading all-zero mask (#19337)

* metal : skip loading all-zero mask

* cont : minor

Daniel Bevenius [Fri, 6 Feb 2026 06:26:54 +0000 (07:26 +0100)]
llama : rename llama-sampling to llama-sampler (#19363)

This commit addresses the TODO in llama-sampling.h to rename that header
and the implementation to llama-sampler.

Georgi Gerganov [Fri, 6 Feb 2026 05:55:06 +0000 (07:55 +0200)]
cuda : cuda graphs now compare all node params (#19383)

Georgi Gerganov [Thu, 5 Feb 2026 17:07:22 +0000 (19:07 +0200)]
metal : adaptive CPU/GPU interleave based on number of nodes (#19369)

Jeff Bolz [Thu, 5 Feb 2026 15:26:38 +0000 (09:26 -0600)]
vulkan: Preprocess FA mask to detect all-neg-inf and all-zero. (#19281)

Write out a 2-bit code per block and avoid loading the mask when it
matches these two common cases.

Apply this optimization when the mask is relatively large (i.e. prompt
processing).
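The classification behind the 2-bit code can be sketched as follows (hypothetical helper and code names; the actual work happens in a preprocessing shader that packs one code per mask block):

```python
import math

# Hypothetical 2-bit codes for one flash-attention mask block.
MIXED, ALL_ZERO, ALL_NEG_INF = 0, 1, 2

def classify_mask_block(block):
    """Classify one FA mask block so the main kernel can skip loading it."""
    if all(v == 0.0 for v in block):
        return ALL_ZERO      # mask is a no-op: skip the load
    if all(math.isinf(v) and v < 0.0 for v in block):
        return ALL_NEG_INF   # block fully masked out: skip its KV tiles
    return MIXED             # mixed values: load the mask as usual
```

During prompt processing most blocks fall into the first two cases, which is why the optimization pays off for large masks.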

Georgi Gerganov [Thu, 5 Feb 2026 12:34:07 +0000 (14:34 +0200)]
benches : update models + numbers (#19359)

* bench : update script

* benches : update numbers

Sigbjørn Skjæret [Thu, 5 Feb 2026 10:10:39 +0000 (11:10 +0100)]
docker : fix vulkan build (#19352)

Adrien Gallouët [Thu, 5 Feb 2026 08:53:35 +0000 (09:53 +0100)]
vendor : update BoringSSL to 0.20260204.0 (#19333)

Signed-off-by: Adrien Gallouët <redacted>
Georgi Gerganov [Thu, 5 Feb 2026 08:08:45 +0000 (10:08 +0200)]
metal : add diag (#19330)

Oleksandr Kuvshynov [Thu, 5 Feb 2026 08:06:59 +0000 (03:06 -0500)]
vulkan: fix GPU deduplication logic. (#19222)

* vulkan: fix GPU deduplication logic.

As reported in https://github.com/ggml-org/llama.cpp/issues/19221, the
(same uuid, same driver) logic is problematic for Windows + Intel iGPU.

Let's just avoid filtering for MoltenVK, which is Apple-specific, and
keep the logic the same as before 88d23ad5 - just dedup based on UUID.

Verified that macOS + 4xVega still reports 4 GPUs with this version.

* vulkan: only skip dedup when both drivers are MoltenVK

Jeff Bolz [Thu, 5 Feb 2026 07:48:33 +0000 (01:48 -0600)]
vulkan: Set k_load_shmem to false when K is too large (#19301)

Jeff Bolz [Thu, 5 Feb 2026 07:38:59 +0000 (01:38 -0600)]
vulkan: fix non-contig rope (#19299)

will-lms [Thu, 5 Feb 2026 06:05:09 +0000 (01:05 -0500)]
metal : add missing includes (#19348)

Sigbjørn Skjæret [Thu, 5 Feb 2026 01:27:38 +0000 (02:27 +0100)]
vendor : add missing llama_add_compile_flags (#19322)

* add missing llama_add_compile_flags

* disable all warnings for ssl, crypto and fipsmodule

Aaron Teo [Wed, 4 Feb 2026 21:15:03 +0000 (05:15 +0800)]
vendor: update cpp-httplib version (#19313)

Signed-off-by: Aaron Teo <redacted>
Daniel Bevenius [Wed, 4 Feb 2026 19:20:40 +0000 (20:20 +0100)]
codeowners : add danbev for examples/debug (#19332)

* codeowners : add danbev for examples/debug

* Add @pwilkin to CODEOWNERS for debug

---------

Co-authored-by: Piotr Wilkin (ilintar) <redacted>
Xuan-Son Nguyen [Wed, 4 Feb 2026 16:55:31 +0000 (17:55 +0100)]
debug: make common_debug_print_tensor readable (#19331)

* debug: make common_debug_print_tensor readable

* editorconfig

Georgi Gerganov [Wed, 4 Feb 2026 13:12:03 +0000 (15:12 +0200)]
ci : fix sanitize workflow to enable ggml sanitizers too (#19323)

Xuan-Son Nguyen [Wed, 4 Feb 2026 12:09:58 +0000 (13:09 +0100)]
model: (qwen3next) correct vectorized key_gdiff calculation (#19324)

* model: (qwen3next) correct vectorized key_gdiff calculation

* move transpose to outside of loop

Georgi Gerganov [Wed, 4 Feb 2026 10:45:21 +0000 (12:45 +0200)]
tests : add non-cont, inplace rope tests (#19296)

* tests : add non-cont, inplace rope tests

* cont : exercise dim 3

Co-authored-by: Jeff Bolz <redacted>
* cont : more dim3 exercises

---------

Co-authored-by: Jeff Bolz <redacted>
Daniel Bevenius [Wed, 4 Feb 2026 09:40:53 +0000 (10:40 +0100)]
model-conversion : add tensor-info.py utility (#18954)

This commit adds a new Python script that can be used to print tensor
information from a safetensors model.

The motivation for this is that during model conversion work it can
sometimes be useful to verify the shape of tensors in the original
model. While it is possible to print the tensors when loading the model,
this can be slow when working with larger models.
With this script it is possible to quickly query tensor shapes.

Example usage:
```console
(venv) $ ./scripts/utils/tensor-info.py --help
usage: tensor-info.py [-h] [-m MODEL_PATH] [-l] [tensor_name]

Print tensor information from a safetensors model

positional arguments:
  tensor_name           Name of the tensor to inspect

options:
  -h, --help            show this help message and exit
  -m MODEL_PATH, --model-path MODEL_PATH
                        Path to the model directory (default: MODEL_PATH environment variable)
  -l, --list            List unique tensor patterns in the model (layer numbers replaced with #)
```

Listing tensor names:
```console
(venv) $ ./scripts/utils/tensor-info.py -m ~/work/ai/models/google/embeddinggemma-300m -l
embed_tokens.weight
layers.#.input_layernorm.weight
layers.#.mlp.down_proj.weight
layers.#.mlp.gate_proj.weight
layers.#.mlp.up_proj.weight
layers.#.post_attention_layernorm.weight
layers.#.post_feedforward_layernorm.weight
layers.#.pre_feedforward_layernorm.weight
layers.#.self_attn.k_norm.weight
layers.#.self_attn.k_proj.weight
layers.#.self_attn.o_proj.weight
layers.#.self_attn.q_norm.weight
layers.#.self_attn.q_proj.weight
layers.#.self_attn.v_proj.weight
norm.weight
```

Printing a specific tensor's information:
```console
(venv) $ ./scripts/utils/tensor-info.py -m ~/work/ai/models/google/embeddinggemma-300m layers.0.input_layernorm.weight
Tensor: layers.0.input_layernorm.weight
File:   model.safetensors
Shape:  [768]
```

Georgi Gerganov [Wed, 4 Feb 2026 08:39:53 +0000 (10:39 +0200)]
spec : fix the check-rate logic of ngram-simple (#19261)

* spec : fix the check-rate logic of ngram-simple

* cont : refactor + fix checks

Daniel Bevenius [Wed, 4 Feb 2026 04:43:28 +0000 (05:43 +0100)]
completion : simplify batch (embd) processing (#19286)

* completion : simplify batch (embd) processing

This commit simplifies the processing of embd by removing the existing
for loop that uses params.n_batch as its increment. This commit also
removes the clamping of n_eval, as the size of embd is always at most
the size of params.n_batch.

The motivation is to clarify the code as it is currently a little
confusing when looking at this for loop in isolation and thinking that
it can process multiple batches.

* add an assert to verify n_eval is not greater than n_batch
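The simplified flow can be sketched as follows (hypothetical names; in the real code this sits in the completion example and the evaluation is a llama_decode call):

```python
def process_embd(embd, n_batch, eval_fn):
    """Evaluate embd in a single call: the caller fills embd with at
    most n_batch tokens, so no inner loop or clamping is needed."""
    n_eval = len(embd)
    assert n_eval <= n_batch, "embd must never exceed the batch size"
    return eval_fn(embd, n_eval)
```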

Kevin Pouget [Wed, 4 Feb 2026 02:46:18 +0000 (03:46 +0100)]
ggml-virtgpu: make the code thread safe (#19204)

* ggml-virtgpu: regenerate_remoting.py: add the ability to deprecate a function

* ggml-virtgpu: deprecate buffer_type is_host remoting

not necessary

* ggml-virtgpu: stop using static vars as cache

The static init isn't thread safe.

* ggml-virtgpu: protect the use of the shared memory to transfer data

* ggml-virtgpu: make the remote calls thread-safe

* ggml-virtgpu: backend: don't continue if couldn't allocate the tensor memory

* ggml-virtgpu: add a cleanup function for consistency

* ggml-virtgpu: backend: don't crash if buft->iface.get_max_size is missing

* fix style and ordering

* Remove the static variable in apir_device_get_count

* ggml-virtgpu: improve the logging

* fix review minor formatting changes

Aman Gupta [Wed, 4 Feb 2026 01:43:29 +0000 (09:43 +0800)]
ggml-cpu: use LUT for converting e8->f32 scales on x86 (#19288)

* ggml-cpu: use LUT for converting e8->f32 scales on x86

* add dispatch based on macro
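Assuming the e8 scales are e8m0-style exponent bytes (an assumption about the encoding; the x86 kernels would index a precomputed table rather than convert each byte), such a lookup table can be built by writing the byte straight into the float's exponent field:

```python
import struct

# Map an 8-bit exponent byte e to the f32 value 2^(e - 127) by placing
# the byte in the float's exponent bits. (Entries 0 and 255 come out as
# zero/inf; a real kernel would handle those edge cases explicitly.)
E8_TO_F32 = [struct.unpack('<f', struct.pack('<I', e << 23))[0]
             for e in range(256)]
```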

Georgi Gerganov [Tue, 3 Feb 2026 21:43:14 +0000 (23:43 +0200)]
metal : add solve_tri (#19302)

Georgi Gerganov [Tue, 3 Feb 2026 20:41:20 +0000 (22:41 +0200)]
ci : add sanitizer runs for server (#19291)

Georgi Gerganov [Tue, 3 Feb 2026 20:16:16 +0000 (22:16 +0200)]
sampling : delegate input allocation to the scheduler (#19266)

* sampling : delegate input allocation to the scheduler

* graph : compute backend samplers only if needed

Ruben Ortlam [Tue, 3 Feb 2026 16:37:32 +0000 (17:37 +0100)]
vulkan: disable coopmat1 fa on Nvidia Turing (#19290)

Aman Gupta [Tue, 3 Feb 2026 15:31:23 +0000 (23:31 +0800)]
CUDA: use mmvq for mul-mat-id for small batch sizes (#18958)

* CUDA: use mmvq for mul-mat-id for small batch sizes

* add mmvq too

* Fix perf issue on ampere. Use mmvf mm-id only for non-nvidia GPUs

* templatize multi_token_path

Sigbjørn Skjæret [Tue, 3 Feb 2026 13:20:57 +0000 (14:20 +0100)]
models : remove unnecessary cont in openelm (#19289)

Georgi Gerganov [Tue, 3 Feb 2026 11:43:29 +0000 (13:43 +0200)]
metal : minor cleanup (#19251)

Oliver Simons [Tue, 3 Feb 2026 10:33:14 +0000 (11:33 +0100)]
CUDA: Fix loop unrolling for BW in mul_mat_q_stream_k_fixup (#19053)

By providing the stride_* variables as size_t (i.e., 64-bit), the compiler can
correctly unroll the [two for-loops](https://github.com/ggml-org/llama.cpp/blob/557515be1e93ed8939dd8a7c7d08765fdbe8be31/ggml/src/ggml-cuda/mmq.cuh#L3789-L3816)
on Blackwell (BW). This improves prefill/pp performance on BW, while not affecting
other SMs:

| GPU                                                     | Model                 | Test   |   t/s master |   t/s osimons/fix_bw_mmq_fixup_kernel |   Speedup |
|:--------------------------------------------------------|:----------------------|:-------|-------------:|--------------------------------------:|----------:|
| NVIDIA RTX 6000 Ada Generation                          | gpt-oss 20B MXFP4 MoE | pp8096 |      8404.05 |                               8375.79 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | llama 3B Q4_K_M       | pp8096 |     16148.93 |                              16019.60 |      0.99 |
| NVIDIA RTX 6000 Ada Generation                          | llama 8B Q4_0         | pp8096 |      8008.29 |                               7978.80 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B BF16    | pp8096 |      4263.16 |                               4248.53 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B Q4_K_M  | pp8096 |      5165.11 |                               5157.43 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | gpt-oss 20B MXFP4 MoE | pp8096 |     12582.80 |                              12758.37 |      1.01 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 3B Q4_K_M       | pp8096 |     16879.10 |                              17619.47 |      1.04 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 8B Q4_0         | pp8096 |     10649.90 |                              10982.65 |      1.03 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B BF16    | pp8096 |      7717.73 |                               7716.22 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B Q4_K_M  | pp8096 |      7301.90 |                               7370.38 |      1.01 |

George [Tue, 3 Feb 2026 06:43:39 +0000 (08:43 +0200)]
ggml: added cleanups in ggml_quantize_free (#19278)

Add missing cleanup calls for the IQ2_S and IQ1_M quantization types, and for IQ3XS with 512 blocks, during quantization cleanup.

Gaurav Garg [Tue, 3 Feb 2026 06:41:02 +0000 (12:11 +0530)]
cuda : revert CUDA_SCALE_LAUNCH_QUEUES override until investigated (#19227)

Hangs were reported on Jetson Orin AGX when CUDA_SCALE_LAUNCH_QUEUES=4x was set. Revert the previous PR (#19042) and update the documentation to suggest setting CUDA_SCALE_LAUNCH_QUEUES=4x for faster throughput on multi-GPU systems.

Alexey Dubrov [Tue, 3 Feb 2026 06:31:01 +0000 (09:31 +0300)]
vocab: add Falcon-H1-Tiny-Coder FIM tokens (#19249)

Georgi Gerganov [Tue, 3 Feb 2026 06:20:15 +0000 (08:20 +0200)]
spec : simplify time measurement using common_time_meas (#19262)

lhez [Mon, 2 Feb 2026 23:54:43 +0000 (15:54 -0800)]
opencl: refactor some ops, concat, repeat, tanh and scale (#19226)

* opencl: refactor concat

* opencl: refactor repeat

* opencl: refactor tanh

* opencl: enable fp16 for tanh

* opencl: refactor scale

* opencl: fix unused variables

Sid Mohan [Mon, 2 Feb 2026 20:00:55 +0000 (12:00 -0800)]
jinja : add missing 'in' test to template engine (#19004) (#19239)

* jinja : add missing 'in' test to template engine (#19004)

The jinja template parser was missing the 'in' test from
global_builtins(), causing templates using reject("in", ...),
select("in", ...), or 'x is in(y)' to fail with
"selectattr: unknown test 'in'".

This broke tool-calling for Qwen3-Coder and any other model
whose chat template uses the 'in' test.

Added test_is_in supporting array, string, and object containment
checks, mirroring the existing 'in' operator logic in runtime.cpp.

Includes test cases for all three containment types plus
reject/select filter usage.

Co-Authored-By: Claude Opus 4.5 <redacted>
* reuse test_is_in in binary op

---------

Co-authored-by: Sid Mohan <redacted>
Co-authored-by: Claude Opus 4.5 <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Xuan-Son Nguyen [Mon, 2 Feb 2026 19:59:06 +0000 (20:59 +0100)]
mtmd: add min/max pixels gguf metadata (#19273)

Aman Gupta [Mon, 2 Feb 2026 17:19:55 +0000 (01:19 +0800)]
ggml-cpu: FA split across kv for faster TG (#19209)

* ggml-cpu: split across kv for faster TG

* simplify sinks application

* add ref impl
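The split-KV idea can be illustrated for a single query (a simplified sequential sketch; the real kernel partitions KV across threads and merges the partial results, here done with a running max so no chunk order matters):

```python
import math

def attn_split_kv(q, k, v, chunk=4):
    """Single-query attention with the KV axis split into chunks whose
    partial softmax results are merged via a running max."""
    d_v = len(v[0])
    m, s, o = float('-inf'), 0.0, [0.0] * d_v
    for start in range(0, len(k), chunk):
        ks, vs = k[start:start + chunk], v[start:start + chunk]
        logits = [sum(a * b for a, b in zip(row, q)) for row in ks]
        m_new = max(m, max(logits))
        scale = math.exp(m - m_new) if m > float('-inf') else 0.0
        s *= scale                        # rescale previous sum
        o = [x * scale for x in o]        # rescale previous output
        for logit, vrow in zip(logits, vs):
            p = math.exp(logit - m_new)
            s += p
            o = [x + p * y for x, y in zip(o, vrow)]
        m = m_new
    return [x / s for x in o]
```

The result matches a single full-softmax pass over all of KV, which is what makes splitting safe for token generation.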

Matthieu Coudron [Mon, 2 Feb 2026 15:55:27 +0000 (16:55 +0100)]
server: print actual model name in 'model not found' error (#19117)

When experimenting with AI, my environment gets messy fast and it's not
always easy to know which model my software is trying to load. This helps
with troubleshooting.

Before:

Error: {
  code = 400,
  message = "model not found",
  type = "invalid_request_error"
}

After:

Error: {
  code = 400,
  message = "model 'toto' not found",
  type = "invalid_request_error"
}

Aman Gupta [Mon, 2 Feb 2026 14:40:28 +0000 (22:40 +0800)]
ci: add test-backend-ops test for CPU (#19268)

Neo Zhang [Mon, 2 Feb 2026 13:06:21 +0000 (21:06 +0800)]
Remove support for Nvidia & AMD GPUs, because the oneAPI plugin for Nvidia & AMD GPUs is unavailable: the download/installation channels no longer work. (#19246)

Users can't build the software for Nvidia & AMD GPUs.
Remove oneMath since it is only used in the NV and AMD code paths.

Tamar [Mon, 2 Feb 2026 13:05:51 +0000 (15:05 +0200)]
sycl: implement GGML_OP_TOP_K (#19242)

Georgi Gerganov [Mon, 2 Feb 2026 12:29:44 +0000 (14:29 +0200)]
metal : support virtual devices (#18919)

* metal : support virtual devices

* cont : manage buffer type context memory

* metal : add events

* cont : implement cpy_tensor_async

5 weeks agomodel-conversion : add debug option to conversion script (#19265)
Daniel Bevenius [Mon, 2 Feb 2026 10:29:57 +0000 (11:29 +0100)]
model-conversion : add debug option to conversion script (#19265)

This commit adds a debug option to the model conversion script to enable
using the Python debugger (pdb) during model conversion.

The motivation is that I've found myself adding this manually a few
times now, and it is quicker to have it available as a flag together
with a makefile target/recipe for it.

5 weeks agoggml-backend: fix async set/get fallback sync (#19179)
Johannes Gäßler [Mon, 2 Feb 2026 09:00:05 +0000 (10:00 +0100)]
ggml-backend: fix async set/get fallback sync (#19179)

5 weeks agoauthors : update (#19263)
Georgi Gerganov [Mon, 2 Feb 2026 06:51:25 +0000 (08:51 +0200)]
authors : update (#19263)

[no ci]

5 weeks agodocs : Minor cleanups (#19252)
Christian Kastner [Mon, 2 Feb 2026 06:38:55 +0000 (07:38 +0100)]
docs : Minor cleanups (#19252)

* Update old URLs to github.com/ggml-org/

* Bump copyrights

5 weeks agospec : various improvements to ngram-map + docs (#19253)
Sascha Rogmann [Mon, 2 Feb 2026 06:26:58 +0000 (07:26 +0100)]
spec : various improvements to ngram-map + docs (#19253)

* spec: ngram-map and reasoning chats

* spec: add t_begin and t_accept

* ngram-map : add internal hash map

* docs : update ngram-map, add ngram-mod

* docs : fix ngram-map-k

* docs : differences between implementations

5 weeks agoRemove pipeline cache mutexes (#19195)
Nikhil Jain [Mon, 2 Feb 2026 02:47:29 +0000 (18:47 -0800)]
Remove pipeline cache mutexes (#19195)

* Remove mutex for pipeline caches, since they are now per-thread.

* Add comment

* Run clang-format

* Cleanup

* Run CI again

* Run CI once more

* Run clang-format

5 weeks agoBump cmake max version (needed for Windows on Snapdragon builds) (#19188)
Max Krasnyansky [Sun, 1 Feb 2026 22:13:38 +0000 (14:13 -0800)]
Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)

* Bump max cmake version (needed for Windows on Snapdragon builds)

* cmake: move max version setting into ggml/CMakeLists

5 weeks agonix: fix allowUnfreePredicate for packages with multiple licenses (#19237)
Alexis Williams [Sun, 1 Feb 2026 20:10:48 +0000 (12:10 -0800)]
nix: fix allowUnfreePredicate for packages with multiple licenses (#19237)

The allowUnfreePredicate in pkgsCuda was wrapping p.meta.license in a
list unconditionally. This fails when meta.license is already a list
of licenses, as it creates a nested list and then tries to access
.free and .shortName on the inner list.

Use lib.toList instead, which correctly handles both cases:
- Single license attrset -> wraps in list
- List of licenses -> returns unchanged

5 weeks agocreate test.sh to enhance the parameters for testing, update the guide, rm useless...
Neo Zhang [Sun, 1 Feb 2026 10:24:00 +0000 (18:24 +0800)]
create test.sh to enhance the parameters for testing, update the guide, rm useless script (#19243)

5 weeks agonix: fix nix develop .#python-scripts (#19218)
Matthieu Coudron [Sat, 31 Jan 2026 16:01:46 +0000 (17:01 +0100)]
nix: fix nix develop .#python-scripts (#19218)

Without this I get:

> * Getting build dependencies for wheel...
> * Building wheel...
> Successfully built gguf-0.17.1-py3-none-any.whl
> Finished creating a wheel...
> Finished executing pypaBuildPhase
> Running phase: pythonRuntimeDepsCheckHook
> Executing pythonRuntimeDepsCheck
> Checking runtime dependencies for gguf-0.17.1-py3-none-any.whl
>   - requests not installed
For full logs, run:
    nix log /nix/store/x0c4a251l68bvdgang9d8v2fsmqay8a4-python3.12-gguf-0.0.0.drv

I also changed the style a bit to make it more terse, which in my
opinion makes it more elegant.

5 weeks agoggml-hexagon: flash-attention and reduce-sum optimizations (#19141)
nullname [Sat, 31 Jan 2026 05:14:20 +0000 (13:14 +0800)]
ggml-hexagon: flash-attention and reduce-sum optimizations (#19141)

* wip

* ggml-hexagon: add vectorized dot product function for FP32 and FP16 accumulation

* ggml-hexagon: optimize dot product functions for FP16 and FP32 with new vectorized implementations

* wip

* ggml-hexagon: optimize hvx_vec_dump_f32_n and hvx_vec_reduce_sum_qf32x2 functions for improved performance

* ggml-hexagon: refactor dot product functions to use a common loading function for improved readability

* optimize vector dot product functions to use unified reduction for improved performance

* wip

* hexagon: optimize reduce-sum for v75+

* hexagon: always keep row_sums in sf/fp32

* ggml-hexagon: enhance directory checks for HEXAGON_SDK_ROOT and HEXAGON_TOOLS_ROOT

* fix compiling error after rebase

---------

Co-authored-by: Max Krasnyansky <redacted>
5 weeks agoquantize: add option --tensor-type-file to llama-quantize (#18572)
EugeoSynthesisThirtyTwo [Sat, 31 Jan 2026 03:39:21 +0000 (04:39 +0100)]
quantize: add option --tensor-type-file to llama-quantize (#18572)

* add option --tensor-type-file to llama-quantize, but it raises an error.

* add error message when file not found

* quantize: update help menu, fix CI

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Your Name <redacted>
Co-authored-by: Aaron Teo <redacted>
5 weeks agomtmd: support MiniCPM-o 4.5 (vision only) (#19211)
tc-mb [Fri, 30 Jan 2026 22:19:30 +0000 (06:19 +0800)]
mtmd: support MiniCPM-o 4.5 (vision only) (#19211)

Signed-off-by: tc-mb <redacted>
5 weeks agolookup, lookahead: fix crash when n_ctx not specified (#18729)
Daniele Pinna [Fri, 30 Jan 2026 20:10:24 +0000 (21:10 +0100)]
lookup, lookahead: fix crash when n_ctx not specified (#18729)

* lookup, lookahead: fix crash when n_ctx not specified

Since PR #16653 (Dec 15, 2025), the default n_ctx is 0 to enable automatic
GPU memory fitting. This causes llama-lookup and llama-lookahead to crash
when run without explicit -c flag:

    GGML_ASSERT(batch.seq_id[batch.n_tokens] && "llama_batch size exceeded")

Root cause: Both examples use params.n_ctx directly for batch initialization,
but params.n_ctx remains 0 even after the context is properly initialized
to n_ctx_train internally.

Bug history:
- Nov 2023: lookahead.cpp created (PR #4207) with params.n_ctx pattern
- Dec 2023: lookup.cpp created (PR #4484) with same pattern
- Nov 2024: default n_ctx changed to 4096 (PR #10136) - bug dormant
- Dec 2025: default n_ctx changed to 0 (PR #16653) - bug activated

The bug was dormant for 2+ years because params.n_ctx defaulted to 512,
then 4096. PR #16653 changed it to 0 for GPU auto-fitting, triggering
the crash.

Fix: Use llama_n_ctx(ctx) to get the actual runtime context size, matching
the pattern already used elsewhere in lookup.cpp (line 72) and in
speculative.cpp/speculative-simple.cpp.

Tested: llama-lookup now works without -c flag (12.5% acceptance on
Gemma-3-1B).

Note: llama-lookahead has a separate pre-existing issue with sequence
initialization (n_seq_max=1 vs W+G+1 needed) that is unrelated to this fix.

* lookahead: fix n_seq_max and kv_unified configuration

Lookahead decoding requires:
- W + G + 1 = 31 sequences for parallel Jacobi decoding
- Unified KV cache for coupled sequences in batch splitting

These requirements were broken after PR #14482 changed validation logic.

Consolidates fix from PR #18730 per maintainer request.

Commit message drafted with Claude.

5 weeks agongram-mod : fix build [no ci] (#19216)
Georgi Gerganov [Fri, 30 Jan 2026 19:27:27 +0000 (21:27 +0200)]
ngram-mod : fix build [no ci] (#19216)

5 weeks agoopencl: add optimized q8_0 mm kernel for adreno (#18871)
shaofeiqi [Fri, 30 Jan 2026 18:19:27 +0000 (10:19 -0800)]
opencl: add optimized q8_0 mm kernel for adreno (#18871)

* Add Q8_0 OpenCL kernel

Co-authored-by: yunjie <redacted>
* opencl: fix build for non-adreno

* opencl: refactor q8_0

* opencl: enforce subgroup size of 64 for adreno for q8_0

* For A750 and older generations, subgroup size can be 64 or 128.
  This kernel assumes subgroup size 64.

* opencl: suppress warning when adreno kernels are disabled

---------

Co-authored-by: yunjie <redacted>
Co-authored-by: Li He <redacted>
5 weeks agosync : ggml
Georgi Gerganov [Fri, 30 Jan 2026 14:27:14 +0000 (16:27 +0200)]
sync : ggml

5 weeks agocuda : fix compile warnings (whisper/0)
Georgi Gerganov [Fri, 30 Jan 2026 13:56:15 +0000 (15:56 +0200)]
cuda : fix compile warnings (whisper/0)

5 weeks agoserver : wrap around the "id_slot" parameter (#19207)
Georgi Gerganov [Fri, 30 Jan 2026 17:46:10 +0000 (19:46 +0200)]
server : wrap around the "id_slot" parameter (#19207)

* server : wrap around the "id_slot" parameter

* cont : minor

5 weeks agoCorrectly fetch q8_1 quantize pipeline in test as needed by 8a3519b (#19194)
Simon Redman [Fri, 30 Jan 2026 16:27:16 +0000 (11:27 -0500)]
Correctly fetch q8_1 quantize pipeline in test as needed by 8a3519b (#19194)

5 weeks agospec : add ngram-mod (#19164)
Georgi Gerganov [Fri, 30 Jan 2026 16:21:48 +0000 (18:21 +0200)]
spec : add ngram-mod (#19164)

* spec : add ngram-mod

* cont : simplify + keep track of occupancy

* cont : cleanup

* cont : move initialization to common/speculative

* cont : cleanup

* cont : cleanup

* cont : fix

5 weeks agojinja : add unordered_map include to value.h [no ci] (#19205)
Marcello Seri [Fri, 30 Jan 2026 15:09:44 +0000 (16:09 +0100)]
jinja : add unordered_map include to value.h [no ci] (#19205)

On macOS Sequoia 15.7.3, x86_64, the build has recently started failing with
```
In file included from .../code/cpp/llama.cpp/common/jinja/string.cpp:2:
.../code/cpp/llama.cpp/common/./jinja/value.h:478:10: error: no template named 'unordered_map' in namespace 'std'
  478 |     std::unordered_map<value, value, value_hasher, value_equivalence> unordered;
      |     ~~~~~^
In file included from .../code/cpp/llama.cpp/common/jinja/caps.cpp:1:
.../code/cpp/llama.cpp/common/jinja/value.h:478:10: error: no template named 'unordered_map' in namespace 'std'
  478 |     std::unordered_map<value, value, value_hasher, value_equivalence> unordered;
      |     ~~~~~^
In file included from .../code/cpp/llama.cpp/common/jinja/value.cpp:1:
In file included from .../code/cpp/llama.cpp/common/jinja/runtime.h:4:
.../code/cpp/llama.cpp/common/jinja/value.h:478:10: error: no template named 'unordered_map' in namespace 'std'
  478 |     std::unordered_map<value, value, value_hasher, value_equivalence> unordered;
[...]
```

After a bit of digging to make sure all the appropriate flags were used, I noticed that the necessary header was not included. This fixes the build for me and should not negatively affect other builds that for some reason were already succeeding.

5 weeks agomemory : clarify comments for r_l and s_l tensors [no ci] (#19203)
Daniel Bevenius [Fri, 30 Jan 2026 14:18:41 +0000 (15:18 +0100)]
memory : clarify comments for r_l and s_l tensors [no ci] (#19203)

This commit updates the comments in state_write_data to clarify that it
is handling the R and S tensors and not Key and Value tensors.

5 weeks agotests : add GQA=20 FA test (#19095)
Georgi Gerganov [Fri, 30 Jan 2026 11:52:57 +0000 (13:52 +0200)]
tests : add GQA=20 FA test (#19095)

5 weeks agoconvert : add missing return statement for GraniteMoeModel (#19202)
Daniel Bevenius [Fri, 30 Jan 2026 10:12:53 +0000 (11:12 +0100)]
convert : add missing return statement for GraniteMoeModel (#19202)

This commit adds a missing return statement to the GraniteMoeModel class
to fix an issue in the model conversion process.

Resolves: https://github.com/ggml-org/llama.cpp/issues/19201

5 weeks agomemory : remove unused tmp_buf (#19199)
Daniel Bevenius [Fri, 30 Jan 2026 09:37:06 +0000 (10:37 +0100)]
memory : remove unused tmp_buf (#19199)

This commit removes the unused tmp_buf variable from llama-kv-cache.cpp
and llama-memory-recurrent.cpp.

The tmp_buf variable was declared but never used, but since it has a
non-trivial constructor/destructor we don't get an unused-variable
warning for it.

5 weeks agodocs: Add LlamaLib to UI projects (#19181)
Antonis Makropoulos [Fri, 30 Jan 2026 06:54:28 +0000 (08:54 +0200)]
docs: Add LlamaLib to UI projects (#19181)

5 weeks agoadd tensor type checking as part of cuda graph properties (#19186)
bssrdf [Fri, 30 Jan 2026 04:57:52 +0000 (23:57 -0500)]
add tensor type checking as part of cuda graph properties (#19186)