git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Sigbjørn Skjæret [Tue, 24 Jun 2025 07:31:00 +0000 (09:31 +0200)]
main : honor --verbose-prompt on interactive prompts (#14350)

Bartowski [Tue, 24 Jun 2025 06:17:58 +0000 (02:17 -0400)]
jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (#14349)

This will allow the use of tools on the llama-server

uvos [Mon, 23 Jun 2025 23:12:56 +0000 (01:12 +0200)]
CUDA/HIP: optimize mmv paths taken for HIP devices (#14324)

Co-authored-by: Johannes Gäßler <redacted>
bandoti [Mon, 23 Jun 2025 18:30:51 +0000 (15:30 -0300)]
ci: add workflow for relocatable cmake package (#14346)

Jeff Bolz [Mon, 23 Jun 2025 13:44:48 +0000 (08:44 -0500)]
vulkan: update windows SDK in release.yml (#14344)

Molly Sophia [Mon, 23 Jun 2025 11:56:19 +0000 (19:56 +0800)]
llama : better rwkv chat template and add missing `inputs.use_jinja` setting (#14336)

* llama-cli : add missing `inputs.use_jinja` setting

Signed-off-by: Molly Sophia <redacted>
* llama : better legacy chat template for rwkv

Signed-off-by: Molly Sophia <redacted>
---------

Signed-off-by: Molly Sophia <redacted>
Johannes Gäßler [Mon, 23 Jun 2025 11:11:31 +0000 (13:11 +0200)]
CUDA: mul_mat_v support for batch sizes > 1 (#14262)

* CUDA: mul_mat_v support for batch sizes > 1

* use 64 bit math for initial offset calculation
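
A minimal sketch of why the offset needs 64-bit math (variable names are illustrative, not the actual kernel code): with batch sizes > 1 the row index times the row stride can exceed what a 32-bit int holds, so the product is computed in int64_t before indexing.

```cpp
#include <cstdint>

// Illustrative only: compute the element offset in 64-bit so that
// row * stride cannot overflow for large tensors.
static const float * row_ptr(const float * base, int row, int64_t stride_in_elements) {
    const int64_t offset = (int64_t) row * stride_in_elements;
    return base + offset;
}
```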

Georgi Gerganov [Mon, 23 Jun 2025 09:27:35 +0000 (12:27 +0300)]
kv-cells : fix tracking of seq_pos (#14339)

* kv-cells : fix tracking of seq_pos during cache reuse

ggml-ci

* cont : improve error message

ggml-ci

* cont : add more comments

Jeff Bolz [Mon, 23 Jun 2025 08:19:24 +0000 (03:19 -0500)]
vulkan: update windows SDK in CI (#14334)

Ed Addario [Sun, 22 Jun 2025 21:16:26 +0000 (22:16 +0100)]
quantize : handle user-defined pruning of whole layers (blocks) (#13037)

Sigbjørn Skjæret [Sun, 22 Jun 2025 17:46:17 +0000 (19:46 +0200)]
gguf-py : fix SpecialVocab parsing when post_processor is null (#14330)

Ruikai Peng [Sun, 22 Jun 2025 17:28:06 +0000 (01:28 +0800)]
run : avoid double tokenization (#14327)

* run : avoid double tokenization by adopting common_tokenize heuristic

* build : fix windows gcc and clang warnings

* lint : fixed trailing whitespace

* run : fix is_first flag

Georgi Gerganov [Sun, 22 Jun 2025 17:10:07 +0000 (20:10 +0300)]
examples : fix is_first logic for tokenization (#14329)

ggml-ci

uvos [Sun, 22 Jun 2025 14:51:23 +0000 (16:51 +0200)]
HIP: enable vec fattn on RDNA4 (#14323)

yuiseki [Sun, 22 Jun 2025 12:44:57 +0000 (21:44 +0900)]
mtmd : fix Pixtral OOM with large images by capping image_size to 1024 (#14326)

Mistral Small 2506 models using Pixtral vision encoder were running out
of GPU memory when processing images larger than 1024x1024 pixels due to
exponential memory growth from unlimited image size.

This fix applies the same 1024x1024 limit used by Qwen2VL models to
prevent OOM issues while maintaining compatibility with existing models.
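
A hedged sketch of the idea (function and parameter names are hypothetical, not the actual mtmd code): scale the image down so its longest side does not exceed 1024 pixels while preserving the aspect ratio.

```cpp
#include <algorithm>
#include <utility>

// Hypothetical helper illustrating the 1024 px cap; the real logic lives in
// the mtmd/clip preprocessing code and uses different names.
static std::pair<int, int> clamp_image_size(int width, int height, int max_side = 1024) {
    const int longest = std::max(width, height);
    if (longest <= max_side) {
        return {width, height};                               // already small enough
    }
    const float scale = (float) max_side / (float) longest;
    return {(int) (width * scale), (int) (height * scale)};   // keep aspect ratio
}
```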

Sigbjørn Skjæret [Sun, 22 Jun 2025 05:37:43 +0000 (07:37 +0200)]
common : use std::string_view now that we target c++17 (#14319)

Aman Gupta [Sun, 22 Jun 2025 04:39:54 +0000 (12:39 +0800)]
CUDA: add mean operation (#14313)

* CUDA: add mean operation

* add back sum_rows_f32_cuda

* Review: early exit if col!=0

Sigbjørn Skjæret [Sat, 21 Jun 2025 16:12:05 +0000 (18:12 +0200)]
gguf-py : fix Qwen3-Embedding eos token (#14314)

Markus Tavenrath [Sat, 21 Jun 2025 06:17:12 +0000 (08:17 +0200)]
Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (#13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. As a first step, compute pipelines are labeled (see the sketch after this list).

* remove #ifdef for debug utils and add queue marker.
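
A minimal sketch of naming a compute pipeline via VK_EXT_debug_utils, assuming the extension is enabled on the instance; error handling and the queue markers are omitted, and the helper name is illustrative.

```cpp
#include <cstdint>
#include <vulkan/vulkan.h>

// Illustrative helper: attach a human-readable label to a compute pipeline so
// it shows up in debugging tools. Requires VK_EXT_debug_utils to be enabled.
static void label_pipeline(VkInstance instance, VkDevice device, VkPipeline pipeline, const char * name) {
    auto set_name = (PFN_vkSetDebugUtilsObjectNameEXT)
        vkGetInstanceProcAddr(instance, "vkSetDebugUtilsObjectNameEXT");
    if (set_name == nullptr) {
        return; // extension not available, labeling is best-effort
    }
    VkDebugUtilsObjectNameInfoEXT info = {};
    info.sType        = VK_STRUCTURE_TYPE_DEBUG_UTILS_OBJECT_NAME_INFO_EXT;
    info.objectType   = VK_OBJECT_TYPE_PIPELINE;
    info.objectHandle = (uint64_t) pipeline;
    info.pObjectName  = name;
    set_name(device, &info);
}
```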

Sigbjørn Skjæret [Sat, 21 Jun 2025 05:33:21 +0000 (07:33 +0200)]
gguf-py : fix TemplateProcessing pair when bos/eos is missing (#14312)

Georgi Gerganov [Sat, 21 Jun 2025 05:04:18 +0000 (08:04 +0300)]
metal : fix thread-safety (#14300)

ggml-ci

Georgi Gerganov [Sat, 21 Jun 2025 05:03:46 +0000 (08:03 +0300)]
memory : rename interface to llama_memory_context_i (#14296)

* memory : rename interface to llama_memory_context_i

ggml-ci

* cont : fix comments

* cont : use "mctx" for referencing a memory context

ggml-ci

Daniel Han [Sat, 21 Jun 2025 04:32:01 +0000 (21:32 -0700)]
convert : fix Llama 4 conversion (#14311)

Georgi Gerganov [Fri, 20 Jun 2025 17:50:24 +0000 (20:50 +0300)]
sync : ggml

ggml-ci

Acly [Wed, 18 Jun 2025 11:34:50 +0000 (13:34 +0200)]
Add `ggml_roll` (ggml/1274)

* ggml : add ggml_roll

* use set/get_op_params & std::min

David Chiu [Fri, 20 Jun 2025 17:43:35 +0000 (01:43 +0800)]
docs : fix the link to llama.h (#14293)

Aman Gupta [Fri, 20 Jun 2025 14:48:24 +0000 (22:48 +0800)]
CUDA: add conv_2d_transpose (#14287)

* CUDA: add conv_2d_transpose

* remove direct include of cuda_fp16

* Review: add brackets for readability, remove ggml_set_param and add asserts

Sigbjørn Skjæret [Fri, 20 Jun 2025 14:37:44 +0000 (16:37 +0200)]
lint : remove trailing whitespace (#14304)

Ruikai Peng [Fri, 20 Jun 2025 14:13:06 +0000 (22:13 +0800)]
vocab : prevent tokenizer overflow (#14301)

* vocab : prevent stack overflow in tokenize

* vocab : return error instead of aborting on oversized token count

* vocab : INT32_MIN from llama_tokenize on overflow
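
A minimal sketch of the guard described above (illustrative, not the actual llama_tokenize implementation): when the number of tokens no longer fits in the int32_t return type, return a sentinel instead of aborting.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper showing the overflow check; the real check sits inside
// the tokenizer API, which reports the token count through an int32_t.
static int32_t checked_token_count(const std::vector<int32_t> & tokens) {
    if (tokens.size() > (size_t) INT32_MAX) {
        return INT32_MIN; // error sentinel the caller can test for
    }
    return (int32_t) tokens.size();
}
```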

Nicolò Scipione [Fri, 20 Jun 2025 13:07:21 +0000 (15:07 +0200)]
sycl: add usage of enqueue_functions extension (#14244)

* Add header and namespace to use enqueue_functions extension

* Convert submit and parallel_for to use new extension in convert.cpp

* Convert submit and parallel_for to use extension in ggml-sycl.cpp

* Convert submit and parallel_for to use extension in gla.cpp

* Convert submit and parallel_for in mmq.cpp

* Convert submit and parallel_for in mmvq.cpp

* Convert submit and parallel_for in remaining files

* Convert all simple parallel_for to nd_launch from enqueue_functions
extension

* Wrapping extension in general function

Create a general function that enables the enqueue_functions extension if
it is enabled in the compiler, and otherwise calls the general SYCL
function to launch kernels.
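
A rough sketch of that wrapper idea under stated assumptions: the feature-test macro name and the exact nd_launch overload below are taken from the extension proposal and may differ from what DPC++ ships, so treat this as illustration rather than the ggml-sycl code.

```cpp
#include <sycl/sycl.hpp>

// Illustrative wrapper: use the enqueue_functions free-function launch when the
// compiler exposes the extension, otherwise fall back to queue::parallel_for.
// The macro and namespace names here are assumptions, not verified signatures.
template <typename Kernel>
void launch_nd(sycl::queue & q, sycl::nd_range<1> range, Kernel kernel) {
#if defined(SYCL_EXT_ONEAPI_ENQUEUE_FUNCTIONS)
    namespace syclex = sycl::ext::oneapi::experimental;
    syclex::nd_launch(q, range, kernel);
#else
    q.parallel_for(range, kernel);
#endif
}
```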

---------

Signed-off-by: nscipione <redacted>
Christian Kastner [Fri, 20 Jun 2025 12:17:32 +0000 (12:17 +0000)]
Implement GGML_CPU_ALL_VARIANTS for PowerPC (#14286)

* Add PowerPC feature detection and scoring

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for PowerPC

* ggml-cpu: Delay some initializations until function is called

When using GGML_BACKEND_DL=ON, these initializations might use
instructions that are not supported by the current CPU.
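
A hedged sketch of the pattern (the table and its contents here are made up for illustration): moving the initializer from global scope into a function-local static means it only runs when the variant's function is first called, after the loader has already picked a CPU variant whose instructions are safe to execute.

```cpp
#include <vector>

// Illustrative only: a lookup table built lazily on first use instead of at
// library load time, so a dynamically loaded CPU variant does not execute
// unsupported instructions just by being loaded.
static const std::vector<float> & get_lookup_table() {
    static const std::vector<float> table = [] {
        std::vector<float> t(256);
        for (int i = 0; i < 256; ++i) {
            t[i] = (float) i / 255.0f; // stand-in for the real initialization
        }
        return t;
    }();
    return table;
}
```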

---------

Co-authored-by: Diego Devesa <redacted>
Sigbjørn Skjæret [Fri, 20 Jun 2025 12:04:09 +0000 (14:04 +0200)]
llama : improve sep token handling (#14272)

Diego Devesa [Fri, 20 Jun 2025 11:57:36 +0000 (04:57 -0700)]
cuda : synchronize graph capture and cublas handle destruction (#14288)

Works around an issue that may cause CUDA graph capture to fail when a cuBLAS handle is destroyed in a different thread
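
A minimal sketch of one way to serialize the two operations (illustrative only; the actual synchronization in ggml-cuda may differ): guard graph capture and handle destruction with the same mutex so they cannot interleave.

```cpp
#include <mutex>
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Illustrative only: a shared mutex keeps cublasDestroy() on one thread from
// running while another thread is between BeginCapture and EndCapture.
static std::mutex g_capture_mutex;

static void destroy_handle_safely(cublasHandle_t handle) {
    std::lock_guard<std::mutex> lock(g_capture_mutex);
    cublasDestroy(handle);
}

static void capture_stream(cudaStream_t stream, cudaGraph_t * graph) {
    std::lock_guard<std::mutex> lock(g_capture_mutex);
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeRelaxed);
    // ... enqueue the work to be captured on `stream` ...
    cudaStreamEndCapture(stream, graph);
}
```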

Georgi Gerganov [Fri, 20 Jun 2025 08:19:15 +0000 (11:19 +0300)]
ggml : fix repack work size for mul_mat_id (#14292)

ggml-ci

Charles Xu [Fri, 20 Jun 2025 07:51:01 +0000 (09:51 +0200)]
ggml: Update KleidiAI to v1.9.0 (#14277)

Georgi Gerganov [Fri, 20 Jun 2025 07:50:27 +0000 (10:50 +0300)]
model : more uniform output id handling (#14275)

* model : more uniform output id handling

ggml-ci

* cont : revert n_outputs < n_tokens optimization

ggml-ci

* cont : fix out_ids initialization

ggml-ci

Georgi Gerganov [Fri, 20 Jun 2025 07:14:14 +0000 (10:14 +0300)]
ubatch : new splitting logic (#14217) [tag: upstream/0.0.5713]

ggml-ci

Aman Gupta [Fri, 20 Jun 2025 01:50:24 +0000 (09:50 +0800)]
CUDA: add conv_2d_dw (#14265)

* CUDA: add conv_2d_dw

* better naming

* simplify using template

* Review: fix operation ordering in ggml-cuda, use __forceinline__, use more const

Diego Devesa [Thu, 19 Jun 2025 19:24:14 +0000 (12:24 -0700)]
ggml-cpu : remove unnecessary arm feature detection (#14281)

Support for Arm runtime feature detection has now been added to GGML_CPU_ALL_VARIANTS. This removes the old and not very functional code.

Alex Trotta [Thu, 19 Jun 2025 13:56:12 +0000 (09:56 -0400)]
gguf-py : make sentencepiece optional (#14200) [tag: gguf-v0.17.1]

* Make sentencepiece optional

* Bump to 0.18.0

* Bump patch instead of minor

Co-authored-by: compilade <redacted>
---------

Co-authored-by: compilade <redacted>
aa956 [Thu, 19 Jun 2025 13:01:03 +0000 (16:01 +0300)]
server : add server parameters for draft model cache type (#13782)

Co-authored-by: aa956 <redacted>
fanyang [Thu, 19 Jun 2025 12:49:48 +0000 (20:49 +0800)]
build : suppress gcc15 compile warnings (#14261)

* Change _contains_any() substrs to std::string_view and fix the find comparison logic.
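
A small sketch of what such a helper looks like after the change (the signature here is illustrative, not the exact one in the repository): take std::string_view parameters and compare find() against npos explicitly.

```cpp
#include <initializer_list>
#include <string_view>

// Illustrative contains-any helper using string_view, with the find() result
// compared against npos rather than relying on an implicit conversion.
static bool contains_any(std::string_view haystack, std::initializer_list<std::string_view> needles) {
    for (const std::string_view & needle : needles) {
        if (haystack.find(needle) != std::string_view::npos) {
            return true;
        }
    }
    return false;
}
```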

Anton Mitkov [Thu, 19 Jun 2025 10:40:21 +0000 (11:40 +0100)]
sycl: Cleanup codepaths in Get Rows in sycl backend (#14215)

Addresses unused reorder path

bashayer hijji [Thu, 19 Jun 2025 10:24:12 +0000 (13:24 +0300)]
llama-bench : add --no-warmup flag (#14224) (#14270)

Add no_warmup parameter to cmd_params struct and command-line parsing to allow users to skip warmup runs before benchmarking (see the sketch after the list below).

- Add no_warmup boolean field to cmd_params struct

- Add --no-warmup command-line argument parsing

- Add help text documentation for the new flag

- Wrap existing warmup logic in conditional check

- Maintain full backward compatibility (warmup enabled by default)

Addresses #14224
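
A hedged sketch of the plumbing listed above; the argument parsing and benchmark loop shown here are simplified stand-ins for the real llama-bench sources.

```cpp
#include <string>

// Illustrative flag plumbing: a boolean in the parameter struct, a CLI switch
// that sets it, and a conditional around the existing warmup run.
struct cmd_params {
    bool no_warmup = false; // warmup stays enabled by default
};

static bool parse_arg(cmd_params & params, const std::string & arg) {
    if (arg == "--no-warmup") {
        params.no_warmup = true;
        return true;
    }
    return false;
}

static void run_benchmark(const cmd_params & params) {
    if (!params.no_warmup) {
        // warmup_decode();  // existing warmup logic, skipped when the flag is set
    }
    // ... timed runs ...
}
```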

pqnet [Thu, 19 Jun 2025 10:21:40 +0000 (12:21 +0200)]
convert : fix remote option in Windows (#14100)

Aaron Teo [Thu, 19 Jun 2025 09:48:54 +0000 (17:48 +0800)]
llamafile : support s390x SIMD instruction set (#14273)

0cc4m [Thu, 19 Jun 2025 07:15:42 +0000 (09:15 +0200)]
Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249)

Gabe Goodhart [Thu, 19 Jun 2025 05:08:14 +0000 (00:08 -0500)]
memory : Hybrid recurrent cache (#13979)

* feat: Add llama_model_is_hybrid API call

Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
llama-arch with llama_model_is_recurrent delegating to
llm_arch_is_recurrent. The same split is done for hybrid. This is needed
because there are places where the llama_model has not yet been initialized
but we need to check if the model is recurrent (specifically for the
per-layer recurrent check array in hparams).

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add c++ side constants for attention layer indices hparam

Branch: GraniteFour

* feat: Add support for distinguishing recurrent vs non-recurrent layers in hparams

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: Auto-fill hparams.recurrent_layer_arr based on whether the model is recurrent

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* refactor: rename *_is_hybrid -> *_is_hybrid_recurrent

The implementation of the hybrid cache intentionally does not specify the
types of the child caches, so there was a naming mismatch with these
predicate functions that used "hybrid" to imply "hybrid recurrent."

Branch: HybridCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add layer filter to recurrent cache

Branch: HybridCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer sizing everywhere in kv caches

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: First pass at llama_kv_cache_hybrid_recurrent

This follows the pattern in iswa where the two child caches are held
explicitly to support the case where a model requires a single attention
cache and a single recurrent cache where each layer uses exactly one of the
caches.
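
A very loose sketch of that shape, with hypothetical type and member names (the real classes in llama.cpp are named differently): the hybrid cache owns one attention child and one recurrent child and routes each layer to exactly one of them.

```cpp
#include <memory>
#include <vector>

struct attn_cache_t { /* stand-in for the unified attention cache */ };
struct recr_cache_t { /* stand-in for the recurrent cache */ };

// Hypothetical hybrid cache: two explicitly held children plus a per-layer
// flag (derived from hparams) that says which child a layer belongs to.
struct hybrid_cache_sketch {
    std::unique_ptr<attn_cache_t> mem_attn;
    std::unique_ptr<recr_cache_t> mem_recr;
    std::vector<bool>             layer_is_recurrent;

    bool uses_recurrent(int il) const {
        return layer_is_recurrent[(size_t) il];
    }
};
```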

This is a rewrite of the more generic approach in the original hybrid cache
PR: https://github.com/ggml-org/llama.cpp/pull/13276

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Construct hybrid recurrent cache for hybrid recurrent models

This includes a refactor of the create_memory logic to avoid needing to use
the arch enum explicitly unless a model needs explicit cache instantiation
logic beyond the standard logic for recurrent, hybrid, unified, and iswa.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix wrong bool condition for split equal in hybrid cache

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix shift logic to defer to unified cache

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Support hybrid recurrent in llama-graph

NOTE: I intentionally did not add support for s_mask since it will be going
away soon

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix logic for initializing inputs and attn layers for hybrid caches

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Update recurrent cache for changes to remove intermediate kv_cache interface

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix status for init_update sig for recurrent cache state

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Add missing padding to n_ctx for hybrid cache construction

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Update clear signature for data argument after rebase

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove errant virtual destructor leftover from previous impl attempt

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer n_embd_k/v_s calls for mamba (1) layers

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Remove n_embd_k/v_s from unified cache

No longer needed now that unified isn't also supporting recurrent

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140761069

Branch: HybridRecurrentCache

* refactor: Remove layer index from n_embd_k/v_s

Now that it's not used at all in the unified cache, we don't need to use
the layer index to zero it out for attention layers.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Remove n_embd_k/v_gqa from recurrent cache

This is no longer needed now that there are separate implementations

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140825128

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Allow custom layer filters for hybrid recurrent

This should help support architectures like Falcon H1 where there is
overlap between layers that need attention and recurrent caches.

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140748922

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove logits_all after rebase

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove llama_model_is_hybrid_Recurrent public API

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2141728423

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use llama_memory_state_ptr for child states in hybrid memory state

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Overhaul build_recurrent_state / build_inp_s_copy to match attention pattern

https://github.com/ggml-org/llama.cpp/pull/13979/files#r2141701738

This is a big overhaul to bring consistency between how inputs and per-
layer components are created for attention layers and recurrent layers. The
main changes are:

- Rename class llm_graph_input_s_copy -> llm_graph_input_rs
- Add a corresponding llm_graph_input_rs_hybrid_recurrent
- Rename build_inp_s_copy -> build_rs_inp_recurrent
- Add a corresponding build_rs_inp_hybrid_recurrent
- Rename build_recurrent_state -> build_rs to match build_attn w/
llm_graph_input_rs as the first input
- Add a corresponding overload of build_rs w/
llm_graph_input_rs_hybrid_recurrent as the first input
- Add a llm_graph_input_attn_kv_hybrid_recurrent analogous to
llm_graph_input_attn_kv_unified
- Add a build_attn override that takes
llm_graph_input_attn_kv_hybrid_recurrent as the first input

This makes the two paradigms fully consistent. The main drawback is the
code duplication in the build_attn and build_rs implementations where the
only difference between implementations is how they cast the memory state.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix resize vs reserve and skip null tensors in size computation

https://github.com/ggml-org/llama.cpp/pull/13979/files#r2149469788

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: @younesbelkada
* fix: Fix initialization of child states

Since initially writing this PR, the logic in the child state types changed
such that using the "init full" signature and keeping the ubatches on the
parent struct no longer worked.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use a common build_recurrent_state method that is cache-agnostic

This reduces the code duplication between the different build_rs impls and
also retains a similar signature to the previous build_recurrent_state
method while standardizing on the input-dispatched build_rs implementation.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* recurrent : rework graph inputs + add TODOs

ggml-ci

* refactor: Make status and child states const in hybrid and iswa

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Rename llama_kv_cache_[recurrent|hybrid_recurrent] to remove kv cache

This removes the notion of "kv" from the interface names for these memory
types. There are still many references to kv in the implementation of the
recurrent memory which will need further adjustment.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor!: Rename all k/v related values for recurrent/hybrid to r/s

Anywhere that "kv_<state|cell|size|etc>" is used, I've used the more
generic "mem_" prefix. The specifics of "k" (key) translate to "r"
(recurrent state) and "v" (value) translate to "s" (state-space embedding
states).

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: _recurrent -> _recr for brevity

It just _happens_ to have the same number of letters as _attn!

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for ref

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: recurrent_layer() -> is_recurrent()

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for size_s_bytes declaration

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Thu, 19 Jun 2025 05:05:21 +0000 (08:05 +0300)]
metal : add mean kernel (#14267)

* metal : add mean kernel

ggml-ci

* cont : dedup implementation

ggml-ci

Aaron Teo [Wed, 18 Jun 2025 17:10:26 +0000 (01:10 +0800)]
docs: add s390x build documentation (#14264)

* docs: add s390x-specific build docs

Signed-off-by: Aaron Teo <redacted>
* docs: add s390x model conversion steps

Signed-off-by: Aaron Teo <redacted>
* docs: s390x build indent

Signed-off-by: Aaron Teo <redacted>
* docs: update hyperlinks for s390x docs

Signed-off-by: Aaron Teo <redacted>
* docs: update llama.h docs

Signed-off-by: Aaron Teo <redacted>
* docs: s390x add accelerator and perf optimizations

Signed-off-by: Aaron Teo <redacted>
* docs: s390x indent blocks

Signed-off-by: Aaron Teo <redacted>
* docs: revert block indentation

Signed-off-by: Aaron Teo <redacted>
* docs: add support information for s390x

Signed-off-by: Aaron Teo <redacted>
* docs: s390x reword

Signed-off-by: Aaron Teo <redacted>
* docs: remove indentation for accelerator section s390x

Signed-off-by: Aaron Teo <redacted>
* docs: remove redundant words s390x

Signed-off-by: Aaron Teo <redacted>
* docs: reword for s390x

Signed-off-by: Aaron Teo <redacted>
* docs: s390x reword simd

Signed-off-by: Aaron Teo <redacted>
* docs: fix trailing whitespace for s390x

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Aaron Teo [Wed, 18 Jun 2025 17:10:08 +0000 (01:10 +0800)]
ggml-cpu: reduce asm calls for hsum (#14037)

Signed-off-by: Aaron Teo <redacted>
Aaron Teo [Wed, 18 Jun 2025 17:06:49 +0000 (01:06 +0800)]
ggml-cpu: fix uncaught underscore terminators (#14023)

Signed-off-by: Aaron Teo <redacted>
Charles Xu [Wed, 18 Jun 2025 11:40:07 +0000 (13:40 +0200)]
ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (#14258)

Xuan-Son Nguyen [Wed, 18 Jun 2025 08:43:57 +0000 (10:43 +0200)]
mtmd : refactor llava-uhd preprocessing logic (#14247)

* mtmd : refactor llava-uhd preprocessing logic

* fix editorconfig

Xuan-Son Nguyen [Wed, 18 Jun 2025 07:58:43 +0000 (09:58 +0200)]
llama-chat : fix multiple system message for gemma, orion (#14246)

Sigbjørn Skjæret [Wed, 18 Jun 2025 07:52:07 +0000 (09:52 +0200)]
convert : fix null head_dim AutoConfig regression (#14248)

Georgi Gerganov [Wed, 18 Jun 2025 06:58:23 +0000 (09:58 +0300)]
sync : ggml

ggml-ci

Daniel Bevenius [Fri, 13 Jun 2025 13:06:42 +0000 (15:06 +0200)]
ggml : disable warnings for tests when using MSVC (ggml/1273)

* ggml : disable warnings for tests when using MSVC

This commit disables warnings for tests on windows when using MSVC.

The motivation for this is that it brings the build output more in
line with what Linux/macOS systems produce.

There is still one warning generated for the tests which is:
```console
  Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line  warning D9025: overriding '/DNDEBUG' with '/UNDEBUG'
[C:\ggml\build\tests\test-arange.vcxproj]
  test-arange.cpp
  test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```

* ggml : fix typo in tests disable list

Daniel Bevenius [Fri, 13 Jun 2025 07:05:44 +0000 (09:05 +0200)]
ggml : remove unused ggml_context_container (ggml/1272)

This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
Commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc
ggml_contexts on the heap (whisper/2525)").

The motivation for this change is to improve code clarity/readability.

Daniel Bevenius [Thu, 12 Jun 2025 10:27:09 +0000 (12:27 +0200)]
examples : include examples in msvc disable warn (ggml/1270)

This commit adds the examples to the "list" of targets for which MSVC
warnings are disabled.

The motivation for this is that the examples currently generate a number
of warnings that are ignored/disabled for the core ggml project. This
makes for cleaner output when building.

bandoti [Tue, 17 Jun 2025 20:33:25 +0000 (17:33 -0300)]
cmake: remove shader-gen step-targets from ggml-vulkan (#14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

xctan [Tue, 17 Jun 2025 09:58:32 +0000 (17:58 +0800)]
ggml-cpu : remove the weak alias trick (#14221)

R0CKSTAR [Tue, 17 Jun 2025 09:48:08 +0000 (17:48 +0800)]
musa: fix build warning (unused variable) (#14231)

Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Mon, 16 Jun 2025 19:58:42 +0000 (21:58 +0200)]
common : suggest --jinja when autodetection fails (#14222)

Georgi Gerganov [Mon, 16 Jun 2025 19:33:27 +0000 (22:33 +0300)]
server : fix incorrect usage of llama_get_embeddings() (#14225)

* server : fix incorrect usage of llama_get_embeddings()

ggml-ci

* cont : fix the fix

ggml-ci

Diego Devesa [Mon, 16 Jun 2025 15:11:43 +0000 (08:11 -0700)]
llama : add thread safety test (#14035)

* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

when main_gpu < 0 GPU devices are not used

---------

Co-authored-by: Georgi Gerganov <redacted>
bandoti [Mon, 16 Jun 2025 13:32:13 +0000 (10:32 -0300)]
cmake: clean up external project logic for vulkan-shaders-gen (#14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time

Đinh Trọng Huy [Mon, 16 Jun 2025 12:53:41 +0000 (21:53 +0900)]
model : add NeoBERT (#14164)

* convert neobert model to gguf

* add inference graph

* fix flake8 lint

* followed reviewer suggestions

Co-authored-by: Georgi Gerganov <redacted>
* follow reviewers suggestions

Co-authored-by: Georgi Gerganov <redacted>
* override NeoBERT feed-forward length

---------

Co-authored-by: dinhhuy <redacted>
Co-authored-by: Georgi Gerganov <redacted>
uvos [Mon, 16 Jun 2025 11:47:38 +0000 (13:47 +0200)]
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)

Georgi Gerganov [Mon, 16 Jun 2025 11:14:00 +0000 (14:14 +0300)]
llama : rework embeddings logic (#14208)

* llama : rework embeddings logic

ggml-ci

* cont : fix rerank

ggml-ci

* cont : engrish [no ci]

* cont : fix rerank

ggml-ci

* server : support both embeddings and completions with single model

ggml-ci

* cont : avoid embeddings_org

ggml-ci

Charles Xu [Mon, 16 Jun 2025 09:47:57 +0000 (11:47 +0200)]
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (#14206)

Bartowski [Mon, 16 Jun 2025 08:16:06 +0000 (09:16 +0100)]
convert : remove arcee change in convert_hf_to_gguf_update.py (#14207)

Đinh Trọng Huy [Mon, 16 Jun 2025 07:20:59 +0000 (16:20 +0900)]
gguf-py : allow key override when adding value to GGUFWriter (#14194)

Co-authored-by: dinhhuy <redacted>
Jeff Bolz [Mon, 16 Jun 2025 06:21:08 +0000 (00:21 -0600)]
vulkan: mutex around vkQueueSubmit (#14127)

This fixes the remaining crash in test-thread-safety on my system.
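
A minimal sketch of the fix as described in the title (the wrapper name is illustrative): the Vulkan spec requires host synchronization of the VkQueue passed to vkQueueSubmit, so submissions from multiple threads are funneled through one mutex.

```cpp
#include <mutex>
#include <vulkan/vulkan.h>

// Illustrative wrapper: serialize all submissions to the shared queue.
static std::mutex g_queue_submit_mutex;

static VkResult locked_queue_submit(VkQueue queue, uint32_t submit_count,
                                    const VkSubmitInfo * submits, VkFence fence) {
    std::lock_guard<std::mutex> lock(g_queue_submit_mutex);
    return vkQueueSubmit(queue, submit_count, submits, fence);
}
```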

xctan [Mon, 16 Jun 2025 05:54:15 +0000 (13:54 +0800)]
ggml-cpu : rework weak alias on apple targets (#14146)

* ggml-cpu : rework weak alias on apple targets

* fix powerpc detection

* fix ppc detection

* fix powerpc detection on darwin

Bartowski [Sun, 15 Jun 2025 23:04:06 +0000 (00:04 +0100)]
model : Add support for Arcee AI's upcoming AFM model (#14185)

* Add Arcee AFM support

* Add draft update code

* Fix linter and update URL, may still not be final

* Update src/llama-model.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Remove accidental blank line

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
Eric Curtin [Sun, 15 Jun 2025 21:36:22 +0000 (23:36 +0200)]
server : When listening on a unix domain socket don't print http:// and port (#14180)

Instead show something like this:

main: server is listening on file.sock - starting the main loop
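
A hedged sketch of the branching (the unix-socket detection shown here is an assumption; the real server code determines this differently): only prepend the http:// scheme and port when the listener is a TCP socket.

```cpp
#include <cstdio>
#include <string>

// Illustrative only: choose the startup log line based on the listener type.
static void print_listening(const std::string & host, int port, bool is_unix_socket) {
    if (is_unix_socket) {
        printf("main: server is listening on %s - starting the main loop\n", host.c_str());
    } else {
        printf("main: server is listening on http://%s:%d - starting the main loop\n", host.c_str(), port);
    }
}
```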

Signed-off-by: Eric Curtin <redacted>
Ed Addario [Sun, 15 Jun 2025 16:53:45 +0000 (17:53 +0100)]
quantize : change int to unsigned int for KV overrides (#14197)

uvos [Sun, 15 Jun 2025 15:30:13 +0000 (17:30 +0200)]
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)

uvos [Sun, 15 Jun 2025 13:45:27 +0000 (15:45 +0200)]
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183)

Georgi Gerganov [Sun, 15 Jun 2025 07:52:11 +0000 (10:52 +0300)]
kv-cache : fix use-after-move of defrag info (#14189)

ggml-ci

Mikko Juola [Sun, 15 Jun 2025 07:52:06 +0000 (00:52 -0700)]
model : add dots.llm1 architecture support (#14044) (#14118)

Adds:

* Dots1Model to convert_hf_to_gguf.py

* Computation graph code to llama-model.cpp

* Chat template to llama-chat.cpp to detect this model's template.

---

The model architecture is called "dots.llm1" (I decided to shorten it to
dots1 or DOTS1 in the code generally).

The only models that exist as of writing of this commit that follow this
architecture are "dots.llm1.inst" and "dots.llm1.base" from here:

* https://huggingface.co/rednote-hilab/dots.llm1.inst

* https://huggingface.co/rednote-hilab/dots.llm1.base

The model architecture is a combination of Qwen and Deepseek parts, as
seen here:

https://github.com/huggingface/transformers/blob/ffe12627b4e84489d2ab91dd0ec00614855edc79/src/transformers/models/dots1/modular_dots1.py

Georgi Gerganov [Sun, 15 Jun 2025 07:08:58 +0000 (10:08 +0300)]
cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188)

ggml-ci

Georgi Gerganov [Sun, 15 Jun 2025 06:18:37 +0000 (09:18 +0300)]
batch : auto-gen positions + verify multi-sequence input (#14177)

* batch : verify multi-sequence input batches

ggml-ci

* cont : auto-gen positions + verify multi-seq input

ggml-ci

* cont : first print debug info, then perform validation

ggml-ci

* cont : fix position auto-gen + add comments

ggml-ci

Pepijn de Vos [Sun, 15 Jun 2025 06:06:37 +0000 (08:06 +0200)]
docs : remove WIP since PR has been merged (#13912)

Piotr [Sat, 14 Jun 2025 16:25:15 +0000 (18:25 +0200)]
llama-chat : Do not throw when tool parsing fails (#14012)

Currently, when a model generates output that looks like a tool call but
is invalid, an exception is thrown and not handled, causing the CLI or
llama-server to bail. Instead, handle the chat parser exception and
simply return the generated text in such cases.
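
A minimal sketch of the fallback, using hypothetical parser and message types (the real code lives in the common chat-parsing helpers): catch the parser exception and return the raw text as plain assistant content.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical message type and parser, for illustration only.
struct chat_msg {
    std::string role;
    std::string content;
};

// Stand-in for the real tool-call parser: here it always throws, simulating
// output that looks like a tool call but does not parse.
static chat_msg parse_tool_calls_or_throw(const std::string & text) {
    (void) text;
    throw std::runtime_error("malformed tool call");
}

static chat_msg parse_assistant_output(const std::string & text) {
    try {
        return parse_tool_calls_or_throw(text);
    } catch (const std::exception &) {
        // Parsing failed: return the generated text verbatim instead of
        // letting the exception take down the CLI or llama-server.
        return chat_msg{"assistant", text};
    }
}
```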

Signed-off-by: Piotr Stankiewicz <redacted>
Aman Gupta [Sat, 14 Jun 2025 08:34:20 +0000 (16:34 +0800)]
compare-llama-bench: add option to plot (#14169)

* compare llama-bench: add option to plot

* Address review comments: convert case + add type hints

* Add matplotlib to requirements

* fix tests

* Improve comment and fix assert condition for test

* Add back default test_name, add --plot_log_scale

* use log_scale regardless of x_values

Georgi Gerganov [Fri, 13 Jun 2025 17:03:05 +0000 (20:03 +0300)]
vocab : fix build (#14175)

ggml-ci

Svetlozar Georgiev [Fri, 13 Jun 2025 16:32:56 +0000 (17:32 +0100)]
sycl: fix docker image (#14144)

Guy Goldenberg [Fri, 13 Jun 2025 16:20:25 +0000 (19:20 +0300)]
Merge commit from fork

* vocab : prevent integer overflow during load

* Add static cast and GGML_ABORT

---------

Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 13 Jun 2025 15:35:00 +0000 (18:35 +0300)]
batch : add LLAMA_BATCH_DEBUG environment variable (#14172)

* batch : add LLAMA_BATCH_DEBUG environment variable

ggml-ci

* cont : improve seq_id display

ddpasa [Fri, 13 Jun 2025 13:17:53 +0000 (15:17 +0200)]
docs : Update multimodal.md (#14122)

* Update multimodal.md

* Update multimodal.md

Georgi Gerganov [Fri, 13 Jun 2025 10:47:55 +0000 (13:47 +0300)]
batch : rework llama_batch_allocr (#14153)

* batch : rework llama_batch_allocr

ggml-ci

* cont : move validation inside class

ggml-ci

* cont : move output counting to class

ggml-ci

* cont : minor

ggml-ci

* batch : add TODOs

ggml-ci

Georgi Gerganov [Fri, 13 Jun 2025 08:55:44 +0000 (11:55 +0300)]
readme : remove survey link (#14168)

Christian Kastner [Fri, 13 Jun 2025 08:38:52 +0000 (08:38 +0000)]
cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167)

* cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT

* cmake: Pass on LLAMA_BUILD_* to GGML_BUILD_*

Đinh Trọng Huy [Fri, 13 Jun 2025 08:34:08 +0000 (17:34 +0900)]
pooling : make cls_b and cls_out_b optional (#14165)

Co-authored-by: dinhhuy <redacted>
Georgi Gerganov [Fri, 13 Jun 2025 08:18:25 +0000 (11:18 +0300)]
server : fix SWA condition for full context reprocess (#14163)

ggml-ci

Anton Mitkov [Fri, 13 Jun 2025 07:51:39 +0000 (08:51 +0100)]
sycl: Adding additional cpy dbg print output (#14034)

Ewan Crawford [Fri, 13 Jun 2025 07:45:37 +0000 (08:45 +0100)]
SYCL: Bump oneMath commit (#14152)

Update oneMath commit to merged PR https://github.com/uxlfoundation/oneMath/pull/669
which adds SYCL-Graph support for recording CUDA BLAS commands.

With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.

```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2

UR CUDA ERROR:
        Value:           700
        Name:            CUDA_ERROR_ILLEGAL_ADDRESS
        Description:     an illegal memory access was encountered
        Function:        operator()
        Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154

Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
  in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```

Christian Kastner [Fri, 13 Jun 2025 06:51:34 +0000 (06:51 +0000)]
cmake : Improve build-info.cpp generation (#14156)

* cmake: Simplify build-info.cpp generation

The rebuild of build-info.cpp still gets triggered when .git/index gets
changed.

* cmake: generate build-info.cpp in build dir