Gabe Goodhart [Thu, 19 Jun 2025 05:08:14 +0000 (00:08 -0500)]
memory : Hybrid recurrent cache (#13979)
* feat: Add llama_model_is_hybrid API call
Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
llama-arch with llama_model_is_recurrent delegating to
llm_arch_is_recurrent. The same split is done for hybrid. This is needed
because there are places where the llama_model has not yet been initialized
but we need to check if the model is recurrent (specifically for the
per-layer recurrent check array in hparams).
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
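Below is a minimal sketch of the split described above, assuming simplified signatures (the real predicates live in llama-arch and llama-model): the arch-level check can run before a llama_model exists, and the model-level API just delegates to it.
```cpp
// Sketch only: arch-level predicate usable before the model is initialized,
// with the public model-level call delegating to it.
enum llm_arch { LLM_ARCH_LLAMA, LLM_ARCH_MAMBA /* ... */ };

static bool llm_arch_is_recurrent(llm_arch arch) {
    switch (arch) {
        case LLM_ARCH_MAMBA:
            return true;
        default:
            return false;
    }
}

struct llama_model { llm_arch arch; };

bool llama_model_is_recurrent(const llama_model * model) {
    return llm_arch_is_recurrent(model->arch);
}
```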
* feat: Add c++ side constants for attention layer indices hparam
Branch: GraniteFour
* feat: Add support for distinguishing recurrent vs non-recurrent layers in hparams
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
* feat: Auto-fill hparams.recurrent_layer_arr based on whether the model is recurrent
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
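A hedged sketch of the auto-fill described above; the field name recurrent_layer_arr comes from the commit message, while the struct shape, the helper name, and the fixed layer bound are assumptions for illustration.
```cpp
// Sketch only: field name from the commit message, everything else assumed.
#include <array>

struct hparams_sketch {
    static constexpr int n_layer_max = 512;
    std::array<bool, n_layer_max> recurrent_layer_arr = {};

    bool is_recurrent(int il) const { return recurrent_layer_arr[il]; }
};

// For a purely recurrent model every layer is recurrent; a hybrid model
// would instead set this per layer from the attention-layer-indices hparam.
void fill_recurrent_layer_arr(hparams_sketch & hp, int n_layer, bool model_is_recurrent) {
    for (int il = 0; il < n_layer; ++il) {
        hp.recurrent_layer_arr[il] = model_is_recurrent;
    }
}
```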
* refactor: rename *_is_hybrid -> *_is_hybrid_recurrent
The implementation of the hybrid cache intentionally does not specify the
types of the child caches, so there was a naming mismatch with these
predicate functions that used "hybrid" to imply "hybrid recurrent."
Branch: HybridCache
Signed-off-by: Gabe Goodhart <redacted>
* feat: Add layer filter to recurrent cache
Branch: HybridCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer sizing everywhere in kv caches
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
* feat: First pass at llama_kv_cache_hybrid_recurrent
This follows the pattern in iswa where the two child caches are held
explicitly to support the case where a model requires a single attention
cache and a single recurrent cache, with each layer using exactly one of the
caches.
This is a rewrite of the more generic approach in the original hybrid cache
PR: https://github.com/ggml-org/llama.cpp/pull/13276
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
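A rough sketch of the shape described above; the class and member names here are assumptions rather than the actual llama.cpp types, but they illustrate the two explicitly held child caches and the per-layer routing.
```cpp
// Sketch only: child cache types and member names are assumptions.
#include <functional>
#include <memory>

struct kv_cache_unified_sketch { /* attention KV cache for the attention layers */ };
struct memory_recurrent_sketch { /* recurrent state cache for the recurrent layers */ };

struct memory_hybrid_sketch {
    using layer_filter = std::function<bool(int)>; // true -> recurrent layer

    explicit memory_hybrid_sketch(layer_filter is_recurrent)
        : is_recurrent_layer(std::move(is_recurrent)),
          attn(std::make_unique<kv_cache_unified_sketch>()),
          recr(std::make_unique<memory_recurrent_sketch>()) {}

    // each layer uses exactly one of the two children
    bool layer_uses_recurrent(int il) const { return is_recurrent_layer(il); }

    layer_filter                             is_recurrent_layer;
    std::unique_ptr<kv_cache_unified_sketch> attn;
    std::unique_ptr<memory_recurrent_sketch> recr;
};
```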
* feat: Construct hybrid recurrent cache for hybrid recurrent models
This includes a refactor of the create_memory logic to avoid needing to use
the arch enum explicitly unless a model needs explicit cache instantiation
logic beyond the standard logic for recurrent, hybrid, unified, and iswa.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
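A hypothetical sketch of the create_memory dispatch idea described above; the enum, struct, and function names are illustrative only, not the real API.
```cpp
// Sketch only: the real create_memory selects among the standard cache types
// based on model properties; names here are illustrative.
enum class memory_kind { unified, iswa, recurrent, hybrid_recurrent };

struct model_props {
    bool is_recurrent;
    bool is_hybrid_recurrent;
    bool use_swa; // sliding-window attention -> iswa cache
};

memory_kind choose_memory_kind(const model_props & m) {
    if (m.is_hybrid_recurrent) return memory_kind::hybrid_recurrent;
    if (m.is_recurrent)        return memory_kind::recurrent;
    if (m.use_swa)             return memory_kind::iswa;
    return memory_kind::unified;
}
```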
* fix: Fix wrong bool condition for split equal in hybrid cache
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix shift logic to defer to unified cache
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* feat: Support hybrid recurrent in llama-graph
NOTE: I intentionally did not add support for s_mask since it will be going
away soon
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix logic for initializing inputs and attn layers for hybrid caches
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
* fix: Update recurrent cache for changes to remove intermediate kv_cache interface
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix status for init_update sig for recurrent cache state
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
* fix: Add missing padding to n_ctx for hybrid cache construction
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <redacted>
* fix: Update clear signature for data argument after rebase
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove errant virtual destructor leftover from previous impl attempt
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer n_embd_k/v_s calls for mamba (1) layers
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Remove n_embd_k/v_s from unified cache
No longer needed now that unified isn't also supporting recurrent
This is a big overhaul to bring consistency between how inputs and per-
layer components are created for attention layers and recurrent layers. The
main changes are:
- Rename class llm_graph_input_s_copy -> llm_graph_input_rs
- Add a corresponding llm_graph_input_rs_hybrid_recurrent
- Rename build_inp_s_copy -> build_rs_inp_recurrent
- Add a corresponding build_rs_inp_hybrid_recurrent
- Rename build_recurrent_state -> build_rs to match build_attn w/
llm_graph_input_rs as the first input
- Add a corresponding overload of build_rs w/
llm_graph_input_rs_hybrid_recurrent as the first input
- Add a llm_graph_input_attn_kv_hybrid_recurrent analogous to
llm_graph_input_attn_kv_unified
- Add a build_attn override that takes
llm_graph_input_attn_kv_hybrid_recurrent as the first input
This makes the two paradigms fully consistent. The main drawback is the
code duplication in the build_attn and build_rs implementations where the
only difference between implementations is how they cast the memory state.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
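An illustrative sketch of the duplication noted above, with heavily simplified, assumed types: the two build_rs overloads share all their logic and differ only in how they obtain the recurrent state from the memory object.
```cpp
// Sketch only: simplified stand-ins for the memory state types.
struct recurrent_state_sketch { /* cells, head, n, ... */ };

struct state_recurrent_sketch {
    const recurrent_state_sketch * get_state() const { return &state; }
    recurrent_state_sketch state;
};

struct state_hybrid_recurrent_sketch {
    // the hybrid state exposes its recurrent child
    const state_recurrent_sketch * get_state_recurrent() const { return &recr; }
    state_recurrent_sketch recr;
};

static void build_rs_common(const recurrent_state_sketch * /*rs*/) {
    // shared graph-building logic would live here
}

// overload 1: plain recurrent memory state
void build_rs(const state_recurrent_sketch * mstate) {
    build_rs_common(mstate->get_state());
}

// overload 2: hybrid-recurrent memory state; identical except for the accessor
void build_rs(const state_hybrid_recurrent_sketch * mstate) {
    build_rs_common(mstate->get_state_recurrent()->get_state());
}
```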
* fix: Fix resize vs reserve and skip null tensors in size computation
Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: @younesbelkada
* fix: Fix initialization of child states
Since initially writing this PR, the logic in the child state types changed
such that using the "init full" signature and keeping the ubatches on the
parent struct no longer worked.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use a common build_recurrent_state method that is cache-agnostic
This reduces the code duplication between the different build_rs impls and
also retains a similar signature to the previous build_recurrent_state
method while standardizing on the input-dispatched build_rs implementation.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* recurrent : rework graph inputs + add TODOs
ggml-ci
* refactor: Make status and child states const in hybrid and iswa
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Rename llama_kv_cache_[recurrent|hybrid_recurrent] to remove kv cache
This removes the notion of "kv" from the interface names for these memory
types. There are still many references to kv in the implementation of the
recurrent memory which will need further adjustment.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor!: Rename all k/v related values for recurrent/hybrid to r/s
Anywhere that "kv_<state|cell|size|etc>" is used, I've used the more
generic "mem_" prefix. "k" (key) translates to "r" (recurrent state) and
"v" (value) translates to "s" (state-space embedding states).
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor: _recurrent -> _recr for brevity
It just _happens_ to have the same number of letters as _attn!
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for ref
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* refactor: recurrent_layer() -> is_recurrent()
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for size_s_bytes declaration
Co-authored-by: Georgi Gerganov <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Daniel Bevenius [Fri, 13 Jun 2025 13:06:42 +0000 (15:06 +0200)]
ggml : disable warnings for tests when using MSVC (ggml/1273)
* ggml : disable warnings for tests when using MSVC
This commit disables warnings for tests on Windows when using MSVC.
The motivation for this is that it brings the build output more in line
with what Linux/MacOS systems produce.
There is still one warning generated for the tests, which is:
```console
Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line warning D9025: overriding '/DNDEBUG' with '/UNDEBUG'
[C:\ggml\build\tests\test-arange.vcxproj]
test-arange.cpp
test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```
This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
Commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc
ggml_contexts on the heap (whisper/2525)").
The motivation for this change is to improve code clarity/readability.
Daniel Bevenius [Thu, 12 Jun 2025 10:27:09 +0000 (12:27 +0200)]
examples : include examples in msvc disable warn (ggml/1270)
This commit adds the examples to the list of targets for which MSVC
warnings are disabled.
The motivation for this is that the examples currently generate a number
of warnings that are already ignored/disabled for the core ggml project.
This makes for a cleaner output when building.
Piotr [Sat, 14 Jun 2025 16:25:15 +0000 (18:25 +0200)]
llama-chat : Do not throw when tool parsing fails (#14012)
Currently, when a model generates output that looks like a tool call but is
invalid, an exception is thrown and not handled, causing the CLI or
llama-server to bail. Instead, handle the chat parser exception and
simply return the generated text in such cases.
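A hypothetical sketch of the fallback described above; the real handling lives in llama.cpp's chat parsing code, and the parser here is a stand-in used only to show the try/catch shape.
```cpp
// Sketch only: parse_tool_call here is a stand-in for the real chat parser.
#include <cstdio>
#include <stdexcept>
#include <string>

struct parsed_output { std::string text; bool is_tool_call; };

// stand-in parser: treat anything that does not start with '{' as malformed
static parsed_output parse_tool_call(const std::string & generated) {
    if (generated.empty() || generated.front() != '{') {
        throw std::runtime_error("malformed tool call");
    }
    return { generated, true };
}

static parsed_output parse_or_passthrough(const std::string & generated) {
    try {
        return parse_tool_call(generated);
    } catch (const std::exception &) {
        // invalid tool-call syntax: return the generated text as-is instead
        // of letting the exception escape and take down the CLI / server
        return { generated, false };
    }
}

int main() {
    const parsed_output out = parse_or_passthrough("<tool_call> oops, malformed");
    std::printf("is_tool_call=%d text=%s\n", out.is_tool_call, out.text.c_str());
}
```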
Christian Kastner [Wed, 11 Jun 2025 19:07:44 +0000 (19:07 +0000)]
Implement GGML_CPU_ALL_VARIANTS for ARM (#14080)
* ggml-cpu: Factor out feature detection build from x86
* ggml-cpu: Add ARM feature detection and scoring
This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on the GGML_USE_<FEAT> flags, which need to be
set in cmake, instead of the GGML_<FEAT> flags that users would set for x86.
This is because on ARM, users specify features with GGML_CPU_ARM_ARCH,
rather than with individual flags.
* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM
Like x86, but to pass around arch flags within cmake we use
GGML_INTERNAL_<FEAT>, as we don't have GGML_<FEAT>.
Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.
* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now
The other platforms will need their own specific variants.
This also fixes a bug where the variant-building branch was always being
executed as the else-branch of GGML_NATIVE=OFF. The branch is moved to an
elseif-branch, which restores the previous behavior.
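A hedged sketch of the variant-scoring idea referenced above (analogous to cpu-feats-x86.cpp); the feature set, struct names, and scoring weights are assumptions for illustration.
```cpp
// Sketch only: feature set and weights are illustrative, not the real list.
#include <cstdio>

struct arm_variant_sketch {
    const char * name;
    bool needs_dotprod;
    bool needs_i8mm;
    bool needs_sve;
};

struct arm_cpu_sketch {
    bool has_dotprod;
    bool has_i8mm;
    bool has_sve;
};

// 0 means the variant cannot run on this CPU; otherwise, more required-and-
// present features give a higher score, and the best-scoring backend is used.
static int score_variant(const arm_variant_sketch & v, const arm_cpu_sketch & cpu) {
    int score = 1;
    if (v.needs_dotprod) { if (!cpu.has_dotprod) return 0; score += 1; }
    if (v.needs_i8mm)    { if (!cpu.has_i8mm)    return 0; score += 1; }
    if (v.needs_sve)     { if (!cpu.has_sve)     return 0; score += 1; }
    return score;
}

int main() {
    const arm_cpu_sketch cpu = { /*dotprod*/ true, /*i8mm*/ true, /*sve*/ false };
    const arm_variant_sketch variants[] = {
        { "armv8.2_1", true, false, false },
        { "armv8.2_2", true, true,  false },
        { "armv8.2_3", true, true,  true  },
    };
    for (const arm_variant_sketch & v : variants) {
        std::printf("%s -> score %d\n", v.name, score_variant(v, cpu));
    }
}
```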
Jeff Bolz [Wed, 11 Jun 2025 14:48:52 +0000 (09:48 -0500)]
vulkan: Better thread-safety for command pools/buffers (#14116)
This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. The context has a
single vector of pools, a single vector of sets, one counter to track requests,
and one counter to track use.
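A rough sketch of the per-pool grouping described above; member names are assumptions rather than the actual ggml-vulkan types, but each pool object owns its own VkCommandPool and the buffers allocated from it, so separate contexts never share a pool.
```cpp
// Sketch only: member names are assumptions, not the actual ggml-vulkan types.
#include <cstddef>
#include <vector>
#include <vulkan/vulkan.h>

struct vk_command_pool_sketch {
    VkCommandPool                pool   = VK_NULL_HANDLE;
    std::vector<VkCommandBuffer> buffers;    // buffers allocated from this pool
    std::size_t                  in_use = 0; // how many buffers are handed out

    void init(VkDevice device, uint32_t queue_family_index) {
        VkCommandPoolCreateInfo info = {};
        info.sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
        info.queueFamilyIndex = queue_family_index;
        vkCreateCommandPool(device, &info, nullptr, &pool);
    }

    void destroy(VkDevice device) {
        if (pool != VK_NULL_HANDLE) {
            vkDestroyCommandPool(device, pool, nullptr);
            pool = VK_NULL_HANDLE;
        }
        buffers.clear();
        in_use = 0;
    }
};
```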
The problem was apparently caused by how the tail cells were swapped.
* graph : simplify logic for recurrent state copies
* kv-cache : use cell without src refs for rs_z in recurrent cache
* llama-graph : fix recurrent state copy
The `state_copy` shuffle assumes everything is moved at once,
which is not true when `states_extra` is copied back to the cache
before copying the range of states between `head` and `head + n_seqs`.
This is only a problem if any of the cells in [`head`, `head + n_seqs`)
have an `src` in [`head + n_seqs`, `head + n_kv`),
which does happen when `n_ubatch > 1` in the `llama-parallel` example.
Changing the order of the operations avoids the potential overwrite
before use, although when copies are avoided (like with Mamba2),
this will require further changes.
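A minimal, self-contained illustration (hypothetical values, not llama.cpp code) of the overwrite-before-use hazard described above: gathering the [`head`, `head + n_seqs`) range before writing the extra states back avoids reading an already-overwritten cell.
```cpp
// Sketch only: tiny integer "cells" stand in for the recurrent states.
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> cache = {10, 11, 12, 13}; // hypothetical cells
    std::vector<int> src   = {2, 3};           // cells 0..1 source their state from cells 2..3

    // wrong order: write the "extra" states back into cells 2..3 first,
    // which overwrites the sources before they are read
    std::vector<int> bad = cache;
    bad[2] = 99; bad[3] = 99;
    const int lost = bad[src[0]]; // reads 99; the original state 12 is gone

    // right order: gather the [head, head + n_seqs) range first, then write back
    std::vector<int> good = cache;
    const int kept = good[src[0]]; // reads 12 before any overwrite
    good[2] = 99; good[3] = 99;

    std::printf("wrong order reads %d, right order reads %d\n", lost, kept);
}
```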
* llama-graph : rename n_state to state_size in build_recurrent_state
This naming should reduce confusion between the state size
and the number of states.