git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
9 days ago  build : suppress gcc15 compile warnings (#14261)
fanyang [Thu, 19 Jun 2025 12:49:48 +0000 (20:49 +0800)]
build : suppress gcc15 compile warnings (#14261)

* Change _contains_any() substrs to std::string_view and fix the find comparison logic.
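
A minimal sketch of the described fix, assuming a helper along these lines (the exact signature in the source may differ): `find()` returns an unsigned size type, so a `>= 0` comparison is always true and draws a gcc15 warning; the correct check compares against `npos`.

```cpp
#include <initializer_list>
#include <string_view>

// Sketch (assumed signature): does `str` contain any of `substrs`?
static bool _contains_any(std::string_view str, std::initializer_list<std::string_view> substrs) {
    for (std::string_view sub : substrs) {
        // find() returns npos when absent; comparing with >= 0 would always be true
        if (str.find(sub) != std::string_view::npos) {
            return true;
        }
    }
    return false;
}
```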

9 days ago  sycl: Cleanup codepaths in Get Rows in sycl backend (#14215)
Anton Mitkov [Thu, 19 Jun 2025 10:40:21 +0000 (11:40 +0100)]
sycl: Cleanup codepaths in Get Rows in sycl backend (#14215)

Addresses the unused reorder path.

9 days ago  llama-bench : add --no-warmup flag (#14224) (#14270)
bashayer hijji [Thu, 19 Jun 2025 10:24:12 +0000 (13:24 +0300)]
llama-bench : add --no-warmup flag (#14224) (#14270)

Add a no_warmup parameter to the cmd_params struct and command-line parsing to allow users to skip warmup runs before benchmarking (see the sketch after this list).

- Add no_warmup boolean field to cmd_params struct

- Add --no-warmup command-line argument parsing

- Add help text documentation for the new flag

- Wrap existing warmup logic in conditional check

- Maintain full backward compatibility (warmup enabled by default)

Addresses #14224
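
A minimal sketch of the wiring described above, assuming names from the commit summary (cmd_params, no_warmup) rather than the exact source:

```cpp
#include <cstring>

// Assumed shape of the benchmark parameters struct.
struct cmd_params {
    bool no_warmup = false; // warmup stays enabled by default
};

static cmd_params parse_args(int argc, char ** argv) {
    cmd_params params;
    for (int i = 1; i < argc; i++) {
        if (std::strcmp(argv[i], "--no-warmup") == 0) {
            params.no_warmup = true; // user opted out of warmup runs
        }
    }
    return params;
}

static void run_benchmark(const cmd_params & params) {
    if (!params.no_warmup) {
        // existing warmup logic, now wrapped in a conditional
    }
    // ... timed benchmark runs ...
}
```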

9 days ago  convert : fix remote option in Windows (#14100)
pqnet [Thu, 19 Jun 2025 10:21:40 +0000 (12:21 +0200)]
convert : fix remote option in Windows (#14100)

9 days ago  llamafile : support s390x SIMD instruction set (#14273)
Aaron Teo [Thu, 19 Jun 2025 09:48:54 +0000 (17:48 +0800)]
llamafile : support s390x SIMD instruction set (#14273)

9 days ago  Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249)
0cc4m [Thu, 19 Jun 2025 07:15:42 +0000 (09:15 +0200)]
Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249)

9 days ago  memory : Hybrid recurrent cache (#13979)
Gabe Goodhart [Thu, 19 Jun 2025 05:08:14 +0000 (00:08 -0500)]
memory : Hybrid recurrent cache (#13979)

* feat: Add llama_model_is_hybrid API call

Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
llama-arch with llama_model_is_recurrent delegating to
llm_arch_is_recurrent. The same split is done for hybrid. This is needed
because there are places where the llama_model has not yet been initialized
but we need to check if the model is recurrent (specifically for the
per-layer recurrent check array in hparams).
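
A minimal sketch of the split, with the enum values here standing in for the real list; the point is that the arch-level predicate can run before a llama_model exists:

```cpp
enum llm_arch { LLM_ARCH_LLAMA, LLM_ARCH_MAMBA /* ... */ };

// arch-level predicate: usable before the model is initialized
static bool llm_arch_is_recurrent(llm_arch arch) {
    switch (arch) {
        case LLM_ARCH_MAMBA: return true;
        default:             return false;
    }
}

struct llama_model { llm_arch arch; };

// the model-level API simply delegates to the arch-level check
static bool llama_model_is_recurrent(const llama_model * model) {
    return llm_arch_is_recurrent(model->arch);
}
```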

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add c++ side constants for attention layer indices hparam

Branch: GraniteFour

* feat: Add support for distinguishing recurrent vs non-recurrent layers in hparams

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: Auto-fill hparams.recurrent_layer_arr based on whether the model is recurrent

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* refactor: rename *_is_hybrid -> *_is_hybrid_recurrent

The implementation of the hybrid cache intentionally does not specify the
types of the child caches, so there was a naming mismatch with these
predicate functions that used "hybrid" to imply "hybrid recurrent."

Branch: HybridCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add layer filter to recurrent cache

Branch: HybridCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer sizing everywhere in kv caches

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* feat: First pass at llama_kv_cache_hybrid_recurrent

This follows the pattern in iswa where the two child caches are held
explicitly to support the case where a model requires a single attention
cache and a single recurrent cache where each layer uses exactly one of the
caches.

This is a rewrite of the more generic approach in the original hybrid cache
PR: https://github.com/ggml-org/llama.cpp/pull/13276
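
A hypothetical sketch of that shape (member and type names assumed, loosely following the renames later in this commit): the hybrid cache owns one attention cache and one recurrent cache, and a per-layer predicate routes each layer to exactly one of them.

```cpp
#include <cstdint>
#include <functional>
#include <memory>

struct llama_kv_cache_unified { /* attention KV cache (stand-in) */ };
struct llama_memory_recurrent { /* recurrent state cache (stand-in) */ };

struct llama_memory_hybrid {
    std::unique_ptr<llama_kv_cache_unified> mem_attn; // serves attention layers
    std::unique_ptr<llama_memory_recurrent> mem_recr; // serves recurrent layers

    // each layer uses exactly one of the two child caches
    std::function<bool(int32_t il)> is_recurrent_layer;
};
```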

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Construct hybrid recurrent cache for hybrid recurrent models

This includes a refactor of the create_memory logic to avoid needing to use
the arch enum explicitly unless a model needs explicit cache instantiation
logic beyond the standard logic for recurrent, hybrid, unified, and iswa.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix wrong bool condition for split equal in hybrid cache

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix shift logic to defer to unified cache

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Support hybrid recurrent in llama-graph

NOTE: I intentionally did not add support for s_mask since it will be going
away soon

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix logic for initializing inputs and attn layers for hybrid caches

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Update recurrent cache for changes to remove intermediate kv_cache interface

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix status for init_update sig for recurrent cache state

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Add missing padding to n_ctx for hybrid cache construction

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <redacted>
* fix: Update clear signature for data argument after rebase

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove errant virtual destructor leftover from previous impl attempt

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use per-layer n_embd_k/v_s calls for mamba (1) layers

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Remove n_embd_k/v_s from unified cache

No longer needed now that unified isn't also supporting recurrent

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140761069

Branch: HybridRecurrentCache

* refactor: Remove layer index from n_embd_k/v_s

Now that it's not used at all in the unified cache, we don't need to use
the layer index to zero it out for attention layers.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Remove n_embd_k/v_gqa from recurrent cache

This is no longer needed now that there are separate implementations

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140825128

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Allow custom layer filters for hybrid recurrent

This should help support architectures like Falcon H1 where there is
overlap between layers that need attention and recurrent caches.

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140748922

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove logits_all after rebase

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove llama_model_is_hybrid_Recurrent public API

https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2141728423

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use llama_memory_state_ptr for child states in hybrid memory state

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* feat: Overhaul build_recurrent_state / build_inp_s_copy to match attention pattern

https://github.com/ggml-org/llama.cpp/pull/13979/files#r2141701738

This is a big overhaul to bring consistency between how inputs and per-
layer components are created for attention layers and recurrent layers. The
main changes are:

- Rename class llm_graph_input_s_copy -> llm_graph_input_rs
- Add a corresponding llm_graph_input_rs_hybrid_recurrent
- Rename build_inp_s_copy -> build_rs_inp_recurrent
- Add a corresponding build_rs_inp_hybrid_recurrent
- Rename build_recurrent_state -> build_rs to match build_attn w/
llm_graph_input_rs as the first input
- Add a corresponding overload of build_rs w/
llm_graph_input_rs_hybrid_recurrent as the first input
- Add a llm_graph_input_attn_kv_hybrid_recurrent analogous to
llm_graph_input_attn_kv_unified
- Add a build_attn override that takes
llm_graph_input_attn_kv_hybrid_recurrent as the first input

This makes the two paradigms fully consistent. The main drawback is the
code duplication in the build_attn and build_rs implementations where the
only difference between implementations is how they cast the memory state.
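
A hypothetical sketch of that duplication (types reduced to stand-ins): both overloads take the graph-input object first, mirroring build_attn, and differ only in how they recover the recurrent state before delegating to a shared body.

```cpp
struct llm_graph_input_rs {};                  // plain recurrent input (stand-in)
struct llm_graph_input_rs_hybrid_recurrent {}; // hybrid-recurrent input (stand-in)
struct recurrent_state {};                     // recurrent memory state (stand-in)

struct graph_builder {
    // shared, cache-agnostic body
    void build_rs_impl(const recurrent_state & rs) { /* ... */ }

    // overload for pure recurrent models
    void build_rs(llm_graph_input_rs &, const recurrent_state & rs) {
        build_rs_impl(rs);
    }
    // overload for hybrid recurrent models: same body, different input type
    void build_rs(llm_graph_input_rs_hybrid_recurrent &, const recurrent_state & rs) {
        build_rs_impl(rs);
    }
};
```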

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix resize vs reserve and skip null tensors in size computation

https://github.com/ggml-org/llama.cpp/pull/13979/files#r2149469788

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: @younesbelkada
* fix: Fix initialization of child states

Since initially writing this PR, the logic in the child state types changed
such that using the "init full" signature and keeping the ubatches on the
parent struct no longer worked.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Use a common build_recurrent_state method that is cache-agnostic

This reduces the code duplication between the different build_rs impls and
also retains a similar signature to the previous build_recurrent_state
method while standardizing on the input-dispatched build_rs implementation.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* recurrent : rework graph inputs + add TODOs

ggml-ci

* refactor: Make status and child states const in hybrid and iswa

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Rename llama_kv_cache_[recurrent|hybrid_recurrent] to remove kv cache

This removes the notion of "kv" from the interface names for these memory
types. There are still many references to kv in the implementation of the
recurrent memory which will need further adjustment.

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor!: Rename all k/v related values for recurrent/hybrid to r/s

Anywhere that "kv_<state|cell|size|etc>" is used, I've used the more
generic "mem_" prefix. The specifics: "k" (key) translates to "r"
(recurrent state) and "v" (value) translates to "s" (state-space embedding
states).

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: _recurrent -> _recr for brevity

It just _happens_ to have the same number of letters as _attn!

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for ref

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* refactor: recurrent_layer() -> is_recurrent()

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <redacted>
* style: Fix spacing for size_s_bytes declaration

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
9 days ago  metal : add mean kernel (#14267)
Georgi Gerganov [Thu, 19 Jun 2025 05:05:21 +0000 (08:05 +0300)]
metal : add mean kernel (#14267)

* metal : add mean kernel

ggml-ci

* cont : dedup implementation

ggml-ci

9 days ago  docs: add s390x build documentation (#14264)
Aaron Teo [Wed, 18 Jun 2025 17:10:26 +0000 (01:10 +0800)]
docs: add s390x build documentation (#14264)

* docs: add s390x-specific build docs

Signed-off-by: Aaron Teo <redacted>
* docs: add s390x model conversion steps

Signed-off-by: Aaron Teo <redacted>
* docs: s390x build indent

Signed-off-by: Aaron Teo <redacted>
* docs: update hyperlinks for s390x docs

Signed-off-by: Aaron Teo <redacted>
* docs: update llama.h docs

Signed-off-by: Aaron Teo <redacted>
* docs: s390x add accelerator and perf optimizations

Signed-off-by: Aaron Teo <redacted>
* docs: s390x indent blocks

Signed-off-by: Aaron Teo <redacted>
* docs: revert block indentation

Signed-off-by: Aaron Teo <redacted>
* docs: add support information for s390x

Signed-off-by: Aaron Teo <redacted>
* docs: s390x reword

Signed-off-by: Aaron Teo <redacted>
* docs: remove indentation for accelerator section s390x

Signed-off-by: Aaron Teo <redacted>
* docs: remove redundant words s390x

Signed-off-by: Aaron Teo <redacted>
* docs: reword for s390x

Signed-off-by: Aaron Teo <redacted>
* docs: s390x reword simd

Signed-off-by: Aaron Teo <redacted>
* docs: fix trailing whitespace for s390x

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
9 days ago  ggml-cpu: reduce asm calls for hsum (#14037)
Aaron Teo [Wed, 18 Jun 2025 17:10:08 +0000 (01:10 +0800)]
ggml-cpu: reduce asm calls for hsum (#14037)

Signed-off-by: Aaron Teo <redacted>
9 days ago  ggml-cpu: fix uncaught underscore terminators (#14023)
Aaron Teo [Wed, 18 Jun 2025 17:06:49 +0000 (01:06 +0800)]
ggml-cpu: fix uncaught underscore terminators (#14023)

Signed-off-by: Aaron Teo <redacted>
10 days ago  ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (#14258)
Charles Xu [Wed, 18 Jun 2025 11:40:07 +0000 (13:40 +0200)]
ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (#14258)

10 days ago  mtmd : refactor llava-uhd preprocessing logic (#14247)
Xuan-Son Nguyen [Wed, 18 Jun 2025 08:43:57 +0000 (10:43 +0200)]
mtmd : refactor llava-uhd preprocessing logic (#14247)

* mtmd : refactor llava-uhd preprocessing logic

* fix editorconfig

10 days ago  llama-chat : fix multiple system message for gemma, orion (#14246)
Xuan-Son Nguyen [Wed, 18 Jun 2025 07:58:43 +0000 (09:58 +0200)]
llama-chat : fix multiple system message for gemma, orion (#14246)

10 days ago  convert : fix null head_dim AutoConfig regression (#14248)
Sigbjørn Skjæret [Wed, 18 Jun 2025 07:52:07 +0000 (09:52 +0200)]
convert : fix null head_dim AutoConfig regression (#14248)

10 days ago  sync : ggml
Georgi Gerganov [Wed, 18 Jun 2025 06:58:23 +0000 (09:58 +0300)]
sync : ggml

ggml-ci

10 days ago  ggml : disable warnings for tests when using MSVC (ggml/1273)
Daniel Bevenius [Fri, 13 Jun 2025 13:06:42 +0000 (15:06 +0200)]
ggml : disable warnings for tests when using MSVC (ggml/1273)

* ggml : disable warnings for tests when using MSVC

This commit disables warnings for tests on windows when using MSVC.

The motivation for this is that it brings the build output more
in line with what Linux/MacOS systems produce.

There is still one warning generated for the tests which is:
```console
  Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line  warning D9025: overriding '/DNDEBUG' with '/UNDEBUG'
[C:\ggml\build\tests\test-arange.vcxproj]
  test-arange.cpp
  test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```

* ggml : fix typo in tests disable list

10 days ago  ggml : remove unused ggml_context_container (ggml/1272)
Daniel Bevenius [Fri, 13 Jun 2025 07:05:44 +0000 (09:05 +0200)]
ggml : remove unused ggml_context_container (ggml/1272)

This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
Commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc
ggml_contexts on the heap (whisper/2525)").

The motivation for this change is to improve code clarity/readability.

10 days ago  examples : include examples in msvc disable warn (ggml/1270)
Daniel Bevenius [Thu, 12 Jun 2025 10:27:09 +0000 (12:27 +0200)]
examples : include examples in msvc disable warn (ggml/1270)

This commit adds the examples to the list of targets for which MSVC
warnings are ignored.

The motivation for this is that the examples currently generate a number
of warnings that are ignored/disabled for the core ggml project. This
makes for cleaner output when building.

10 days ago  cmake: remove shader-gen step-targets from ggml-vulkan (#14226)
bandoti [Tue, 17 Jun 2025 20:33:25 +0000 (17:33 -0300)]
cmake: remove shader-gen step-targets from ggml-vulkan (#14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

11 days ago  ggml-cpu : remove the weak alias trick (#14221)
xctan [Tue, 17 Jun 2025 09:58:32 +0000 (17:58 +0800)]
ggml-cpu : remove the weak alias trick (#14221)

11 days ago  musa: fix build warning (unused variable) (#14231)
R0CKSTAR [Tue, 17 Jun 2025 09:48:08 +0000 (17:48 +0800)]
musa: fix build warning (unused variable) (#14231)

Signed-off-by: Xiaodong Ye <redacted>
11 days ago  common : suggest --jinja when autodetection fails (#14222)
Sigbjørn Skjæret [Mon, 16 Jun 2025 19:58:42 +0000 (21:58 +0200)]
common : suggest --jinja when autodetection fails (#14222)

11 days ago  server : fix incorrect usage of llama_get_embeddings() (#14225)
Georgi Gerganov [Mon, 16 Jun 2025 19:33:27 +0000 (22:33 +0300)]
server : fix incorrect usage of llama_get_embeddings() (#14225)

* server : fix incorrect usage of llama_get_embeddings()

ggml-ci

* cont : fix the fix

ggml-ci

12 days ago  llama : add thread safety test (#14035)
Diego Devesa [Mon, 16 Jun 2025 15:11:43 +0000 (08:11 -0700)]
llama : add thread safety test (#14035)

* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

when main_gpu < 0, GPU devices are not used

---------

Co-authored-by: Georgi Gerganov <redacted>
12 days ago  cmake: clean up external project logic for vulkan-shaders-gen (#14179)
bandoti [Mon, 16 Jun 2025 13:32:13 +0000 (10:32 -0300)]
cmake: clean up external project logic for vulkan-shaders-gen (#14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time

12 days ago  model : add NeoBERT (#14164)
Đinh Trọng Huy [Mon, 16 Jun 2025 12:53:41 +0000 (21:53 +0900)]
model : add NeoBERT (#14164)

* convert neobert model to gguf

* add inference graph

* fix flake8 lint

* followed reviewer suggestions

Co-authored-by: Georgi Gerganov <redacted>
* follow reviewers suggestions

Co-authored-by: Georgi Gerganov <redacted>
* override NeoBERT feed-forward length

---------

Co-authored-by: dinhhuy <redacted>
Co-authored-by: Georgi Gerganov <redacted>
12 days ago  HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)
uvos [Mon, 16 Jun 2025 11:47:38 +0000 (13:47 +0200)]
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)

12 days ago  llama : rework embeddings logic (#14208)
Georgi Gerganov [Mon, 16 Jun 2025 11:14:00 +0000 (14:14 +0300)]
llama : rework embeddings logic (#14208)

* llama : rework embeddings logic

ggml-ci

* cont : fix rerank

ggml-ci

* cont : engrish [no ci]

* cont : fix rerank

ggml-ci

* server : support both embeddings and completions with single model

ggml-ci

* cont : avoid embeddings_org

ggml-ci

12 days ago  ggml: Add Android support for GGML_CPU_ALL_VARIANTS (#14206)
Charles Xu [Mon, 16 Jun 2025 09:47:57 +0000 (11:47 +0200)]
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (#14206)

12 days ago  convert : remove arcee change in convert_hf_to_gguf_update.py (#14207)
Bartowski [Mon, 16 Jun 2025 08:16:06 +0000 (09:16 +0100)]
convert : remove arcee change in convert_hf_to_gguf_update.py (#14207)

12 days ago  gguf-py : allow key override when adding value to GGUFWriter (#14194)
Đinh Trọng Huy [Mon, 16 Jun 2025 07:20:59 +0000 (16:20 +0900)]
gguf-py : allow key override when adding value to GGUFWriter (#14194)

Co-authored-by: dinhhuy <redacted>
12 days ago  vulkan: mutex around vkQueueSubmit (#14127)
Jeff Bolz [Mon, 16 Jun 2025 06:21:08 +0000 (00:21 -0600)]
vulkan: mutex around vkQueueSubmit (#14127)

This fixes the remaining crash in test-thread-safety on my system.
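
A minimal sketch of the idea, assuming a per-queue wrapper (names are illustrative): Vulkan forbids concurrent vkQueueSubmit calls on the same VkQueue, so serialize them with a mutex.

```cpp
#include <mutex>
#include <vulkan/vulkan.h>

struct vk_queue {
    VkQueue    queue;
    std::mutex mutex;
};

static VkResult queue_submit(vk_queue & q, uint32_t count, const VkSubmitInfo * infos, VkFence fence) {
    std::lock_guard<std::mutex> guard(q.mutex); // serialize concurrent submits
    return vkQueueSubmit(q.queue, count, infos, fence);
}
```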

12 days ago  ggml-cpu : rework weak alias on apple targets (#14146)
xctan [Mon, 16 Jun 2025 05:54:15 +0000 (13:54 +0800)]
ggml-cpu : rework weak alias on apple targets (#14146)

* ggml-cpu : rework weak alias on apple targets

* fix powerpc detection

* fix ppc detection

* fix powerpc detection on darwin

12 days ago  model : Add support for Arcee AI's upcoming AFM model (#14185)
Bartowski [Sun, 15 Jun 2025 23:04:06 +0000 (00:04 +0100)]
model : Add support for Arcee AI's upcoming AFM model (#14185)

* Add Arcee AFM support

* Add draft update code

* Fix linter and update URL, may still not be final

* Update src/llama-model.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Remove accidental blank line

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
12 days ago  server : When listening on a unix domain socket don't print http:// and port (#14180)
Eric Curtin [Sun, 15 Jun 2025 21:36:22 +0000 (23:36 +0200)]
server : When listening on a unix domain socket don't print http:// and port (#14180)

Instead show something like this:

main: server is listening on file.sock - starting the main loop
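
A hypothetical sketch of the branch (the socket-path detection heuristic here is an assumption, not the server's actual check):

```cpp
#include <cstdio>
#include <string>

static void print_listening(const std::string & host, int port) {
    // assume unix domain socket paths are passed through the host field
    const bool is_unix_socket = !host.empty() && host[0] == '/';
    if (is_unix_socket) {
        std::printf("main: server is listening on %s - starting the main loop\n", host.c_str());
    } else {
        std::printf("main: server is listening on http://%s:%d - starting the main loop\n", host.c_str(), port);
    }
}
```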

Signed-off-by: Eric Curtin <redacted>
12 days ago  quantize : change int to unsigned int for KV overrides (#14197)
Ed Addario [Sun, 15 Jun 2025 16:53:45 +0000 (17:53 +0100)]
quantize : change int to unsigned int for KV overrides (#14197)

13 days ago  CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)
uvos [Sun, 15 Jun 2025 15:30:13 +0000 (17:30 +0200)]
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)

13 days ago  HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183)
uvos [Sun, 15 Jun 2025 13:45:27 +0000 (15:45 +0200)]
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183)

13 days ago  kv-cache : fix use-after-move of defrag info (#14189)
Georgi Gerganov [Sun, 15 Jun 2025 07:52:11 +0000 (10:52 +0300)]
kv-cache : fix use-after-move of defrag info (#14189)

ggml-ci

13 days ago  model : add dots.llm1 architecture support (#14044) (#14118)
Mikko Juola [Sun, 15 Jun 2025 07:52:06 +0000 (00:52 -0700)]
model : add dots.llm1 architecture support (#14044) (#14118)

Adds:

* Dots1Model to convert_hf_to_gguf.py

* Computation graph code to llama-model.cpp

* Chat template to llama-chat.cpp to detect this model's template.

---

The model is called "dots.llm1" (I decided to shorten it to dots1 or
DOTS1 in the code generally) architecture.

The only models that exist as of writing of this commit that follow this
architecture are "dots.llm1.inst" and "dots.llm1.base" from here:

* https://huggingface.co/rednote-hilab/dots.llm1.inst

* https://huggingface.co/rednote-hilab/dots.llm1.base

The model architecture is a combination of Qwen and Deepseek parts, as
seen here:

https://github.com/huggingface/transformers/blob/ffe12627b4e84489d2ab91dd0ec00614855edc79/src/transformers/models/dots1/modular_dots1.py

13 days ago  cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188)
Georgi Gerganov [Sun, 15 Jun 2025 07:08:58 +0000 (10:08 +0300)]
cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188)

ggml-ci

13 days ago  batch : auto-gen positions + verify multi-sequence input (#14177)
Georgi Gerganov [Sun, 15 Jun 2025 06:18:37 +0000 (09:18 +0300)]
batch : auto-gen positions + verify multi-sequence input (#14177)

* batch : verify multi-sequence input batches

ggml-ci

* cont : auto-gen positions + verify multi-seq input

ggml-ci

* cont : first print debug info, then perform validation

ggml-ci

* cont : fix position auto-gen + add comments

ggml-ci

13 days ago  docs : remove WIP since PR has been merged (#13912)
Pepijn de Vos [Sun, 15 Jun 2025 06:06:37 +0000 (08:06 +0200)]
docs : remove WIP since PR has been merged (#13912)

13 days ago  llama-chat : Do not throw when tool parsing fails (#14012)
Piotr [Sat, 14 Jun 2025 16:25:15 +0000 (18:25 +0200)]
llama-chat : Do not throw when tool parsing fails (#14012)

Currently, when a model generates output that looks like a tool call but
is invalid, an exception is thrown and not handled, causing the CLI or
llama-server to bail. Instead, handle the chat parser exception and
simply return the generated text in such cases.
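
A minimal sketch of the fallback, with stand-in types (the real parser lives in llama.cpp's common chat code):

```cpp
#include <exception>
#include <string>

struct chat_msg { std::string content; };

// may throw on malformed tool-call syntax (assumed signature)
chat_msg parse_tool_call(const std::string & text);

static chat_msg parse_or_passthrough(const std::string & text) {
    try {
        return parse_tool_call(text);
    } catch (const std::exception &) {
        // invalid tool call: return the generated text as-is instead of bailing
        return chat_msg { text };
    }
}
```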

Signed-off-by: Piotr Stankiewicz <redacted>
2 weeks ago  compare-llama-bench: add option to plot (#14169)
Aman Gupta [Sat, 14 Jun 2025 08:34:20 +0000 (16:34 +0800)]
compare-llama-bench: add option to plot (#14169)

* compare llama-bench: add option to plot

* Address review comments: convert case + add type hints

* Add matplotlib to requirements

* fix tests

* Improve comment and fix assert condition for test

* Add back default test_name, add --plot_log_scale

* use log_scale regardless of x_values

2 weeks ago  vocab : fix build (#14175)
Georgi Gerganov [Fri, 13 Jun 2025 17:03:05 +0000 (20:03 +0300)]
vocab : fix build (#14175)

ggml-ci

2 weeks ago  sycl: fix docker image (#14144)
Svetlozar Georgiev [Fri, 13 Jun 2025 16:32:56 +0000 (17:32 +0100)]
sycl: fix docker image (#14144)

2 weeks ago  Merge commit from fork
Guy Goldenberg [Fri, 13 Jun 2025 16:20:25 +0000 (19:20 +0300)]
Merge commit from fork

* vocab : prevent integer overflow during load

* Add static cast and GGML_ABORT

---------

Co-authored-by: Georgi Gerganov <redacted>
2 weeks ago  batch : add LLAMA_BATCH_DEBUG environment variable (#14172)
Georgi Gerganov [Fri, 13 Jun 2025 15:35:00 +0000 (18:35 +0300)]
batch : add LLAMA_BATCH_DEBUG environment variable (#14172)

* batch : add LLAMA_BATCH_DEBUG environment variable

ggml-ci

* cont : improve seq_id display

2 weeks ago  docs : Update multimodal.md (#14122)
ddpasa [Fri, 13 Jun 2025 13:17:53 +0000 (15:17 +0200)]
docs : Update multimodal.md (#14122)

* Update multimodal.md

* Update multimodal.md

2 weeks ago  batch : rework llama_batch_allocr (#14153)
Georgi Gerganov [Fri, 13 Jun 2025 10:47:55 +0000 (13:47 +0300)]
batch : rework llama_batch_allocr (#14153)

* batch : rework llama_batch_allocr

ggml-ci

* cont : move validation inside class

ggml-ci

* cont : move output counting to class

ggml-ci

* cont : minor

ggml-ci

* batch : add TODOs

ggml-ci

2 weeks ago  readme : remove survey link (#14168)
Georgi Gerganov [Fri, 13 Jun 2025 08:55:44 +0000 (11:55 +0300)]
readme : remove survey link (#14168)

2 weeks ago  cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167)
Christian Kastner [Fri, 13 Jun 2025 08:38:52 +0000 (08:38 +0000)]
cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167)

* cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT

* cmake: Pass on LLAMA_BUILD_* to GGML_BUILD_*

2 weeks ago  pooling : make cls_b and cls_out_b optional (#14165)
Đinh Trọng Huy [Fri, 13 Jun 2025 08:34:08 +0000 (17:34 +0900)]
pooling : make cls_b and cls_out_b optional (#14165)

Co-authored-by: dinhhuy <redacted>
2 weeks ago  server : fix SWA condition for full context reprocess (#14163)
Georgi Gerganov [Fri, 13 Jun 2025 08:18:25 +0000 (11:18 +0300)]
server : fix SWA condition for full context reprocess (#14163)

ggml-ci

2 weeks ago  sycl: Adding additional cpy dbg print output (#14034)
Anton Mitkov [Fri, 13 Jun 2025 07:51:39 +0000 (08:51 +0100)]
sycl: Adding additional cpy dbg print output (#14034)

2 weeks ago  SYCL: Bump oneMath commit (#14152)
Ewan Crawford [Fri, 13 Jun 2025 07:45:37 +0000 (08:45 +0100)]
SYCL: Bump oneMath commit (#14152)

Update oneMath commit to merged PR https://github.com/uxlfoundation/oneMath/pull/669
which adds SYCL-Graph support for recording CUDA BLAS commands.

With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.

```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2

UR CUDA ERROR:
        Value:           700
        Name:            CUDA_ERROR_ILLEGAL_ADDRESS
        Description:     an illegal memory access was encountered
        Function:        operator()
        Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154

Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
  in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```

2 weeks ago  cmake : Improve build-info.cpp generation (#14156)
Christian Kastner [Fri, 13 Jun 2025 06:51:34 +0000 (06:51 +0000)]
cmake : Improve build-info.cpp generation (#14156)

* cmake: Simplify build-info.cpp generation

The rebuild of build-info.cpp is still triggered when .git/index
changes.

* cmake: generate build-info.cpp in build dir

2 weeks ago  vocab : prevent heap overflow when vocab is too small (#14145)
Georgi Gerganov [Fri, 13 Jun 2025 05:03:54 +0000 (08:03 +0300)]
vocab : prevent heap overflow when vocab is too small (#14145)

ggml-ci

2 weeks ago  sycl: Remove not needed copy f16->f32 for dnnl mul mat (#14125)
Anton Mitkov [Thu, 12 Jun 2025 13:15:11 +0000 (14:15 +0100)]
sycl: Remove not needed copy f16->f32 for dnnl mul mat (#14125)

2 weeks ago  readme : remove project status link (#14149)
Georgi Gerganov [Thu, 12 Jun 2025 11:43:09 +0000 (14:43 +0300)]
readme : remove project status link (#14149)

2 weeks ago  server : re-enable SWA speculative decoding (#14131)
Georgi Gerganov [Thu, 12 Jun 2025 08:51:38 +0000 (11:51 +0300)]
server : re-enable SWA speculative decoding (#14131)

ggml-ci

2 weeks ago  context : simplify output counting logic during decode (#14142)
Georgi Gerganov [Thu, 12 Jun 2025 08:50:01 +0000 (11:50 +0300)]
context : simplify output counting logic during decode (#14142)

* batch : remove logits_all flag

ggml-ci

* context : simplify output counting logic during decode

ggml-ci

* cont : fix comments

2 weeks ago  batch : remove logits_all flag (#14141)
Georgi Gerganov [Thu, 12 Jun 2025 08:49:26 +0000 (11:49 +0300)]
batch : remove logits_all flag (#14141)

ggml-ci

2 weeks ago  cmake : handle whitespaces in path during metal build (#14126)
Georgi Gerganov [Thu, 12 Jun 2025 07:14:24 +0000 (10:14 +0300)]
cmake : handle whitespaces in path during metal build (#14126)

* cmake : handle whitespaces in path during metal build

ggml-ci

* cont : proper fix

ggml-ci

---------

Co-authored-by: Daniel Bevenius <redacted>
2 weeks ago  kv-cache : fix split_equal handling in unified implementation (#14130)
Georgi Gerganov [Thu, 12 Jun 2025 07:02:15 +0000 (10:02 +0300)]
kv-cache : fix split_equal handling in unified implementation (#14130)

ggml-ci

2 weeks ago  context : round n_tokens to next multiple of n_seqs when reserving (#14140)
compilade [Thu, 12 Jun 2025 06:56:04 +0000 (02:56 -0400)]
context : round n_tokens to next multiple of n_seqs when reserving (#14140)

This fixes RWKV inference, which otherwise failed
when the worst-case ubatch.n_seq_tokens rounded to 0.
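
A minimal sketch of the rounding: reserving with n_tokens rounded up to the next multiple of n_seqs keeps the worst-case per-sequence token count (roughly n_tokens / n_seqs) from truncating to 0.

```cpp
#include <cstdint>

static uint32_t round_up_to_multiple(uint32_t n_tokens, uint32_t n_seqs) {
    return ((n_tokens + n_seqs - 1) / n_seqs) * n_seqs;
}
// e.g. n_tokens = 3, n_seqs = 4: reserve 4 tokens, so 4 / 4 = 1 instead of 3 / 4 = 0
```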

2 weeks ago  common: fix issue with regex_escape routine on windows (#14133)
bandoti [Wed, 11 Jun 2025 20:19:44 +0000 (17:19 -0300)]
common: fix issue with regex_escape routine on windows (#14133)

2 weeks ago  Implement GGML_CPU_ALL_VARIANTS for ARM (#14080)
Christian Kastner [Wed, 11 Jun 2025 19:07:44 +0000 (19:07 +0000)]
Implement GGML_CPU_ALL_VARIANTS for ARM (#14080)

* ggml-cpu: Factor out feature detection build from x86

* ggml-cpu: Add ARM feature detection and scoring

This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on GGML_USE_<FEAT>, which needs to be set
in cmake, instead of GGML_<FEAT> that users would set for x86.

This is because on ARM, users specify features with GGML_CPU_ARM_ARCH,
rather than with individual flags (see the sketch after this list).

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM

Like x86, however to pass around arch flags within cmake, we use
GGML_INTERNAL_<FEAT> as we don't have GGML_<FEAT>.

Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.

* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now

The other platforms will need their own specific variants.

This also fixes the bug that the variant-building branch was always
being executed as the else-branch of GGML_NATIVE=OFF. The branch is
moved to an elseif-branch, which restores the previous behavior.
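
A hypothetical sketch of the scoring scheme described in the feature-detection bullet above (macro and feature names are illustrative, following the GGML_USE_<FEAT> convention): each variant counts how many of its compiled-in features the running CPU supports, and a variant missing a required feature scores 0.

```cpp
static int variant_score(bool cpu_has_dotprod, bool cpu_has_sve) {
    int score = 1; // baseline: variant is loadable
#ifdef GGML_USE_DOTPROD
    if (!cpu_has_dotprod) return 0; // required feature missing: unusable
    score++;
#endif
#ifdef GGML_USE_SVE
    if (!cpu_has_sve) return 0;
    score++;
#endif
    return score; // the highest-scoring loadable variant wins
}
```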

2 weeks ago  chore : clean up relative source dir paths (#14128)
Sigbjørn Skjæret [Wed, 11 Jun 2025 17:04:23 +0000 (19:04 +0200)]
chore : clean up relative source dir paths (#14128)

2 weeks ago  tests : add test-tokenizers-repo (#14017)
Sigbjørn Skjæret [Wed, 11 Jun 2025 15:16:32 +0000 (17:16 +0200)]
tests : add test-tokenizers-repo (#14017)

2 weeks ago  vulkan: Better thread-safety for command pools/buffers (#14116)
Jeff Bolz [Wed, 11 Jun 2025 14:48:52 +0000 (09:48 -0500)]
vulkan: Better thread-safety for command pools/buffers (#14116)

This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
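
A hypothetical sketch of the structure (field names assumed): each owner keeps its own pool plus the buffers allocated from it, so separate contexts never touch each other's command buffers.

```cpp
#include <vector>
#include <vulkan/vulkan.h>

struct vk_command_pool {
    VkCommandPool                pool = VK_NULL_HANDLE;
    std::vector<VkCommandBuffer> cmd_buffers;        // buffers allocated from this pool
    size_t                       cmd_buffer_idx = 0; // next buffer to reuse
};
```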

2 weeks ago  webui: Wrap long numbers instead of infinite horizontal scroll (#14062)
Aman [Wed, 11 Jun 2025 14:42:25 +0000 (22:42 +0800)]
webui: Wrap long numbers instead of infinite horizontal scroll (#14062)

* webui: Wrap long numbers instead of infinite horizontal scroll

* Use tailwind class

* update index.html.gz

2 weeks ago  kv-cache : relax SWA masking condition (#14119)
Georgi Gerganov [Wed, 11 Jun 2025 13:48:45 +0000 (16:48 +0300)]
kv-cache : relax SWA masking condition (#14119)

ggml-ci

2 weeks ago  server : pass default --keep argument (#14120)
Taylor [Wed, 11 Jun 2025 10:43:43 +0000 (06:43 -0400)]
server : pass default --keep argument (#14120)

2 weeks ago  kv-cache : add LLAMA_KV_CACHE_DEBUG environment variable (#14121)
Georgi Gerganov [Wed, 11 Jun 2025 09:52:45 +0000 (12:52 +0300)]
kv-cache : add LLAMA_KV_CACHE_DEBUG environment variable (#14121)

2 weeks ago  vulkan: Track descriptor pools/sets per-context (#14109)
Jeff Bolz [Wed, 11 Jun 2025 05:19:25 +0000 (00:19 -0500)]
vulkan: Track descriptor pools/sets per-context (#14109)

Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. The context has a
single vector of pools, a single vector of sets, one counter to track requests,
and one counter to track use.

2 weeks ago  opencl: add `mul_mv_id_q4_0_f32_8x_flat` (#14003)
lhez [Tue, 10 Jun 2025 23:55:58 +0000 (16:55 -0700)]
opencl: add `mul_mv_id_q4_0_f32_8x_flat` (#14003)

2 weeks ago  kv-cache : avoid modifying recurrent cells when setting inputs (#13834)
compilade [Tue, 10 Jun 2025 22:20:14 +0000 (18:20 -0400)]
kv-cache : avoid modifying recurrent cells when setting inputs (#13834)

* kv-cache : avoid modifying recurrent cells when setting inputs

* kv-cache : remove inp_s_mask

It was replaced with equivalent and simpler functionality
with rs_z (the first zeroed state) and the already-existing inp_s_copy.

* kv-cache : fix non-consecutive token pos warning for recurrent models

The problem was apparently caused by how the tail cells were swapped.

* graph : simplify logic for recurrent state copies

* kv-cache : use cell without src refs for rs_z in recurrent cache

* llama-graph : fix recurrent state copy

The `state_copy` shuffle assumes everything is moved at once,
which is not true when `states_extra` is copied back to the cache
before copying the range of states between `head` and `head + n_seqs`.
This is only a problem if any of the cells in [`head`, `head + n_seqs`)
have an `src` in [`head + n_seqs`, `head + n_kv`),
which does happen when `n_ubatch > 1` in the `llama-parallel` example.

Changing the order of the operations avoids the potential overwrite
before use, although when copies are avoided (like with Mamba2),
this will require further changes.

* llama-graph : rename n_state to state_size in build_recurrent_state

This naming should reduce confusion between the state size
and the number of states.

2 weeks ago  convert : fix duplicate key DeepSeek-R1 conversion error (#14103)
Sigbjørn Skjæret [Tue, 10 Jun 2025 21:29:52 +0000 (23:29 +0200)]
convert : fix duplicate key DeepSeek-R1 conversion error (#14103)

2 weeks ago  llama : support GEGLU for jina-bert-v2 (#14090)
Sigbjørn Skjæret [Tue, 10 Jun 2025 16:02:08 +0000 (18:02 +0200)]
llama : support GEGLU for jina-bert-v2 (#14090)

2 weeks ago  vulkan: force device 0 in CI (#14106)
Jeff Bolz [Tue, 10 Jun 2025 15:53:47 +0000 (10:53 -0500)]
vulkan: force device 0 in CI (#14106)

2 weeks ago  Fixed spec timings to: accepted/tested instead of accepted/drafted (#14104)
Juk Armstrong [Tue, 10 Jun 2025 15:48:07 +0000 (16:48 +0100)]
Fixed spec timings to: accepted/tested instead of accepted/drafted (#14104)

2 weeks ago  sync : ggml
Georgi Gerganov [Tue, 10 Jun 2025 14:37:45 +0000 (17:37 +0300)]
sync : ggml

ggml-ci

2 weeks ago  ggml : fix weak alias win32 (whisper/0)
Georgi Gerganov [Tue, 10 Jun 2025 08:34:10 +0000 (11:34 +0300)]
ggml : fix weak alias win32 (whisper/0)

ggml-ci

2 weeks ago  Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (#14099)
0cc4m [Tue, 10 Jun 2025 12:01:33 +0000 (14:01 +0200)]
Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (#14099)

2 weeks ago  rpc : nicer error messages for RPC server crash (#14076)
Isaac McFadyen [Tue, 10 Jun 2025 06:41:01 +0000 (02:41 -0400)]
rpc : nicer error messages for RPC server crash (#14076)

2 weeks ago  sync : ggml
Georgi Gerganov [Tue, 10 Jun 2025 06:20:51 +0000 (09:20 +0300)]
sync : ggml

ggml-ci

2 weeks ago  Add in-build ggml::ggml ALIAS library (ggml/1260)
Kai Pastor [Tue, 3 Jun 2025 10:33:28 +0000 (12:33 +0200)]
Add in-build ggml::ggml ALIAS library (ggml/1260)

Enable uniform linking with subproject and with find_package.

2 weeks ago  metal : use less stack memory in FA kernel (#14088)
Georgi Gerganov [Mon, 9 Jun 2025 20:05:02 +0000 (23:05 +0300)]
metal : use less stack memory in FA kernel (#14088)

* metal : use less stack memory in FA kernel

ggml-ci

* cont : fix BF16 variant

2 weeks ago  kv-cache : fix shift and defrag logic (#14081)
Georgi Gerganov [Mon, 9 Jun 2025 20:04:35 +0000 (23:04 +0300)]
kv-cache : fix shift and defrag logic (#14081)

* kv-cache : fix shift

ggml-ci

* cont : reset shift[i]

ggml-ci

* cont : fix defrag erasing cells that didn't move

ggml-ci

2 weeks ago  llama : allow building all tests on windows when not using shared libs (#13980)
Diego Devesa [Mon, 9 Jun 2025 18:03:09 +0000 (11:03 -0700)]
llama : allow building all tests on windows when not using shared libs (#13980)

* llama : allow building all tests on windows when not using shared libraries

* add static windows build to ci

* tests : enable debug logs for test-chat

---------

Co-authored-by: Georgi Gerganov <redacted>
2 weeks ago  ggml-cpu : split arch-specific implementations (#13892)
xctan [Mon, 9 Jun 2025 14:47:13 +0000 (22:47 +0800)]
ggml-cpu : split arch-specific implementations (#13892)

* move ggml-cpu-aarch64 to repack

* split quantize_row_q8_0/1

* split helper functions

* split ggml_vec_dot_q4_0_q8_0

* split ggml_vec_dot_q4_1_q8_1

* split ggml_vec_dot_q5_0_q8_0

* split ggml_vec_dot_q5_1_q8_1

* split ggml_vec_dot_q8_0_q8_0

* split ggml_vec_dot_tq1_0_q8_K

* split ggml_vec_dot_tq2_0_q8_K

* split ggml_vec_dot_q2_K_q8_K

* split ggml_vec_dot_q3_K_q8_K

* split ggml_vec_dot_q4_K_q8_K

* split ggml_vec_dot_q5_K_q8_K

* split ggml_vec_dot_q6_K_q8_K

* split ggml_vec_dot_iq2_xxs_q8_K

* split ggml_vec_dot_iq2_xs_q8_K

* split ggml_vec_dot_iq2_s_q8_K

* split ggml_vec_dot_iq3_xxs_q8_K

* split ggml_vec_dot_iq3_s_q8_K

* split ggml_vec_dot_iq1_s_q8_K

* split ggml_vec_dot_iq1_m_q8_K

* split ggml_vec_dot_iq4_nl_q8_0

* split ggml_vec_dot_iq4_xs_q8_K

* fix typos

* fix missing prototypes

* rename ggml-cpu-quants.c

* rename ggml-cpu-traits

* rename arm folder

* move cpu-feats-x86.cpp

* rename ggml-cpu-hbm

* update arm detection macro in quants.c

* move iq quant tables

* split ggml_quantize_mat_q8_0/K

* split ggml_gemv_*

* split ggml_gemm_*

* rename namespace aarch64 to repack

* use weak aliases to replace test macros

* rename GGML_CPU_AARCH64 to GGML_CPU_REPACK

* rename more aarch64 to repack

* clean up rebase leftover

* fix compilation errors

* remove trailing spaces

* try to fix clang compilation errors

* try to fix clang compilation errors again

* try to fix clang compilation errors, 3rd attempt

* try to fix clang compilation errors, 4th attempt

* try to fix clang compilation errors, 5th attempt

* try to fix clang compilation errors, 6th attempt

* try to fix clang compilation errors, 7th attempt

* try to fix clang compilation errors, 8th attempt

* try to fix clang compilation errors, 9th attempt

* more cleanup

* fix compilation errors

* fix apple targets

* fix a typo in arm version of ggml_vec_dot_q4_K_q8_K

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 weeks ago  cuda : fix device sync on buffer clear (#14033)
Diego Devesa [Mon, 9 Jun 2025 14:36:26 +0000 (07:36 -0700)]
cuda : fix device sync on buffer clear (#14033)

2 weeks ago  graph : fix geglu (#14077)
Georgi Gerganov [Mon, 9 Jun 2025 14:17:31 +0000 (17:17 +0300)]
graph : fix geglu (#14077)

ggml-ci

2 weeks ago  CANN: Simplify the environment variable setting (#13104)
Xinpeng Dou [Mon, 9 Jun 2025 11:47:39 +0000 (19:47 +0800)]
CANN: Simplify the environment variable setting (#13104)

* Simplify the environment variable setting to specify the memory pool type.

* Adjust the GGML_CANN_ASYNC_MODE setting to accept yes, enable, 1, or on (case-insensitive) as valid options.

* update

* fix CI

* update

* delete whitespace

* fix according to review

* update CANN.md

* update CANN.md

2 weeks ago  webui: fix sidebar being covered by main content (#14082)
R0CKSTAR [Mon, 9 Jun 2025 10:01:17 +0000 (18:01 +0800)]
webui: fix sidebar being covered by main content (#14082)

* webui: fix sidebar being covered by main content

Signed-off-by: Xiaodong Ye <redacted>
* webui: update index.html.gz

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 weeks ago  server : fix LRU check (#14079)
Georgi Gerganov [Mon, 9 Jun 2025 09:57:58 +0000 (12:57 +0300)]
server : fix LRU check (#14079)

ggml-ci

2 weeks ago  sycl: Add reorder to Q6_K mmvq implementation (#13885)
Nicolò Scipione [Mon, 9 Jun 2025 09:47:07 +0000 (11:47 +0200)]
sycl: Add reorder to Q6_K mmvq implementation (#13885)

* Add Reorder to Q6_K mmvq implementation

* Address PR comments: clean up comments

* Remove unused parameter after refactoring q4_k

* Adding inline to function and removing unnecessary reference to int

---------

Signed-off-by: nscipione <redacted>