git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
pkg/ggml/sources/llama.cpp
3 weeks ago CUDA: Add `fastdiv` to `k_bin_bcast*`, giving 1-3% E2E performance (#15872)
Oliver Simons [Wed, 10 Sep 2025 20:04:03 +0000 (22:04 +0200)]
CUDA: Add `fastdiv` to `k_bin_bcast*`, giving 1-3% E2E performance (#15872)

* Add fastdiv and fastmodulo to k_bin_bcast kernel

* Address review comments

* `prod_` instead of `prod` suffix

* Add test case for `k_bin_bcast_unravel` in CUDA backend
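The `fastdiv`/`fastmodulo` trick replaces a hardware integer divide with a precomputed multiply and shift. A minimal, language-agnostic sketch of the idea (Python big ints stand in for the kernel's fixed-width arithmetic; the actual CUDA helpers use a 32-bit magic-constant variant of the same scheme):

```python
def fastdiv_mul(d: int) -> int:
    """Precompute the magic multiplier for a fixed divisor d (0 < d < 2^32).

    With m = ceil(2^64 / d), the identity (n * m) >> 64 == n // d holds
    for every 32-bit n (division by invariant integers, in the
    Granlund/Montgomery style)."""
    assert 0 < d < 2**32
    return (2**64 + d - 1) // d

def fastdiv(n: int, m: int) -> int:
    # One multiply plus one shift instead of a hardware divide.
    return (n * m) >> 64

def fastmodulo(n: int, m: int, d: int) -> int:
    # n mod d recovered from the fast quotient.
    return n - d * fastdiv(n, m)
```

In a kernel, the magic constant would be computed once on the host and passed in, so the per-element index math avoids integer division entirely.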

3 weeks ago llama : support T5 models with unequal number of encoder-decoder layers (#15909)
Jie Fu (傅杰) [Wed, 10 Sep 2025 18:51:51 +0000 (02:51 +0800)]
llama : support T5 models with unequal number of encoder-decoder layers (#15909)

* Extend the support of T5 models with different encoder-decoder layers

Signed-off-by: Jie Fu <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/constants.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/gguf_writer.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-arch.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-arch.h

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-hparams.h

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Rename n_dec_layer --> dec_n_layer

Signed-off-by: Jie Fu <redacted>
* Adapt to cases when dec_n_layer > n_layer

Signed-off-by: Jie Fu <redacted>
---------

Signed-off-by: Jie Fu <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks ago graph : support non-contiguous Q in build_attn_mha (#15908)
Sigbjørn Skjæret [Wed, 10 Sep 2025 17:08:59 +0000 (19:08 +0200)]
graph : support non-contiguous Q in build_attn_mha (#15908)

* support non-contiguous Q in build_attn_mha

* Update src/llama-graph.cpp

ggml-ci

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
3 weeks ago ggml-cpu : fix padding in ggml_timestep_embedding (#15917)
Daniel Bevenius [Wed, 10 Sep 2025 15:31:40 +0000 (17:31 +0200)]
ggml-cpu : fix padding in ggml_timestep_embedding (#15917)

This commit fixes the zero padding for odd dimensions in
ggml_compute_forward_timestep_embedding_f32.
The motivation for this is that currently, if an odd dimension is used,
the padding check incorrectly uses the dimension value for indexing.
For example, with dim=15:

- Elements 0-6 are set to cosine values
- Elements 7-13 are set to sine values
- Element 14 is left uninitialized (contains garbage)
- Element 15 is correctly set to zero

This fix changes embed_data[dim] to embed_data[2 * half] so that
element 14 (the first unused element) is properly set to zero, as is
the last element.

Resolves: https://github.com/ggml-org/ggml/issues/1324
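The layout described above can be sketched in Python (a hypothetical helper, not the ggml code): `half` cosine values, then `half` sine values, with the trailing slot explicitly zeroed when dim is odd, mirroring the embed_data[2 * half] fix:

```python
import math

def timestep_embedding(t: float, dim: int, max_period: int = 10000):
    """Sinusoidal timestep embedding: half cosines, half sines, and an
    explicit zero at index 2*half for odd dim (the element the fix now
    initializes)."""
    half = dim // 2
    out = [0.0] * dim
    for i in range(half):
        freq = math.exp(-math.log(max_period) * i / half)
        out[i] = math.cos(t * freq)          # elements 0 .. half-1
        out[half + i] = math.sin(t * freq)   # elements half .. 2*half-1
    if dim % 2 != 0:
        out[2 * half] = 0.0  # first unused element, e.g. index 14 for dim=15
    return out
```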

3 weeks ago metal : make the backend async (#15906)
Georgi Gerganov [Wed, 10 Sep 2025 14:52:35 +0000 (17:52 +0300)]
metal : make the backend async (#15906)

* metal : make the backend async

ggml-ci

* cont : add comments, extend op offload, clean up

ggml-ci

* metal : fix batch size for MUL_MAT_ID

* metal : remove deprecated ggml_backend_metal_buffer_from_ptr

* metal : create only metal buffers, no wrapping of host memory

ggml-ci

* metal : restore .alloc_buffer for buffer_from_ptr_type

ggml-ci

* metal : remove broken implementation of GGML_OP_SET

ggml-ci

* metal : clean-up loose ends, ready for tests

ggml-ci

* metal : support both private and shared buffers

ggml-ci

* metal : enable private buffers + add global device queue

* metal : disable host buffer to prevent races

ggml-ci

* metal : avoid extra copy during set_tensor

ggml-ci

* metal : use separate buffer types for shared and private Metal buffers

ggml-ci

* metal : simplify synchronization logic

ggml-ci

* metal : fix build

ggml-ci

* metal : do not implement cpy_tensor

ggml-ci

* metal : separate implementations for shared and private buffers

ggml-ci

3 weeks ago ci : add caching for ROCm installation in release workflow (#15924)
Daniel Bevenius [Wed, 10 Sep 2025 13:39:57 +0000 (15:39 +0200)]
ci : add caching for ROCm installation in release workflow (#15924)

This commit applies the same caching to the release workflow which
currently exists for the main CI workflow that was introduced in Commit
ff02caf9eed261423289d1531a56536fbf57bfc2 ("ci : cache ROCm installation
in windows-latest-cmake-hip (#15887)").

3 weeks ago tests : filter out no-ops from coverage report (#15900)
Daniel Bevenius [Wed, 10 Sep 2025 12:17:09 +0000 (14:17 +0200)]
tests : filter out no-ops from coverage report (#15900)

* tests : filter out no-ops from coverage report

This commit is a follow-up commit for #15745 to address the feedback on
how no-op operations should be filtered out from the coverage report.

Regarding the feedback that the UNARY and GLU sub-operations are not
handled, I am not exactly sure what should be done. They are included
in the coverage; for example ABS, ELU, EXP, GELU, GEGLU, GEGLU_ERF,
etc. are in the list of covered operations:
```console
$ ./build/bin/test-backend-ops --show-coverage
Operations covered by tests (89):
  ✓ ABS
  ✓ ACC
  ✓ ADD
  ✓ ADD1
  ✓ ADD_ID
  ✓ ARANGE
  ✓ ARGMAX
  ✓ ARGSORT
  ✓ CLAMP
  ✓ CONCAT
  ✓ CONV_2D
  ✓ CONV_2D_DW
  ✓ CONV_3D
  ✓ CONV_TRANSPOSE_1D
  ✓ CONV_TRANSPOSE_2D
  ✓ COS
  ✓ COUNT_EQUAL
  ✓ CPY
  ✓ CROSS_ENTROPY_LOSS
  ✓ CROSS_ENTROPY_LOSS_BACK
  ✓ DIAG_MASK_INF
  ✓ DIV
  ✓ DUP
  ✓ ELU
  ✓ EXP
  ✓ FLASH_ATTN_EXT
  ✓ GATED_LINEAR_ATTN
  ✓ GEGLU
  ✓ GEGLU_ERF
  ✓ GEGLU_QUICK
  ✓ GELU
  ✓ GELU_ERF
  ✓ GELU_QUICK
  ✓ GET_ROWS
  ✓ GET_ROWS_BACK
  ✓ GROUP_NORM
  ✓ HARDSIGMOID
  ✓ HARDSWISH
  ✓ IM2COL
  ✓ IM2COL_3D
  ✓ L2_NORM
  ✓ LEAKY_RELU
  ✓ LOG
  ✓ MEAN
  ✓ MUL
  ✓ MUL_MAT
  ✓ MUL_MAT_ID
  ✓ NEG
  ✓ NORM
  ✓ OPT_STEP_ADAMW
  ✓ OPT_STEP_SGD
  ✓ OUT_PROD
  ✓ PAD
  ✓ PAD_REFLECT_1D
  ✓ POOL_2D
  ✓ REGLU
  ✓ RELU
  ✓ REPEAT
  ✓ REPEAT_BACK
  ✓ RMS_NORM
  ✓ RMS_NORM_BACK
  ✓ ROLL
  ✓ ROPE
  ✓ ROPE_BACK
  ✓ RWKV_WKV6
  ✓ RWKV_WKV7
  ✓ SCALE
  ✓ SET
  ✓ SET_ROWS
  ✓ SGN
  ✓ SIGMOID
  ✓ SILU
  ✓ SILU_BACK
  ✓ SIN
  ✓ SOFT_MAX
  ✓ SOFT_MAX_BACK
  ✓ SQR
  ✓ SQRT
  ✓ SSM_CONV
  ✓ SSM_SCAN
  ✓ STEP
  ✓ SUB
  ✓ SUM
  ✓ SUM_ROWS
  ✓ SWIGLU
  ✓ SWIGLU_OAI
  ✓ TANH
  ✓ TIMESTEP_EMBEDDING
  ✓ UPSCALE

Operations without tests (14):
  ✗ ADD_REL_POS
  ✗ CUSTOM
  ✗ DIAG
  ✗ DIAG_MASK_ZERO
  ✗ FLASH_ATTN_BACK
  ✗ GET_REL_POS
  ✗ IM2COL_BACK
  ✗ MAP_CUSTOM1
  ✗ MAP_CUSTOM2
  ✗ MAP_CUSTOM3
  ✗ POOL_1D
  ✗ POOL_2D_BACK
  ✗ WIN_PART
  ✗ WIN_UNPART

Coverage Summary:
  Total operations: 103
  Tested operations: 89
  Untested operations: 14
  Coverage: 86.4%
```

Refs: https://github.com/ggml-org/llama.cpp/pull/15745

* use ggml_op enum values instead of strcmp

3 weeks ago media : add transparent icon svg and png [no ci] (#15891)
j-k [Wed, 10 Sep 2025 11:51:28 +0000 (12:51 +0100)]
media : add transparent icon svg and png [no ci] (#15891)

3 weeks ago gitignore : Ignore vim swap files in tests (#15901)
Jesse [Wed, 10 Sep 2025 11:28:47 +0000 (07:28 -0400)]
gitignore : Ignore vim swap files in tests (#15901)

3 weeks ago CANN: Add ROPE sin/cos cache for reuse (#15912)
Chenguang Li [Wed, 10 Sep 2025 10:42:00 +0000 (18:42 +0800)]
CANN: Add ROPE sin/cos cache for reuse (#15912)

* CANN: Add ROPE sin/cos cache for reuse

Introduce sin/cos caching mechanism in ROPE to avoid redundant
computation across layers. The cache is built on the first layer
per device and reused by subsequent layers if parameters match.

- Added sin_cache / cos_cache pointers and position_length tracking
- Introduced cache validity flags and properties:
  (ext_factor, theta_scale, freq_scale, attn_factor, is_neox)
- Accelerates ROPE by eliminating repeated sin/cos generation

This change reduces overhead in multi-layer scenarios while
preserving correctness by verifying parameter consistency.

Co-authored-by: hipudding <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: hipudding <redacted>
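The parameter-checked reuse scheme in the commit above can be sketched as follows (hypothetical Python with placeholder angle math; the real cache holds device-side sin/cos buffers, and the property names follow the commit message):

```python
import math

class RopeSinCosCache:
    """First layer builds the sin/cos tables; subsequent layers reuse
    them as long as every cached property matches, otherwise the tables
    are rebuilt."""

    def __init__(self):
        self._key = None
        self._sin = None
        self._cos = None

    def get(self, positions, theta_scale, freq_scale, ext_factor, attn_factor, is_neox):
        key = (tuple(positions), theta_scale, freq_scale, ext_factor, attn_factor, is_neox)
        if key != self._key:
            # First layer, or a parameter changed: (re)build the tables.
            angles = [p * freq_scale for p in positions]
            self._sin = [math.sin(a) * attn_factor for a in angles]
            self._cos = [math.cos(a) * attn_factor for a in angles]
            self._key = key
        return self._sin, self._cos
```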
3 weeks ago CANN: implement LRU cache for ACL graphs (#15814)
Chenguang Li [Wed, 10 Sep 2025 07:29:12 +0000 (15:29 +0800)]
CANN: implement LRU cache for ACL graphs (#15814)

* CANN: implement LRU cache for ACL graphs in CANN backend

- Introduce ggml_cann_graph_lru_cache to store multiple ggml_cann_graph objects.
- Graphs are loaded on demand and evicted using LRU policy when capacity is exceeded.
- Updated push, move_to_front, and clear methods to manage cached graphs efficiently.
- Ensures reuse of graphs, reducing graph reconstruction overhead in CANN backend.

* fix typo

* The LRU cache capacity can be configured via an env variable

Signed-off-by: noemotiovon <redacted>
* refactor acl graph

* refactor && fix review comments

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
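The described LRU behavior (push, move_to_front, evict at capacity, clear) can be sketched with Python's OrderedDict standing in for ggml_cann_graph_lru_cache; names follow the commit message, and the real capacity comes from an env variable:

```python
from collections import OrderedDict

class GraphLRUCache:
    """Graphs are keyed, moved to the front on reuse, and the least
    recently used entry is evicted once capacity is exceeded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # move_to_front in the commit's terms
            return self.entries[key]
        return None

    def push(self, key, graph):
        self.entries[key] = graph
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry

    def clear(self):
        self.entries.clear()
```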
3 weeks ago llama : check returned fn ptrs from ggml_backend_reg_get_proc_address (#15893)
Daniel Bevenius [Wed, 10 Sep 2025 03:33:58 +0000 (05:33 +0200)]
llama : check returned fn ptrs from ggml_backend_reg_get_proc_address (#15893)

This commit adds check for two function pointers returned from
ggml_backend_reg_get_proc_address.

The motivation for this is that the function pointer could be nullptr if
the get proc address function changes in the future. This is also
consistent with all the other calls to ggml_backend_reg_get_proc_address
in the code base.

3 weeks ago ci : cache ROCm installation in windows-latest-cmake-hip (#15887)
Daniel Bevenius [Wed, 10 Sep 2025 03:23:19 +0000 (05:23 +0200)]
ci : cache ROCm installation in windows-latest-cmake-hip (#15887)

This commit adds caching of the ROCm installation for the
windows-latest-cmake-hip job.

The motivation for this is that the installation can sometimes hang
and/or not complete properly, leaving an invalid installation which
later fails the build. By caching the installation, we can hopefully
keep a good installation available in the cache and avoid the
installation step.

Refs: https://github.com/ggml-org/llama.cpp/pull/15365

3 weeks ago vulkan: throw the oom error instead of no memory type found (#15905)
Ruben Ortlam [Tue, 9 Sep 2025 20:26:03 +0000 (22:26 +0200)]
vulkan: throw the oom error instead of no memory type found (#15905)

3 weeks ago vulkan: Fix OOB accesses in soft_max_back (#15861)
Jeff Bolz [Tue, 9 Sep 2025 12:41:15 +0000 (07:41 -0500)]
vulkan: Fix OOB accesses in soft_max_back (#15861)

3 weeks ago HIP: use v_dot2_f32_f16 instruction for FA (#15884)
Johannes Gäßler [Tue, 9 Sep 2025 12:04:43 +0000 (14:04 +0200)]
HIP: use v_dot2_f32_f16 instruction for FA (#15884)

3 weeks ago Workaround for subgroup arithmetic failing on MoltenVK with AMD GPUs (issue 15846...
lksj92hs [Tue, 9 Sep 2025 12:01:15 +0000 (15:01 +0300)]
Workaround for subgroup arithmetic failing on MoltenVK with AMD GPUs (issue 15846) (#15886)

3 weeks ago CUDA: Add mul_mat_id support for the mmf kernel (#15767)
Aman Gupta [Tue, 9 Sep 2025 06:38:02 +0000 (14:38 +0800)]
CUDA: Add mul_mat_id support for the mmf kernel (#15767)

* CUDA: Add mul_mat_id support to the mmf

Add support for mul_mat_id for bs < 16

* Review: use warp_size, fix should_use_mmf condition

* Launch one block per expert, stride along n_expert_used

* templatize mul_mat_id

* Pad shmem to 16 bytes, add helper function mul_mat_f_switch_ids

* Reduce compile times by dividing mmf into f16, bf16 and f32 variants

* Divide mmf by ncols_dst

* Add missing files

* Fix MUSA/HIP builds

3 weeks ago CUDA: fix GET_ROWS for large tensors (#15882)
Johannes Gäßler [Tue, 9 Sep 2025 06:11:01 +0000 (08:11 +0200)]
CUDA: fix GET_ROWS for large tensors (#15882)

3 weeks ago contrib : add notes about merging PRs (#15881)
Georgi Gerganov [Tue, 9 Sep 2025 05:42:10 +0000 (08:42 +0300)]
contrib : add notes about merging PRs (#15881)

* contrib : add notes about merging PRs

* Update CONTRIBUTING.md

Co-authored-by: Diego Devesa <redacted>
* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Diego Devesa <redacted>
Co-authored-by: Johannes Gäßler <redacted>
3 weeks ago requirements : update transformers/torch for Embedding Gemma (#15828)
Daniel Bevenius [Tue, 9 Sep 2025 04:06:52 +0000 (06:06 +0200)]
requirements : update transformers/torch for Embedding Gemma (#15828)

* requirements : update transformers/torch for Embedding Gemma

This commit updates the requirements to support converting
Embedding Gemma 300m models.

The motivation for this change is that during development I had a local
copy of the transformers package which is what I used for converting
the models. This was a mistake on my part and I should have also updated
my transformers version to the official release.

I had checked the requirements/requirements-convert_legacy_llama.txt
file and noted that the version was >=4.45.1,<5.0.0 and came to the
conclusion that no update would be needed; this assumed that
Embedding Gemma would be in a transformers release at the time
Commit fb15d649ed14ab447eeab911e0c9d21e35fb243e ("llama : add support
for EmbeddingGemma 300m (#15798)") was merged, so anyone wanting to
convert themselves would be able to do so. However, Embedding Gemma is
a preview release and this commit updates the requirements to use this
preview release.

* resolve additional python dependencies

* fix pyright errors in tokenizer test and remove unused import

3 weeks ago model-conversion : add extra debugging support for model conversion (#15877)
Piotr Wilkin (ilintar) [Tue, 9 Sep 2025 04:05:55 +0000 (06:05 +0200)]
model-conversion : add extra debugging support for model conversion (#15877)

* feat: Extra debugging support for model conversion - added BF16 support for llama-callback-eval and support for dumping intermediate steps in run-org-model.py

3 weeks ago json : support `enum` values within `allOf` (#15830)
Aldehir Rojas [Mon, 8 Sep 2025 21:14:32 +0000 (16:14 -0500)]
json : support `enum` values within `allOf` (#15830)

3 weeks ago media : add llama1 icon (#15878)
j-k [Mon, 8 Sep 2025 18:57:01 +0000 (19:57 +0100)]
media : add llama1 icon (#15878)

Add svg and png based off llama1-icon.svg

3 weeks ago vulkan: sort graph to allow more parallel execution (#15850)
Jeff Bolz [Mon, 8 Sep 2025 18:10:07 +0000 (13:10 -0500)]
vulkan: sort graph to allow more parallel execution (#15850)

* vulkan: sort graph to allow more parallel execution

Add a backend proc to allow the backend to modify the graph. The
vulkan implementation looks at which nodes depend on each other
and greedily reorders them to group together nodes that don't
depend on each other. It only reorders the nodes, doesn't change
the contents of any of them.

With #15489, this reduces the number of synchronizations needed.

* call optimize_graph per-split
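The greedy reordering described above can be sketched as follows (hypothetical Python; `deps[n]` is the set of nodes n reads from, a simplification of the real dependency analysis):

```python
def reorder_for_parallelism(nodes, deps):
    """Repeatedly emit a "wave" of nodes whose dependencies have all
    already been emitted, so nodes that don't depend on each other end
    up adjacent. Only the order changes; node contents are untouched."""
    emitted, order = set(), []
    remaining = list(nodes)
    while remaining:
        wave = [n for n in remaining if deps.get(n, set()) <= emitted]
        if not wave:  # cycle guard: fall back to the original order
            order.extend(remaining)
            break
        order.extend(wave)
        emitted.update(wave)
        remaining = [n for n in remaining if n not in emitted]
    return order
```

Grouping independent nodes like this is what lets the backend place fewer synchronization barriers between them.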

3 weeks ago CUDA: generate_cu_files.py - add missing mxfp4 (#15880)
Aman Gupta [Mon, 8 Sep 2025 17:23:46 +0000 (01:23 +0800)]
CUDA: generate_cu_files.py - add missing mxfp4 (#15880)

3 weeks ago chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533)
Jesse [Mon, 8 Sep 2025 14:59:48 +0000 (10:59 -0400)]
chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533)

* Add DeepSeek V3.1 thinking mode support

- Added COMMON_CHAT_FORMAT_DEEPSEEK_V3_1 enum value
- Created common_chat_params_init_deepseek_v3_1() function (currently uses R1 implementation)
- Created common_chat_parse_deepseek_v3_1() function that handles V3.1 thinking format:
  - Extracts reasoning content before '</think>' tag into reasoning_content
  - Extracts regular content after '</think>' tag into content
  - No opening '<think>' tag in V3.1 format
- Added detection logic for V3.1 templates based on pattern: 'message['prefix'] is defined and message['prefix'] and thinking'
- Added V3.1 case to parsing switch statement

This addresses the issue where V3.1 outputs reasoning content followed by '</think>' and then regular content without the opening '<think>' tag.

* Another attempt at V3.1 non-thinking

* Fix test, but it's not asserting anything.

* Ignore vim swap files in tests dir

* Update the test

* Try using try_find_literal instead of regex

* passing test

* Revert "Try using try_find_literal instead of regex"

This reverts commit c50d887ec2780dd9e6b8b397e92347d3db8d5575.

* Remove unnecessary change

* Remove comment

* Add code to handle non-thinking mode.

* Try to set message['prefix'] when thinking is enabled.

* This fixes reasoning, but breaks normal content. We need state in the
chat parser.

* DeepSeek V3.1 thinking is now the default. Disable with `--reasoning-budget 0`.

* Simplify (DeepSeek V3.1 reasoning)

* Fix sign inversion bug

* Add some tool calling code (not working).

* Tool calls working in non-reasoning mode.

* Attempt a unit test for tool call parsing.

* Passing test

* Add tests for both happy path and broken fenced DeepSeek V3.1 tool call variants.

* Passing DeepSeek V3.1 tool call tests, but model is not working.

* Revert assistance response prefill change. Not my monkeys.

* Add fenced_thinking unit test variant. Passes, but thinking tool calling
still isn't working for some reason.

* Tests pass in reasoning mode. Also e2e tool test passes.

* Make a copy of the parse_json_tool_calls function for deepseek-v3.1 so
as to not accidentally introduce regressions.

* Fix thinking_forced_open logic. tool calling broken. Need to add another
test case.

* That's what I get for cargo culting a newline.

* Add multi tool call test for deepseek v3.1 non-reasoning

* Move test, remove .gitignore change

* Place deepseek-v3.1 reasoning test directly into existing reasoning
function per CISC's request.

* Address whitespace CI failure.

* Merge two assert_equals per CISC's request.

* Add DeepSeek-V3.1 tests to tests/test-chat.cpp per CISC's request.

* Merge deepseek V3.1 and regular parse_json_tool_calls() function
behaviors by adding optional update_cursor argument.

* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* DeepSeek V3.1 fix reasoning_format none

* Strip grammar down to strictly what we expect based on model card. Throw
out parts we cargo culted from R1 that don't make sense.

* Update tests/test-chat-parser.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* DeepSeek V3.1 - Add edge case where thinking is forced open, there is
tool calling in the reasoning content, but then the model just stops the
output without closing the </think> tag, so it's not a partial. In this
case, use the tool call in the reasoning content.

* DeepSeek V3.1 - simplify update_cursor

* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix indent

---------

Co-authored-by: openhands <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
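The core V3.1 parsing rule described at the top of this commit can be sketched in Python (a hypothetical helper; tool-call handling and partial-stream state are omitted):

```python
def parse_deepseek_v31(output: str):
    """V3.1 emits reasoning followed by '</think>' and then the regular
    content, with no opening '<think>' tag."""
    marker = "</think>"
    pos = output.find(marker)
    if pos == -1:
        # Non-thinking output: everything is regular content.
        return {"reasoning_content": "", "content": output}
    return {
        "reasoning_content": output[:pos].strip(),
        "content": output[pos + len(marker):].strip(),
    }
```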
3 weeks ago server : bring back timings_per_token (#15879)
Xuan-Son Nguyen [Mon, 8 Sep 2025 14:50:05 +0000 (21:50 +0700)]
server : bring back timings_per_token (#15879)

3 weeks ago cuda : fix supports_op condition for get_rows when number of blocks is too large...
Georgi Gerganov [Mon, 8 Sep 2025 10:56:51 +0000 (13:56 +0300)]
cuda : fix supports_op condition for get_rows when number of blocks is too large (#15868)

* cuda : fix supports_op condition for get_rows when src1->ne2 > 1

ggml-ci

* ggml : add comment about ggml_get_rows

ggml-ci

* cuda : add FIXME [no ci]

* cuda : update support condition

ggml-ci

3 weeks ago metal : refactor + optimize (#15857)
Georgi Gerganov [Mon, 8 Sep 2025 10:34:56 +0000 (13:34 +0300)]
metal : refactor + optimize (#15857)

* metal : refactor

ggml-ci

* cont : refactor FA-vec kernel

* cont : print metal library load time

* minor : warn to debug + better kernel names

ggml-ci

* metal : optimize mul_mv q8_0

ggml-ci

* metal : simplify FA pipeline creation functions

ggml-ci

* metal : improve naming consistency

* metal : safer function constants offsets

ggml-ci

* metal : comments

ggml-ci

3 weeks ago ggml: allow casting between f32 and i32 (#15783)
Xuan-Son Nguyen [Mon, 8 Sep 2025 10:33:01 +0000 (17:33 +0700)]
ggml: allow casting between f32 and i32 (#15783)

* ggml: allow casting between f32 and i32

* fix cuda

* add vulkan

* fix CPU non-cont

* add non-cont test case

* add note

* extend test number range

* correct note

* add cont version for vulkan

3 weeks ago CUDA: non-contiguous src0 not supported for PAD (#15869)
Sigbjørn Skjæret [Mon, 8 Sep 2025 09:55:44 +0000 (11:55 +0200)]
CUDA: non-contiguous src0 not supported for PAD (#15869)

3 weeks ago convert : force setting sliding_window from original config (#15867)
Daniel Bevenius [Mon, 8 Sep 2025 07:44:34 +0000 (09:44 +0200)]
convert : force setting sliding_window from original config (#15867)

* convert : force setting sliding_window from original config

This commit modifies the set_gguf_parameters method for EmbeddingGemma
so that it reads the sliding_window parameter from the original model
config.json and uses that value.

The motivation for this change is that the Gemma3TextConfig
constructor adjusts the sliding_window value, which can lead to
inconsistencies when converting models, as we expect this value to
match the original model's configuration.

Refs: https://github.com/huggingface/transformers/blob/bb45d3631ec7026db04a77d33a52b31766372160/src/transformers/models/gemma3/configuration_gemma3.py#L230

* fix flake8 error

* add link to huggingface PR
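The workaround described above amounts to reading sliding_window straight from the model's original config.json rather than from a constructed transformers config object. A minimal sketch (the function name is hypothetical; the path and key are as found in HF model repos):

```python
import json

def sliding_window_from_original(config_path: str) -> int:
    """Read sliding_window directly from config.json, bypassing the
    Gemma3TextConfig constructor which adjusts the value."""
    with open(config_path) as f:
        return json.load(f)["sliding_window"]
```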

3 weeks ago batched-bench : fix llama_synchronize usage during prompt processing (#15835)
Georgi Gerganov [Mon, 8 Sep 2025 07:27:07 +0000 (10:27 +0300)]
batched-bench : fix llama_synchronize usage during prompt processing (#15835)

ggml-ci

3 weeks ago context : fix n_outputs during reserve (#15858)
Georgi Gerganov [Mon, 8 Sep 2025 07:26:36 +0000 (10:26 +0300)]
context : fix n_outputs during reserve (#15858)

ggml-ci

3 weeks ago model : avoid ggml_cont_3d for fused QKV weights (#15662)
Georgi Gerganov [Mon, 8 Sep 2025 07:25:33 +0000 (10:25 +0300)]
model : avoid ggml_cont_3d for fused QKV weights (#15662)

* model : avoid ggml_cont_3d for fused QKV weights

ggml-ci

* kv-cache : make cpy_k and cpy_v implementation more readable

ggml-ci

* cont : add comments

ggml-ci

* cont : minor fix [no ci]

* cont : one more fix

* cont : clarity

ggml-ci

* kv-cache : require contiguous heads of k_cur and v_cur

ggml-ci

3 weeks ago tests: large sizes for get_rows (#15687)
Jeff Bolz [Mon, 8 Sep 2025 04:23:41 +0000 (23:23 -0500)]
tests: large sizes for get_rows (#15687)

3 weeks ago CANN: Stream sync between devices for acl_graph (#15809)
Chenguang Li [Mon, 8 Sep 2025 02:03:29 +0000 (10:03 +0800)]
CANN: Stream sync between devices for acl_graph (#15809)

* CANN: Switch to stream synchronization

Switch to stream synchronization because events are not effective.

Co-authored-by: hipudding <redacted>
* CANN: add Comments

---------

Co-authored-by: hipudding <redacted>
3 weeks ago vulkan: support im2col_3d (#15795)
Jeff Bolz [Sun, 7 Sep 2025 18:50:26 +0000 (13:50 -0500)]
vulkan: support im2col_3d (#15795)

3 weeks ago ggml-cpu: clean up s390x SIMD (#15855)
Aaron Teo [Sun, 7 Sep 2025 18:18:28 +0000 (02:18 +0800)]
ggml-cpu: clean up s390x SIMD (#15855)

* ggml-cpu: clean up s390x simd

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 0da4b6aa07d96b758812d17b2c82267632fa4ba5)
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix hsum data types

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
3 weeks ago vulkan: Support pad_ext (#15794)
Jeff Bolz [Sun, 7 Sep 2025 17:00:49 +0000 (12:00 -0500)]
vulkan: Support pad_ext (#15794)

3 weeks ago vulkan: Use larger loads in scalar/coopmat1 matmul (#15729)
Jeff Bolz [Sun, 7 Sep 2025 16:53:07 +0000 (11:53 -0500)]
vulkan: Use larger loads in scalar/coopmat1 matmul (#15729)

I think glslang will translate an access like x[i][1].z to
OpAccessChain ... x, i, 1, 2
OpLoad float16_t ...

rather than loading all of x[i] in a single OpLoad. Change the
code to explicitly load the vector/matrix.

3 weeks ago ggml WebGPU: remove userdata from request adapter callback (#15527)
Daniel Bevenius [Sun, 7 Sep 2025 08:19:45 +0000 (10:19 +0200)]
ggml WebGPU: remove userdata from request adapter callback (#15527)

* ggml WebGPU: remove userdata from request adapter callback

This commit removes the `userdata` parameter from the WebGPU request
adapter callback in `ggml-webgpu.cpp`. Instead, the lambda function
captures the `webgpu_context` directly.

The motivation for this change is to simplify the code and improve
readability.

* inline the callback lambda into the RequestAdapter call

This commit removes the callback lambda variable and inlines it directly
into the RequestAdapter call.

3 weeks ago CUDA: faster tile FA (Pascal/AMD), headsize 256 (#15769)
Johannes Gäßler [Sat, 6 Sep 2025 22:26:28 +0000 (00:26 +0200)]
CUDA: faster tile FA (Pascal/AMD), headsize 256 (#15769)

3 weeks ago kleidiai: generalize compute_forward_kv_cache to compute_forward_fp16 (#15817)
Charles Xu [Sat, 6 Sep 2025 14:08:43 +0000 (16:08 +0200)]
kleidiai: generalize compute_forward_kv_cache to compute_forward_fp16 (#15817)

3 weeks ago server : speed up tests (#15836)
Xuan-Son Nguyen [Sat, 6 Sep 2025 12:45:24 +0000 (19:45 +0700)]
server : speed up tests (#15836)

* server : speed up tests

* clean up

* restore timeout_seconds in some places

* flake8

* explicit offline

3 weeks ago server : implement prompt processing progress report in stream mode (#15827)
Xuan-Son Nguyen [Sat, 6 Sep 2025 11:35:04 +0000 (18:35 +0700)]
server : implement prompt processing progress report in stream mode (#15827)

* server : implement `return_progress`

* add timings.cache_n

* add progress.time_ms

* add test

* fix test for chat/completions

* readme: add docs on timings

* use ggml_time_us

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
3 weeks ago ggml-cpu: document use of "free" memory [no ci] (#15834)
Johannes Gäßler [Sat, 6 Sep 2025 11:28:44 +0000 (13:28 +0200)]
ggml-cpu: document use of "free" memory [no ci] (#15834)

3 weeks ago ggml-cpu: drop support for nnpa intrinsics (#15821)
Aaron Teo [Sat, 6 Sep 2025 03:27:28 +0000 (11:27 +0800)]
ggml-cpu: drop support for nnpa intrinsics (#15821)

3 weeks ago aLoRA Support (#15327)
Gabe Goodhart [Fri, 5 Sep 2025 23:32:39 +0000 (17:32 -0600)]
aLoRA Support (#15327)

* feat: Add python-side constants and conversion for adapter.lora.invocation_string

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add c++ side constants for adapter.lora.invocation_string

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Parse invocation string for adapters from GGUF

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix(python): Update conversion to alora_invocation_tokens

This is the preferred method in PEFT which is the source of ground truth

https://github.com/huggingface/peft/pull/2609/files#diff-13380145401d203d5935c5189dd09879f990b81aa63e8e3aaff8ce9110333f0e

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix(cpp): Update to alora_invocation_tokens on c++ side

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add C APIs to get alora invocation token array from lora

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Initial implementation of alora cache logic in server

This does not yet identify the invocation tokens and apply the lora
adapter only afterwards, but it does seem to produce correct results
if the invocation tokens are at the beginning of the uncached input.

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Identify alora invocation sequences

This currently limits to a single enabled alora per slot. Multiple aloras
with different invocation sequences would be possible, but it would require
a more complex integration of the adapter toggling and is not really a well
studied case for alora since it's unclear if one alora can reuse cache from
previous prefill computed with a different alora.

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Only reuse cache for tokens before the alora invocation start

This is a bit of an edge case, but theoretically a user could try the same
query with the alora disabled (just using the base model), then retry with
the alora. The cached tokens from the first pass should be invalid.

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* feat: Handle un-cached tokens that come before the alora activation

The solution is to only fill up to the token before the invocation start in
the batch if there are any tokens to be prefilled between those pulled from
cache and the invocation start. When this is detected, the alora is
temporarily disabled with a scale of 0.0, then immediately re-enabled after
it has been initialized for the internal graph. Since the batch does not
complete the prompt tokens, the remaining prompt tokens are handled in the
next task, pulling all of the non-alora tokens from cache and proceeding
with prefill for the alora tokens.

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use || instead of 'or'

Too much python :facepalm:

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix off-by-one for limiting cached tokens to before alora start

This was the cause of the inconsistent results from the dummy test script
with and without the turn that runs the prompt without the adapter before
running it with the adapter.

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix: Support backwards-compatibility for "invocation_string" in adapter_config.json

While this has been replaced in the PEFT PR in favor of
alora_invocation_tokens, the existing adapters in the ibm-granite org on HF
use "invocation_string," so supporting it preserves backwards compatibility
and enables testing now (before the PEFT PR changes have percolated everywhere).

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
* fix: Remove duplicate logging

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
* feat: Report alora_invocation_string and alora_invocation_tokens from /lora-adapters

Branch: gabe-l-hart/alora-support

Signed-off-by: Gabe Goodhart <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks agoci : exempt correct research label (#15825)
Sigbjørn Skjæret [Fri, 5 Sep 2025 23:21:15 +0000 (01:21 +0200)]
ci : exempt correct research label (#15825)

4 weeks agoThinking model disabled assistant prefill (#15404)
Gabe Goodhart [Fri, 5 Sep 2025 20:31:24 +0000 (14:31 -0600)]
Thinking model disabled assistant prefill (#15404)

* feat: Set enable_thinking IFF not disabled and supported

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* fix: Fix inverted logic condition for prefill error

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* fix: Always parse the enable_thinking kwarg to overwrite the default value

From what I can tell, this started as a Qwen3-specific keyword, but since
`chat.cpp` translates inputs.enable_thinking into the right thinking kwarg
for the given model, it is now more of a standardized kwarg, so it should
always override the default value when sent as part of the
chat_template_kwargs field in the API.

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* fix: Don't limit template expansion check to jinja

With the use_jinja check, non-jinja models would enable thinking and always
fail assistant prefill

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add the error text to json type errors in json_value

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* feat: Explicitly reject string values for "enable_thinking"

There are too many possible "truthy" / "falsy" strings and too many
ambiguous strings that don't have a clear truthy/falsy value, so the
simplest thing to do here is to reject the request. Ideally, this would be
a 422 (Unprocessable Entity), but right now it's coming back as a 500.

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* refactor: Move logic for detecting template enable_thinking support to common

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use raw pointer for common chat template function

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
4 weeks agoImplement --log-colors with always/never/auto (#15792)
Eric Curtin [Fri, 5 Sep 2025 18:43:59 +0000 (19:43 +0100)]
Implement --log-colors with always/never/auto (#15792)

With auto by default

Signed-off-by: Eric Curtin <redacted>
4 weeks agoCUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802)
Johannes Gäßler [Fri, 5 Sep 2025 14:07:02 +0000 (16:07 +0200)]
CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802)

* CUDA: fastdiv, launch bounds for mmvq + q8_1 quant

4 weeks agotests : add --list-ops and --show-coverage options (#15745)
Daniel Bevenius [Fri, 5 Sep 2025 12:49:21 +0000 (14:49 +0200)]
tests : add --list-ops and --show-coverage options (#15745)

This commit adds two new command-line options to the
test-backend-ops.cpp that allow users to list all available GGML
operations and to show test coverage of these operations.

The motivation for this is that it can be useful to quickly see which
operations are currently covered by tests and which are not. It might
also be useful when using the `support` mode.

4 weeks agogguf: gguf_writer refactor (#15691)
Erik Scholz [Fri, 5 Sep 2025 09:34:28 +0000 (11:34 +0200)]
gguf: gguf_writer refactor (#15691)

* gguf: split gguf writer into base and buf impl
* gguf: templated gguf write out
* gguf: file based writer (avoid writing everything to memory first!)
* examples(llama2c): fix log not being the same level and compiler nits

4 weeks agokv-cache : fix SWA checks + disable cacheless iSWA (#15811)
Georgi Gerganov [Fri, 5 Sep 2025 07:39:22 +0000 (10:39 +0300)]
kv-cache : fix SWA checks + disable cacheless iSWA (#15811)

ggml-ci

4 weeks agomodel-conversion : add --embeddings flag to modelcard.template [no ci] (#15801)
Daniel Bevenius [Fri, 5 Sep 2025 02:36:23 +0000 (04:36 +0200)]
model-conversion : add --embeddings flag to modelcard.template [no ci] (#15801)

This commit updates the modelcard.template file used in the model
conversion scripts for embedding models to include the llama-server
--embeddings flag in the recommended command to run the model.

The motivation for this change was that when using the model-conversion
"tool" to upload the EmbeddingGemma models to Hugging Face, this flag was
missing, so the embedding endpoint was not available when copy-pasting
the command.

4 weeks agochat : fixed crash when Hermes 2 <tool_call> had a newline before it (#15639)
ExtReMLapin [Thu, 4 Sep 2025 23:24:08 +0000 (01:24 +0200)]
chat : fixed crash when Hermes 2 <tool_call> had a newline before it (#15639)

Co-authored-by: CNE Pierre FICHEPOIL <redacted>
4 weeks agochat : nemotron thinking & toolcalling support (#15676)
Piotr Wilkin (ilintar) [Thu, 4 Sep 2025 23:22:22 +0000 (01:22 +0200)]
chat : nemotron thinking & toolcalling support (#15676)

* feat: nemotron thinking & toolcalling support

* Trailing whitespaces

* Corrected template for Nemotron

* Template and parser fixes

* Final template and grammar changes

* Whitespace

* Always do lazy grammar processing since </think> tag will always be there.

* Allow extra content after toolcall

* Whitespace

* New tests: thinking + tools, tools + content, thinking + tools + content (new!)

* Whitespace

* Remove cURL test script

4 weeks agoscripts : add Jinja tester PySide6 simple app (#15756)
Piotr Wilkin (ilintar) [Thu, 4 Sep 2025 23:05:12 +0000 (01:05 +0200)]
scripts : add Jinja tester PySide6 simple app (#15756)

* feat: add Jinja tester PySide6 simple app

* Linter fixes

* Pylint fixes

* Whitespace

* Add commandline support; add formatter; add extensions

* Remove testing actions

* Silence flake8 warnings for commandline mode

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix trailing whitespace/newline logic

* Update scripts/jinja/jinja-tester.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/jinja/jinja-tester.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agollama : add support for EmbeddingGemma 300m (#15798)
Daniel Bevenius [Thu, 4 Sep 2025 16:10:29 +0000 (18:10 +0200)]
llama : add support for EmbeddingGemma 300m (#15798)

This commit adds support for the EmbeddingGemma 300m. This model supports
sliding window attention (SWA) and a new swa_type is introduced to
support symmetric SWA masking.

This commit also extracts the code from the function
llama_is_masked_swa in llama-impl.h, so that the logic can be shared
by both llm_graph_input_attn_no_cache::set_input and
llama_kv_cache::set_input_kq_mask.

With this commit the EmbeddingGemma 300m model can be converted to
GGUF and used with llama.cpp.

Once the model has been uploaded to HuggingFace it can be used like
this:
```console
./build/bin/llama-cli -hf ggml-org/embeddinggemma-300m-GGUF:Q8_0
```

4 weeks agometal : Add template specialization for mul_mm_id w/ ne20 == 10 (#15799)
Gabe Goodhart [Thu, 4 Sep 2025 15:53:22 +0000 (09:53 -0600)]
metal : Add template specialization for mul_mm_id w/ ne20 == 10 (#15799)

Branch: GGMLMetalNE20

Signed-off-by: Gabe Goodhart <redacted>
4 weeks agollama : set n_outputs to 1 to avoid 0 outputs mean-pooling (#15791)
Daniel Bevenius [Thu, 4 Sep 2025 13:40:44 +0000 (15:40 +0200)]
llama : set n_outputs to 1 to avoid 0 outputs mean-pooling (#15791)

* llama : set n_outputs to 1 to avoid 0 outputs mean-pooling

This commit modifies the llama_context constructor to set n_outputs to
1.

The motivation for this is that when using pooling, specifically mean
pooling, for embeddings, having n_outputs set to 0 can lead to the
following error:
```console
$ build/bin/llama-embedding -m models/nomic-embed-text-1.5-Q4_K_M.gguf \
   --pooling mean -p "Hello, how are you?"
...
llama_context:        CPU  output buffer size =     0.12 MiB
/home/danbev/work/ai/llama.cpp/ggml/src/ggml.c:3023: GGML_ASSERT(ggml_can_mul_mat(a, b)) failed
0x0000743c96d107e3 in __GI___wait4 (pid=292978, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
30 in ../sysdeps/unix/sysv/linux/wait4.c
196         waitpid(child_pid, NULL, 0);
230         ggml_print_backtrace();
3023     GGML_ASSERT(ggml_can_mul_mat(a, b));
1823                 cur = ggml_mul_mat(ctx0, ggml_cont(ctx0, ggml_transpose(ctx0, inp)), inp_mean);
18983     llm->build_pooling(cls, cls_b, cls_out, cls_out_b);
1399     auto * gf = model.build_graph(gparams);
292             auto * gf = graph_reserve(1, n_seqs, n_outputs, mctx.get(), true);
2329         auto * ctx = new llama_context(*model, params);
913     llama_context * lctx = llama_init_from_model(model, cparams);
105     common_init_result llama_init = common_init_from_params(params);
[Inferior 1 (process 292976) detached]
Aborted (core dumped)
```

Co-authored-by: Georgi Gerganov <redacted>
* add comment about not reserving graphs with zero outputs

* add assert in graph_reserve to ensure n_outputs >= 1

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks agoCANN: Refactor ND to NZ workspace to be per-device (#15763)
Chenguang Li [Thu, 4 Sep 2025 12:20:14 +0000 (20:20 +0800)]
CANN: Refactor ND to NZ workspace to be per-device (#15763)

* CANN: Refactor ND to NZ workspace to be per-device in Ascend backend

- Replaced the previous single global ND→NZ workspace with a per-device
  cache using unordered_map keyed by device ID.
- Functions `release_nz_workspace`, `relloc_nz_workspace`, and
  `get_nz_workspace` now manage workspace independently for each device,
  preventing memory conflicts in multi-device / pipeline parallel scenarios.
- This change fixes potential precision issues caused by workspace
  overwrites when multiple devices perform ND→NZ conversions concurrently.

Co-authored-by: hipudding <redacted>
* refactor

Signed-off-by: noemotiovon <redacted>
* rename

Signed-off-by: noemotiovon <redacted>
* fix review comments

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: hipudding <redacted>
4 weeks agoserver: add exceed_context_size_error type (#15780)
Xuan-Son Nguyen [Thu, 4 Sep 2025 09:50:23 +0000 (11:50 +0200)]
server: add exceed_context_size_error type (#15780)

* server: add exceed_context_size_error type

* change error code to 400

4 weeks agoDocument the new max GPU layers default in help (#15771)
Eric Curtin [Thu, 4 Sep 2025 09:49:44 +0000 (10:49 +0100)]
Document the new max GPU layers default in help (#15771)

This is a key change, just letting users know.

Signed-off-by: Eric Curtin <redacted>
4 weeks agoggml: add ops for WAN video model (cuda && cpu) (#15669)
leejet [Thu, 4 Sep 2025 08:38:49 +0000 (16:38 +0800)]
ggml: add ops for WAN video model (cuda && cpu) (#15669)

* add conv3d support

* add ggml_pad_ext for cpu & cuda backend

* cuda/cpu: add im2col_3d support

* cuda: make im2col a little faster

* fix cuda pad/scale/im2col3d

* make im2col_3d faster

* gguf: support loading tensors which n_dims > GGML_MAX_DIMS

* fix cuda get_rows

* avoid ggml_conv_3d conflict

* correct GGML_OP_COUNT assertion

* avoid build failure

* avoid build failure on MacOS

* cuda: remove unnecessary MIN define

* fix cpu im2col_3d

* adjust the code style

* cuda: use simpler loop in get_rows

* add test_im2col_3d to test-backend-ops

* test-backend-ops.cpp: remove trailing whitespace

* cpu: im2col_3d support non continuous src

Co-authored-by: Jeff Bolz <redacted>
* fix test_im2col_3d

* remove unused variables

* cuda: get_rows: dfloat2 -> float2

* add test_pad_ext to test-backend-ops.cpp

* add gguf_init_from_file_ext impl

* Revert "gguf: support loading tensors which n_dims > GGML_MAX_DIMS"

This reverts commit d8377a0a37f314bd3713fe043b4333ad661610c1.

* Revert "add gguf_init_from_file_ext impl"

This reverts commit d9f1d13208c68ef83b3538201ac7f31614fb1994.

* update ggml_backend_vk_device_supports_op

* fix ggml_backend_vk_device_supports_op

* update other backend supports op for ggml_pad_ext

* metal/opencl/sycl/vulkan: fix GGML_OP_PAD check in supports_op

---------

Co-authored-by: Jeff Bolz <redacted>
4 weeks agoCANN: Fix precision issue on 310I DUO multi-devices (#15784)
hipudding [Thu, 4 Sep 2025 07:12:30 +0000 (15:12 +0800)]
CANN: Fix precision issue on 310I DUO multi-devices (#15784)

4 weeks agoopencl: add hs=40 to FA (#15758)
rmatif [Thu, 4 Sep 2025 06:30:28 +0000 (08:30 +0200)]
opencl: add hs=40 to FA (#15758)

4 weeks agoCANN: fix acl_rstd allocation size in ggml_cann_rms_norm (#15760)
Chenguang Li [Thu, 4 Sep 2025 03:03:02 +0000 (11:03 +0800)]
CANN: fix acl_rstd allocation size in ggml_cann_rms_norm (#15760)

Fixes #15330

Adjust the allocation size of acl_rstd. The parameter `dims` is set to 3 according to the CANN documentation.

Co-authored-by: Yuchuan <redacted>
4 weeks agovulkan: fix mmv subgroup16 selection (#15775)
Ruben Ortlam [Wed, 3 Sep 2025 20:55:10 +0000 (22:55 +0200)]
vulkan: fix mmv subgroup16 selection (#15775)

4 weeks agovulkan: don't use std::string in load_shaders, to improve compile time (#15724)
Jeff Bolz [Wed, 3 Sep 2025 18:33:15 +0000 (13:33 -0500)]
vulkan: don't use std::string in load_shaders, to improve compile time (#15724)

* vulkan: don't use std::string in load_shaders, to improve compile time

* keep the string version for those calls that use it

4 weeks agovulkan : update ggml_vk_instance_validation_ext_available (#15666)
Daniel Bevenius [Wed, 3 Sep 2025 18:24:50 +0000 (20:24 +0200)]
vulkan : update ggml_vk_instance_validation_ext_available (#15666)

* vulkan : update ggml_vk_instance_validation_ext_available

This commit updates ggml_vk_instance_validation_ext_available() to
check for VK_EXT_validation_features instead of
VK_KHR_portability_enumeration.

Based on how the returned boolean is used later in the code (to enable
both the validation layer and the VK_EXT_validation_features extension),
it appears the function may have been intended to check for the
validation layer features extension.

* remove try/catch

This was a leftover from a previous iteration where I was explicitly
querying for a specific validation layer first, which would throw.

* update warning message about validation layers

4 weeks agoggml vulkan: add hardsigmoid and hardswish operations (#15762)
Shin-myoung-serp [Wed, 3 Sep 2025 18:22:55 +0000 (03:22 +0900)]
ggml vulkan: add hardsigmoid and hardswish operations (#15762)

4 weeks agoCUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E...
Oliver Simons [Wed, 3 Sep 2025 17:59:16 +0000 (19:59 +0200)]
CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715)

* Add fastdiv, use it in modulo and use modulo in rms_norm_f32

Fastdiv is much faster way to do integer division, which was identified
as bottleneck in rms_norm_f32

* Support more `block_size` values in `rms_norm_f32`

This makes us more flexible in selecting the optimal number of threads
w.r.t. parallelizing across a column vs. launch overheads of threads and
MIO throttles

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Replace modulo with fastmodulo in `rms_norm_f32`

* Use `BinPackArguments=true` for formatting function calls

Will file a separate PR to adjust .clang-format file

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Use uint3 for both `fastdiv` and `fastmodulo`

The compiler seems to reliably optimize away the unused .z component in
the fastdiv use-case, see https://godbolt.org/z/rx8KPrKr3

* More constrained type declarations

Co-authored-by: Johannes Gäßler <redacted>
* Rename fastdiv and fastmodulo variables to shared variable name

As suggested by JohannesGaessler, this increases clarity of the intended
use

* Pack fastdiv/fastmodulo constants into uint2/uint3 objects

By packing constants to be used together into a struct, we are less
likely to make errors.

* Rename function parameter of fastmodulo

`modulo_consts` is more fitting/descriptive

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks agomodel-conversion : fix pyright errors (#15770)
Daniel Bevenius [Wed, 3 Sep 2025 16:28:36 +0000 (18:28 +0200)]
model-conversion : fix pyright errors (#15770)

This commit addresses type errors reported by pyright in the model
conversion scripts.

4 weeks agosampling : optimize dist sampler (#15704)
Georgi Gerganov [Wed, 3 Sep 2025 15:16:26 +0000 (18:16 +0300)]
sampling : optimize dist sampler (#15704)

ggml-ci

4 weeks agollama : fix incorrect model type for Gemma 270M (#15764)
Daniel Bevenius [Wed, 3 Sep 2025 11:35:49 +0000 (13:35 +0200)]
llama : fix incorrect model type for Gemma 270M (#15764)

This commit fixes the model type for the Gemma 270M model in
llama_model.cpp which should be LLM_TYPE_270M. I incorrectly added this
previously as LLM_TYPE_537M which was wrong.

The motivation for this is that it causes the model to not be identified
properly when using tools like llama-bench. For example:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model                          |       size | ...
| ------------------------------ | ---------: | ...
| gemma3 ?B Q8_0                 | 271.81 MiB | ...
| gemma3 ?B Q8_0                 | 271.81 MiB | ...
```

With the changes in this commit the output will be:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model                          |       size | ...
| ------------------------------ | ---------: | ...
| gemma3 270M Q8_0               | 271.81 MiB | ...
| gemma3 270M Q8_0               | 271.81 MiB | ...
```

4 weeks agomodel-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765)
Daniel Bevenius [Wed, 3 Sep 2025 10:50:47 +0000 (12:50 +0200)]
model-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765)

* model-conversion : remove hardcoded /bin/bash shebangs [no ci]

This commit updates the bash scripts to use env instead of using
hardcoded /bin/bash in the shebang line.

The motivation for this is that some systems may have bash installed
in a different location, and using /usr/bin/env bash ensures that
the script will use the first bash interpreter found in the user's
PATH, making the scripts more portable across different environments.

* model-conversion : rename script to .py [no ci]

This commit renames run-casual-gen-embeddings-org.sh to
run-casual-gen-embeddings-org.py to reflect its Python nature.

4 weeks agoCANN: Add RoPE contiguous check for 310I DUP device (#15735)
hipudding [Wed, 3 Sep 2025 08:46:01 +0000 (16:46 +0800)]
CANN: Add RoPE contiguous check for 310I DUP device (#15735)

4 weeks agoggml-cpu : optimize RVV kernels (#15720)
xctan [Wed, 3 Sep 2025 08:16:21 +0000 (16:16 +0800)]
ggml-cpu : optimize RVV kernels (#15720)

* ggml-cpu : optimize rvv ggml_vec_dot_f32

* ggml-cpu : optimize 128-bit rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : fix riscv arch flags

* ggml-cpu : add more rvv ops

* ggml-cpu : optimize rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : optimize rvv ggml_vec_dot_q6_K_q8_K

* ggml-cpu : minor rvv adjustments

* ggml-cpu : fix riscv include

4 weeks agomodel-conversion : add missing curl script [no ci] (#15761)
Daniel Bevenius [Wed, 3 Sep 2025 07:48:35 +0000 (09:48 +0200)]
model-conversion : add missing curl script [no ci] (#15761)

This commit adds a curl script to the model-conversion examples
which is currently missing. This script is required for running the
embedding server targets to test llama-server embeddings functionality.

4 weeks agoCANN: Mask unsupported TRANSPOSE_1D operator (#15733)
hipudding [Wed, 3 Sep 2025 06:08:22 +0000 (14:08 +0800)]
CANN: Mask unsupported TRANSPOSE_1D operator (#15733)

CANN currently does not support kernels larger than 255.
This change disables such cases.

4 weeks agoCANN: Fix type float_t to float (#15736)
Chenguang Li [Wed, 3 Sep 2025 02:43:53 +0000 (10:43 +0800)]
CANN: Fix type float_t to float (#15736)

Signed-off-by: noemotiovon <redacted>
4 weeks agofix: resolve unsigned int initialization warning for n_dims/size in gguf.cpp (#15754)
SnA1lGo [Tue, 2 Sep 2025 19:27:30 +0000 (03:27 +0800)]
fix: resolve unsigned int initialization warning for n_dims/size in gguf.cpp (#15754)

4 weeks agochore: Update `.clang-format` to use `BinPackArguments=true` (#15744)
Oliver Simons [Tue, 2 Sep 2025 17:40:37 +0000 (19:40 +0200)]
chore: Update `.clang-format` to use `BinPackArguments=true` (#15744)

This seems to correspond with what we want to do, see
[here](https://github.com/ggml-org/llama.cpp/pull/15715#discussion_r2315613796)
and [clang-format docs](https://clang.llvm.org/docs/ClangFormatStyleOptions.html#binpackarguments)

4 weeks agollama: -fa 1/0/-1 aliases for -fa on/off/auto (#15746)
Johannes Gäßler [Tue, 2 Sep 2025 16:17:26 +0000 (18:17 +0200)]
llama: -fa 1/0/-1 aliases for -fa on/off/auto (#15746)

4 weeks agovulkan: fix shaders gen when no integer dot is available (#15740)
Ruben Ortlam [Tue, 2 Sep 2025 14:02:26 +0000 (16:02 +0200)]
vulkan: fix shaders gen when no integer dot is available (#15740)

4 weeks agoCANN: Resolve soft_max precision issue (#15730)
hipudding [Tue, 2 Sep 2025 09:12:37 +0000 (17:12 +0800)]
CANN: Resolve soft_max precision issue (#15730)

Previously, the slope tensor was set to fp16 to improve efficiency.
While this worked correctly in FA, it caused precision issues in soft_max.
This change applies different data types for different operators
to balance both accuracy and performance.

4 weeks agovulkan: Fix macro parameter order for f32 matmul shaders (#15716)
Jeff Bolz [Tue, 2 Sep 2025 06:37:01 +0000 (01:37 -0500)]
vulkan: Fix macro parameter order for f32 matmul shaders (#15716)

4 weeks agoopencl: add attn sinks support for FA kernels (#15706)
rmatif [Tue, 2 Sep 2025 06:26:53 +0000 (08:26 +0200)]
opencl: add attn sinks support for FA kernels (#15706)

4 weeks agoCANN: Support eager execution mode under ACL graph compilation (#15712)
Chenguang Li [Tue, 2 Sep 2025 06:07:48 +0000 (14:07 +0800)]
CANN: Support eager execution mode under ACL graph compilation (#15712)

* [CANN] Support eager execution mode under ACL graph compilation

Add support for running operators in eager mode while ACL graph
compilation is enabled. This allows bypassing graph execution
and directly submitting ops, which is useful for debugging and
reducing graph build overhead in certain scenarios.

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* rename to acl_graph_mode

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
4 weeks agoCANN: Support ext_factor in rope (#15710)
hipudding [Tue, 2 Sep 2025 06:05:23 +0000 (14:05 +0800)]
CANN: Support ext_factor in rope (#15710)

4 weeks agoggml-backend: raise GGML_MAX_SPLIT_INPUTS (#15722)
Johannes Gäßler [Mon, 1 Sep 2025 23:14:55 +0000 (01:14 +0200)]
ggml-backend: raise GGML_MAX_SPLIT_INPUTS (#15722)

4 weeks agovulkan: use memory budget extension to read memory usage (#15545)
Gilad S. [Mon, 1 Sep 2025 19:17:42 +0000 (22:17 +0300)]
vulkan: use memory budget extension to read memory usage (#15545)

* vulkan: use memory budget extension to read memory usage

* fix: formatting and names

* formatting

* fix: detect and cache memory budget extension availability on init

* fix: read `budgetprops.heapBudget` instead of `heap.size` when memory budget extension is available

* style: lints

4 weeks agovulkan: add missing clamps in new mul_mat_id paths (#15702)
Jeff Bolz [Mon, 1 Sep 2025 19:01:10 +0000 (14:01 -0500)]
vulkan: add missing clamps in new mul_mat_id paths (#15702)

This is a missing interaction between #15546 and #15652

4 weeks agovulkan: disable large mmv subgroups on older Nvidia GPUs (#15717)
Ruben Ortlam [Mon, 1 Sep 2025 18:58:35 +0000 (20:58 +0200)]
vulkan: disable large mmv subgroups on older Nvidia GPUs (#15717)

4 weeks agoggml: SVE support for exponential functions (#15145)
s-goto-11 [Mon, 1 Sep 2025 18:13:49 +0000 (03:13 +0900)]
ggml: SVE support for exponential functions (#15145)

* SVE support for exponential functions

Add const notation to variable pg

* Update ggml/src/ggml-cpu/vec.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Add const

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks agoggml: aarch64: Implement SVE F16 kernels for vector functions (#15115)
Prashant Vithule [Mon, 1 Sep 2025 18:13:16 +0000 (23:43 +0530)]
ggml: aarch64: Implement SVE F16 kernels for vector functions (#15115)

* Added sve implementation for vec_dot_fp16 Kernel

* removed white spaces

* Added comment

* removed white spaces

* changed GGML_F16x_VEC_FMA for code consistency

* Update vec.h

---------

Co-authored-by: vithulep <redacted>