git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Eric Curtin [Thu, 4 Sep 2025 09:49:44 +0000 (10:49 +0100)]
Document the new max GPU layers default in help (#15771)

This is a key change, just letting users know.

Signed-off-by: Eric Curtin <redacted>
leejet [Thu, 4 Sep 2025 08:38:49 +0000 (16:38 +0800)]
ggml: add ops for WAN video model (cuda && cpu) (#15669)

* add conv3d support

* add ggml_pad_ext for cpu & cuda backend

* cuda/cpu: add im2col_3d support

* cuda: make im2col a little faster

* fix cuda pad/scale/im2col3d

* make im2col_3d faster

* gguf: support loading tensors which n_dims > GGML_MAX_DIMS

* fix cuda get_rows

* avoid ggml_conv_3d conflict

* correct GGML_OP_COUNT assertion

* avoid build failure

* avoid build failure on macOS

* cuda: remove unnecessary MIN define

* fix cpu im2col_3d

* adjust the code style

* cuda: use simpler loop in get_rows

* add test_im2col_3d to test-backend-ops

* test-backend-ops.cpp: remove trailing whitespace

* cpu: im2col_3d support non continuous src

Co-authored-by: Jeff Bolz <redacted>
* fix test_im2col_3d

* remove unused variables

* cuda: get_rows: dfloat2 -> float2

* add test_pad_ext to test-backend-ops.cpp

* add gguf_init_from_file_ext impl

* Revert "gguf: support loading tensors which n_dims > GGML_MAX_DIMS"

This reverts commit d8377a0a37f314bd3713fe043b4333ad661610c1.

* Revert "add gguf_init_from_file_ext impl"

This reverts commit d9f1d13208c68ef83b3538201ac7f31614fb1994.

* update ggml_backend_vk_device_supports_op

* fix ggml_backend_vk_device_supports_op

* update other backend supports op for ggml_pad_ext

* metal/opencl/sycl/vulkan: fix GGML_OP_PAD check in supports_op

---------

Co-authored-by: Jeff Bolz <redacted>
hipudding [Thu, 4 Sep 2025 07:12:30 +0000 (15:12 +0800)]
CANN: Fix precision issue on 310I DUO multi-devices (#15784)

rmatif [Thu, 4 Sep 2025 06:30:28 +0000 (08:30 +0200)]
opencl: add hs=40 to FA (#15758)

Chenguang Li [Thu, 4 Sep 2025 03:03:02 +0000 (11:03 +0800)]
CANN: fix acl_rstd allocation size in ggml_cann_rms_norm (#15760)

Fixes #15330

Adjust the allocation size of acl_rstd. The parameter `dims` is set to 3 according to the CANN documentation.

Co-authored-by: Yuchuan <redacted>
Ruben Ortlam [Wed, 3 Sep 2025 20:55:10 +0000 (22:55 +0200)]
vulkan: fix mmv subgroup16 selection (#15775)

Jeff Bolz [Wed, 3 Sep 2025 18:33:15 +0000 (13:33 -0500)]
vulkan: don't use std::string in load_shaders, to improve compile time (#15724)

* vulkan: don't use std::string in load_shaders, to improve compile time

* keep the string version for those calls that use it

Daniel Bevenius [Wed, 3 Sep 2025 18:24:50 +0000 (20:24 +0200)]
vulkan : update ggml_vk_instance_validation_ext_available (#15666)

* vulkan : update ggml_vk_instance_validation_ext_available

This commit updates ggml_vk_instance_validation_ext_available() to
check for VK_EXT_validation_features instead of
VK_KHR_portability_enumeration.

Based on how the returned boolean is used later in the code (to enable
both the validation layer and the VK_EXT_validation_features extension),
it appears the function may have been intended to check for the
validation layer features extension.

* remove try/catch

This was left over from a previous iteration where I was explicitly
querying for a specific validation layer first, which would throw.

* update warning message about validation layers

Shin-myoung-serp [Wed, 3 Sep 2025 18:22:55 +0000 (03:22 +0900)]
ggml vulkan: add hardsigmoid and hardswish operations (#15762)

Oliver Simons [Wed, 3 Sep 2025 17:59:16 +0000 (19:59 +0200)]
CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715)

* Add fastdiv, use it in modulo and use modulo in rms_norm_f32

Fastdiv is a much faster way to do integer division, which was identified
as a bottleneck in rms_norm_f32

* Support more `block_size` values in `rms_norm_f32`

This makes us more flexible in selecting the optimal number of threads
w.r.t. parallelizing across a column vs. the launch overheads of threads
and MIO throttles

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Replace modulo with fastmodulo in `rms_norm_f32`

* Use `BinPackArguments=true` for formatting function calls

Will file a separate PR to adjust .clang-format file

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <redacted>
* Use uint3 for both `fastdiv` and `fastmodulo`

The compiler seems to reliably optimize away the unused .z component in
the fastdiv use-case, see https://godbolt.org/z/rx8KPrKr3

* More constrained type declarations

Co-authored-by: Johannes Gäßler <redacted>
* Rename fastdiv and fastmodulo variables to shared variable name

As suggested by JohannesGaessler, this increases clarity of the intended
use

* Pack fastdiv/fastmodulo constants into uint2/uint3 objects

By packing constants to be used together into a struct, we are less
likely to make errors.

* Rename function parameter of fastmodulo

`modulo_consts` is more fitting/descriptive

---------

Co-authored-by: Johannes Gäßler <redacted>
Daniel Bevenius [Wed, 3 Sep 2025 16:28:36 +0000 (18:28 +0200)]
model-conversion : fix pyright errors (#15770)

This commit addresses type errors reported by pyright in the model
conversion scripts.

Georgi Gerganov [Wed, 3 Sep 2025 15:16:26 +0000 (18:16 +0300)]
sampling : optimize dist sampler (#15704)

ggml-ci

Daniel Bevenius [Wed, 3 Sep 2025 11:35:49 +0000 (13:35 +0200)]
llama : fix incorrect model type for Gemma 270M (#15764)

This commit fixes the model type for the Gemma 270M model in
llama_model.cpp which should be LLM_TYPE_270M. I incorrectly added this
previously as LLM_TYPE_537M which was wrong.

The motivation for this is that it causes the model to not be identified
properly when using tools like llama-bench. For example:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model                          |       size | ...
| ------------------------------ | ---------: | ...
| gemma3 ?B Q8_0                 | 271.81 MiB | ...
| gemma3 ?B Q8_0                 | 271.81 MiB | ...
```

With the changes in this commit the output will be:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model                          |       size | ...
| ------------------------------ | ---------: | ...
| gemma3 270M Q8_0               | 271.81 MiB | ...
| gemma3 270M Q8_0               | 271.81 MiB | ...
```

Daniel Bevenius [Wed, 3 Sep 2025 10:50:47 +0000 (12:50 +0200)]
model-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765)

* model-conversion : remove hardcoded /bin/bash shebangs [no ci]

This commit updates the bash scripts to use env instead of using
hardcoded /bin/bash in the shebang line.

The motivation for this is that some systems may have bash installed
in a different location, and using /usr/bin/env bash ensures that
the script will use the first bash interpreter found in the user's
PATH, making the scripts more portable across different environments.

* model-conversion : rename script to .py [no ci]

This commit renames run-casual-gen-embeddings-org.sh to
run-casual-gen-embeddings-org.py to reflect its Python nature.

hipudding [Wed, 3 Sep 2025 08:46:01 +0000 (16:46 +0800)]
CANN: Add RoPE contiguous check for 310I DUP device (#15735)

xctan [Wed, 3 Sep 2025 08:16:21 +0000 (16:16 +0800)]
ggml-cpu : optimize RVV kernels (#15720)

* ggml-cpu : optimize rvv ggml_vec_dot_f32

* ggml-cpu : optimize 128-bit rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : fix riscv arch flags

* ggml-cpu : add more rvv ops

* ggml-cpu : optimize rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : optimize rvv ggml_vec_dot_q6_K_q8_K

* ggml-cpu : minor rvv adjustments

* ggml-cpu : fix riscv include

Daniel Bevenius [Wed, 3 Sep 2025 07:48:35 +0000 (09:48 +0200)]
model-conversion : add missing curl script [no ci] (#15761)

This commit adds a curl script to the model-conversion examples
which is currently missing. This script is required for running the
embedding server targets to test llama-server embeddings functionality.

hipudding [Wed, 3 Sep 2025 06:08:22 +0000 (14:08 +0800)]
CANN: Mask unsupported TRANSPOSE_1D operator (#15733)

CANN currently does not support kernels larger than 255.
This change disables such cases.

Chenguang Li [Wed, 3 Sep 2025 02:43:53 +0000 (10:43 +0800)]
CANN: Fix type float_t to float (#15736)

Signed-off-by: noemotiovon <redacted>
SnA1lGo [Tue, 2 Sep 2025 19:27:30 +0000 (03:27 +0800)]
fix: resolve unsigned int initialization warning for n_dims/size in gguf.cpp (#15754)

Oliver Simons [Tue, 2 Sep 2025 17:40:37 +0000 (19:40 +0200)]
chore: Update `.clang-format` to use `BinPackArguments=true` (#15744)

This seems to correspond with what we want to do, see
[here](https://github.com/ggml-org/llama.cpp/pull/15715#discussion_r2315613796)
and [clang-format docs](https://clang.llvm.org/docs/ClangFormatStyleOptions.html#binpackarguments)

Johannes Gäßler [Tue, 2 Sep 2025 16:17:26 +0000 (18:17 +0200)]
llama: -fa 1/0/-1 aliases for -fa on/off/auto (#15746)

Ruben Ortlam [Tue, 2 Sep 2025 14:02:26 +0000 (16:02 +0200)]
vulkan: fix shaders gen when no integer dot is available (#15740)

hipudding [Tue, 2 Sep 2025 09:12:37 +0000 (17:12 +0800)]
CANN: Resolve soft_max precision issue (#15730)

Previously, the slope tensor was set to fp16 to improve efficiency.
While this worked correctly in FA, it caused precision issues in soft_max.
This change applies different data types for different operators
to balance both accuracy and performance.

Jeff Bolz [Tue, 2 Sep 2025 06:37:01 +0000 (01:37 -0500)]
vulkan: Fix macro parameter order for f32 matmul shaders (#15716)

rmatif [Tue, 2 Sep 2025 06:26:53 +0000 (08:26 +0200)]
opencl: add attn sinks support for FA kernels (#15706)

Chenguang Li [Tue, 2 Sep 2025 06:07:48 +0000 (14:07 +0800)]
CANN: Support eager execution mode under ACL graph compilation (#15712)

* [CANN] Support eager execution mode under ACL graph compilation

Add support for running operators in eager mode while ACL graph
compilation is enabled. This allows bypassing graph execution
and directly submitting ops, which is useful for debugging and
reducing graph build overhead in certain scenarios.

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* rename to acl_graph_mode

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
hipudding [Tue, 2 Sep 2025 06:05:23 +0000 (14:05 +0800)]
CANN: Support ext_factor in rope (#15710)

Johannes Gäßler [Mon, 1 Sep 2025 23:14:55 +0000 (01:14 +0200)]
ggml-backend: raise GGML_MAX_SPLIT_INPUTS (#15722)

Gilad S. [Mon, 1 Sep 2025 19:17:42 +0000 (22:17 +0300)]
vulkan: use memory budget extension to read memory usage (#15545)

* vulkan: use memory budget extension to read memory usage

* fix: formatting and names

* formatting

* fix: detect and cache memory budget extension availability on init

* fix: read `budgetprops.heapBudget` instead of `heap.size` when memory budget extension is available

* style: lints

Jeff Bolz [Mon, 1 Sep 2025 19:01:10 +0000 (14:01 -0500)]
vulkan: add missing clamps in new mul_mat_id paths (#15702)

This is a missing interaction between #15546 and #15652

Ruben Ortlam [Mon, 1 Sep 2025 18:58:35 +0000 (20:58 +0200)]
vulkan: disable large mmv subgroups on older Nvidia GPUs (#15717)

s-goto-11 [Mon, 1 Sep 2025 18:13:49 +0000 (03:13 +0900)]
ggml: SVE support for exponential functions (#15145)

* SVE support for exponential functions

Add const notation to variable pg

* Update ggml/src/ggml-cpu/vec.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Add const

---------

Co-authored-by: Georgi Gerganov <redacted>
Prashant Vithule [Mon, 1 Sep 2025 18:13:16 +0000 (23:43 +0530)]
ggml: aarch64: Implement SVE F16 kernels for vector functions (#15115)

* Added sve implementation for vec_dot_fp16 Kernel

* removed white spaces

* Added comment

* removed white spaces

* changed GGML_F16x_VEC_FMA for code consistency

* Update vec.h

---------

Co-authored-by: vithulep <redacted>
Jie Fu (傅杰) [Mon, 1 Sep 2025 15:53:31 +0000 (23:53 +0800)]
convert : remove redundant code (#15708)

Signed-off-by: Jie Fu <redacted>
Ruben Ortlam [Mon, 1 Sep 2025 14:19:07 +0000 (16:19 +0200)]
Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (#14903)

* vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants

* vulkan: use subgroup operations for quantize_q8_1 shader

* vulkan: add q8_1_x4 type with 128-bit alignment, use in mul_mat_vecq shader

* vulkan: use q8_1_x4 blocks in mul_mmq shader

* vulkan: do 8 calculations per invocation instead of 32 in mul_mat_vecq, similar to mul_mat_vec

* vulkan: tune mul_mat_vecq performance for Intel

* vulkan: fix quantizing issue when tensor is not divisible by 128

* vulkan: adapt integer dot mmv to mmv small m optimization (#15355)

* vulkan: allow all subgroup modes for mmv and mmvq

* vulkan: use prealloc intermediate reuse for mmvq path

* vulkan: tune mmvq for Intel, AMD GCN and Nvidia RTX 3090

* vulkan: adapt mmv quantize_y path to conditional sync logic

* vulkan: disable q8_0 mmvq on Nvidia

* vulkan: enable q8_0 on Nvidia pre-turing

* fix prealloc sync condition

* fix llvmpipe subgroup 8 issue

Daniel Bevenius [Mon, 1 Sep 2025 12:28:49 +0000 (14:28 +0200)]
ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (#15695)

* ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops

This commit adds support for the TRANSPOSE and RESHAPE operations in the
ggml webgpu backend.

Co-authored-by: Diego Devesa <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Jie Fu (傅杰) [Mon, 1 Sep 2025 07:34:59 +0000 (15:34 +0800)]
docs : add Hunyuan to models section (#15707)

Signed-off-by: Jie Fu <redacted>
Akarshan Biswas [Mon, 1 Sep 2025 01:25:06 +0000 (06:55 +0530)]
CUDA: fix build error from ambiguous __half conversions in conv2d (#15690)

* CUDA: fix build error from ambiguous __half conversions in conv2d

Building conv2d with half precision failed because `__half` defines
multiple implicit conversion operators (to float, int, short, etc.),
causing ambiguous overload resolution when multiplying with float.

Introduce a templated `to_float` helper that explicitly converts
`__half` via `__half2float`, while passing through float unchanged.
Use this helper in conv2d accumulation to ensure unambiguous and
correct promotion to float.

Fixes some build errors with half-precision kernels on CUDA.

ggml-ci

* CUDA: Replace custom to_float helper with unified ggml_cuda_cast and add half->float conversion

* CUDA: Add missing convert.cuh header

* CUDA: remove unnecessary extension in ggml_cuda_cast

* CUDA: Address review comment, remove second type template argument

hipudding [Mon, 1 Sep 2025 00:57:23 +0000 (08:57 +0800)]
CANN: Optimize MUL_MAT_ID (#15658)

hipudding [Mon, 1 Sep 2025 00:57:00 +0000 (08:57 +0800)]
CANN: fix RoPE cache issue on multi-device (#15629)

* CANN: fix RoPE cache issue on multi-device

RoPE cache only needs to be computed once per token.
However, in multi-device scenarios, not every device starts
computation from layer 0, which may lead to unallocated memory
issues and precision errors.

This commit records the first layer of each device to avoid
the above issues.

* CANN: Optimize first-layer detection method

* CANN: Remove trailing whitespace

* CANN: Only cache the data that can be determined as unchanged through the parameters.

* CANN: Update function comment

Georgi Gerganov [Sun, 31 Aug 2025 17:41:02 +0000 (20:41 +0300)]
sampling : optimize samplers by reusing bucket sort (#15665)

* sampling : optimize sorting using bucket sort in more places

ggml-ci

* sampling : do not sort in dist sampler

ggml-ci

* sampling : avoid heap allocations for sort buffers

ggml-ci

* common : add option to sort sampling candidates by probability

ggml-ci

* sampling : revert the change for preserving sort buffers

* sampling : use std::copy instead of memcpy

* sampling : clarify purpose of partial sort helpers

ggml-ci

* cont : remove wrong comment [no ci]

* common : update comment

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Sun, 31 Aug 2025 17:11:58 +0000 (20:11 +0300)]
server : enable /slots by default and make it secure (#15630)

* server : enable /slots by default and make it secure

ggml-ci

* server : fix tests to pass `--no-slots` when necessary

* server : extend /props with info about enabled endpoints

Georgi Gerganov [Sun, 31 Aug 2025 16:43:30 +0000 (19:43 +0300)]
metal : fix checks for available FA kernels (#15700)

* metal : fix checks for available FA kernels

ggml-ci

* cont : fix comment [no ci]

Diego Devesa [Sun, 31 Aug 2025 15:47:05 +0000 (08:47 -0700)]
llama : fix fattn reserve call n_seqs parameter (#15699)

ggml-ci

Diego Devesa [Sun, 31 Aug 2025 13:49:03 +0000 (06:49 -0700)]
llama : separate compute buffer reserve from fattn check (#15696)

Exposes ggml_backend_sched_split_graph() to allow splitting the graph without allocating compute buffers and uses it to split the graph for the automatic Flash Attention check.

Sigbjørn Skjæret [Sun, 31 Aug 2025 13:30:20 +0000 (15:30 +0200)]
ci : explicitly set fa off or on (#15692)

Jeff Bolz [Sun, 31 Aug 2025 08:13:27 +0000 (03:13 -0500)]
vulkan: handle large sizes for get_rows (#15686)

Jeff Bolz [Sun, 31 Aug 2025 07:06:43 +0000 (02:06 -0500)]
vulkan: mul_mat_id coopmat2 optimizations (#15546)

* vulkan: mul_mat_id coopmat2 optimizations

Add a path for when the tile fits in BN/2, similar to what we have for mul_mat.

Only call fetch_scales/store_scales once per QUANT_K block, and once at the
beginning in case start_k is not aligned.

* Also add a path for BN/4 - worth a couple more percent

Daniel Bevenius [Sun, 31 Aug 2025 06:46:42 +0000 (08:46 +0200)]
vulkan : remove unused portability_enumeration_ext variable (#15679)

This commit removes the portability_enumeration_ext variable from the
ggml_vk_instance_portability_enumeration_ext_available function as it
is initialized to false but never modified, making it redundant.

Jeff Bolz [Sun, 31 Aug 2025 06:30:54 +0000 (01:30 -0500)]
vulkan: Allow fallback to sysmem memory when vidmem is full (#15649)

* vulkan: Allow fallback to sysmem memory when vidmem is full

* vulkan: Add env var GGML_VK_ALLOW_SYSMEM_FALLBACK

Jeff Bolz [Sun, 31 Aug 2025 06:27:57 +0000 (01:27 -0500)]
vulkan: clamp matmul and FA results to the max finite value (#15652)

* vulkan: clamp matmul and FA results to the max finite value

* only clamp for fp16

Charles Xu [Sat, 30 Aug 2025 16:03:42 +0000 (18:03 +0200)]
ggml: update kleidiai to v1.13.0 (#15663)

Diego Devesa [Sat, 30 Aug 2025 15:51:28 +0000 (08:51 -0700)]
Update build.md to remove MSVC arm64 notes (#15684)

Removed information about MSVC compiler limitations for arm64 builds.

Johannes Gäßler [Sat, 30 Aug 2025 14:32:10 +0000 (16:32 +0200)]
llama: use FA + max. GPU layers by default (#15434)

* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault

Johannes Gäßler [Sat, 30 Aug 2025 14:20:32 +0000 (16:20 +0200)]
CUDA: use FP32 arithmetic for conv2d (#15683)

Jeff Bolz [Sat, 30 Aug 2025 09:11:22 +0000 (04:11 -0500)]
vulkan: Skip syncing for prealloc_y when it is reused (#15544)

Chenguang Li [Sat, 30 Aug 2025 02:18:35 +0000 (10:18 +0800)]
CANN: Fix compiler warnings (#15661)

Signed-off-by: noemotiovon <redacted>
Sergey Alirzaev [Fri, 29 Aug 2025 22:12:53 +0000 (00:12 +0200)]
server : removed obsolete doc (#15670)

completing a4090d1174aed22dde5cacce2a4c27656b987a2f

Johannes Gäßler [Fri, 29 Aug 2025 20:04:08 +0000 (22:04 +0200)]
scripts: strip "AMD Instinct" from GPU name (#15668)

ExtReMLapin [Fri, 29 Aug 2025 17:25:40 +0000 (19:25 +0200)]
server : add documentation for `parallel_tool_calls` param (#15647)

Co-authored-by: Pierre F <redacted>
Aman Gupta [Fri, 29 Aug 2025 13:30:06 +0000 (21:30 +0800)]
CUDA: fix bug in rms_norm fusion (#15660)

* CUDA: fix bug in rms_norm fusion

* Fix bug for OP_REPEAT

* Fix index for add

Piotr Wilkin (ilintar) [Fri, 29 Aug 2025 12:53:41 +0000 (14:53 +0200)]
chat : Seed OSS thinking + tool call support (#15552)

* Reasoning and tool-calling support for Seed OSS

* Fix grammar and partial parsing

* Whitespace

* New chat template

* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Remove unused 'purge_healing_marker' helper

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Aman Gupta [Fri, 29 Aug 2025 03:35:58 +0000 (11:35 +0800)]
CUDA: fuse adds, fuse add with rms norm (#15631)

* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast

Gabe Goodhart [Fri, 29 Aug 2025 00:39:31 +0000 (18:39 -0600)]
nvidia nemotron nano v2 (nemotronh) (#15507)

* feat: Add NEMOTRONH to python arch enum

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add NEMOTRONH to c++ arch enum

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add NEMOTRONH to llama-arch layer map

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: First pass at conversion for nemotronh

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add a verbose log for each tensor loaded

This is really helpful for diagnosing mismatches between the expected and
received tensors

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: First (broken) pass at nemotronh model architecture

It generates tokens, just not valid ones!

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Explicitly enable add_bos_token during conversion

The `tokenizer.json`/`tokenizer_config.json` in the model are a bit
contradictory. In the config, add_bos_token is set to False, but the
tokenizer model itself has a post_processor that adds the BOS token via
type: TemplateProcessing

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use relu2 (LLM_FFN_RELU_SQR) for activation in FFN layers

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Only allocate attention cache for attention layers (not non-recurrent)

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Move residual add to after every block

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use the correct norm tensor for the MLP blocks

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* Nemotron-H: MLP gate cleanup (pass NULL for unused gate)

This model does not use a gate in MLP blocks; pass NULLs for gate tensors to make intent clear and avoid unused-pointer noise.

* SSM: respect ssm_dt_rank for dt_dim when provided

Use GGUF-provided time_step_rank (ssm_dt_rank) to set dt_dim when > 0; fallback to max(64, n_embd/16).

* fix: plamo2 - revert dt_dim to default (remove ssm_dt_rank usage)

* Rename nemotronh to nemotron_h for consistency

- Update architecture name from NEMOTRONH to NEMOTRON_H in constants.py
- Change architecture string from 'nemotronh' to 'nemotron_h' in all files
- Update enum LLM_ARCH_NEMOTRONH to LLM_ARCH_NEMOTRON_H
- Update class name llm_build_nemotronh to llm_build_nemotron_h
- Consistent naming with underscore convention (nemotron_h vs nemotronh)

* feat: Support conversion for older NemotronH models

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Maicon Domingues <redacted>
Co-authored-by: weatherman <redacted>
Gabe Goodhart [Thu, 28 Aug 2025 20:27:36 +0000 (15:27 -0500)]
fix: Compute the full sum in llama-eval-callback, not just the sum of printed values (#15637)

This makes it much easier to compare between llama.cpp and transformers!

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
mnehete32 [Thu, 28 Aug 2025 18:33:03 +0000 (00:03 +0530)]
CUDA: add conv2d (#15635)

* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const

Aaron Teo [Thu, 28 Aug 2025 14:39:27 +0000 (22:39 +0800)]
ggml-cpu: fix invalid hsum build in debug s390x (#15634)

Signed-off-by: Aaron Teo <redacted>
compilade [Thu, 28 Aug 2025 14:11:36 +0000 (10:11 -0400)]
ggml : fix SSM_SCAN for n_groups > 1 (#15625)

Georgi Gerganov [Thu, 28 Aug 2025 14:09:05 +0000 (17:09 +0300)]
kv-cache : fix find_slot to not search for continuous slot (#15638)

ggml-ci

Sigbjørn Skjæret [Thu, 28 Aug 2025 13:49:50 +0000 (15:49 +0200)]
model : jina-embeddings-v3 support (#13693)

* initial jina-embeddings-v3 support

* initial jina-embeddings-v3 support

* initial jina-embeddings-v3 support

* fix vocab parsing with only tokenizer.json

* set mask token lstrip attribute

* additional unk_token_id fallback just in case [no ci]

* revert vocab_size() change [no ci]

* merge tensor loading into general bert

* rope

* add lora embedding and loading (non-functional)

* export separate lora ggufs instead

* add adapter metadata api

* use std::string

* convert_hf_to_lora compatibility

* fix assert

* apply suggestions from review

* apply suggestion from review

Aman Gupta [Thu, 28 Aug 2025 11:23:22 +0000 (19:23 +0800)]
scripts: add sqlite3 check for compare-commits.sh (#15633)

Georgi Gerganov [Thu, 28 Aug 2025 09:27:02 +0000 (12:27 +0300)]
kv-cache : remove LLAMA_SET_ROWS checks (#15505)

ggml-ci

Aleksei Nikiforov [Thu, 28 Aug 2025 08:56:41 +0000 (10:56 +0200)]
gguf-py: byteswapping improvements (#12851)

* gguf-py: implement byteswapping for Q4_0

This is needed to byteswap the Mistral model.

Also restore original shapes after byteswapping tensors.
It is not needed at the moment, but do it in case
they're used in the future.

* Rework byteswapping code in gguf-py

Move out details from byteswapping tensor blocks code

5 weeks ago cli : change log to warning to explain reason for stopping (#15604)
Joshua Cogliati [Thu, 28 Aug 2025 07:48:20 +0000 (01:48 -0600)]
cli : change log to warning to explain reason for stopping (#15604)

* Change to warn instead of debug, to explain the reason for stopping.

* Update tools/main/main.cpp

Fix printing --2

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
5 weeks ago model-conversion : add mmproj conversion target (#15628)
Daniel Bevenius [Thu, 28 Aug 2025 07:26:48 +0000 (09:26 +0200)]
model-conversion : add mmproj conversion target (#15628)

This commit adds a new target to the Makefile for converting models that
are multimodal. This target will convert the original model and in
addition also create the mmproj GGUF model.

The motivation for this change is that for multimodal models, for
example those that contain a vision encoder, we will often want to
upload both the quantized model and the vision encoder model to
HuggingFace.

Example usage:
```console
$ make causal-convert-mm-model MODEL_PATH=~/work/ai/models/gemma-3-4b-it-qat-q4_0-unquantized/
...
The environment variable CONVERTED_MODEL can be set to this path using:
export CONVERTED_MODEL=/home/danbev/work/ai/llama.cpp/models/gemma-3-4b-it-qat-q4_0-unquantized.gguf
The mmproj model was created in /home/danbev/work/ai/llama.cpp/models/mmproj-gemma-3-4b-it-qat-q4_0-unquantized.gguf
```
The converted original model can then be quantized, and after that both
the quantized model and the mmproj file can then be uploaded to
HuggingFace.

Refs: https://huggingface.co/ggml-org/gemma-3-4b-it-qat-GGUF/tree/main

5 weeks ago cuda: Add cublasLt_static linking when GGML_STATIC is enabled (#15622)
matiaslin [Thu, 28 Aug 2025 00:32:36 +0000 (17:32 -0700)]
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (#15622)

Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when the CUDA version is
greater than 10.1.
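A sketch of the kind of CMake guard involved (the target and variable names here are assumptions, not the exact upstream diff; CUDA::cublasLt_static is the imported target provided by CMake's FindCUDAToolkit module):

```cmake
# Sketch: prefer the static cuBLASLt target for static builds on
# sufficiently new CUDA toolkits.
if (GGML_STATIC)
    if (CUDAToolkit_VERSION VERSION_GREATER "10.1")
        target_link_libraries(ggml PRIVATE CUDA::cublasLt_static)
    else ()
        target_link_libraries(ggml PRIVATE CUDA::cublasLt)
    endif ()
endif ()
```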

5 weeks ago server: higher timeout for tests (#15621)
Johannes Gäßler [Wed, 27 Aug 2025 18:58:09 +0000 (20:58 +0200)]
server: higher timeout for tests (#15621)

5 weeks ago presets : add qwen3-30B-a3b FIM (#15616)
Georgi Gerganov [Wed, 27 Aug 2025 12:48:07 +0000 (15:48 +0300)]
presets : add qwen3-30B-a3b FIM (#15616)

5 weeks ago HIP: Enable support for ggml_backend_cuda_register_host_buffer (#15615)
uvos [Wed, 27 Aug 2025 11:58:54 +0000 (13:58 +0200)]
HIP: Enable support for ggml_backend_cuda_register_host_buffer (#15615)

5 weeks ago kv-cache : better estimate of n_kv for multi-sequence batches (#15610)
Georgi Gerganov [Wed, 27 Aug 2025 10:55:12 +0000 (13:55 +0300)]
kv-cache : better estimate of n_kv for multi-sequence batches (#15610)

ggml-ci

5 weeks ago CANN: refactor mask handling and improve performance in FA (#15561)
Chenguang Li [Wed, 27 Aug 2025 09:21:41 +0000 (17:21 +0800)]
CANN: refactor mask handling and improve performance in FA (#15561)

* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <redacted>
* [CANN]: fix review

Signed-off-by: noemotiovon <redacted>
* [CANN]: Optimize FA BNSD to BSND

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
5 weeks ago ggml-cpu : add basic RVV support for vector f32 ops (#15057)
xctan [Wed, 27 Aug 2025 08:44:22 +0000 (16:44 +0800)]
ggml-cpu : add basic RVV support for vector f32 ops (#15057)

* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax

5 weeks ago common : add -m to bash completion for --model [no ci] (#15591)
Daniel Bevenius [Wed, 27 Aug 2025 08:28:53 +0000 (10:28 +0200)]
common : add -m to bash completion for --model [no ci] (#15591)

This commit updates the bash completion script to include the -m
short option for the --model argument.

The motivation for this is that currently tab completion only works for
the full --model option, and it is nice to have it work for the short
option as well.

5 weeks ago OpenCL: add fused group_norm/norm, mul, add (#15314)
rmatif [Wed, 27 Aug 2025 06:36:05 +0000 (08:36 +0200)]
OpenCL: add fused group_norm/norm, mul, add (#15314)

* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace

5 weeks ago tests : fix test-opt with GGML_BACKEND_DL (#15599)
Diego Devesa [Tue, 26 Aug 2025 20:14:38 +0000 (13:14 -0700)]
tests : fix test-opt with GGML_BACKEND_DL (#15599)

5 weeks ago SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (#15592)
Akarshan Biswas [Tue, 26 Aug 2025 18:57:49 +0000 (00:27 +0530)]
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (#15592)

The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.

5 weeks ago mtmd : fix mtmd ios build (#15579)
fidoriel [Tue, 26 Aug 2025 18:05:50 +0000 (20:05 +0200)]
mtmd : fix mtmd ios build (#15579)

5 weeks ago tests: add performance test for mul mat id (#15543)
Eve [Tue, 26 Aug 2025 15:42:49 +0000 (15:42 +0000)]
tests: add performance test for mul mat id (#15543)

5 weeks ago llamafile: PowerPC Sgemm Optimization (#15558)
shalinib-ibm [Tue, 26 Aug 2025 15:35:25 +0000 (21:05 +0530)]
llamafile: PowerPC Sgemm Optimization (#15558)

This patch improves GEMM for FP32 Data Type on PowerPC

Implements GEMM on large blocks with configurable block size mc, nc, kc
(default: 256, 256, 256).
Packing Function optimized to access blocks as per memory layout.
GEMM Optimized to work on larger blocks.
Isolated Packing from GEMM Operations for better MMA utilization.

Verified functionality and correctness using llama-cli and a standalone
test case (performs matmul and compares the final matrix C result with the base).

Minor code refactoring changes:
Replace macro with inline function
Code Indent made consistent with 4 spaces

Performance Testing:

Observed 50% ~ 70% improvement in Prompt Processing Speed measured using
llama-bench with Meta-Llama3-8B FP32 Model.  Similar gains observed with
Mistral-7b-Instruct-v0.3 Model.

model                   Size                Params     Backend       Threads   Test    Patch (t/s)   Base (t/s)
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp512   98.58   60.3
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp1024  95.88   57.36
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp2048  85.46   53.26
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp4096  68.66   45.78
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp6144  57.35   40.44

25 ~ 30% improvement in llama-batched-bench with Meta-Llama3-8B in
Prompt Processing Speed for large prompts (256, 512, 1024, 2048, 4096 tokens)
with various batch sizes (1, 2, 4, 8, 16)

Signed-off-by: Shalini Salomi Bodapati <redacted>
5 weeks ago graph : fix assert in memory-less build_attn (#15590)
Georgi Gerganov [Tue, 26 Aug 2025 14:45:17 +0000 (17:45 +0300)]
graph : fix assert in memory-less build_attn (#15590)

ggml-ci

5 weeks ago model-conversion : add qat-q4 quantization targets (#15588)
Daniel Bevenius [Tue, 26 Aug 2025 14:12:29 +0000 (16:12 +0200)]
model-conversion : add qat-q4 quantization targets (#15588)

This commit adds two targets to the Makefile for quantizing
Quantization Aware Trained (QAT) models to Q4_0 format.

The motivation for this is that these targets set the token embedding and
the output tensor data types to Q8_0 instead of the default Q6_K. This is
something that we wish to enforce for QAT Q4_0 models that are to be
uploaded to ggml-org on HuggingFace to guarantee the best quality.

5 weeks ago CUDA: return -1 for nonexistent compiled arch (#15587)
Johannes Gäßler [Tue, 26 Aug 2025 14:01:20 +0000 (16:01 +0200)]
CUDA: return -1 for nonexistent compiled arch (#15587)

5 weeks ago metal : optimize FA vec for large sequences and BS <= 8 (#15566)
Georgi Gerganov [Tue, 26 Aug 2025 11:22:14 +0000 (14:22 +0300)]
metal : optimize FA vec for large sequences and BS <= 8 (#15566)

* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci

5 weeks ago mtmd : support Kimi VL model (#15458)
Xuan-Son Nguyen [Tue, 26 Aug 2025 10:54:19 +0000 (12:54 +0200)]
mtmd : support Kimi VL model (#15458)

* convert : fix tensor naming conflict for llama 4 vision

* convert ok

* support kimi vision model

* clean up

* fix style

* fix calc number of output tokens

* refactor resize_position_embeddings

* add test case

* rename build fn

* correct a small bug

5 weeks ago context : print graph stats for memory-less contexts (#15586)
Georgi Gerganov [Tue, 26 Aug 2025 09:47:00 +0000 (12:47 +0300)]
context : print graph stats for memory-less contexts (#15586)

ggml-ci

5 weeks ago metal : improve `MUL_MAT_ID` (#15541)
Georgi Gerganov [Tue, 26 Aug 2025 09:46:15 +0000 (12:46 +0300)]
metal : improve `MUL_MAT_ID` (#15541)

* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci

5 weeks ago model : support MiniCPM-V 4.5 (#15575)
tc-mb [Tue, 26 Aug 2025 08:05:55 +0000 (16:05 +0800)]
model : support MiniCPM-V 4.5 (#15575)

5 weeks ago gguf-py : remove erroneous FFN_GATE entry (#15583)
Sigbjørn Skjæret [Tue, 26 Aug 2025 07:08:08 +0000 (09:08 +0200)]
gguf-py : remove erroneous FFN_GATE entry (#15583)

5 weeks ago metal : remove contiguous assertion for src0 in IM2COL (#15577)
Sigbjørn Skjæret [Tue, 26 Aug 2025 06:51:43 +0000 (08:51 +0200)]
metal : remove contiguous assertion for src0 in IM2COL (#15577)

* remove contiguous assertion for src0 in IM2COL

* add contiguous check in supports_op