git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
5 weeks agoCUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451)
Qeeweew [Mon, 25 Aug 2025 21:21:22 +0000 (05:21 +0800)]
CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451)

* CUDA: optimize get_int_from_table_16

* CUDA: use v_perm_b32 to replace byte_perm on AMD GPUs

* revise documentation

---------

Co-authored-by: xix <redacted>
Co-authored-by: Johannes Gäßler <redacted>
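
The log does not include the kernel change itself; as background, a host-side C++ emulation (illustrative only, not the actual `get_int_from_table_16` code) shows the byte-selection semantics that `__byte_perm` provides and why it can replace a per-nibble table lookup:

```cpp
#include <cassert>
#include <cstdint>

// Host-side emulation of CUDA's __byte_perm(a, b, s): the two operands form
// an 8-byte value (b in the high half), and each selector nibble of s picks
// one of those bytes for the result. The replicate/sign bit (bit 3 of each
// nibble) is ignored in this simplified sketch.
uint32_t byte_perm(uint32_t a, uint32_t b, uint32_t s) {
    const uint64_t v = ((uint64_t) b << 32) | a;
    uint32_t r = 0;
    for (int i = 0; i < 4; ++i) {
        const uint32_t sel = (s >> (4 * i)) & 0x7;
        r |= (uint32_t)((v >> (8 * sel)) & 0xFF) << (8 * i);
    }
    return r;
}
```

One `byte_perm` call fetches four 8-bit table entries at once from an 8-entry table packed into two registers; a 16-entry table such as MXFP4's needs a second register pair plus a select on the high index bit. On AMD GPUs the same pattern maps to `v_perm_b32`, as the second bullet notes.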
5 weeks agoopencl: fix support ops condition for `rms_norm` (#15560)
lhez [Mon, 25 Aug 2025 21:18:09 +0000 (14:18 -0700)]
opencl: fix support ops condition for `rms_norm` (#15560)

5 weeks agovulkan: fix min subgroup 16 condition for mmid subgroup optimization (#15565)
Ruben Ortlam [Mon, 25 Aug 2025 15:56:59 +0000 (17:56 +0200)]
vulkan: fix min subgroup 16 condition for mmid subgroup optimization (#15565)

5 weeks agotests: Generate unique input values for count_equal (#15487)
Jeff Bolz [Mon, 25 Aug 2025 15:47:16 +0000 (10:47 -0500)]
tests: Generate unique input values for count_equal (#15487)

This avoids backend-dependent behavior for argmax that leads to intermittent failures.
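
The fix can be illustrated with a hedged C++ sketch (names are mine, not the test harness's): generating inputs as a shuffled permutation guarantees distinct values, so argmax has no ties for different backends to break differently:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Generate n distinct test values by shuffling 0..n-1. With unique values,
// argmax has a single well-defined answer, so count_equal comparisons
// between backends cannot fail on tie-breaking order.
std::vector<int> unique_inputs(size_t n, uint32_t seed) {
    std::vector<int> v(n);
    std::iota(v.begin(), v.end(), 0);
    std::shuffle(v.begin(), v.end(), std::mt19937(seed));
    return v;
}
```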

5 weeks agometal: fix regression when no metal devices are present (#15531)
Ihar Hrachyshka [Mon, 25 Aug 2025 15:27:34 +0000 (11:27 -0400)]
metal: fix regression when no metal devices are present (#15531)

5 weeks agoCUDA: MoE helper in device code, better tile sizes (#15525)
Johannes Gäßler [Mon, 25 Aug 2025 15:23:40 +0000 (17:23 +0200)]
CUDA: MoE helper in device code, better tile sizes (#15525)

* CUDA: MoE helper in device code, better tile sizes

* reduce superfluous CUDA blocks

5 weeks agomodel-conversion : set pooling type to none in logits.cpp (#15564)
Daniel Bevenius [Mon, 25 Aug 2025 13:00:43 +0000 (15:00 +0200)]
model-conversion : set pooling type to none in logits.cpp (#15564)

This commit explicitly sets the pooling type to 'none' in logits.cpp
to support models that have a pooling type specified.

The motivation for this is that some models may have a pooling type set
in the model file (.gguf file), and for this specific case, where we
only want to extract logits, we need to ensure that no pooling is used
so that we are comparing raw logits and not pooled embeddings.

5 weeks agomodel-conversion : add model card template for embeddings [no ci] (#15557)
Daniel Bevenius [Mon, 25 Aug 2025 12:25:25 +0000 (14:25 +0200)]
model-conversion : add model card template for embeddings [no ci] (#15557)

* model-conversion: add model card template for embeddings [no ci]

This commit adds a separate model card template (model repository
README.md template) for embedding models.

The motivation for this is that the server command for the embedding
model is a little different, and some additional information can be
useful in the model card for embedding models which might not be
directly relevant for causal models.

* squash! model-conversion: add model card template for embeddings [no ci]

Fix pyright lint error.

* remove --pooling override and clarify embd_normalize usage

5 weeks agobatched-bench : fix unified KV cache handling + pp timing (#15562)
Georgi Gerganov [Mon, 25 Aug 2025 10:56:43 +0000 (13:56 +0300)]
batched-bench : fix unified KV cache handling + pp timing (#15562)

* batched-bench : fix unified KV cache handling + pp timing

* cont : run dummy token only with split KV cache

5 weeks agoconvert : update Ernie 4.5 dense architecture name (#15555)
Weizhao Ouyang [Mon, 25 Aug 2025 09:15:06 +0000 (17:15 +0800)]
convert : update Ernie 4.5 dense architecture name (#15555)

Signed-off-by: Weizhao Ouyang <redacted>
5 weeks agometal : add FA kernels for HS=40 (#15559)
Georgi Gerganov [Mon, 25 Aug 2025 07:14:48 +0000 (10:14 +0300)]
metal : add FA kernels for HS=40 (#15559)

ggml-ci

5 weeks agoconvert : support interns1-mini (#15412)
RunningLeon [Mon, 25 Aug 2025 06:32:16 +0000 (14:32 +0800)]
convert : support interns1-mini (#15412)

* support interns1-mini

* fix comment

* update

5 weeks agoCANN: ROPE cache sin/cos repeat (#15501)
Chenguang Li [Mon, 25 Aug 2025 02:32:21 +0000 (10:32 +0800)]
CANN: ROPE cache sin/cos repeat (#15501)

Signed-off-by: noemotiovon <redacted>
5 weeks agovulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (#15524)
Ruben Ortlam [Sun, 24 Aug 2025 17:36:36 +0000 (19:36 +0200)]
vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (#15524)

* vulkan: use subgroup function for mul_mat_id shader even without coopmat

* vulkan: fix compile warnings

* vulkan: properly check for subgroup size control and require full subgroups for subgroup mul_mat_id

* vulkan: disable subgroup mul_mat_id on devices with subgroups < 16

5 weeks agokv-cache : support layer reuse (#15504)
Georgi Gerganov [Sun, 24 Aug 2025 10:07:07 +0000 (13:07 +0300)]
kv-cache : support layer reuse (#15504)

* kv-cache : support layer reuse

ggml-ci

* cont : update comments [no ci]

5 weeks agovulkan: Support FA with any multiple of 8 head sizes (#15537)
Jeff Bolz [Sun, 24 Aug 2025 09:24:25 +0000 (04:24 -0500)]
vulkan: Support FA with any multiple of 8 head sizes (#15537)

The scalar FA shader already handled multiples of 8. The coopmat1 FA
shader assumed 16x16x16 and the shared memory allocations need the HSK
dimensions padded to a multiple of 16. NVIDIA's coopmat2 implementation
requires multiples of 16 for N and K, and needs the matrix dimensions
padded and loads clamped.

Store the FA pipelines in a map, indexed by the pipeline state.
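
A hedged sketch of the padding arithmetic involved (illustrative helpers, not the shader code): head sizes are rounded up to the granularity a given path requires, with out-of-range loads clamped:

```cpp
#include <cassert>
#include <cstdint>

// Round x up to the next multiple of m, e.g. the HSK dimension padded to a
// multiple of 16 for the coopmat shared-memory allocations.
constexpr uint32_t round_up(uint32_t x, uint32_t m) {
    return (x + m - 1) / m * m;
}

// Clamped load: lanes past the true head size contribute zero, so the
// padded matrix dimensions do not change the result (illustrative).
float load_clamped(const float * head, uint32_t hsk, uint32_t i) {
    return i < hsk ? head[i] : 0.0f;
}
```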

5 weeks agovulkan: enable Conv2D for Apple after MoltenVK fixed the bug (#15526)
Ruben Ortlam [Sun, 24 Aug 2025 08:48:53 +0000 (10:48 +0200)]
vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (#15526)

5 weeks agovulkan: workaround MoltenVK compile failure in multi_add (#15506)
Jeff Bolz [Sun, 24 Aug 2025 08:48:21 +0000 (03:48 -0500)]
vulkan: workaround MoltenVK compile failure in multi_add (#15506)

* vulkan: workaround MoltenVK compile failure in multi_add

* Update ggml/src/ggml-vulkan/vulkan-shaders/multi_add.comp

Co-authored-by: 0cc4m <redacted>
5 weeks agoCUDA: fix half2 -> half conversion for HIP (#15529)
Johannes Gäßler [Sat, 23 Aug 2025 19:37:06 +0000 (21:37 +0200)]
CUDA: fix half2 -> half conversion for HIP (#15529)

5 weeks agovulkan: optimize rms_norm, and allow the work to spread across multiple SMs (#15281)
Jeff Bolz [Sat, 23 Aug 2025 18:16:17 +0000 (13:16 -0500)]
vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (#15281)

* vulkan: optimize rms_norm, and allow the work to spread across multiple SMs

There are really two parts to this change:
(1) Some optimizations similar to what we have in soft_max, to unroll with
different numbers of iterations.
(2) A fusion optimization where we detect add followed by rms_norm, and make
the add shader atomically accumulate the values^2 into memory. Then the
rms_norm shader can just load that sum. This allows the rms_norm to be
parallelized across multiple workgroups; it just becomes a simple per-element
multiply.

The fusion optimization is currently only applied when the rms_norm is on a
single vector. This previously always ran on a single SM. It could apply more
broadly, but when there are other dimensions the work can already spread across
SMs, and there would be some complexity to tracking multiple atomic sums.

* Change add+rms_norm optimization to write out an array of partial sums
rather than using atomic add, to make it deterministic. The rms_norm
shader fetches a subgroup's worth in parallel and uses subgroupAdd to
add them up.

* complete rebase against fused adds - multi_add shader can also compute partial sums

* fix validation errors

* disable add_rms_fusion for Intel due to possible driver bug

* resolve against #15489, sync after clearing partial sums
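
A scalar C++ model of the deterministic partial-sum scheme (assumed names and a toy workgroup size; the real implementation is a Vulkan shader using subgroupAdd):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// "add" pass: writes the element-wise sum plus one partial sum of squares
// per (toy) workgroup, instead of atomically accumulating a single scalar.
std::vector<float> add_pass(const std::vector<float> & a, const std::vector<float> & b,
                            std::vector<float> & partial, size_t wg) {
    std::vector<float> out(a.size());
    partial.assign((a.size() + wg - 1) / wg, 0.0f);
    for (size_t i = 0; i < a.size(); ++i) {
        out[i] = a[i] + b[i];
        partial[i / wg] += out[i] * out[i];
    }
    return out;
}

// rms_norm pass: reduces the fixed-size partial array in a fixed order
// (hence deterministic), then does a simple per-element multiply.
std::vector<float> rms_norm_pass(const std::vector<float> & x,
                                 const std::vector<float> & partial, float eps) {
    float sum = 0.0f;
    for (float p : partial) sum += p;
    const float scale = 1.0f / std::sqrt(sum / x.size() + eps);
    std::vector<float> out(x.size());
    for (size_t i = 0; i < x.size(); ++i) out[i] = x[i] * scale;
    return out;
}
```

Because the partial sums are stored per workgroup and reduced in a fixed order, the result is bit-identical run to run, unlike the earlier atomic-add variant.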

5 weeks agomodel : add support for Seed-OSS (#15490)
Piotr Wilkin (ilintar) [Sat, 23 Aug 2025 13:21:52 +0000 (15:21 +0200)]
model : add support for Seed-OSS (#15490)

* First draft

* Fix linter errors

* Added missing sinks nullptr

* Don't forget the llama-arch!

* We're through to the generation stage.

* Fix post-attention norm

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix RoPE type

* Fix tensor name and reorder llm_types

* Update gguf-py/gguf/constants.py

Remove nonexistent FFN_POST_NORM tensor

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.h

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add basic chat template

* Add chat template tests

* Remake chat template test

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Reorder llm type descriptions

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks agoscripts: fix compare-llama-bench.py (#15521)
Johannes Gäßler [Sat, 23 Aug 2025 10:58:58 +0000 (12:58 +0200)]
scripts: fix compare-llama-bench.py (#15521)

5 weeks agochat : fix debug build assertion in trim function (#15520)
LaffeyNyaa [Sat, 23 Aug 2025 08:38:30 +0000 (16:38 +0800)]
chat : fix debug build assertion in trim function (#15520)

5 weeks agovulkan: Rewrite synchronization to allow some overlap between nodes (#15489)
Jeff Bolz [Sat, 23 Aug 2025 07:33:36 +0000 (02:33 -0500)]
vulkan: Rewrite synchronization to allow some overlap between nodes (#15489)

Track a list of nodes that need synchronization, and only sync if the new node
depends on them (or overwrites them). This allows some overlap which can
improve performance, and centralizes a big chunk of the synchronization logic.

The remaining synchronization logic involves writes to memory other than the
nodes, e.g. for dequantization or split_k. Each of these allocations has a bool
indicating whether they were in use and need to be synced. This should be
checked before they are written to, and set to true after they are done being
consumed.
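
The tracking idea can be sketched in a few lines of C++ (hypothetical simplified types, not the actual ggml-vulkan structures):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Toy node: an id (its output buffer) and the ids it reads from.
struct Node { int id; std::vector<int> srcs; };

// Keep the set of nodes written since the last barrier and insert a barrier
// only when a new node reads one of them (or overwrites one of them).
struct SyncTracker {
    std::set<int> unsynced;
    int barriers = 0;

    void process(const Node & n) {
        bool need = unsynced.count(n.id) > 0;          // overwrite hazard
        for (int s : n.srcs) {
            need = need || unsynced.count(s) > 0;      // read-after-write hazard
        }
        if (need) { unsynced.clear(); ++barriers; }
        unsynced.insert(n.id);
    }
};
```

Independent nodes thus run without a barrier between them, which is where the overlap (and performance) comes from.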

5 weeks agovulkan.Dockerfile: install vulkan SDK using tarball (#15282)
R0CKSTAR [Sat, 23 Aug 2025 06:58:57 +0000 (14:58 +0800)]
vulkan.Dockerfile: install vulkan SDK using tarball (#15282)

Signed-off-by: Xiaodong Ye <redacted>
5 weeks agovulkan : support ggml_mean (#15393)
Acly [Sat, 23 Aug 2025 06:35:21 +0000 (08:35 +0200)]
vulkan : support ggml_mean (#15393)

* vulkan : support ggml_mean

* vulkan : support sum, sum_rows and mean with non-contiguous tensors

* vulkan : fix subbuffer size not accounting for misalign offset

* tests : add backend-op tests for non-contiguous sum_rows

* cuda : require contiguous src for SUM_ROWS, MEAN support
* sycl : require contiguous src for SUM, SUM_ROWS, ARGSORT support

* require ggml_contiguous_rows in supports_op and expect nb00=1 in the shader

5 weeks agovulkan: optimize mul_mat_id loading row ids into shared memory (#15427)
Jeff Bolz [Sat, 23 Aug 2025 06:31:54 +0000 (01:31 -0500)]
vulkan: optimize mul_mat_id loading row ids into shared memory (#15427)

- Spread the work across the whole workgroup. Using more threads seems to
far outweigh the synchronization overhead.
- Specialize the code for when the division is by a power of two.
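
The power-of-two specialization in the second bullet amounts to replacing division and modulo with a shift and a mask; a minimal C++ sketch:

```cpp
#include <cassert>
#include <cstdint>

// Both helpers assume d is a power of two; integer division is expensive
// per-thread on a GPU, while shift and mask are essentially free.
uint32_t div_pow2(uint32_t x, uint32_t d) {
    uint32_t shift = 0;
    while ((1u << shift) < d) ++shift;   // in a shader this is a constant
    return x >> shift;
}

uint32_t mod_pow2(uint32_t x, uint32_t d) {
    return x & (d - 1);
}
```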

6 weeks agotest-opt: allow slight imprecision (#15503)
Johannes Gäßler [Fri, 22 Aug 2025 21:47:01 +0000 (23:47 +0200)]
test-opt: allow slight imprecision (#15503)

6 weeks agoggml WebGPU: add support for quantization types (#15440)
Reese Levine [Fri, 22 Aug 2025 18:28:03 +0000 (11:28 -0700)]
ggml WebGPU: add support for quantization types (#15440)

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Work on templating for different types in shaders

* Work on shader type generation

* Working q4_0 mul_mat and some templating for different types

* Add q4_0_f16 matmul and fix device init

* Add matmul support for basic quantization types

* Add q2_k and q3_k quantization

* Add rest of k-quants

* Get first i-quant working

* Closer to supporting all i-quants

* Support rest of i-quants

* Cleanup code

* Fix python formatting

* debug

* Bugfix for memset

* Add padding to end of buffers on creation

* Simplify bit-shifting

* Update usage of StringView

6 weeks agomodel : gpt-oss add response_format support (#15494)
Aldehir Rojas [Fri, 22 Aug 2025 16:04:08 +0000 (11:04 -0500)]
model : gpt-oss add response_format support (#15494)

6 weeks agoggml: add `conv3d` op (#15182)
rmatif [Fri, 22 Aug 2025 13:33:15 +0000 (15:33 +0200)]
ggml: add `conv3d` op (#15182)

* add conv3d

* bump GGML_OP_COUNT

6 weeks agocuda : add Pad Reflect 1D support (#14659)
Yavor Ivanov [Fri, 22 Aug 2025 11:06:29 +0000 (14:06 +0300)]
cuda : add Pad Reflect 1D support (#14659)

* Add Pad Reflect 1D CUDA support

* Update ggml/src/ggml-cuda/pad_reflect_1d.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks agollama : remove KV cache defragmentation logic (#15473)
Georgi Gerganov [Fri, 22 Aug 2025 09:22:13 +0000 (12:22 +0300)]
llama : remove KV cache defragmentation logic (#15473)

ggml-ci

6 weeks agoggml-cpu: Support Q5_0 and Q5_1 on s390x (#15486)
Aaron Teo [Fri, 22 Aug 2025 08:11:04 +0000 (16:11 +0800)]
ggml-cpu: Support Q5_0 and Q5_1 on s390x (#15486)

* ggml-cpu: initial q5_0 impl for s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: updated q5_0 code for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: use optimised hsum for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: introduce q5_1 simd + refactor q5_0

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect return type vec_hsum

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_0 incomplete refactor + table_b2b_0 activation

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: refactor q5_1

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_1 update loop unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update q5_0 unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update build-s390x docs

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update unused variables q5_0

Signed-off-by: Aaron Teo <redacted>
* docs: update the last update date

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
6 weeks agoserver : Support multimodal completion and embeddings prompts in JSON format (#15108)
65a [Fri, 22 Aug 2025 08:10:14 +0000 (08:10 +0000)]
server : Support multimodal completion and embeddings prompts in JSON format (#15108)

- Use server_tokens in more places in server and util.cpp
- Convert most functions that used llama_tokens to server_tokens
- Modify input tokenizer to handle JSON objects as subprompts
- Break out MTMD prompt parsing into utility function
- Support JSON objects with multimodal_data arrays for MTMD prompts along with other existing types
- Add capability to model endpoint to indicate if client can send multimodal data
- Add tests.

6 weeks agoreadme : model : mtmd : lfm2 improvements (#15476)
Tarek Dakhran [Fri, 22 Aug 2025 07:29:08 +0000 (09:29 +0200)]
readme : model : mtmd : lfm2 improvements (#15476)

* Support untied embeddings

* Increase number of image tokens to 1024

* Add LFM2-VL to readme

* Actually use untied embeddings

6 weeks agoCANN: Optimize RMS_NORM using cache (#15419)
Chenguang Li [Fri, 22 Aug 2025 06:12:07 +0000 (14:12 +0800)]
CANN: Optimize RMS_NORM using cache (#15419)

* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* fix review comment

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
6 weeks agosched : fix possible use of wrong ids tensor when offloading moe prompt processing...
Diego Devesa [Thu, 21 Aug 2025 21:09:32 +0000 (14:09 -0700)]
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (#15488)

6 weeks agollama : remove deprecated llama_kv_self API (#15472)
Georgi Gerganov [Thu, 21 Aug 2025 16:13:45 +0000 (19:13 +0300)]
llama : remove deprecated llama_kv_self API (#15472)

ggml-ci

6 weeks agograph : remove build_attn_with_sinks overload (#15469)
Georgi Gerganov [Thu, 21 Aug 2025 15:44:45 +0000 (18:44 +0300)]
graph : remove build_attn_with_sinks overload (#15469)

ggml-ci

6 weeks agovulkan : support conv_2d_dw with f16 weights (#15392)
Acly [Thu, 21 Aug 2025 15:01:51 +0000 (17:01 +0200)]
vulkan : support conv_2d_dw with f16 weights (#15392)

6 weeks agovulkan: add exp operation (#15456)
Dong Won Kim [Thu, 21 Aug 2025 15:00:16 +0000 (00:00 +0900)]
vulkan: add exp operation (#15456)

Co-authored-by: aeseulgi <redacted>
6 weeks agovulkan: Reuse conversion results in prealloc_y (#15410)
Jeff Bolz [Thu, 21 Aug 2025 14:55:00 +0000 (09:55 -0500)]
vulkan: Reuse conversion results in prealloc_y (#15410)

* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.

* don't use shared pointer for prealloc_y_last_pipeline_used
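
A minimal C++ model of this memoization (hypothetical names; the real cache keys are the Vulkan pipeline and the ggml tensor that last filled prealloc_y):

```cpp
#include <cassert>

// Remember which (pipeline, tensor) pair last filled prealloc_y and skip
// the conversion dispatch when the current pair matches.
struct prealloc_y_cache {
    const void * last_pipeline = nullptr;
    const void * last_tensor   = nullptr;
    int conversions = 0;

    void ensure(const void * pipeline, const void * tensor) {
        if (pipeline == last_pipeline && tensor == last_tensor) {
            return;                       // reuse the converted data
        }
        ++conversions;                    // would dispatch the conversion here
        last_pipeline = pipeline;
        last_tensor   = tensor;
    }
};
```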

6 weeks agoexamples : fix some typos in examples/model-conversion/README.md (#15477)
Jie Fu (傅杰) [Thu, 21 Aug 2025 14:53:13 +0000 (22:53 +0800)]
examples : fix some typos in examples/model-conversion/README.md (#15477)

Signed-off-by: Jie Fu <redacted>
6 weeks agokv-cache : drop the "unified" prefix (#15467)
Georgi Gerganov [Thu, 21 Aug 2025 14:00:33 +0000 (17:00 +0300)]
kv-cache : drop the "unified" prefix (#15467)

* kv-cache : drop the "unified" prefix

ggml-ci

* cont : fix comment [no ci]

6 weeks agoexamples : install torch-cpu for model conversion tool/example (#15475)
Jie Fu (傅杰) [Thu, 21 Aug 2025 13:42:34 +0000 (21:42 +0800)]
examples : install torch-cpu for model conversion tool/example (#15475)

Signed-off-by: Jie Fu <redacted>
6 weeks agoci : enable RVV1.0 native build (#15386)
Ali Tariq [Thu, 21 Aug 2025 12:52:16 +0000 (17:52 +0500)]
ci : enable RVV1.0 native build (#15386)

* Changed the CI file to hw

* Changed the CI file to hw

* Added to sudoers for apt

* Removed the clone command and used checkout

* Added libcurl

* Added gcc-14

* Checking gcc --version

* added gcc-14 symlink

* added CC and C++ variables

* Added the gguf weight

* Changed the weights path

* Added system specification

* Removed white spaces

* ci: Replace Jenkins riscv native build Cloud-V pipeline with GitHub Actions workflow

Removed the legacy .devops/cloud-v-pipeline Jenkins CI configuration and introduced .github/workflows/build-riscv-native.yml for native RISC-V builds using GitHub Actions.

* removed trailing whitespaces

* Added the trigger at PR creation

* Corrected OS name

* Added ccache as setup package

* Added ccache for self-hosted runner

* Added directory for ccache size storage

Co-authored-by: Sigbjørn Skjæret <redacted>
* Changed the build command and added ccache debug log

* Added the base dir for the ccache

* Re-trigger CI

* Cleanup and refactored ccache steps

* Cleanup and refactored ccache steps

---------

Co-authored-by: Akif Ejaz <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks agoci : continue file download with wget (#15471)
Georgi Gerganov [Thu, 21 Aug 2025 10:42:55 +0000 (13:42 +0300)]
ci : continue file download with wget (#15471)

ggml-ci

6 weeks agoexamples : add model conversion tool/example (#15455)
Daniel Bevenius [Thu, 21 Aug 2025 10:16:54 +0000 (12:16 +0200)]
examples : add model conversion tool/example (#15455)

* examples : add model conversion tool/example

This commit adds an "example/tool" that is intended to help in the
process of converting models to GGUF. Currently it supports normal
causal models and embedding models. The readme contains instructions
and commands to guide through the process.

The motivation for this is to have a structured and repeatable process
for model conversions, and hopefully with time to improve upon it to
make the process easier and more reliable. We have started to use this
for new model conversions internally and will continue doing so and
improve it as we go along. Perhaps with time this should be placed in a
different directory than the examples directory, but for now it seems
like a good place to keep it while we are still developing it.

* squash! examples : add model conversion tool/example

Remove dependency on scikit-learn in model conversion example.

* squash! examples : add model conversion tool/example

Update the transformers dependency to use a non-dev version, and import
`AutoModelForCausalLM` instead of `AutoModel` to ensure compatibility
with the latest version.

* squash! examples : add model conversion tool/example

Remove the logits requirements file from the all requirements file.

6 weeks agoci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)
Michael Giba [Thu, 21 Aug 2025 10:06:46 +0000 (05:06 -0500)]
ci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)

* Fix -Werror=return-type so ci/run.sh can run

* Update tools/mtmd/clip.cpp

Co-authored-by: Diego Devesa <redacted>
* Remove false now that we have abort

---------

Co-authored-by: Diego Devesa <redacted>
6 weeks agoci : add copilot-instructions.md (#15286)
Copilot [Thu, 21 Aug 2025 09:47:52 +0000 (11:47 +0200)]
ci : add copilot-instructions.md (#15286)

* Initial plan

* Initialize copilot instructions exploration

* Add comprehensive .github/copilot-instructions.md file

* Update Python environment and tools directory documentation

- Add instructions for using .venv Python environment
- Include flake8 and pyright linting tools from virtual environment
- Add tools/ as core directory in project layout
- Reference existing configuration files (.flake8, pyrightconfig.json)

* add more python dependencies to .venv

* Update copilot instructions: add backend hardware note and server testing

* Apply suggestions from code review

* Apply suggestions from code review

* Replace clang-format with git clang-format to format only changed code

* Minor formatting improvements: remove extra blank line and add trailing newline

* try installing git-clang-format

* try just clang-format

* Remove --binary flag from git clang-format and add git-clang-format installation to CI

* download 18.x release

* typo--

* remove --binary flag

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks agoconvert : make Mistral community chat templates optional via parameter (#15420)
Julien Denize [Thu, 21 Aug 2025 09:19:50 +0000 (11:19 +0200)]
convert : make Mistral community chat templates optional via parameter (#15420)

* Make Mistral community chat templates optional

* Change the flag arg to disable instead of enable community chat templates

* Improve error message

* Improve help message

* Tone down the logger messages

6 weeks agocommon : fix incorrect print of non-ascii characters in the logging (#15466)
Jie Fu (傅杰) [Thu, 21 Aug 2025 08:54:34 +0000 (16:54 +0800)]
common : fix incorrect print of non-ascii characters in the logging (#15466)

Signed-off-by: Jie Fu <redacted>
6 weeks agoggml : fix condition of im2col on Metal backend (#15460)
Xuan-Son Nguyen [Thu, 21 Aug 2025 05:32:26 +0000 (07:32 +0200)]
ggml : fix condition of im2col on Metal backend (#15460)

6 weeks agoserver : fix webui (#15462)
stduhpf [Thu, 21 Aug 2025 05:19:22 +0000 (07:19 +0200)]
server : fix webui (#15462)

* Fix webui crash after streaming

* build webui

6 weeks agoexamples : remove references to `make` in examples [no ci] (#15457)
Daniel Bevenius [Thu, 21 Aug 2025 04:12:28 +0000 (06:12 +0200)]
examples : remove references to `make` in examples [no ci] (#15457)

This commit removes references to `make` in the examples, as the build
system has been updated to use CMake directly and using `make` will now
generate an error since Commit 37f10f955f70e0158d50343d0b9a3f92d194daae
("make : remove make in favor of CMake (#15449)").

6 weeks agomusa: add GGML_UNUSED_VARS (#15446)
R0CKSTAR [Thu, 21 Aug 2025 03:06:05 +0000 (11:06 +0800)]
musa: add GGML_UNUSED_VARS (#15446)

Signed-off-by: Xiaodong Ye <redacted>
6 weeks agosched : copy only the used experts when offloading prompt processing (#15346)
Diego Devesa [Wed, 20 Aug 2025 23:35:28 +0000 (16:35 -0700)]
sched : copy only the used experts when offloading prompt processing (#15346)

6 weeks agoserver: fix OpenAI API compatibility for usage statistics in chat streams (#15444)
teo [Wed, 20 Aug 2025 22:10:08 +0000 (07:10 +0900)]
server: fix OpenAI API compatibility for usage statistics in chat streams (#15444)

6 weeks agoCUDA: refactor FA support/selection code (#15454)
Johannes Gäßler [Wed, 20 Aug 2025 21:14:14 +0000 (23:14 +0200)]
CUDA: refactor FA support/selection code (#15454)

6 weeks agoCUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)
Johannes Gäßler [Wed, 20 Aug 2025 14:58:49 +0000 (16:58 +0200)]
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)

6 weeks agovulkan: shorten pipeline name strings (#15431)
Jeff Bolz [Wed, 20 Aug 2025 14:33:14 +0000 (09:33 -0500)]
vulkan: shorten pipeline name strings (#15431)

These detailed strings were causing increased build time on gcc.

6 weeks agochat: handle gpt-oss return/end token inconsistency (#15421)
Daniel Bevenius [Wed, 20 Aug 2025 12:26:01 +0000 (14:26 +0200)]
chat: handle gpt-oss return/end token inconsistency (#15421)

This commit addresses an inconsistency during inference by adding a new
member to the `templates_params` struct to indicate whether the chat is
in inference mode. This allows the gpt-oss specific function
`common_chat_params_init_gpt_oss` to check this flag and the
`add_generation_prompt` flag to determine if it should replace the
`<|return|>` token with the `<|end|>` token in the prompt.

The motivation for this change is to ensure that the formatted prompt of
past messages in `common_chat_format_single` matches the output of the
formatted new message. The issue is that the gpt-oss template returns
different end tags: `<|return|>` when `add_generation_prompt` is false,
and `<|end|>` when `add_generation_prompt` is true. This causes the
substring function to start at an incorrect position, resulting in
tokenization starting with 'tart|>' instead of '<|start|>'.

Resolves: https://github.com/ggml-org/llama.cpp/issues/15417
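
The token swap described above reduces to a suffix replacement; a hedged C++ sketch (illustrative function name, not the actual llama.cpp API):

```cpp
#include <cassert>
#include <string>

// If the formatted prompt ends with <|return|>, replace that suffix with
// <|end|> so that past-message formatting matches the new-message formatting
// and the substring offsets in common_chat_format_single line up.
std::string normalize_end_tag(std::string prompt) {
    const std::string ret = "<|return|>";
    const std::string end = "<|end|>";
    if (prompt.size() >= ret.size() &&
        prompt.compare(prompt.size() - ret.size(), ret.size(), ret) == 0) {
        prompt.replace(prompt.size() - ret.size(), ret.size(), end);
    }
    return prompt;
}
```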

6 weeks agocommon : fix context shift help message (#15448)
Jie Fu (傅杰) [Wed, 20 Aug 2025 10:33:30 +0000 (18:33 +0800)]
common : fix context shift help message (#15448)

Signed-off-by: Jie Fu <redacted>
6 weeks agocmake : fix target include directories (#15450)
xiaobing318 [Wed, 20 Aug 2025 10:32:05 +0000 (18:32 +0800)]
cmake : fix target include directories (#15450)

* Update docker.yml

Modify docker.yml so that the workflow no longer runs on a schedule;
it can still be triggered manually when needed.

* feat: Modify the header file include path

1. There's no llava directory in the tools directory.
2. Because the command `target_include_directories(mtmd PUBLIC .)` is used in the `mtmd` CMakeLists.txt file, other targets that link against `mtmd` automatically include the `mtmd` directory as a search path for header files. Therefore, you can remove `target_include_directories(${TARGET} PRIVATE ../llava)` or use `target_include_directories(${TARGET} PRIVATE ../mtmd)` to explicitly require the `llama-server` target to use header files from `mtmd`.

* Restore the docker.yml file

6 weeks agomake : remove make in favor of CMake (#15449)
Daniel Bevenius [Wed, 20 Aug 2025 10:31:16 +0000 (12:31 +0200)]
make : remove make in favor of CMake (#15449)

This commit removes the content from the Makefile and updates the
deprecation message to state that `make` has been replaced by CMake.

The message when `make` is invoked will now be the following:
```console
$ make
Makefile:6: *** Build system changed:
 The Makefile build has been replaced by CMake.

 For build instructions see:
 https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md

.  Stop.
```

The motivation for this is that many, if not all, targets now fail to
build after changes to the system, and `make` has also been deprecated
for some time.

6 weeks agolookahead : add sample command to readme (#15447)
Georgi Gerganov [Wed, 20 Aug 2025 10:30:46 +0000 (13:30 +0300)]
lookahead : add sample command to readme (#15447)

* lookahead : add sample command to readme

* cont : build-agnostic command

6 weeks agomusa: fix build warnings (#15258)
R0CKSTAR [Wed, 20 Aug 2025 02:17:37 +0000 (10:17 +0800)]
musa: fix build warnings (#15258)

* musa: fix build warnings

Signed-off-by: Xiaodong Ye <redacted>
* fix warning: comparison of integers of different signs: 'const int' and 'unsigned int' [-Wsign-compare]

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
6 weeks agoopencl: mark `argsort` unsupported if cols exceed workgroup limit (#15375)
lhez [Tue, 19 Aug 2025 18:25:51 +0000 (02:25 +0800)]
opencl: mark `argsort` unsupported if cols exceed workgroup limit (#15375)

6 weeks agomodel : add gpt-oss type strings (#15424)
Georgi Gerganov [Tue, 19 Aug 2025 16:58:28 +0000 (19:58 +0300)]
model : add gpt-oss type strings (#15424)

6 weeks agocommon : Add top-nsigma sampler to help globally (#15428)
Gian-Carlo Pascutto [Tue, 19 Aug 2025 16:58:14 +0000 (18:58 +0200)]
common : Add top-nsigma sampler to help globally (#15428)

Fixes #15423.

6 weeks agoserver : disable context shift by default (#15416)
Georgi Gerganov [Tue, 19 Aug 2025 13:46:37 +0000 (16:46 +0300)]
server : disable context shift by default (#15416)

* server : disable context shift by default

ggml-ci

* server : make scope of test parameters local

6 weeks agoCANN: optimize rope operator (#15335)
SHUAI YANG [Tue, 19 Aug 2025 13:28:22 +0000 (21:28 +0800)]
CANN: optimize rope operator (#15335)

* optimize rope ops

* amendment

* delete trailing whitespace

* change the variable name

6 weeks agomusa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (#15413)
R0CKSTAR [Tue, 19 Aug 2025 10:33:47 +0000 (18:33 +0800)]
musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (#15413)

Signed-off-by: Xiaodong Ye <redacted>
6 weeks agoggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (#15385)
Marvin Gießing [Tue, 19 Aug 2025 08:54:31 +0000 (10:54 +0200)]
ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (#15385)

* Added VSX intrinsics for Power9+ systems

Signed-off-by: mgiessing <redacted>
* Manual unrolling for minor perf improvement

Signed-off-by: mgiessing <redacted>
* Update ggml/src/ggml-cpu/arch/powerpc/quants.c

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: mgiessing <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks agochat : clarify the meaning of reasoning_format (#15408)
Xuan-Son Nguyen [Tue, 19 Aug 2025 08:29:36 +0000 (10:29 +0200)]
chat : clarify the meaning of reasoning_format (#15408)

* chat : clarify the meaning of reasoning_format

* add link to this PR

6 weeks agoserver : remove swa_full warning (#15399)
Georgi Gerganov [Tue, 19 Aug 2025 05:45:26 +0000 (08:45 +0300)]
server : remove swa_full warning (#15399)

6 weeks agobatched-bench : use rand tokens (#15398)
Georgi Gerganov [Tue, 19 Aug 2025 05:45:12 +0000 (08:45 +0300)]
batched-bench : use rand tokens (#15398)

6 weeks agomtmd : clean up clip_n_output_tokens (#15391)
Xuan-Son Nguyen [Mon, 18 Aug 2025 20:53:52 +0000 (22:53 +0200)]
mtmd : clean up clip_n_output_tokens (#15391)

6 weeks agocodeowners : remove mmv.*
Georgi Gerganov [Mon, 18 Aug 2025 19:02:50 +0000 (22:02 +0300)]
codeowners : remove mmv.*

6 weeks agosync : ggml
Georgi Gerganov [Mon, 18 Aug 2025 19:02:11 +0000 (22:02 +0300)]
sync : ggml

6 weeks agoscripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 17:35:47 +0000 (20:35 +0300)]
scripts : update sync scripts

6 weeks agollama : merge conts and reshapes and remove unnecessary cont (#15380)
Sigbjørn Skjæret [Mon, 18 Aug 2025 17:30:17 +0000 (19:30 +0200)]
llama : merge conts and reshapes and remove unnecessary cont (#15380)

* remove unnecessary conts and merge reshapes

* restore necessary conts

* merge more conts and reshapes

* merge even more conts and reshapes

6 weeks agoreadme : update hot topics (#15397)
Georgi Gerganov [Mon, 18 Aug 2025 15:11:44 +0000 (18:11 +0300)]
readme : update hot topics (#15397)

6 weeks agoserver : fix incoming tasks not processed in order (#15395)
davidef [Mon, 18 Aug 2025 14:51:42 +0000 (16:51 +0200)]
server : fix incoming tasks not processed in order (#15395)

6 weeks agoFix broken build: require updated pip to support --break-system-packages (#15357)
Dobri Danchev [Mon, 18 Aug 2025 10:50:48 +0000 (05:50 -0500)]
Fix broken build: require updated pip to support --break-system-packages (#15357)

* Revert "devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)"

This reverts commit e4e915912cfd2ee15c5a4a0074813232134892f6.

* devops: Allow pip to modify externally-managed python environment (system installation)

- Updated pip install commands to include the --break-system-packages
  flag, ensuring compatibility when working with system-managed Python
  environments (PEP 668).

- Note: The --break-system-packages option was introduced in 2023.
  Ensure pip is updated to a recent version before using this flag.

fixes [#15004](https://github.com/danchev/llama.cpp/issues/15004)

6 weeks agoggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379)
compilade [Mon, 18 Aug 2025 07:23:56 +0000 (03:23 -0400)]
ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379)

* ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors

* ggml-quants : avoid division by zero in make_q3_quants

6 weeks agovulkan: disable spirv-opt for bfloat16 shaders (#15352)
Jeff Bolz [Mon, 18 Aug 2025 05:56:29 +0000 (00:56 -0500)]
vulkan: disable spirv-opt for bfloat16 shaders (#15352)

6 weeks agoserver : export max observed n_past value (#15361)
Oleksandr Kuvshynov [Sun, 17 Aug 2025 22:28:58 +0000 (18:28 -0400)]
server : export max observed n_past value (#15361)

Add tracking for high watermark cache usage and make it available in /metrics endpoint.

Use case: track the largest cache usage needed under a realistic workload
to better understand memory requirements and adjust the cache
size/quantization for the model accordingly.

6 weeks agovulkan: Use larger workgroups for mul_mat_vec when M is small (#15355)
Jeff Bolz [Sun, 17 Aug 2025 16:08:57 +0000 (11:08 -0500)]
vulkan: Use larger workgroups for mul_mat_vec when M is small (#15355)

* vulkan: Use larger workgroups for mul_mat_vec when M is small

Also use subgroup instructions for (part of) the reduction when supported.
Without this, the more expensive reductions would eat into the benefits of
the larger workgroups.

* update heuristic for amd/intel

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>
6 weeks agovulkan: support sqrt (#15370)
Dong Won Kim [Sun, 17 Aug 2025 14:03:09 +0000 (23:03 +0900)]
vulkan: support sqrt (#15370)

6 weeks agoconvert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)
Sigbjørn Skjæret [Sun, 17 Aug 2025 12:47:42 +0000 (14:47 +0200)]
convert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)

* force patch_embd weights to f32

* use MmprojModel base tensor_force_quant instead

6 weeks agoci : fix hang in windows-hip build/release (#15365)
Sigbjørn Skjæret [Sun, 17 Aug 2025 11:30:23 +0000 (13:30 +0200)]
ci : fix hang in windows-hip build/release (#15365)

* fix hang in windows-latest-cmake-hip

* apply fix to release as well

6 weeks agovulkan: Optimize argsort (#15354)
Jeff Bolz [Sun, 17 Aug 2025 08:41:45 +0000 (03:41 -0500)]
vulkan: Optimize argsort (#15354)

- Launch an appropriate number of invocations (next larger power of two).
32 invocations is common and the barrier is much cheaper there.
- Specialize for "needs bounds checking" vs not.
- Make the code less branchy and [[unroll]] the loops. In the final code,
I see no branches inside the main loop (only predicated stores) when
needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs descending option when
doing the final stores to memory.
- Copy the values into shared memory, which makes them slightly cheaper to access.

6 weeks agomodel : support vision LiquidAI LFM2-VL family (#15347)
Tarek Dakhran [Sat, 16 Aug 2025 21:33:54 +0000 (23:33 +0200)]
model : support vision LiquidAI LFM2-VL family (#15347)

* wip lfm2 vision model

* Fix conv weight

* Implement dynamic resolution

* Fix cuda

* support LFM2-VL-450M

* happy CI

* Remove extra `ggml_conv` and put others into the right place

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks agovulkan: fuse adds (#15252)
Jeff Bolz [Sat, 16 Aug 2025 16:48:22 +0000 (11:48 -0500)]
vulkan: fuse adds (#15252)

* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch. But this could be changed.

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug

6 weeks agovulkan: Support mul_mat_id with f32 accumulators (#15337)
Jeff Bolz [Sat, 16 Aug 2025 09:18:31 +0000 (04:18 -0500)]
vulkan: Support mul_mat_id with f32 accumulators (#15337)

* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).

6 weeks agovulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)
Jeff Bolz [Sat, 16 Aug 2025 08:58:38 +0000 (03:58 -0500)]
vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)

6 weeks agoOpenCL: add initial FA support (#14987)
rmatif [Sat, 16 Aug 2025 08:05:55 +0000 (10:05 +0200)]
OpenCL: add initial FA support (#14987)

* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops

7 weeks agocommon : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)
Daniel Bevenius [Fri, 15 Aug 2025 17:50:52 +0000 (19:50 +0200)]
common : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)

This commit updates common_chat_templates_apply_jinja to use the
add_bos and add_eos parameters from the chat template instead of
the inputs.

The motivation for this is that if the `add_bos` and `add_eos` values
from the input parameters are used, they can mismatch the model's chat
template. This can prevent the duplicate BOS/EOS removal in chat.cpp's
`apply` from happening, leading to two BOS tokens being added to the
prompt.