git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
8 weeks ago - readme : update hot topics (#15097)
Georgi Gerganov [Tue, 5 Aug 2025 17:19:33 +0000 (20:19 +0300)]
readme : update hot topics (#15097)

8 weeks ago - sycl: fix mul_mat selection (#15092)
Romain Biessy [Tue, 5 Aug 2025 16:39:55 +0000 (18:39 +0200)]
sycl: fix mul_mat selection (#15092)

8 weeks ago - Fix `glm4moe` bug (#15088)
Juk Armstrong [Tue, 5 Aug 2025 12:56:44 +0000 (13:56 +0100)]
Fix `glm4moe` bug (#15088)

8 weeks ago - webui: fix markdown table (#15081)
Alex Wu [Tue, 5 Aug 2025 11:56:44 +0000 (19:56 +0800)]
webui: fix markdown table (#15081)

* webui: fix markdown table

* webui: fix table display with themes

8 weeks ago - context : fix index overflow on huge outputs (#15080)
compilade [Tue, 5 Aug 2025 09:27:45 +0000 (05:27 -0400)]
context : fix index overflow on huge outputs (#15080)

* context : fix overflow when re-ordering huge outputs

* context : fix logits size overflow for huge batches

8 weeks ago - llama : add --n-cpu-moe option (#15077)
Diego Devesa [Mon, 4 Aug 2025 23:05:36 +0000 (16:05 -0700)]
llama : add --n-cpu-moe option (#15077)

* llama : add --n-cpu-moe option

Keeps the MoE weights of the first N layers in the CPU
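As a usage sketch (binary, model path, and layer count are illustrative, not from the commit), this keeps the expert weights of the first 20 layers on the CPU while offloading the rest:

```console
llama-server -m model.gguf -ngl 99 --n-cpu-moe 20
```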

8 weeks ago - imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
compilade [Mon, 4 Aug 2025 21:26:52 +0000 (17:26 -0400)]
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)

* imatrix : add warning when suffix is not .gguf for GGUF imatrix

* imatrix : only warn about suffix when output format is unspecified
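For illustration (model and file names hypothetical), a command like the following would now trigger the warning: the output format is unspecified, so GGUF is written, but the chosen name lacks the `.gguf` suffix:

```console
llama-imatrix -m model.gguf -f calibration.txt -o imatrix.dat
```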

8 weeks ago - cmake: Add GGML_BACKEND_DIR option (#15074)
Christian Kastner [Mon, 4 Aug 2025 19:29:14 +0000 (21:29 +0200)]
cmake: Add GGML_BACKEND_DIR option (#15074)

* cmake: Add GGML_BACKEND_DIR option

This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.

* Fix phrasing
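
A configuration sketch for such a distribution build (the install path is illustrative):

```console
cmake -B build -DGGML_BACKEND_DL=ON -DGGML_BACKEND_DIR=/usr/lib/ggml
cmake --build build
```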

8 weeks ago - gguf-py : add --chat-template-file to gguf_new_metadata (#15075)
Sigbjørn Skjæret [Mon, 4 Aug 2025 19:01:48 +0000 (21:01 +0200)]
gguf-py : add --chat-template-file to gguf_new_metadata (#15075)
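
A usage sketch (file names illustrative), assuming the `gguf-new-metadata` entry point that gguf-py installs:

```console
gguf-new-metadata input.gguf output.gguf --chat-template-file chat_template.jinja
```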

8 weeks ago - model: support GLM 4.5 family of models (#14939)
Sam [Mon, 4 Aug 2025 18:29:25 +0000 (04:29 +1000)]
model: support GLM 4.5 family of models (#14939)

* model: Add GLM 4.5 (#14921)

Co-authored-by: Sigbjørn Skjæret <redacted>
* Merge in PR suggestions

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: Add GLM 4.5 family of models (#14921)

1. Updated tensor_mapping.py with NextN tensor mappings

- Added proper tensor mappings for all NextN/MTP tensors in gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm

2. Added num_nextn_predict_layers configuration

- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method
- Updated conversion script to extract and write this parameter from HuggingFace config

3. Added FIM tokens for GLM4_MOE

- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
  - <|code_prefix|> for FIM_PRE
  - <|code_suffix|> for FIM_SUF
  - <|code_middle|> for FIM_MID

4. Removed manual NextN tensor handling

- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system

* glm 4.5: update tensor names

* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review

* Apply suggestions from code review

* patch broken chat template

* typings fix

* add TENSOR_SKIP flag

Co-authored-by: Diego Devesa <redacted>
* Update src/llama-model-loader.h

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Diego Devesa <redacted>

8 weeks ago - quantize : fix confusing error message if ftype is invalid (#15071)
Sigbjørn Skjæret [Mon, 4 Aug 2025 16:11:02 +0000 (18:11 +0200)]
quantize : fix confusing error message if ftype is invalid (#15071)

8 weeks ago - ggml: WebGPU backend host improvements and style fixing (#14978)
Reese Levine [Mon, 4 Aug 2025 15:52:43 +0000 (08:52 -0700)]
ggml: WebGPU backend host improvements and style fixing (#14978)

* Add parameter buffer pool, batching of submissions, refactor command building/submission

* Add header for linux builds

* Free staged parameter buffers at once

* Format with clang-format

* Fix thread-safe implementation

* Use device implicit synchronization

* Update workflow to use custom release

* Remove testing branch workflow

8 weeks ago - vulkan: fix build when using glslang that does not support coopmat2 (#15062)
Jeff Bolz [Mon, 4 Aug 2025 05:09:19 +0000 (00:09 -0500)]
vulkan: fix build when using glslang that does not support coopmat2 (#15062)

2 months ago - imatrix : use GGUF by default (#14842)
compilade [Sun, 3 Aug 2025 20:00:05 +0000 (16:00 -0400)]
imatrix : use GGUF by default (#14842)

* imatrix : use GGUF by default

* imatrix : use GGUF regardless of the output filename

The legacy format can only be produced with --output-format dat
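
Illustrative commands under the new behavior (model and file names hypothetical): the first writes GGUF regardless of the output name, the second forces the legacy format:

```console
llama-imatrix -m model.gguf -f calibration.txt -o imatrix.gguf
llama-imatrix -m model.gguf -f calibration.txt -o imatrix.dat --output-format dat
```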

2 months ago - imatrix : fix 3d activation handling for hybrid and recurrent models (#14994)
compilade [Sun, 3 Aug 2025 19:49:13 +0000 (15:49 -0400)]
imatrix : fix 3d activation handling for hybrid and recurrent models (#14994)

* imatrix : use a single count for dense 3d tensors

* imatrix : fix 3d activations when model tensor is 2d

* imatrix : fix 3d tensor counts

2 months ago - memory : handle kv_unified for hybrid models (#15050)
compilade [Sun, 3 Aug 2025 19:43:07 +0000 (15:43 -0400)]
memory : handle kv_unified for hybrid models (#15050)

2 months ago - vocab : JetBrains Mellum pre-tokenizer (#15045)
Csaba Kecskemeti [Sun, 3 Aug 2025 19:38:18 +0000 (12:38 -0700)]
vocab : JetBrains Mellum pre-tokenizer (#15045)

2 months ago - model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051)
Gabriel Larson [Sun, 3 Aug 2025 14:56:25 +0000 (09:56 -0500)]
model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051)

* basic kimi-vl textmodel conversion

* check config["text_config"] for special tokens

2 months ago - vulkan: Use coopmat2 for conv2d (#14982)
Jeff Bolz [Sun, 3 Aug 2025 12:23:57 +0000 (07:23 -0500)]
vulkan: Use coopmat2 for conv2d (#14982)

2 months ago - opencl: fix adreno compiler detection logic (#15029)
lhez [Sat, 2 Aug 2025 17:51:18 +0000 (10:51 -0700)]
opencl: fix adreno compiler detection logic (#15029)

2 months ago - CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035)
Johannes Gäßler [Sat, 2 Aug 2025 14:37:08 +0000 (16:37 +0200)]
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035)

2 months ago - cuda: make im2col a little faster (#15025) upstream/0.0.6073
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (#15025)

2 months ago - kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040)
Daniel Bevenius [Sat, 2 Aug 2025 14:14:57 +0000 (16:14 +0200)]
kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040)

This commit removes the right alignment of the `n_stream` value in the
log message in the `llama_kv_cache_unified` constructor.

The motivation for this change is to enhance the readability of the log
message. Currently the output looks like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1/ 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Notice that the `n_stream` value is right aligned, which makes it a
little harder to read.

With the change in this commit the output will look like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers, 1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```

2 months ago - llama : enable LLAMA_SET_ROWS=1 by default (#14959)
Georgi Gerganov [Sat, 2 Aug 2025 14:14:21 +0000 (17:14 +0300)]
llama : enable LLAMA_SET_ROWS=1 by default (#14959)

ggml-ci

2 months ago - cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (#15038)
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (#15038)

* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1

ggml-ci

* cont : fix cont types

ggml-ci

* cont : adopt variable names and comment from the other branch

2 months ago - ci : check that pre-tokenizer hashes are up-to-date (#15032)
Sigbjørn Skjæret [Sat, 2 Aug 2025 12:39:01 +0000 (14:39 +0200)]
ci : check that pre-tokenizer hashes are up-to-date (#15032)

* torch is not required for convert_hf_to_gguf_update

* add --check-missing parameter

* check that pre-tokenizer hashes are up-to-date

2 months ago - convert : fix Qwen3-Embedding pre-tokenizer hash (#15030)
Douglas Hanley [Sat, 2 Aug 2025 10:51:02 +0000 (05:51 -0500)]
convert : fix Qwen3-Embedding pre-tokenizer hash (#15030)

2 months ago - chat : fix multiple tool_calls on hermes-2-pro (#14962)
Jhen-Jie Hong [Sat, 2 Aug 2025 10:04:48 +0000 (18:04 +0800)]
chat : fix multiple tool_calls on hermes-2-pro (#14962)

2 months ago - vulkan: coopmat2 mul_mat optimizations (#14934)
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (#14934)

- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
  interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used

2 months ago - llama-bench: rename DB table name from test to llama_bench (#15003)
R0CKSTAR [Sat, 2 Aug 2025 09:20:40 +0000 (17:20 +0800)]
llama-bench: rename DB table name from test to llama_bench (#15003)

Signed-off-by: Xiaodong Ye <redacted>

2 months ago - vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (#15015)
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (#15015)

2 months ago - model : support Qwen3-Embedding (#15023)
Douglas Hanley [Sat, 2 Aug 2025 08:44:50 +0000 (03:44 -0500)]
model : support Qwen3-Embedding (#15023)

2 months ago - server: enable token array inputs for OAI API (#15001)
Johannes Gäßler [Sat, 2 Aug 2025 08:12:41 +0000 (10:12 +0200)]
server: enable token array inputs for OAI API (#15001)
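
A request sketch (server URL and token IDs illustrative): the OAI-compatible endpoint can now accept a pre-tokenized prompt:

```console
curl http://localhost:8080/v1/completions \
    --header "Content-Type: application/json" \
    --data '{"prompt": [9906, 1917], "max_tokens": 16}'
```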

2 months ago - vulkan: optimizations for direct convolution (#14933)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (#14933)

* vulkan: optimizations for direct convolution

- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
  the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.

* Three tiles sizes for CONV_2D, and a heuristic to choose

* reallow collectives for pre-Turing

* make SHMEM_PAD a spec constant

* fixes for intel perf - no shmem padding, placeholder shader core count

* shader variants with/without unrolling

* 0cc4m's fixes for AMD perf

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>

2 months ago - CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)

2 months ago - vendor : update vendored copy of google/minja (#15011)
l-austenfeld [Fri, 1 Aug 2025 14:59:06 +0000 (16:59 +0200)]
vendor : update vendored copy of google/minja (#15011)

* vendor : update vendored copy of google/minja

Signed-off-by: Lennart Austenfeld <redacted>
* Re-remove trailing whitespace

Signed-off-by: Lennart Austenfeld <redacted>
* Remove another trailing whitespace

Signed-off-by: Lennart Austenfeld <redacted>
---------

Signed-off-by: Lennart Austenfeld <redacted>

2 months ago - model : add hunyuan dense (#14878)
stevenkuang [Fri, 1 Aug 2025 13:31:12 +0000 (21:31 +0800)]
model : add hunyuan dense (#14878)

* support hunyuan_v1_dense

Signed-off-by: stevenkuang <redacted>
* update hunyuan_moe to hunyuan_v1_moe

Signed-off-by: stevenkuang <redacted>
* fix rope alpha assert and bos token

Signed-off-by: stevenkuang <redacted>
* add blank line

Signed-off-by: stevenkuang <redacted>
* Revert "update hunyuan_moe to hunyuan_v1_moe"

This reverts commit aa973ca21913aba77f6e81a935270ef7be222e75.

* use hunyuan_dense instead of hunyuan_v1_dense

Signed-off-by: stevenkuang <redacted>
* fix hunyuan_moe chat template

Signed-off-by: stevenkuang <redacted>
* remove leftover code

Signed-off-by: stevenkuang <redacted>
* update hunyuan dense chat template

Signed-off-by: stevenkuang <redacted>
* fix hunyuan dense vocab and chat template

Signed-off-by: stevenkuang <redacted>
---------

Signed-off-by: stevenkuang <redacted>

2 months ago - opencl: add f16 for `add`, `sub`, `mul`, `div` (#14984)
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (#14984)

2 months ago - ggml : Q2k interleaving implementation - x86/x64 SIMD (#14373)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (#14373)

* Initial Q2_K Block Interleaving Implementation

* Addressed review comments and clean up of the code

* Post rebase fixes

* Initial CI/CD fixes

* Update declarations in arch-fallback.h

* Changes for GEMV Q2_K in arch-fallback.h

* Enable repacking only on AVX-512 machines

* Update comments in repack.cpp

* Address q2k comments

---------

Co-authored-by: Manogna-Sree <redacted>

2 months ago - graph : fix equal_seq() check (#14986)
Georgi Gerganov [Fri, 1 Aug 2025 03:38:12 +0000 (06:38 +0300)]
graph : fix equal_seq() check (#14986)

ggml-ci

2 months ago - docker : add cann build pipeline (#14591)
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (#14591)

* docker: add cann build pipeline

* docker: add cann build pipeline

* docker: fix cann devops

* cann : fix multi card hccl

* Update ggml/src/ggml-cann/ggml-cann.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>

2 months ago - compare-commits.sh: support both llama-bench and test-backend-ops (#14392)
R0CKSTAR [Fri, 1 Aug 2025 00:47:27 +0000 (08:47 +0800)]
compare-commits.sh: support both llama-bench and test-backend-ops (#14392)

* compare-commits.sh: support both llama-bench and test-backend-ops

Signed-off-by: Xiaodong Ye <redacted>
* Speed up the build by specifying -j 12

Signed-off-by: Xiaodong Ye <redacted>
* Remove build_number from test-backend-ops db

Signed-off-by: Xiaodong Ye <redacted>
* Apply suggestion from @JohannesGaessler

Co-authored-by: Johannes Gäßler <redacted>
* Refine tool selection logic

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>

2 months ago - quantize : skip tensor override when in fallback mode (#14995)
Ed Addario [Thu, 31 Jul 2025 19:32:18 +0000 (20:32 +0100)]
quantize : skip tensor override when in fallback mode (#14995)

2 months ago - llama : add simple option to enable CPU for MoE weights (--cpu-moe) (#14992)
Diego Devesa [Thu, 31 Jul 2025 18:15:41 +0000 (11:15 -0700)]
llama : add simple option to enable CPU for MoE weights (--cpu-moe) (#14992)
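
A minimal invocation sketch (binary and model path illustrative): keep all MoE expert weights on the CPU while offloading the rest of the model:

```console
llama-server -m model.gguf -ngl 99 --cpu-moe
```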

2 months ago - Fix params bug in diffusion example (#14993)
Aman Gupta [Thu, 31 Jul 2025 17:22:58 +0000 (01:22 +0800)]
Fix params bug in diffusion example (#14993)

2 months ago - llama : allow other bufts when overriding to CPU, add --no-repack option (#14990)
Diego Devesa [Thu, 31 Jul 2025 16:11:34 +0000 (09:11 -0700)]
llama : allow other bufts when overriding to CPU, add --no-repack option (#14990)

2 months ago - Vulkan: Fix minor debug mode issues (#14899)
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (#14899)

* vulkan: fix debug mode issues

* vulkan: remove broken check_results GGML_OP_SET_ROWS support

2 months ago - mtmd : support MiniCPM-V 4.0 (#14983)
tc-mb [Thu, 31 Jul 2025 15:22:17 +0000 (23:22 +0800)]
mtmd : support MiniCPM-V 4.0 (#14983)

* support minicpm-v 4

* add md

* support MiniCPM-o 4.0

* add default location

* temp rm MiniCPM-o 4.0

* fix code

* fix "minicpmv_projector" default path

2 months ago - MODEL_TENSOR.SSM_DT_NORM defined twice (#14991)
Csaba Kecskemeti [Thu, 31 Jul 2025 14:59:49 +0000 (07:59 -0700)]
MODEL_TENSOR.SSM_DT_NORM defined twice (#14991)

* MODEL_TENSOR.SSM_DT_NORM was defined twice, and the second definition overwrote the Jamba model's layer name

* correct order

2 months ago - server : implement universal assisted decoding (#12635)
g2mt [Thu, 31 Jul 2025 12:25:23 +0000 (05:25 -0700)]
server : implement universal assisted decoding (#12635)

* llama-server : implement universal assisted decoding

* Erase prompt tail for kv-cache

* set vocab_dft_compatible in common_speculative

* rename ctx_main to ctx_tgt

* move vocab_dft_compatible to spec struct

* clear mem_dft, remove mem

* detokenize id_last for incompatible models

* update comment

* add --spec-replace flag

* accept special tokens when translating between draft/main models

* Escape spec-replace

* clamp draft result size to params.n_draft

* fix comment

* clean up code

* restore old example

* log common_speculative_are_compatible in speculative example

* fix

* Update common/speculative.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>

2 months ago - llama : merge build_moe_ffn_from_probs function into build_moe_ffn (#14968)
Dongliang Wei [Thu, 31 Jul 2025 12:12:20 +0000 (20:12 +0800)]
llama : merge build_moe_ffn_from_probs function into build_moe_ffn (#14968)

2 months ago - server : add openai-style logit_bias support (#14946)
Lukas Straub [Thu, 31 Jul 2025 12:08:23 +0000 (14:08 +0200)]
server : add openai-style logit_bias support (#14946)

Signed-off-by: Lukas Straub <redacted>
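
A request sketch in the OpenAI style (server URL, token ID, and bias value are illustrative): `logit_bias` maps token IDs to a bias in [-100, 100], where -100 effectively bans the token:

```console
curl http://localhost:8080/v1/chat/completions \
    --header "Content-Type: application/json" \
    --data '{"messages": [{"role": "user", "content": "Hello"}], "logit_bias": {"15043": -100}}'
```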

2 months ago - Add LLaDA 8b Diffusion model (#14771)
Aman Gupta [Thu, 31 Jul 2025 11:49:09 +0000 (19:49 +0800)]
Add LLaDA 8b Diffusion model (#14771)

* Add support for Llada-8b: diffusion model

* Add README

* Fix README and convert_hf_to_gguf

* convert_hf_to_gguf.py: address review comments

* Make everything in a single example

* Remove model-specific sampling

* Remove unused argmax

* Remove braced initializers, improve README.md a bit

* Add diffusion specific gguf params in set_vocab, remove setting rope_theta and rms_norm_eps

* Remove adding the mask token

* Move add_add_bos_token to set_vocab

* use add_bool in gguf_writer.py

2 months ago - CANN: Improve loading efficiency after converting weights to NZ format. (#14985)
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (#14985)

* CANN: Improve loading efficiency after converting weights to NZ format.

* CANN: fix typo

2 months ago - graph : reduce splits for recurrent and hybrid models (#14825)
compilade [Thu, 31 Jul 2025 05:02:46 +0000 (01:02 -0400)]
graph : reduce splits for recurrent and hybrid models (#14825)

* graph : avoid creating redundant s_copy views

* graph : comment the s_copy views

2 months ago - opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (#14809)
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (#14809)

2 months ago - quantize : fix using combined imatrix GGUFs (multiple datasets) (#14973)
Ed Addario [Wed, 30 Jul 2025 19:11:56 +0000 (20:11 +0100)]
quantize : fix using combined imatrix GGUFs (multiple datasets) (#14973)

2 months ago - server : add support for `embd_normalize` parameter (#14964)
Daniel Bevenius [Wed, 30 Jul 2025 16:07:11 +0000 (18:07 +0200)]
server : add support for `embd_normalize` parameter (#14964)

This commit adds support for the `embd_normalize` parameter in the
server code.

The motivation for this is that currently if the server is started with
a pooling type that is not `none`, then Euclidean/L2 normalization will
be the normalization method used for embeddings. However, this is not
always the desired behavior, as users may want a different
normalization (or none), and this commit allows that.

Example usage:
```console
curl --request POST \
    --url http://localhost:8080/embedding \
    --header "Content-Type: application/json" \
    --data '{"input": "Hello world today", "embd_normalize": -1}
```

2 months ago - HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (#14949)
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (#14949)

2 months ago - sync : ggml
Georgi Gerganov [Wed, 30 Jul 2025 13:03:13 +0000 (16:03 +0300)]
sync : ggml

ggml-ci

2 months ago - cmake : Fix BLAS link interface (ggml/1316)
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (ggml/1316)

2 months ago - vulkan : fix 32-bit builds (ggml/1313)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (ggml/1313)

The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.

2 months ago - CUDA: skip masked KV slices for all FA kernels (#14924)
Johannes Gäßler [Wed, 30 Jul 2025 13:46:13 +0000 (15:46 +0200)]
CUDA: skip masked KV slices for all FA kernels (#14924)

2 months ago - tests : update for LLAMA_SET_ROWS=1 (#14961)
Georgi Gerganov [Wed, 30 Jul 2025 12:12:02 +0000 (15:12 +0300)]
tests : update for LLAMA_SET_ROWS=1 (#14961)

* test-thread-safety : each context uses a single sequence

* embedding : handle --parallel argument

ggml-ci

* save-load : handle -np 1

ggml-ci

* thread-safety : avoid overriding threads, reduce test case arg

ggml-ci

2 months ago - graph : fix stack-use-after-return (#14960)
Georgi Gerganov [Wed, 30 Jul 2025 10:52:11 +0000 (13:52 +0300)]
graph : fix stack-use-after-return (#14960)

ggml-ci

2 months ago - embeddings: fix extraction of CLS pooling results (#14927)
Douglas Hanley [Wed, 30 Jul 2025 05:25:05 +0000 (00:25 -0500)]
embeddings: fix extraction of CLS pooling results (#14927)

* embeddings: fix extraction of CLS pooling results

* merge RANK pooling into CLS case for inputs

2 months ago - CANN: update ops docs (#14935)
Xinpeng Dou [Wed, 30 Jul 2025 00:39:24 +0000 (08:39 +0800)]
CANN: update ops docs (#14935)

* CANN:add ops docs

* CANN: update ops docs

2 months ago - HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (#14945)
uvos [Tue, 29 Jul 2025 18:23:04 +0000 (20:23 +0200)]
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (#14945)

2 months ago - HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (#14930)
uvos [Tue, 29 Jul 2025 15:44:30 +0000 (17:44 +0200)]
HIP: add GGML_HIP_MMQ_MFMA option to allow disableing the MFMA path. (#14930)

This is useful for testing for regressions on GCN with CDNA hardware.

With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is just GCN renamed with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
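
A build sketch for that regression-testing setup (option names as assumed from the current HIP backend docs, not verified here):

```console
cmake -B build -DGGML_HIP=ON -DGGML_HIP_MMQ_MFMA=OFF -DGGML_CUDA_FORCE_MMQ=ON
cmake --build build
```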

2 months ago - HIP: Ignore unsupported unroll transformation in fattn-vec (#14931)
uvos [Tue, 29 Jul 2025 15:43:43 +0000 (17:43 +0200)]
HIP: Ignore unsupported unroll transformation in fattn-vec (#14931)

llvm with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning.

2 months ago - common : avoid logging partial messages (which can contain broken UTF-8 sequences) (#14937)
kallewoof [Tue, 29 Jul 2025 15:05:38 +0000 (00:05 +0900)]
common : avoid logging partial messages (which can contain broken UTF-8 sequences) (#14937)

* bug-fix: don't attempt to log partially parsed messages, to avoid crashes due to unfinished UTF-8 sequences

2 months ago - CANN: Add ggml_set_rows (#14943)
hipudding [Tue, 29 Jul 2025 14:36:43 +0000 (22:36 +0800)]
CANN: Add ggml_set_rows (#14943)

2 months ago - cuda : add softcap fusion (#14907)
Sigbjørn Skjæret [Tue, 29 Jul 2025 12:22:03 +0000 (14:22 +0200)]
cuda : add softcap fusion (#14907)

2 months ago - server-bench: make seed choice configurable (#14929)
Johannes Gäßler [Tue, 29 Jul 2025 08:40:50 +0000 (10:40 +0200)]
server-bench: make seed choice configurable (#14929)

* server-bench: make seed choice configurable

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix error formatting

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>

2 months ago - CUDA: add roll (#14919)
Aman Gupta [Tue, 29 Jul 2025 06:45:18 +0000 (14:45 +0800)]
CUDA: add roll (#14919)

* CUDA: add roll

* Make everything const, use __restrict__

2 months ago - opencl : add ops docs (#14910)
lhez [Mon, 28 Jul 2025 16:50:17 +0000 (09:50 -0700)]
opencl : add ops docs (#14910)

2 months ago - test-backend-ops : extend test case filtering (#14865)
Leonard Mosescu [Mon, 28 Jul 2025 16:04:27 +0000 (09:04 -0700)]
test-backend-ops : extend test case filtering (#14865)

* Extend test case filtering

1. Allow passing multiple comma-separated ops to test-backend-ops. This can be convenient when working on a set of ops that you want to test together (without having to run every single op). For example:

`test-backend-ops.exe test -o "ADD,RMS_NORM,ROPE,SILU,SOFT_MAX"`

2. Support full test-case variation string in addition to basic op names. This would make it easy to select a single variation, either for testing or for benchmarking. It can be particularly useful for profiling a particular variation (ex. a CUDA kernel), for example:

`test-backend-ops.exe perf -b CUDA0 -o "MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3],v=2)"`

These two can be combined. As with the current `-o`, this change doesn't try to detect/report an error if a filter doesn't name existing ops (e.g. misspelled)

* Updating the usage help text

* Update tests/test-backend-ops.cpp

2 months ago - llama-bench : use local GPUs along with RPC servers (#14917)
Radoslav Gerganov [Mon, 28 Jul 2025 15:59:04 +0000 (18:59 +0300)]
llama-bench : use local GPUs along with RPC servers (#14917)

Currently, if RPC servers are specified with '--rpc' and there is a local
GPU available (e.g. CUDA), the benchmark is performed only on the
RPC device(s), but the backend result column says "CUDA,RPC", which is
incorrect. This patch adds all local GPU devices and makes
llama-bench consistent with llama-cli.
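
An invocation sketch (host, port, and model name illustrative); with this patch the local GPU is benchmarked along with the RPC server, rather than the RPC device alone:

```console
llama-bench -m model.gguf --rpc 192.168.0.2:50052
```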

2 months ago - ggml-cpu : deduplicate scalar implementations (#14897)
xctan [Mon, 28 Jul 2025 15:40:24 +0000 (23:40 +0800)]
ggml-cpu : deduplicate scalar implementations (#14897)

* remove redundant code in riscv

* remove redundant code in arm

* remove redundant code in loongarch

* remove redundant code in ppc

* remove redundant code in s390

* remove redundant code in wasm

* remove redundant code in x86

* remove fallback headers

* fix x86 ggml_vec_dot_q8_0_q8_0

2 months ago - SYCL: Add set_rows support for quantized types (#14883)
Akarshan Biswas [Mon, 28 Jul 2025 15:02:15 +0000 (20:32 +0530)]
SYCL: Add set_rows support for quantized types (#14883)

* SYCL: Add set_rows support for quantized types

This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.

The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.

This addresses part of the TODOs mentioned in the code.

* Use get_global_linear_id() instead

ggml-ci

* Fix formatting

ggml-ci

* Use const for ne11 and size_t variables in set_rows_sycl_q

ggml-ci

* Increase block size for q kernel to 256

ggml-ci

* Cleanup imports

* Add float.h to cpy.hpp

2 months ago - mtmd : add support for Voxtral (#14862)
Xuan-Son Nguyen [Mon, 28 Jul 2025 13:01:48 +0000 (15:01 +0200)]
mtmd : add support for Voxtral (#14862)

* mtmd : add support for Voxtral

* clean up

* fix python requirements

* add [BEGIN_AUDIO] token

* also support Devstral conversion

* add docs and tests

* fix regression for ultravox

* minor coding style improvement

* correct project activation fn

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>

2 months ago - CUDA: fix pointer incrementation in FA (#14916)
Johannes Gäßler [Mon, 28 Jul 2025 12:30:22 +0000 (14:30 +0200)]
CUDA: fix pointer incrementation in FA (#14916)

2 months ago - model : add support for SmallThinker series (#14898)
Dongliang Wei [Mon, 28 Jul 2025 11:47:00 +0000 (19:47 +0800)]
model : add support for SmallThinker series (#14898)

* support smallthinker

* support 20b softmax, 4b no sliding window

* new build_moe_ffn_from_probs, and can run 4b

* fix 4b rope bug

* fix python type check

* remove is_moe judge

* remove set_dense_start_swa_pattern function and modify set_swa_pattern function

* trim trailing whitespace

* remove get_vocab_base of SmallThinkerModel in convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* better whitespace

Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* use GGML_ASSERT for expert count validation

Co-authored-by: Sigbjørn Skjæret <redacted>
* Improve null pointer check for probs

Co-authored-by: Sigbjørn Skjæret <redacted>
* use template parameter for SWA attention logic

* better whitespace

Co-authored-by: Georgi Gerganov <redacted>
* move the creation of inp_out_ids before the layer loop

* remove redundant judge for probs

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>

2 months ago - sycl: refactor quantization to q8_1 (#14815)
Alberto Cabrera Pérez [Mon, 28 Jul 2025 10:05:53 +0000 (11:05 +0100)]
sycl: refactor quantization to q8_1  (#14815)

* sycl: quantization to q8_1 refactor

* Refactored src1 copy logic in op_mul_mat

2 months ago - ops : update BLAS (#14914)
Georgi Gerganov [Mon, 28 Jul 2025 08:01:03 +0000 (11:01 +0300)]
ops : update BLAS (#14914)

2 months ago - ops : update Metal (#14912)
Georgi Gerganov [Mon, 28 Jul 2025 05:22:56 +0000 (08:22 +0300)]
ops : update Metal (#14912)

2 months ago - sync : ggml
Georgi Gerganov [Mon, 28 Jul 2025 05:14:20 +0000 (08:14 +0300)]
sync : ggml

2 months ago - cmake : Indent ggml-config.cmake (ggml/1310)
Kai Pastor [Thu, 24 Jul 2025 17:58:02 +0000 (19:58 +0200)]
cmake : Indent ggml-config.cmake (ggml/1310)

2 months ago - quantize : update README.md (#14905)
Ed Addario [Sun, 27 Jul 2025 21:31:11 +0000 (22:31 +0100)]
quantize : update README.md (#14905)

* Update README.md

* Fix trailing whitespace

* Update README.md

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>

2 months ago - vulkan: add ops docs (#14900)
Ruben Ortlam [Sun, 27 Jul 2025 13:33:08 +0000 (15:33 +0200)]
vulkan: add ops docs (#14900)

2 months ago - SYCL: add ops doc (#14901)
Akarshan Biswas [Sun, 27 Jul 2025 12:22:58 +0000 (17:52 +0530)]
SYCL: add ops doc (#14901)

2 months ago - llama : clarify comment about pp and tg graphs [no ci] (#14895)
Daniel Bevenius [Sun, 27 Jul 2025 10:10:51 +0000 (12:10 +0200)]
llama : clarify comment about pp and tg graphs [no ci] (#14895)

* llama : clarify comment about pp and tg graphs [no ci]

This commit clarifies the comment in `llama-context.cpp` regarding the
prefill prompt (pp) and token generation (tg) graphs.

The motivation for this is that I've struggled to remember these and had
to look them up more than once, so I thought it would be helpful to add
a comment that makes it clear what these stand for.

* squash! llama : clarify comment about pp and tg graphs [no ci]

Change "pp" to "prompt processing".

2 months ago - vulkan : add fp16 support for the conv_2d kernel (#14872)
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (#14872)

* add f16 to conv_2d testing
* weaken conv2d test error threshold

2 months ago - vulkan: skip empty set_rows to avoid invalid API usage (#14860)
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (#14860)

2 months ago - model : make rope_yarn_log_mul optional for deepseek2 (#14896)
Gabriel Larson [Sun, 27 Jul 2025 08:18:37 +0000 (03:18 -0500)]
model : make rope_yarn_log_mul optional for deepseek2 (#14896)

* make rope_yarn_log_mul optional for deepseek2

* default rope_yarn_log_mul = 0.0f

2 months ago - llama : fix kq_scale for the attention layers of PLaMo2 (#14892)
Shunta Saito [Sun, 27 Jul 2025 07:38:44 +0000 (16:38 +0900)]
llama : fix kq_scale for the attention layers of PLaMo2 (#14892)

* Fix dimensions for expand

* Change dimensions to copy states to cache

* Fix the default value for plamo2 conversion

* Fix scale given to build_attn

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>

2 months ago - Docs: add instructions for adding backends (#14889)
Aman Gupta [Sun, 27 Jul 2025 01:36:43 +0000 (09:36 +0800)]
Docs: add instructions for adding backends (#14889)

2 months ago - HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624)

This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as they fail to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3 due to issues in the AMD-provided BLAS libraries.
This commit also makes MMQ more aware of different warp sizes and, as a side effect, improves the performance of all quant formats besides q4_0 and q4_1 (which regress slightly) on GCN GPUs.

2 months ago - CANN: Implement GLU ops (#14884)
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (#14884)

Implement REGLU, GEGLU, SWIGLU ops according to #14158

2 months ago - musa: fix build warnings (unused variable) (#14869)
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (#14869)

Signed-off-by: Xiaodong Ye <redacted>