git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
4 weeks ago vulkan: Skip syncing for prealloc_y when it is reused (#15544)
Jeff Bolz [Sat, 30 Aug 2025 09:11:22 +0000 (04:11 -0500)]
vulkan: Skip syncing for prealloc_y when it is reused (#15544)

4 weeks ago CANN: Fix compiler warnings (#15661)
Chenguang Li [Sat, 30 Aug 2025 02:18:35 +0000 (10:18 +0800)]
CANN: Fix compiler warnings (#15661)

Signed-off-by: noemotiovon <redacted>
5 weeks ago server : removed obsolete doc (#15670)
Sergey Alirzaev [Fri, 29 Aug 2025 22:12:53 +0000 (00:12 +0200)]
server : removed obsolete doc (#15670)

completing a4090d1174aed22dde5cacce2a4c27656b987a2f

5 weeks ago scripts: strip "AMD Instinct" from GPU name (#15668)
Johannes Gäßler [Fri, 29 Aug 2025 20:04:08 +0000 (22:04 +0200)]
scripts: strip "AMD Instinct" from GPU name (#15668)

5 weeks ago server : add documentation for `parallel_tool_calls` param (#15647)
ExtReMLapin [Fri, 29 Aug 2025 17:25:40 +0000 (19:25 +0200)]
server : add documentation for `parallel_tool_calls` param (#15647)

Co-authored-by: Pierre F <redacted>
5 weeks ago CUDA: fix bug in rms_norm fusion (#15660)
Aman Gupta [Fri, 29 Aug 2025 13:30:06 +0000 (21:30 +0800)]
CUDA: fix bug in rms_norm fusion (#15660)

* CUDA: fix bug in rms_norm fusion

* Fix bug for OP_REPEAT

* Fix index for add

5 weeks ago chat : Seed OSS thinking + tool call support (#15552)
Piotr Wilkin (ilintar) [Fri, 29 Aug 2025 12:53:41 +0000 (14:53 +0200)]
chat : Seed OSS thinking + tool call support (#15552)

* Reasoning and tool-calling support for Seed OSS

* Fix grammar and partial parsing

* Whitespace

* New chat template

* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update common/chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Remove unused 'purge_healing_marker' helper

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks ago CUDA: fuse adds, fuse add with rms norm (#15631)
Aman Gupta [Fri, 29 Aug 2025 03:35:58 +0000 (11:35 +0800)]
CUDA: fuse adds, fuse add with rms norm (#15631)

* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast

5 weeks ago nvidia nemotron nano v2 (nemotronh) (#15507)
Gabe Goodhart [Fri, 29 Aug 2025 00:39:31 +0000 (18:39 -0600)]
nvidia nemotron nano v2 (nemotronh) (#15507)

* feat: Add NEMOTRONH to python arch enum

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add NEMOTRONH to c++ arch enum

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add NEMOTRONH to llama-arch layer map

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: First pass at conversion for nemotronh

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: Add a verbose log for each tensor loaded

This is really helpful for diagnosing mismatches between the expected and
received tensors

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* feat: First (broken) pass at nemotronh model architecture

It generates tokens, just not valid ones!

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Explicitly enable add_bos_token during conversion

The `tokenizer.json`/`tokenizer_config.json` in the model are a bit
contradictory. In the config, add_bos_token is set to False, but the
tokenizer model itself has a post_processor that adds the BOS token via
type: TemplateProcessing

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use relu2 (LLM_FFN_RELU_SQR) for activation in FFN layers

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Only allocate attention cache for attention layers (not non-recurrent)

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Move residual add to after every block

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* fix: Use the correct norm tensor for the MLP blocks

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
* Nemotron-H: MLP gate cleanup (pass NULL for unused gate)

This model does not use a gate in MLP blocks; pass NULLs for gate tensors to make intent clear and avoid unused-pointer noise.

* SSM: respect ssm_dt_rank for dt_dim when provided

Use GGUF-provided time_step_rank (ssm_dt_rank) to set dt_dim when > 0; otherwise fall back to max(64, n_embd/16).
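
A minimal C++ sketch of the selection logic described above (illustrative names, not the actual llama.cpp symbols):

```cpp
#include <algorithm>

// Hypothetical helper: prefer the GGUF-provided time_step_rank when present,
// otherwise fall back to the max(64, n_embd/16) heuristic described above.
static int select_dt_dim(int ssm_dt_rank, int n_embd) {
    return ssm_dt_rank > 0 ? ssm_dt_rank : std::max(64, n_embd / 16);
}
```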

* fix: plamo2 - revert dt_dim to default (remove ssm_dt_rank usage)

* Rename nemotronh to nemotron_h for consistency

- Update architecture name from NEMOTRONH to NEMOTRON_H in constants.py
- Change architecture string from 'nemotronh' to 'nemotron_h' in all files
- Update enum LLM_ARCH_NEMOTRONH to LLM_ARCH_NEMOTRON_H
- Update class name llm_build_nemotronh to llm_build_nemotron_h
- Consistent naming with underscore convention (nemotron_h vs nemotronh)

* feat: Support conversion for older NemotronH models

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Maicon Domingues <redacted>
Co-authored-by: weatherman <redacted>
5 weeks ago fix: Compute the full sum in llama-eval-callback, not just the sum of printed values (#15637)
Gabe Goodhart [Thu, 28 Aug 2025 20:27:36 +0000 (15:27 -0500)]
fix: Compute the full sum in llama-eval-callback, not just the sum of printed values (#15637)

This makes it much easier to compare between llama.cpp and transformers!

https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
5 weeks ago CUDA: add conv2d (#15635)
mnehete32 [Thu, 28 Aug 2025 18:33:03 +0000 (00:03 +0530)]
CUDA: add conv2d (#15635)

* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const

5 weeks ago ggml-cpu: fix invalid hsum build in debug s390x (#15634)
Aaron Teo [Thu, 28 Aug 2025 14:39:27 +0000 (22:39 +0800)]
ggml-cpu: fix invalid hsum build in debug s390x (#15634)

Signed-off-by: Aaron Teo <redacted>
5 weeks ago ggml : fix SSM_SCAN for n_groups > 1 (#15625)
compilade [Thu, 28 Aug 2025 14:11:36 +0000 (10:11 -0400)]
ggml : fix SSM_SCAN for n_groups > 1 (#15625)

5 weeks ago kv-cache : fix find_slot to not search for continuous slot (#15638)
Georgi Gerganov [Thu, 28 Aug 2025 14:09:05 +0000 (17:09 +0300)]
kv-cache : fix find_slot to not search for continuous slot (#15638)

ggml-ci

5 weeks ago model : jina-embeddings-v3 support (#13693)
Sigbjørn Skjæret [Thu, 28 Aug 2025 13:49:50 +0000 (15:49 +0200)]
model : jina-embeddings-v3 support (#13693)

* initial jina-embeddings-v3 support

* initial jina-embeddings-v3 support

* initial jina-embeddings-v3 support

* fix vocab parsing with only tokenizer.json

* set mask token lstrip attribute

* additional unk_token_id fallback just in case [no ci]

* revert vocab_size() change [no ci]

* merge tensor loading into general bert

* rope

* add lora embedding and loading (non-functional)

* export separate lora ggufs instead

* add adapter metadata api

* use std::string

* convert_hf_to_lora compatibility

* fix assert

* apply suggestions from review

* apply suggestion from review

5 weeks ago scripts: add sqlite3 check for compare-commits.sh (#15633)
Aman Gupta [Thu, 28 Aug 2025 11:23:22 +0000 (19:23 +0800)]
scripts: add sqlite3 check for compare-commits.sh (#15633)

5 weeks ago kv-cache : remove LLAMA_SET_ROWS checks (#15505)
Georgi Gerganov [Thu, 28 Aug 2025 09:27:02 +0000 (12:27 +0300)]
kv-cache : remove LLAMA_SET_ROWS checks (#15505)

ggml-ci

5 weeks ago gguf-py: byteswapping improvements (#12851)
Aleksei Nikiforov [Thu, 28 Aug 2025 08:56:41 +0000 (10:56 +0200)]
gguf-py: byteswapping improvements (#12851)

* gguf-py: implement byteswapping for Q4_0

This is needed to byteswap the Mistral model.

Also restore original shapes after byteswapping tensors.
This is not needed at the moment, but is done in case
the shapes are used in the future.

* Rework byteswapping code in gguf-py

Move out details from byteswapping tensor blocks code

5 weeks ago cli : change log to warning to explain reason for stopping (#15604)
Joshua Cogliati [Thu, 28 Aug 2025 07:48:20 +0000 (01:48 -0600)]
cli : change log to warning to explain reason for stopping (#15604)

* Change to warn instead of debug, to explain reason for stopping.

* Update tools/main/main.cpp

Fix printing --2

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
5 weeks ago model-conversion : add mmproj conversion target (#15628)
Daniel Bevenius [Thu, 28 Aug 2025 07:26:48 +0000 (09:26 +0200)]
model-conversion : add mmproj conversion target (#15628)

This commit adds a new target to the Makefile for converting models that
are multimodal. This target will convert the original model and in
addition create the mmproj GGUF model.

The motivation for this change is that for models that are multimodal,
for example those that contain a vision encoder, we will often want to
upload both the quantized model and the vision encoder model to
HuggingFace.

Example usage:
```console
$ make causal-convert-mm-model MODEL_PATH=~/work/ai/models/gemma-3-4b-it-qat-q4_0-unquantized/
...
The environment variable CONVERTED_MODEL can be set to this path using:
export CONVERTED_MODEL=/home/danbev/work/ai/llama.cpp/models/gemma-3-4b-it-qat-q4_0-unquantized.gguf
The mmproj model was created in /home/danbev/work/ai/llama.cpp/models/mmproj-gemma-3-4b-it-qat-q4_0-unquantized.gguf
```
The converted original model can then be quantized, and after that both
the quantized model and the mmproj file can be uploaded to
HuggingFace.

Refs: https://huggingface.co/ggml-org/gemma-3-4b-it-qat-GGUF/tree/main

5 weeks ago cuda: Add cublasLt_static linking when GGML_STATIC is enabled (#15622)
matiaslin [Thu, 28 Aug 2025 00:32:36 +0000 (17:32 -0700)]
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (#15622)

Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when CUDA version is greater
than 10.1.

5 weeks ago server: higher timeout for tests (#15621)
Johannes Gäßler [Wed, 27 Aug 2025 18:58:09 +0000 (20:58 +0200)]
server: higher timeout for tests (#15621)

5 weeks ago presets : add qwen3-30B-a3b FIM (#15616)
Georgi Gerganov [Wed, 27 Aug 2025 12:48:07 +0000 (15:48 +0300)]
presets : add qwen3-30B-a3b FIM (#15616)

5 weeks ago HIP: Enable support for ggml_backend_cuda_register_host_buffer (#15615)
uvos [Wed, 27 Aug 2025 11:58:54 +0000 (13:58 +0200)]
HIP: Enable support for ggml_backend_cuda_register_host_buffer (#15615)

5 weeks ago kv-cache : better estimate of n_kv for multi-sequence batches (#15610)
Georgi Gerganov [Wed, 27 Aug 2025 10:55:12 +0000 (13:55 +0300)]
kv-cache : better estimate of n_kv for multi-sequence batches (#15610)

ggml-ci

5 weeks ago CANN: refactor mask handling and improve performance in FA (#15561)
Chenguang Li [Wed, 27 Aug 2025 09:21:41 +0000 (17:21 +0800)]
CANN: refactor mask handling and improve performance in FA (#15561)

* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <redacted>
* [CANN]: fix review

Signed-off-by: noemotiovon <redacted>
* [CANN]: Optimize FA BNSD to BSND

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
5 weeks ago ggml-cpu : add basic RVV support for vector f32 ops (#15057)
xctan [Wed, 27 Aug 2025 08:44:22 +0000 (16:44 +0800)]
ggml-cpu : add basic RVV support for vector f32 ops (#15057)

* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax

5 weeks ago common : add -m to bash completion for --model [no ci] (#15591)
Daniel Bevenius [Wed, 27 Aug 2025 08:28:53 +0000 (10:28 +0200)]
common : add -m to bash completion for --model [no ci] (#15591)

This commit updates the bash completion script to include the -m
short option for the --model argument.

The motivation for this is that currently tab completion only works for
the full --model option, and it is nice to have it work for the short
option as well.

5 weeks ago OpenCL: add fused group_norm/norm, mul, add (#15314)
rmatif [Wed, 27 Aug 2025 06:36:05 +0000 (08:36 +0200)]
OpenCL: add fused group_norm/norm, mul, add (#15314)

* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace

5 weeks ago tests : fix test-opt with GGML_BACKEND_DL (#15599)
Diego Devesa [Tue, 26 Aug 2025 20:14:38 +0000 (13:14 -0700)]
tests : fix test-opt with GGML_BACKEND_DL (#15599)

5 weeks ago SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (#15592)
Akarshan Biswas [Tue, 26 Aug 2025 18:57:49 +0000 (00:27 +0530)]
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (#15592)

The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.
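
A simplified C++ sketch of the kind of predicate described (hypothetical names and WARP_SIZE value; the real check lives in ggml_backend_sycl_device_supports_op):

```cpp
#include <cstdint>

constexpr int64_t WARP_SIZE = 32; // assumed sub-group width, for illustration

struct tensor_dims { int64_t ne0; };

// Only claim RMS_NORM support when the first dimension is a multiple of the
// sub-group size, mirroring the fix described above.
static bool supports_rms_norm(const tensor_dims & t) {
    return t.ne0 % WARP_SIZE == 0;
}
```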

5 weeks ago mtmd : fix mtmd ios build (#15579)
fidoriel [Tue, 26 Aug 2025 18:05:50 +0000 (20:05 +0200)]
mtmd : fix mtmd ios build (#15579)

5 weeks ago tests: add performance test for mul mat id (#15543)
Eve [Tue, 26 Aug 2025 15:42:49 +0000 (15:42 +0000)]
tests: add performance test for mul mat id (#15543)

5 weeks ago llamafile: PowerPC Sgemm Optimization (#15558)
shalinib-ibm [Tue, 26 Aug 2025 15:35:25 +0000 (21:05 +0530)]
llamafile: PowerPC Sgemm Optimization (#15558)

This patch improves GEMM for the FP32 data type on PowerPC.

Implements GEMM on large blocks with configurable block sizes mc, nc, kc
(default: 256, 256, 256).
Packing function optimized to access blocks as per memory layout.
GEMM optimized to work on larger blocks.
Isolated packing from GEMM operations for better MMA utilization.
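
As a rough illustration of the blocking scheme (a generic C++ sketch, not the llamafile PowerPC code; loop order and packing details are assumptions):

```cpp
#include <algorithm>
#include <cstddef>

// Cache-blocked SGEMM sketch: C (MxN) += A (MxK) * B (KxN), row-major,
// processed in mc x nc x kc tiles. The real code packs each tile before
// handing it to the MMA kernels.
static void gemm_blocked(const float * A, const float * B, float * C,
                         int M, int N, int K,
                         int mc = 256, int nc = 256, int kc = 256) {
    for (int i0 = 0; i0 < M; i0 += mc)
    for (int k0 = 0; k0 < K; k0 += kc)
    for (int j0 = 0; j0 < N; j0 += nc)
        for (int i = i0; i < std::min(i0 + mc, M); ++i)
        for (int k = k0; k < std::min(k0 + kc, K); ++k) {
            const float a = A[(size_t)i * K + k];
            for (int j = j0; j < std::min(j0 + nc, N); ++j)
                C[(size_t)i * N + j] += a * B[(size_t)k * N + j];
        }
}
```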

Verified functionality and correctness using llama-cli and a standalone
test case (performs matmul and compares the final matrix C result with base).

Minor code refactoring changes:
Replaced macros with inline functions.
Made code indentation consistent with 4 spaces.

Performance testing:

Observed 50% ~ 70% improvement in prompt processing speed, measured using
llama-bench with the Meta-Llama3-8B FP32 model. Similar gains observed with
the Mistral-7b-Instruct-v0.3 model.

model              size       params  backend  threads  test    patch (t/s)  base (t/s)
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp512   98.58        60.3
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp1024  95.88        57.36
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp2048  85.46        53.26
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp4096  68.66        45.78
llama 8B all F32   29.92 GiB  8.03 B  CPU      20       pp6144  57.35        40.44

25 ~ 30% improvement in llama-batched-bench with Meta-Llama3-8B in
prompt processing speed for large prompts (256, 512, 1024, 2048, 4096 tokens)
with various batch sizes (1, 2, 4, 8, 16).

Signed-off-by: Shalini Salomi Bodapati <redacted>
5 weeks ago graph : fix assert in memory-less build_attn (#15590)
Georgi Gerganov [Tue, 26 Aug 2025 14:45:17 +0000 (17:45 +0300)]
graph : fix assert in memory-less build_attn (#15590)

ggml-ci

5 weeks ago model-conversion : add qat-q4 quantization targets (#15588)
Daniel Bevenius [Tue, 26 Aug 2025 14:12:29 +0000 (16:12 +0200)]
model-conversion : add qat-q4 quantization targets (#15588)

This commit adds two targets to the Makefile for quantizing
Quantization Aware Trained (QAT) models to Q4_0 format.

The motivation for this is that this sets the token embedding and the
output tensor data types to Q8_0 instead of the default Q6_K. This is
something that we wish to enforce for QAT Q4_0 models that are to be
uploaded to ggml-org on Huggingface to guarantee the best quality.

5 weeks ago CUDA: return -1 for nonexistent compiled arch (#15587)
Johannes Gäßler [Tue, 26 Aug 2025 14:01:20 +0000 (16:01 +0200)]
CUDA: return -1 for nonexistent compiled arch (#15587)

5 weeks ago metal : optimize FA vec for large sequences and BS <= 8 (#15566)
Georgi Gerganov [Tue, 26 Aug 2025 11:22:14 +0000 (14:22 +0300)]
metal : optimize FA vec for large sequences and BS <= 8 (#15566)

* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci

5 weeks ago mtmd : support Kimi VL model (#15458)
Xuan-Son Nguyen [Tue, 26 Aug 2025 10:54:19 +0000 (12:54 +0200)]
mtmd : support Kimi VL model (#15458)

* convert : fix tensor naming conflict for llama 4 vision

* convert ok

* support kimi vision model

* clean up

* fix style

* fix calc number of output tokens

* refactor resize_position_embeddings

* add test case

* rename build fn

* correct a small bug

5 weeks ago context : print graph stats for memory-less contexts (#15586)
Georgi Gerganov [Tue, 26 Aug 2025 09:47:00 +0000 (12:47 +0300)]
context : print graph stats for memory-less contexts (#15586)

ggml-ci

5 weeks ago metal : improve `MUL_MAT_ID` (#15541)
Georgi Gerganov [Tue, 26 Aug 2025 09:46:15 +0000 (12:46 +0300)]
metal : improve `MUL_MAT_ID` (#15541)

* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci

5 weeks ago model : support MiniCPM-V 4.5 (#15575)
tc-mb [Tue, 26 Aug 2025 08:05:55 +0000 (16:05 +0800)]
model : support MiniCPM-V 4.5 (#15575)

5 weeks ago gguf-py : remove erroneous FFN_GATE entry (#15583)
Sigbjørn Skjæret [Tue, 26 Aug 2025 07:08:08 +0000 (09:08 +0200)]
gguf-py : remove erroneous FFN_GATE entry (#15583)

5 weeks ago metal : remove contiguous assertion for src0 in IM2COL (#15577)
Sigbjørn Skjæret [Tue, 26 Aug 2025 06:51:43 +0000 (08:51 +0200)]
metal : remove contiguous assertion for src0 in IM2COL (#15577)

* remove contiguous assertion for src0 in IM2COL

* add contiguous check in supports_op

5 weeks ago Add a warning for special devices (#15563)
Yoshi_likes_e4 [Tue, 26 Aug 2025 06:15:33 +0000 (13:15 +0700)]
Add a warning for special devices (#15563)

* Add warning

* Print the devices names

* Add newlines

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix vector names

---------

Co-authored-by: Johannes Gäßler <redacted>
5 weeks ago vulkan: Remove splitting for mul_mat_id (#15568)
Jeff Bolz [Tue, 26 Aug 2025 04:42:44 +0000 (23:42 -0500)]
vulkan: Remove splitting for mul_mat_id (#15568)

row_ids only needs to hold the BN rows for the current tile.

5 weeks ago CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451)
Qeeweew [Mon, 25 Aug 2025 21:21:22 +0000 (05:21 +0800)]
CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451)

* CUDA: optimize get_int_from_table_16

* CUDA: use v_perm_b32 to replace byte_perm on AMD GPUs

* revise documentation

---------

Co-authored-by: xix <redacted>
Co-authored-by: Johannes Gäßler <redacted>
5 weeks ago opencl: fix support ops condition for `rms_norm` (#15560)
lhez [Mon, 25 Aug 2025 21:18:09 +0000 (14:18 -0700)]
opencl: fix support ops condition for `rms_norm` (#15560)

5 weeks ago vulkan: fix min subgroup 16 condition for mmid subgroup optimization (#15565)
Ruben Ortlam [Mon, 25 Aug 2025 15:56:59 +0000 (17:56 +0200)]
vulkan: fix min subgroup 16 condition for mmid subgroup optimization (#15565)

5 weeks ago tests: Generate unique input values for count_equal (#15487)
Jeff Bolz [Mon, 25 Aug 2025 15:47:16 +0000 (10:47 -0500)]
tests: Generate unique input values for count_equal (#15487)

This avoids backend-dependent behavior for argmax that leads to intermittent failures.
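
One way to guarantee distinct inputs (a sketch of the idea, not the actual test harness code):

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Fill a test buffer with distinct values in shuffled order, so argmax has a
// single well-defined answer regardless of the backend's reduction order.
static std::vector<float> unique_test_values(size_t n, uint32_t seed = 42) {
    std::vector<float> v(n);
    std::iota(v.begin(), v.end(), 0.0f); // 0, 1, 2, ... all distinct
    std::shuffle(v.begin(), v.end(), std::mt19937(seed));
    return v;
}
```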

5 weeks ago metal: fix regression when no metal devices are present (#15531)
Ihar Hrachyshka [Mon, 25 Aug 2025 15:27:34 +0000 (11:27 -0400)]
metal: fix regression when no metal devices are present (#15531)

5 weeks ago CUDA: MoE helper in device code, better tile sizes (#15525)
Johannes Gäßler [Mon, 25 Aug 2025 15:23:40 +0000 (17:23 +0200)]
CUDA: MoE helper in device code, better tile sizes (#15525)

* CUDA: MoE helper in device code, better tile sizes

* reduce superfluous CUDA blocks

5 weeks ago model-conversion : set pooling type to none in logits.cpp (#15564)
Daniel Bevenius [Mon, 25 Aug 2025 13:00:43 +0000 (15:00 +0200)]
model-conversion : set pooling type to none in logits.cpp (#15564)

This commit explicitly sets the pooling type to 'none' in logits.cpp
to support models that have a pooling type specified.

The motivation for this is that some models may have a pooling type set
in the model file (.gguf file), and for this specific case where we only
want to extract logits, we need to ensure that no pooling is used so
that we are comparing raw logits and not pooled embeddings.
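
In terms of the llama.cpp API, the override amounts to something like the following (a sketch; the actual logits.cpp code may differ):

```cpp
#include "llama.h"

// Force raw logits: disable any pooling the GGUF metadata may request.
static llama_context * make_logits_context(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();
    cparams.pooling_type = LLAMA_POOLING_TYPE_NONE;
    return llama_init_from_model(model, cparams);
}
```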

5 weeks ago model-conversion : add model card template for embeddings [no ci] (#15557)
Daniel Bevenius [Mon, 25 Aug 2025 12:25:25 +0000 (14:25 +0200)]
model-conversion : add model card template for embeddings [no ci] (#15557)

* model-conversion: add model card template for embeddings [no ci]

This commit adds a separate model card template (model repository
README.md template) for embedding models.

The motivation for this is that the server command for the embedding
model is a little different, and some additional information can be useful
in the model card for embedding models which might not be directly
relevant for causal models.

* squash! model-conversion: add model card template for embeddings [no ci]

Fix pyright lint error.

* remove --pooling override and clarify embd_normalize usage

5 weeks ago batched-bench : fix unified KV cache handling + pp timing (#15562)
Georgi Gerganov [Mon, 25 Aug 2025 10:56:43 +0000 (13:56 +0300)]
batched-bench : fix unified KV cache handling + pp timing (#15562)

* batched-bench : fix unified KV cache handling + pp timing

* cont : run dummy token only with split KV cache

5 weeks ago convert : update Ernie 4.5 dense architecture name (#15555)
Weizhao Ouyang [Mon, 25 Aug 2025 09:15:06 +0000 (17:15 +0800)]
convert : update Ernie 4.5 dense architecture name (#15555)

Signed-off-by: Weizhao Ouyang <redacted>
5 weeks ago metal : add FA kernels for HS=40 (#15559)
Georgi Gerganov [Mon, 25 Aug 2025 07:14:48 +0000 (10:14 +0300)]
metal : add FA kernels for HS=40 (#15559)

ggml-ci

5 weeks ago convert : support interns1-mini (#15412)
RunningLeon [Mon, 25 Aug 2025 06:32:16 +0000 (14:32 +0800)]
convert : support interns1-mini (#15412)

* support interns1-mini

* fix comment

* update

5 weeks ago CANN: ROPE cache sin/cos repeat (#15501)
Chenguang Li [Mon, 25 Aug 2025 02:32:21 +0000 (10:32 +0800)]
CANN: ROPE cache sin/cos repeat (#15501)

Signed-off-by: noemotiovon <redacted>
5 weeks ago vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (#15524)
Ruben Ortlam [Sun, 24 Aug 2025 17:36:36 +0000 (19:36 +0200)]
vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (#15524)

* vulkan: use subgroup function for mul_mat_id shader even without coopmat

* vulkan: fix compile warnings

* vulkan: properly check for subgroup size control and require full subgroups for subgroup mul_mat_id

* vulkan: disable subgroup mul_mat_id on devices with subgroups < 16

5 weeks ago kv-cache : support layer reuse (#15504)
Georgi Gerganov [Sun, 24 Aug 2025 10:07:07 +0000 (13:07 +0300)]
kv-cache : support layer reuse (#15504)

* kv-cache : support layer reuse

ggml-ci

* cont : update comments [no ci]

5 weeks ago vulkan: Support FA with any multiple of 8 head sizes (#15537)
Jeff Bolz [Sun, 24 Aug 2025 09:24:25 +0000 (04:24 -0500)]
vulkan: Support FA with any multiple of 8 head sizes (#15537)

The scalar FA shader already handled multiples of 8. The coopmat1 FA
shader assumed 16x16x16 and the shared memory allocations need the HSK
dimensions padded to a multiple of 16. NVIDIA's coopmat2 implementation
requires multiples of 16 for N and K, and needs the matrix dimensions
padded and loads clamped.

Store the FA pipelines in a map, indexed by the pipeline state.
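
The padding itself is just rounding up to the 16-wide tile granularity, e.g. (illustrative C++):

```cpp
#include <cstdint>

// Round a head-size dimension up to the 16-wide coopmat tile granularity.
static constexpr uint32_t pad_to_16(uint32_t n) {
    return (n + 15u) & ~15u;
}
static_assert(pad_to_16(40) == 48, "HS=40 pads to three 16-wide tiles");
```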

5 weeks ago vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (#15526)
Ruben Ortlam [Sun, 24 Aug 2025 08:48:53 +0000 (10:48 +0200)]
vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (#15526)

5 weeks ago vulkan: workaround MoltenVK compile failure in multi_add (#15506)
Jeff Bolz [Sun, 24 Aug 2025 08:48:21 +0000 (03:48 -0500)]
vulkan: workaround MoltenVK compile failure in multi_add (#15506)

* vulkan: workaround MoltenVK compile failure in multi_add

* Update ggml/src/ggml-vulkan/vulkan-shaders/multi_add.comp

Co-authored-by: 0cc4m <redacted>
5 weeks ago CUDA: fix half2 -> half conversion for HIP (#15529)
Johannes Gäßler [Sat, 23 Aug 2025 19:37:06 +0000 (21:37 +0200)]
CUDA: fix half2 -> half conversion for HIP (#15529)

5 weeks ago vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (#15281)
Jeff Bolz [Sat, 23 Aug 2025 18:16:17 +0000 (13:16 -0500)]
vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (#15281)

* vulkan: optimize rms_norm, and allow the work to spread across multiple SMs

There are really two parts to this change:
(1) Some optimizations similar to what we have in soft_max, to unroll with
different numbers of iterations.
(2) A fusion optimization where we detect add followed by rms_norm, and make
the add shader atomically accumulate the values^2 into memory. Then the
rms_norm shader can just load that sum. This allows the rms_norm to be
parallelized across multiple workgroups; it just becomes a simple per-element
multiply.

The fusion optimization is currently only applied when the rms_norm is on a
single vector. This previously always ran on a single SM. It could apply more
broadly, but when there are other dimensions the work can already spread across
SMs, and there would be some complexity to tracking multiple atomic sums.

* Change add+rms_norm optimization to write out an array of partial sums
rather than using atomic add, to make it deterministic. The rms_norm
shader fetches a subgroup's worth in parallel and uses subgroupAdd to
add them up.
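
The determinism comes from each workgroup writing its partial sum to a fixed slot and a single fixed-order reduction combining them; in CPU-side C++ terms the scheme looks roughly like this (a sketch, not the shader code):

```cpp
#include <vector>

// Pass 1 (the "add" shader): workgroup g accumulates the sum of squares of
// its slice into partial_sums[g] -- no atomics, so no ordering dependence.
// Pass 2 (the "rms_norm" shader): one fixed-order reduction over the slots,
// standing in for the subgroupAdd described above.
static float reduce_partials(const std::vector<float> & partial_sums) {
    float total = 0.0f;
    for (float p : partial_sums) {
        total += p;
    }
    return total;
}
```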

* complete rebase against fused adds - multi_add shader can also compute partial sums

* fix validation errors

* disable add_rms_fusion for Intel due to possible driver bug

* resolve against #15489, sync after clearing partial sums

5 weeks ago model : add support for Seed-OSS (#15490)
Piotr Wilkin (ilintar) [Sat, 23 Aug 2025 13:21:52 +0000 (15:21 +0200)]
model : add support for Seed-OSS (#15490)

* First draft

* Fix linter errors

* Added missing sinks nullptr

* Don't forget the llama-arch!

* We're through to the generation stage.

* Fix post-attention norm

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix RoPE type

* Fix tensor name and reorder llm_types

* Update gguf-py/gguf/constants.py

Remove nonexistent FFN_POST_NORM tensor

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.h

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add basic chat template

* Add chat template tests

* Remake chat template test

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-chat.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Reorder llm type descriptions

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks ago scripts: fix compare-llama-bench.py (#15521)
Johannes Gäßler [Sat, 23 Aug 2025 10:58:58 +0000 (12:58 +0200)]
scripts: fix compare-llama-bench.py (#15521)

5 weeks ago chat : fix debug build assertion in trim function (#15520)
LaffeyNyaa [Sat, 23 Aug 2025 08:38:30 +0000 (16:38 +0800)]
chat : fix debug build assertion in trim function (#15520)

5 weeks ago vulkan: Rewrite synchronization to allow some overlap between nodes (#15489)
Jeff Bolz [Sat, 23 Aug 2025 07:33:36 +0000 (02:33 -0500)]
vulkan: Rewrite synchronization to allow some overlap between nodes (#15489)

Track a list of nodes that need synchronization, and only sync if the new node
depends on them (or overwrites them). This allows some overlap which can
improve performance, and centralizes a big chunk of the synchronization logic.

The remaining synchronization logic involves writes to memory other than the
nodes, e.g. for dequantization or split_k. Each of these allocations has a bool
indicating whether they were in use and need to be synced. This should be
checked before they are written to, and set to true after they are done being
consumed.
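
Schematically, the tracking described above could look like this (hypothetical C++ types; the real implementation lives in the Vulkan backend):

```cpp
#include <vector>

struct node; // placeholder for a graph node (tensor op)

// Nodes written since the last barrier; a new node only forces a sync if it
// reads from or overwrites one of them.
struct sync_tracker {
    std::vector<const node *> unsynced;

    bool needs_sync(const std::vector<const node *> & deps) const {
        for (const node * d : deps)
            for (const node * u : unsynced)
                if (d == u) return true;
        return false;
    }
    void barrier()                    { unsynced.clear(); }      // sync emitted
    void mark_written(const node * n) { unsynced.push_back(n); } // new output
};
```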

5 weeks ago vulkan.Dockerfile: install vulkan SDK using tarball (#15282)
R0CKSTAR [Sat, 23 Aug 2025 06:58:57 +0000 (14:58 +0800)]
vulkan.Dockerfile: install vulkan SDK using tarball (#15282)

Signed-off-by: Xiaodong Ye <redacted>
5 weeks ago vulkan : support ggml_mean (#15393)
Acly [Sat, 23 Aug 2025 06:35:21 +0000 (08:35 +0200)]
vulkan : support ggml_mean (#15393)

* vulkan : support ggml_mean

* vulkan : support sum, sum_rows and mean with non-contiguous tensors

* vulkan : fix subbuffer size not accounting for misalign offset

* tests : add backend-op tests for non-contiguous sum_rows

* cuda : require contiguous src for SUM_ROWS, MEAN support

* sycl : require contiguous src for SUM, SUM_ROWS, ARGSORT support

* require ggml_contiguous_rows in supports_op and expect nb00=1 in the shader

5 weeks ago vulkan: optimize mul_mat_id loading row ids into shared memory (#15427)
Jeff Bolz [Sat, 23 Aug 2025 06:31:54 +0000 (01:31 -0500)]
vulkan: optimize mul_mat_id loading row ids into shared memory (#15427)

- Spread the work across the whole workgroup. Using more threads seems to
far outweigh the synchronization overhead.
- Specialize the code for when the division is by a power of two.
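
The power-of-two specialization replaces the integer division and modulo with a shift and a mask, e.g. (illustrative C++):

```cpp
#include <bit>
#include <cassert>
#include <cstdint>

// When the divisor is known to be a power of two, idx / d and idx % d reduce
// to a shift and a mask, which is much cheaper in a shader inner loop.
static void div_mod_pow2(uint32_t idx, uint32_t d, uint32_t & q, uint32_t & r) {
    assert(d != 0 && std::has_single_bit(d)); // d must be a power of two
    const int shift = std::countr_zero(d);
    q = idx >> shift;
    r = idx & (d - 1);
}
```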

6 weeks ago test-opt: allow slight imprecision (#15503)
Johannes Gäßler [Fri, 22 Aug 2025 21:47:01 +0000 (23:47 +0200)]
test-opt: allow slight imprecision (#15503)

6 weeks ago ggml WebGPU: add support for quantization types (#15440)
Reese Levine [Fri, 22 Aug 2025 18:28:03 +0000 (11:28 -0700)]
ggml WebGPU: add support for quantization types (#15440)

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Work on templating for different types in shaders

* Work on shader type generation

* Working q4_0 mul_mat and some templating for different types

* Add q4_0_f16 matmul and fix device init

* Add matmul support for basic quantization types

* Add q2_k and q3_k quantization

* Add rest of k-quants

* Get first i-quant working

* Closer to supporting all i-quants

* Support rest of i-quants

* Cleanup code

* Fix python formatting

* debug

* Bugfix for memset

* Add padding to end of buffers on creation

* Simplify bit-shifting

* Update usage of StringView

6 weeks ago model : gpt-oss add response_format support (#15494)
Aldehir Rojas [Fri, 22 Aug 2025 16:04:08 +0000 (11:04 -0500)]
model : gpt-oss add response_format support (#15494)

6 weeks ago ggml: add `conv3d` op (#15182)
rmatif [Fri, 22 Aug 2025 13:33:15 +0000 (15:33 +0200)]
ggml: add `conv3d` op (#15182)

* add conv3d

* bump GGML_OP_COUNT

6 weeks ago cuda : add Pad Reflect 1D support (#14659)
Yavor Ivanov [Fri, 22 Aug 2025 11:06:29 +0000 (14:06 +0300)]
cuda : add Pad Reflect 1D support (#14659)

* Add Pad Reflect 1D CUDA support

* Update ggml/src/ggml-cuda/pad_reflect_1d.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago llama : remove KV cache defragmentation logic (#15473)
Georgi Gerganov [Fri, 22 Aug 2025 09:22:13 +0000 (12:22 +0300)]
llama : remove KV cache defragmentation logic (#15473)

ggml-ci

6 weeks ago ggml-cpu: Support Q5_0 and Q5_1 on s390x (#15486)
Aaron Teo [Fri, 22 Aug 2025 08:11:04 +0000 (16:11 +0800)]
ggml-cpu: Support Q5_0 and Q5_1 on s390x (#15486)

* ggml-cpu: initial q5_0 impl for s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: updated q5_0 code for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: use optimised hsum for better performance

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: introduce q5_1 simd + refactor q5_0

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect return type vec_hsum

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_0 incomplete refactor + table_b2b_0 activation

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: refactor q5_1

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: q5_1 update loop unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update q5_0 unroll to 4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update build-s390x docs

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update unused variables q5_0

Signed-off-by: Aaron Teo <redacted>
* docs: update the last update date

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
6 weeks ago server : Support multimodal completion and embeddings prompts in JSON format (#15108)
65a [Fri, 22 Aug 2025 08:10:14 +0000 (08:10 +0000)]
server : Support multimodal completion and embeddings prompts in JSON format (#15108)

- Use server_tokens in more places in server and util.cpp
- Convert most functions that used llama_tokens to server_tokens
- Modify input tokenizer to handle JSON objects as subprompts
- Break out MTMD prompt parsing into utility function
- Support JSON objects with multimodal_data arrays for MTMD prompts along with other existing types
- Add capability to model endpoint to indicate if client can send multimodal data
- Add tests.

6 weeks ago readme : model : mtmd : lfm2 improvements (#15476)
Tarek Dakhran [Fri, 22 Aug 2025 07:29:08 +0000 (09:29 +0200)]
readme : model : mtmd : lfm2 improvements (#15476)

* Support untied embeddings

* Increase number of image tokens to 1024

* Add LFM2-VL to readme

* Actually use untied embeddings

6 weeks ago CANN: Optimize RMS_NORM using cache (#15419)
Chenguang Li [Fri, 22 Aug 2025 06:12:07 +0000 (14:12 +0800)]
CANN: Optimize RMS_NORM using cache (#15419)

* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* fix review comment

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
6 weeks ago sched : fix possible use of wrong ids tensor when offloading moe prompt processing (#15488)
Diego Devesa [Thu, 21 Aug 2025 21:09:32 +0000 (14:09 -0700)]
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (#15488)

6 weeks ago llama : remove deprecated llama_kv_self API (#15472)
Georgi Gerganov [Thu, 21 Aug 2025 16:13:45 +0000 (19:13 +0300)]
llama : remove deprecated llama_kv_self API (#15472)

ggml-ci

6 weeks ago graph : remove build_attn_with_sinks overload (#15469)
Georgi Gerganov [Thu, 21 Aug 2025 15:44:45 +0000 (18:44 +0300)]
graph : remove build_attn_with_sinks overload (#15469)

ggml-ci

6 weeks ago vulkan : support conv_2d_dw with f16 weights (#15392)
Acly [Thu, 21 Aug 2025 15:01:51 +0000 (17:01 +0200)]
vulkan : support conv_2d_dw with f16 weights (#15392)

6 weeks ago vulkan: add exp operation (#15456)
Dong Won Kim [Thu, 21 Aug 2025 15:00:16 +0000 (00:00 +0900)]
vulkan: add exp operation (#15456)

Co-authored-by: aeseulgi <redacted>
6 weeks ago vulkan: Reuse conversion results in prealloc_y (#15410)
Jeff Bolz [Thu, 21 Aug 2025 14:55:00 +0000 (09:55 -0500)]
vulkan: Reuse conversion results in prealloc_y (#15410)

* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.
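
A schematic of that caching check (hypothetical member names; the actual fields live in the Vulkan backend context):

```cpp
// Sketch: remember what last filled prealloc_y and skip redundant conversions.
struct prealloc_y_cache {
    const void * last_pipeline = nullptr; // pipeline used for the last fill
    const void * last_tensor   = nullptr; // source tensor of the last fill

    // True if prealloc_y already holds the converted data for this pair.
    bool hit(const void * pipeline, const void * tensor) const {
        return pipeline == last_pipeline && tensor == last_tensor;
    }
    void record(const void * pipeline, const void * tensor) {
        last_pipeline = pipeline;
        last_tensor   = tensor;
    }
};
```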

* don't use shared pointer for prealloc_y_last_pipeline_used

6 weeks ago examples : fix some typos in examples/model-conversion/README.md (#15477)
Jie Fu (傅杰) [Thu, 21 Aug 2025 14:53:13 +0000 (22:53 +0800)]
examples : fix some typos in examples/model-conversion/README.md (#15477)

Signed-off-by: Jie Fu <redacted>
6 weeks ago kv-cache : drop the "unified" prefix (#15467)
Georgi Gerganov [Thu, 21 Aug 2025 14:00:33 +0000 (17:00 +0300)]
kv-cache : drop the "unified" prefix (#15467)

* kv-cache : drop the "unified" prefix

ggml-ci

* cont : fix comment [no ci]

6 weeks ago examples : install torch-cpu for model conversion tool/example (#15475)
Jie Fu (傅杰) [Thu, 21 Aug 2025 13:42:34 +0000 (21:42 +0800)]
examples : install torch-cpu for model conversion tool/example (#15475)

Signed-off-by: Jie Fu <redacted>
6 weeks ago ci : enable RVV1.0 native build (#15386)
Ali Tariq [Thu, 21 Aug 2025 12:52:16 +0000 (17:52 +0500)]
ci : enable RVV1.0 native build (#15386)

* Changed the CI file to hw

* Changed the CI file to hw

* Added to sudoers for apt

* Removed the clone command and used checkout

* Added libcurl

* Added gcc-14

* Checking gcc --version

* added gcc-14 symlink

* added CC and C++ variables

* Added the gguf weight

* Changed the weights path

* Added system specification

* Removed white spaces

* ci: Replace Jenkins riscv native build Cloud-V pipeline with GitHub Actions workflow

Removed the legacy .devops/cloud-v-pipeline Jenkins CI configuration and introduced .github/workflows/build-riscv-native.yml for native RISC-V builds using GitHub Actions.

* removed trailing whitespaces

* Added the trigger at PR creation

* Corrected OS name

* Added ccache as setup package

* Added ccache for self-hosted runner

* Added directory for ccache size storage

Co-authored-by: Sigbjørn Skjæret <redacted>
* Changed the build command and added ccache debug log

* Added the base dir for the ccache

* Re-trigger CI

* Cleanup and refactored ccache steps

* Cleanup and refactored ccache steps

---------

Co-authored-by: Akif Ejaz <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago ci : continue file download with wget (#15471)
Georgi Gerganov [Thu, 21 Aug 2025 10:42:55 +0000 (13:42 +0300)]
ci : continue file download with wget (#15471)

ggml-ci

6 weeks ago examples : add model conversion tool/example (#15455)
Daniel Bevenius [Thu, 21 Aug 2025 10:16:54 +0000 (12:16 +0200)]
examples : add model conversion tool/example (#15455)

* examples : add model conversion tool/example

This commit adds an "example/tool" that is intended to help in the
process of converting models to GGUF. Currently it supports normal
causal models and embedding models. The readme contains instructions and
command to guide through the process.

The motivation for this is to have a structured and repeatable process for
model conversions and hopefully with time improve upon it to make the
process easier and more reliable. We have started to use this for new
model conversions internally and will continue doing so and improve it
as we go along. Perhaps with time this should be placed in a different
directory than the examples directory, but for now it seems like a good
place to keep it while we are still developing it.

* squash! examples : add model conversion tool/example

Remove dependency on scikit-learn in model conversion example.

* squash! examples : add model conversion tool/example

Update transformer dep to use non-dev version. And also import
`AutoModelForCausalLM` instead of `AutoModel` to ensure compatibility
with the latest version.

* squash! examples : add model conversion tool/example

Remove the logits requirements file from the all requirements file.

6 weeks ago ci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)
Michael Giba [Thu, 21 Aug 2025 10:06:46 +0000 (05:06 -0500)]
ci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)

* Fix -Werror=return-type so ci/run.sh can run

* Update tools/mtmd/clip.cpp

Co-authored-by: Diego Devesa <redacted>
* Remove false now that we have abort

---------

Co-authored-by: Diego Devesa <redacted>
6 weeks ago ci : add copilot-instructions.md (#15286)
Copilot [Thu, 21 Aug 2025 09:47:52 +0000 (11:47 +0200)]
ci : add copilot-instructions.md (#15286)

* Initial plan

* Initialize copilot instructions exploration

* Add comprehensive .github/copilot-instructions.md file

* Update Python environment and tools directory documentation

- Add instructions for using .venv Python environment
- Include flake8 and pyright linting tools from virtual environment
- Add tools/ as core directory in project layout
- Reference existing configuration files (.flake8, pyrightconfig.json)

* add more python dependencies to .venv

* Update copilot instructions: add backend hardware note and server testing

* Apply suggestions from code review

* Apply suggestions from code review

* Replace clang-format with git clang-format to format only changed code

* Minor formatting improvements: remove extra blank line and add trailing newline

* try installing git-clang-format

* try just clang-format

* Remove --binary flag from git clang-format and add git-clang-format installation to CI

* download 18.x release

* typo--

* remove --binary flag

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago convert : make Mistral community chat templates optional via parameter (#15420)
Julien Denize [Thu, 21 Aug 2025 09:19:50 +0000 (11:19 +0200)]
convert : make Mistral community chat templates optional via parameter (#15420)

* Make Mistral community chat templates optional

* Change the flag arg to disable instead of enable community chat templates

* Improve error message

* Improve help message

* Tone down the logger messages

6 weeks ago common : fix incorrect print of non-ascii characters in the logging (#15466)
Jie Fu (傅杰) [Thu, 21 Aug 2025 08:54:34 +0000 (16:54 +0800)]
common : fix incorrect print of non-ascii characters in the logging (#15466)

Signed-off-by: Jie Fu <redacted>
6 weeks ago ggml : fix condition of im2col on Metal backend (#15460)
Xuan-Son Nguyen [Thu, 21 Aug 2025 05:32:26 +0000 (07:32 +0200)]
ggml : fix condition of im2col on Metal backend (#15460)