git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
4 weeks ago  opencl: support imrope (#16914)
lhez [Mon, 3 Nov 2025 19:47:57 +0000 (11:47 -0800)]
opencl: support imrope (#16914)

* opencl: support imrope

* opencl: fix whitespace

4 weeks ago  fix: Viewing multiple PDF attachments (#16974)
Aleksander Grygier [Mon, 3 Nov 2025 17:53:26 +0000 (18:53 +0100)]
fix: Viewing multiple PDF attachments (#16974)

4 weeks ago  model-conversion : pass config to from_pretrained (#16963)
Daniel Bevenius [Mon, 3 Nov 2025 17:01:59 +0000 (18:01 +0100)]
model-conversion : pass config to from_pretrained (#16963)

This commit modifies the script `run-org-model.py` to ensure that the
model configuration is explicitly passed to the `from_pretrained` method
when loading the model. It also removes a duplicate configuration
loading which was a mistake.

The motivation for this change is that it enables the config object to be
modified and then passed to the model loading function, which can be
useful when testing new models.

4 weeks ago  server : add props.model_alias (#16943)
Georgi Gerganov [Mon, 3 Nov 2025 13:38:23 +0000 (15:38 +0200)]
server : add props.model_alias (#16943)

* server : add props.model_alias

* webui : npm run format

4 weeks ago  ggml: CUDA: add head size 72 for flash-attn (#16962)
theo77186 [Mon, 3 Nov 2025 13:29:11 +0000 (14:29 +0100)]
ggml: CUDA: add head size 72 for flash-attn (#16962)

4 weeks ago  mtmd: add --image-min/max-tokens (#16921)
Xuan-Son Nguyen [Mon, 3 Nov 2025 10:11:18 +0000 (11:11 +0100)]
mtmd: add --image-min/max-tokens (#16921)

4 weeks ago  mtmd: pad mask for qwen2.5vl (#16954)
Xuan-Son Nguyen [Mon, 3 Nov 2025 09:25:55 +0000 (10:25 +0100)]
mtmd: pad mask for qwen2.5vl (#16954)

* mtmd: pad mask for qwen2.5vl

* improve

4 weeks ago  ggml : LoongArch fixes (#16958)
Jinyang He [Mon, 3 Nov 2025 06:40:02 +0000 (14:40 +0800)]
ggml : LoongArch fixes (#16958)

* Fix test-quantize-fns f16 and q4_0 failed when use LSX

* Fix LoongArch set float intrinsic when use LSX/LASX

4 weeks ago  sync: minja (glm 4.6 & minmax m2 templates) (#16949)
Olivier Chafik [Mon, 3 Nov 2025 05:33:56 +0000 (05:33 +0000)]
sync: minja (glm 4.6 & minmax m2 templates) (#16949)

* sync: minja

* Sync https://github.com/ochafik/minja/pull/7 (MinMax M2)

4 weeks ago  SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) Feature/sycl repeat back opt (#16869)
shani-f [Mon, 3 Nov 2025 01:35:33 +0000 (03:35 +0200)]
SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) Feature/sycl repeat back opt (#16869)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* SYCL: optimize repeat_back kernel

* Remove Hebrew comment from repeat_back.cpp

* Remove comments for code clarity

Removed comments to clean up the code.

* Fix formatting in ggml-sycl.cpp

* Formatted lambda according to legacy style. No logic changes

* Remove blank line in repeat_back.cpp

Remove unnecessary blank line before assigning acc to dst_dd.

4 weeks ago  feat(webui): improve LaTeX rendering with currency detection (#16508)
Sascha Rogmann [Sun, 2 Nov 2025 23:41:08 +0000 (00:41 +0100)]
feat(webui): improve LaTeX rendering with currency detection (#16508)

* webui : Revised LaTeX formula recognition

* webui : Further examples containing amounts

* webui : vitest for maskInlineLaTeX

* webui: Moved preprocessLaTeX to lib/utils

* webui: LaTeX in table-cells

* chore: update webui build output (use theirs)

* webui: backslash in LaTeX-preprocessing

* chore: update webui build output

* webui: look-behind backslash-check

* chore: update webui build output

* Apply suggestions from code review

Code maintenance (variable names, code formatting, string handling)

Co-authored-by: Aleksander Grygier <redacted>
* webui: Moved constants to lib/constants.

* webui: package woff2 inside base64 data

* webui: LaTeX-line-break in display formula

* chore: update webui build output

* webui: Bugfix (font embedding)

* webui: Bugfix (font embedding)

* webui: vite embeds assets

* webui: don't suppress 404 (fonts)

* refactor: KaTeX integration with SCSS

Moves KaTeX styling to SCSS for better customization and font embedding.

This change includes:
- Adding `sass` as a dev dependency.
- Introducing a custom SCSS file to override KaTeX variables and disable TTF/WOFF fonts, relying solely on WOFF2 for embedding.
- Adjusting the Vite configuration to resolve `katex-fonts` alias and inject SCSS variables.

* fix: LaTeX processing within blockquotes

* webui: update webui build output

---------

Co-authored-by: Aleksander Grygier <redacted>
4 weeks ago  test-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (#16936)
Shagun Bera [Sun, 2 Nov 2025 23:10:30 +0000 (04:40 +0530)]
test-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (#16936)

* tests: fix segfault in moe-expert-reduce test in support mode and --show-coverage

* tests: init gf and filter out fusion tests for support mode

* tests: filter out fusion cases before calling eval_support

* tests: filter out fusion cases from show_test_coverage as well, fix lint

4 weeks ago  ci : disable failing riscv cross build (#16952)
Sigbjørn Skjæret [Sun, 2 Nov 2025 22:11:21 +0000 (23:11 +0100)]
ci : disable failing riscv cross build (#16952)

4 weeks ago  model: add Janus Pro for image understanding (#16906)
Zhiyong Wang [Sun, 2 Nov 2025 21:08:04 +0000 (13:08 -0800)]
model: add Janus Pro for image understanding (#16906)

* Add support for Janus Pro

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Address reviewer suggestions

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add JANUS_PRO constant

* Update clip model handling

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update tools/mtmd/clip.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Refactor JANUS_PRO handling in clip.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update tools/mtmd/clip.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* em whitespace

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
4 weeks ago  clip : use FA (#16837)
Georgi Gerganov [Sun, 2 Nov 2025 20:21:48 +0000 (22:21 +0200)]
clip : use FA (#16837)

* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <redacted>
4 weeks ago  server : support unified cache across slots (#16736)
Georgi Gerganov [Sun, 2 Nov 2025 16:14:04 +0000 (18:14 +0200)]
server : support unified cache across slots (#16736)

* server : support unified context across slots

* cont : fix speculative decoding initialization

* context : fix n_ctx_per_seq computation

* server : purge slots one by one

* tests : add unified cache server tests

* llama : update per-seq context computation

* test-thread-safety : handle tiny training context of the input model

* server : fix server_tokens clear()

* server : use 4 slots + unified KV by default

* llama : add note about context size queries

* cont : update todos [no ci]

* context : do not cap the size of the context

* tests : adjust parameters to be CI friendlier

* context : add warning

4 weeks ago  common : move gpt-oss reasoning processing to init params (#16937)
Aldehir Rojas [Sun, 2 Nov 2025 14:56:28 +0000 (08:56 -0600)]
common : move gpt-oss reasoning processing to init params (#16937)

4 weeks ago  docs: remove llama_sampler_accept reference in sampling sample usage (#16920)
Adrian Lundberg [Sun, 2 Nov 2025 09:28:37 +0000 (10:28 +0100)]
docs: remove llama_sampler_accept reference in sampling sample usage (#16920)

commit 5fb5e24811cb01d48b482c15a974bfbd9f433e1d (llama : minor
sampling refactor (2) (#9386)) moved the llama_sampler_accept call
into llama_sampler_sample, but the sampling sample usage in llama.h
was not updated accordingly.

4 weeks ago  CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (#16917)
mnehete32 [Sun, 2 Nov 2025 03:12:57 +0000 (08:42 +0530)]
CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (#16917)

4 weeks ago  devops: fix failing s390x docker build (#16918)
Aaron Teo [Sun, 2 Nov 2025 00:48:46 +0000 (08:48 +0800)]
devops: fix failing s390x docker build (#16918)

4 weeks ago  ggml: add s390x cpu-feats (#16774)
Aaron Teo [Sun, 2 Nov 2025 00:48:23 +0000 (08:48 +0800)]
ggml: add s390x cpu-feats (#16774)

4 weeks ago  scripts : add script to bench models (#16894)
Georgi Gerganov [Sat, 1 Nov 2025 22:15:31 +0000 (00:15 +0200)]
scripts : add script to bench models (#16894)

4 weeks ago  webui: auto-refresh /props on inference start to resync model metadata (#16784)
Pascal [Sat, 1 Nov 2025 18:49:51 +0000 (19:49 +0100)]
webui: auto-refresh /props on inference start to resync model metadata (#16784)

* webui: auto-refresh /props on inference start to resync model metadata

- Add no-cache headers to /props and /slots
- Throttle slot checks to 30s
- Prevent concurrent fetches with promise guard
- Trigger refresh from chat streaming for legacy and ModelSelector
- Show dynamic serverWarning when using cached data

* fix: restore proper legacy behavior in webui by using unified /props refresh

Updated assistant message bubbles to show each message's stored model when available,
falling back to the current server model only when the per-message value is missing

When the model selector is disabled, now fetches /props and prioritizes that model name
over chunk metadata, then persists it with the streamed message so legacy mode properly
reflects the backend configuration

* fix: detect first valid SSE chunk and refresh server props once

* fix: removed the slots availability throttle constant and state

* webui: purge ai-generated cruft

* chore: update webui static build

4 weeks ago  webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe (#16757)
Pascal [Sat, 1 Nov 2025 16:14:54 +0000 (17:14 +0100)]
webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe (#16757)

* webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe dialog

Extended MarkdownContent to flag previewable code languages,
add a preview button alongside copy controls, manage preview
dialog state, and share styling for the new button group

Introduced CodePreviewDialog.svelte, a sandboxed iframe modal
for rendering HTML/JS previews with consistent dialog controls

* webui: fullscreen HTML preview dialog using bits-ui

* Update tools/server/webui/src/lib/components/app/misc/CodePreviewDialog.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/misc/MarkdownContent.svelte

Co-authored-by: Aleksander Grygier <redacted>
* webui: pedantic style tweak for CodePreviewDialog close button

* webui: remove overengineered preview language logic

* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <redacted>
4 weeks ago  vendor : update cpp-httplib to 0.27.0 (#16846)
Adrien Gallouët [Sat, 1 Nov 2025 15:52:17 +0000 (16:52 +0100)]
vendor : update cpp-httplib to 0.27.0 (#16846)

Signed-off-by: Adrien Gallouët <redacted>
4 weeks ago  mtmd: refactor preprocessing + support max/min pixels (#16878)
Xuan-Son Nguyen [Sat, 1 Nov 2025 14:51:36 +0000 (15:51 +0100)]
mtmd: refactor preprocessing + support max/min pixels (#16878)

* mtmd: refactor preprocessing + support max/min pixels

* fix mlp type

* implement min/max pixels

* improve hparams

* better image preproc for qwen

* fix

* fix out of bound composite

* fix (2)

* fix token calculation

* get_merge_kernel_size()

* fix llama4 and lfm2

* gonna fix them all

* use simple resize for qwen

* qwen: increase min tokens

* no resize if dst size == src size

* restore to initial min/max tokens value for qwen

4 weeks ago  Add a setting to display message generation statistics (#16901)
Aleksander Grygier [Sat, 1 Nov 2025 14:35:57 +0000 (15:35 +0100)]
Add a setting to display message generation statistics (#16901)

* feat: Add setting to display message generation statistics

* chore: build static webui output

4 weeks ago  webui: recognize AsciiDoc files as valid text files (#16850)
Jaromír Hradílek [Sat, 1 Nov 2025 14:02:57 +0000 (15:02 +0100)]
webui: recognize AsciiDoc files as valid text files (#16850)

* webui: recognize AsciiDoc files as valid text files

* webui: add an updated static webui build

* webui: add the updated dependency list

* webui: re-add an updated static webui build

This also reverts commit 742dbb837939c176a813868c268d28ebd3fafb7c.

4 weeks ago  common : allow --system-prompt-file for diffusion-cli (#16903)
Sigbjørn Skjæret [Sat, 1 Nov 2025 10:01:42 +0000 (11:01 +0100)]
common : allow --system-prompt-file for diffusion-cli (#16903)

4 weeks ago  codeowners : update after refactor (#16905)
Sigbjørn Skjæret [Sat, 1 Nov 2025 07:55:25 +0000 (08:55 +0100)]
codeowners : update after refactor (#16905)

4 weeks ago  vulkan: Fix multi_add invalid descriptor usage (#16899)
Jeff Bolz [Sat, 1 Nov 2025 05:52:14 +0000 (00:52 -0500)]
vulkan: Fix multi_add invalid descriptor usage (#16899)

4 weeks ago  vulkan: fuse mul_mat+add and mul_mat_id+add_id (#16868)
Jeff Bolz [Sat, 1 Nov 2025 05:45:28 +0000 (00:45 -0500)]
vulkan: fuse mul_mat+add and mul_mat_id+add_id (#16868)

* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  CUDA: Remove unneeded bias/gate dims in fused mmvq (#16858)
Oliver Simons [Sat, 1 Nov 2025 05:13:26 +0000 (06:13 +0100)]
CUDA: Remove unneeded bias/gate dims in fused mmvq (#16858)

* CUDA: Remove unneeded bias/gate dims in fused mmvq

Pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target col per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago  refactor : llama-model.cpp (#16252)
Piotr Wilkin (ilintar) [Fri, 31 Oct 2025 22:40:23 +0000 (23:40 +0100)]
refactor : llama-model.cpp (#16252)

* Squashed: llama-model.cpp refactoring

* Fix formatting of attn / ffn / ffn_moe calls

* Fix import regression / unify spacing in models.h

* totally DID NOT miss those!

* Add missing qwen3vl(moe) models

* Add missing new .cpp files to build

* Remove extra semicolons

* Editor checker

* Update src/models/models.h

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  model : Minimax M2 (#16831)
Piotr Wilkin (ilintar) [Fri, 31 Oct 2025 20:20:47 +0000 (21:20 +0100)]
model : Minimax M2 (#16831)

* Model: Minimax M2

* Cleanup

* Cleanup pt. 2

* Cleanup pt. 3

* Update convert_hf_to_gguf_update.py - merge catch blocks

Co-authored-by: Sigbjørn Skjæret <redacted>
* Remove vocab models and test

* Remove all redundant hparam settings covered by TextModel

* Move super to start, don't set block_count

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/constants.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  model : add Granite Hybrid nano types (#16896)
Giuseppe Scrivano [Fri, 31 Oct 2025 20:20:07 +0000 (21:20 +0100)]
model : add Granite Hybrid nano types (#16896)

Signed-off-by: Giuseppe Scrivano <redacted>
4 weeks ago  CUDA: Volta tensor core support for MMF (#16843)
Johannes Gäßler [Fri, 31 Oct 2025 14:57:19 +0000 (15:57 +0100)]
CUDA: Volta tensor core support for MMF (#16843)

* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
4 weeks ago  sync : ggml
Georgi Gerganov [Fri, 31 Oct 2025 14:25:50 +0000 (16:25 +0200)]
sync : ggml

4 weeks ago  CUDA: add expert reduce kernel (#16857)
Aman Gupta [Fri, 31 Oct 2025 12:05:07 +0000 (20:05 +0800)]
CUDA: add expert reduce kernel (#16857)

* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks ago  batch : fix consistency checks for the input positions (#16890)
Georgi Gerganov [Fri, 31 Oct 2025 11:50:33 +0000 (13:50 +0200)]
batch : fix consistency checks for the input positions (#16890)

4 weeks ago  server : don't print user inputs to console (#16871)
Georgi Gerganov [Fri, 31 Oct 2025 08:54:19 +0000 (10:54 +0200)]
server : don't print user inputs to console (#16871)

4 weeks ago  server : fix typos in server.cpp comments [no ci] (#16883)
Daniel Bevenius [Fri, 31 Oct 2025 08:51:26 +0000 (09:51 +0100)]
server : fix typos in server.cpp comments [no ci] (#16883)

4 weeks ago  vulkan: disable spirv-opt for rope shaders (#16872)
Jeff Bolz [Fri, 31 Oct 2025 07:34:47 +0000 (02:34 -0500)]
vulkan: disable spirv-opt for rope shaders (#16872)

4 weeks ago  vulkan: Fix crash when FP16 mul_mat accumulation is not supported (#16796)
Masato Nakasaka [Fri, 31 Oct 2025 07:18:59 +0000 (16:18 +0900)]
vulkan: Fix crash when FP16 mul_mat accumulation is not supported (#16796)

* Experimenting crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines

4 weeks ago  vulkan: fix shmem overrun in mmq id shader (#16873)
Ruben Ortlam [Fri, 31 Oct 2025 07:14:49 +0000 (08:14 +0100)]
vulkan: fix shmem overrun in mmq id shader (#16873)

* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks ago  ggml-hexagon: respect input size when getting/setting tensor data (#16836)
l3utterfly [Fri, 31 Oct 2025 04:46:31 +0000 (12:46 +0800)]
ggml-hexagon: respect input size when getting/setting tensor data (#16836)

* respect input size when getting/setting tensor data

allows partial repacking/copying when the requested size is smaller than the actual tensor

* Removed duplicate repack_mxfp4_mxfp4x4x2 function
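The size-clamping idea above can be sketched as follows. This is an illustrative Python analogue with hypothetical names, not the Hexagon backend code: a get/set path that honours the caller's requested byte range instead of always copying the whole tensor.

```python
# Sketch (assumed semantics): clamp a tensor-data copy to the smaller of the
# requested range and the tensor's actual size, enabling partial reads/writes.

def get_tensor_data(tensor_bytes: bytes, offset: int, size: int) -> bytes:
    # partial read: never run past the end of the underlying buffer
    end = min(offset + size, len(tensor_bytes))
    return tensor_bytes[offset:end]

def set_tensor_data(tensor: bytearray, offset: int, data: bytes) -> int:
    # partial write: copy only as many bytes as fit inside the tensor
    n = min(len(data), len(tensor) - offset)
    tensor[offset:offset + n] = data[:n]
    return n  # number of bytes actually written
```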

4 weeks ago  ci : enable free-disk-space on cuda docker build (#16877)
Sigbjørn Skjæret [Thu, 30 Oct 2025 23:34:27 +0000 (00:34 +0100)]
ci : enable free-disk-space on cuda docker build (#16877)

4 weeks ago  opencl: fix boundary handling for mul_mm (#16875)
lhez [Thu, 30 Oct 2025 23:00:20 +0000 (16:00 -0700)]
opencl: fix boundary handling for mul_mm (#16875)

4 weeks ago  convert : update transformers requirements (#16866)
RodriMora [Thu, 30 Oct 2025 22:15:03 +0000 (23:15 +0100)]
convert : update transformers requirements (#16866)

* Update requirements-convert_legacy_llama.txt

Updated requirements to support Qwen3-VL in transformers 4.57.1 version

* Update requirements/requirements-convert_legacy_llama.txt

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  server : bump request URI max length to 32768 (#16862)
chansikpark [Thu, 30 Oct 2025 18:22:23 +0000 (14:22 -0400)]
server : bump request URI max length to 32768 (#16862)

4 weeks ago  server : remove n_past (#16818)
Georgi Gerganov [Thu, 30 Oct 2025 16:42:57 +0000 (18:42 +0200)]
server : remove n_past (#16818)

* server : remove n_past

* server : replace slot.n_prompt_tokens() with slot.task->n_tokens()

* server : fixes + clean-up

* cont : fix context shift

* server : add server_tokens::pos_next()

Co-authored-by: Xuan-Son Nguyen <redacted>
* server : fix pos_next() usage

Co-authored-by: Xuan-Son Nguyen <redacted>
---------

Co-authored-by: Xuan-Son Nguyen <redacted>
4 weeks ago  cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (#16833)
Max Krasnyansky [Thu, 30 Oct 2025 16:06:13 +0000 (09:06 -0700)]
cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (#16833)

Very similar implementation to the flash-attention chunking, with similar benefits.

4 weeks ago  common: fix typo in cli help text (#16864)
Shagun Bera [Thu, 30 Oct 2025 15:47:31 +0000 (21:17 +0530)]
common: fix typo in cli help text (#16864)

4 weeks ago  model: add support for qwen3vl series (#16780)
JJJYmmm [Thu, 30 Oct 2025 15:19:14 +0000 (23:19 +0800)]
model: add support for qwen3vl series (#16780)

* support qwen3vl series.

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
* bugfix: fix the arch check for qwen3vl-moe.

* use build_ffn

* optimize deepstack structure

* optimize deepstack feature saving

* Revert "optimize deepstack feature saving" for temporal fix

This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.

* code clean

* use fused qkv in clip

* clean up / rm is_deepstack_layers for simplification

* add test model

* move test model to "big" section

* fix imrope check

* remove trailing whitespace

* fix rope fail

* metal : add imrope support

* add imrope support for sycl

* vulkan: add imrope w/o check

* fix vulkan

* webgpu: add imrope w/o check

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix tensor mapping

---------

Co-authored-by: Thireus ☠ <redacted>
Co-authored-by: yairpatch <redacted>
Co-authored-by: LETS-BEE <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  cpu: introduce chunking for flash attention (#16829)
Max Krasnyansky [Thu, 30 Oct 2025 12:26:05 +0000 (05:26 -0700)]
cpu: introduce chunking for flash attention (#16829)

Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.
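The refactor described above follows a common pattern: the per-chunk work becomes its own function and an outer loop walks the row range in fixed-size chunks. A minimal Python sketch of that pattern, with hypothetical names standing in for the real kernel:

```python
# Illustrative sketch of the chunking refactor (not llama.cpp code): the core
# loop is factored into a one-chunk function, and an outer loop dispatches
# chunks, which is what makes per-chunk work stealing/scheduling possible.

def process_one_chunk(rows, start, end):
    # core loop: operates only on rows[start:end)
    return sum(rows[start:end])

def process_chunked(rows, chunk_size=4):
    total = 0
    # outer loop: hand each fixed-size chunk to the core routine
    for start in range(0, len(rows), chunk_size):
        end = min(start + chunk_size, len(rows))
        total += process_one_chunk(rows, start, end)
    return total
```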

4 weeks ago  model: Add support for CogVLM model (#15002)
Tianyue-Zhao [Thu, 30 Oct 2025 11:18:50 +0000 (07:18 -0400)]
model: Add support for CogVLM model (#15002)

* Added GGUF mappings for CogVLM model

* Add tensor mapping for CogVLM visual encoder

* Add CogVLM to conversion script, no vision part yet

* Added CogVLM vision model to conversion script

* Add graph for CogVLM CLIP model

* Add graph for CogVLM

* Fixes for CogVLM. Now compiles.

* Model now runs

* Fixes for cogvlm graph

* Account for graph context change after rebase

* Changes for whitespace

* Changes in convert script according to comments

* Switch CogVLM LLM graph to merged QKV tensor

* Use rope_type variable instead of direct definition

* Change CogVLM CLIP encoder to use SWIGLU

* Switch CogVLM CLIP to use merged QKV

* Apply rebase edits and remove ggml_cont call that is now unnecessary

* clean up

---------

Co-authored-by: Xuan Son Nguyen <redacted>
4 weeks ago  cuda : fix argsort with 64k+ rows (#16849)
Sigbjørn Skjæret [Thu, 30 Oct 2025 07:56:28 +0000 (08:56 +0100)]
cuda : fix argsort with 64k+ rows (#16849)

4 weeks ago  llama : use std::abs instead of abs (#16853)
Jan Boon [Thu, 30 Oct 2025 06:30:58 +0000 (14:30 +0800)]
llama : use std::abs instead of abs (#16853)

4 weeks ago  vulkan: Handle argsort with a large number of rows (#16851)
Jeff Bolz [Thu, 30 Oct 2025 06:27:41 +0000 (01:27 -0500)]
vulkan: Handle argsort with a large number of rows (#16851)

4 weeks ago  Hide latency of bias and gate-loading (#16847)
Oliver Simons [Thu, 30 Oct 2025 03:34:15 +0000 (04:34 +0100)]
Hide latency of bias and gate-loading (#16847)

This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. As a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.

4 weeks ago  vulkan: Fuse rope+set_rows (#16769)
Jeff Bolz [Wed, 29 Oct 2025 20:13:10 +0000 (15:13 -0500)]
vulkan: Fuse rope+set_rows (#16769)

This pattern appears in a lot of models, the rope operation is applied right
before storing into the KV cache (usually on the K tensor).

Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).

Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.

Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.

Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.

Add new backend tests.
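The graph-optimize step mentioned above can be pictured as a small reorder pass over the node list. A toy Python sketch under assumed semantics (hypothetical node names, not the Vulkan backend's data structures): pull the matching SET_ROWS forward so it directly follows the ROPE+VIEW pair.

```python
# Toy sketch of making ROPE+VIEW+SET_ROWS consecutive in a node list so the
# fusion pass can match them. Purely illustrative; real graph optimization
# must also verify data dependencies before moving a node.

def make_consecutive(nodes, pattern=("ROPE", "VIEW", "SET_ROWS")):
    out = list(nodes)
    for i in range(len(out) - 1):
        if out[i] == pattern[0] and out[i + 1] == pattern[1]:
            # find the SET_ROWS later in the list and pull it forward
            for j in range(i + 2, len(out)):
                if out[j] == pattern[2]:
                    out.insert(i + 2, out.pop(j))
                    break
    return out
```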

4 weeks ago  llama: fix ASAN error with M-RoPE (#16848)
Xuan-Son Nguyen [Wed, 29 Oct 2025 19:11:39 +0000 (20:11 +0100)]
llama: fix ASAN error with M-RoPE (#16848)

4 weeks ago  llama: store mrope data in KV cell (#16825)
Xuan-Son Nguyen [Wed, 29 Oct 2025 17:09:18 +0000 (18:09 +0100)]
llama: store mrope data in KV cell (#16825)

* llama: store mrope data in KV cell

* correct x,y ordering

* address review comments

* add consistency checks

* Update src/llama-kv-cache.cpp

Co-authored-by: Georgi Gerganov <redacted>
* add TODO

* fix asan error

* kv-cells : improve ext handling

* cont : fix headers

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks ago  vulkan: Update topk_moe fusion to handle gpt's late softmax (#16656)
Jeff Bolz [Wed, 29 Oct 2025 13:44:29 +0000 (08:44 -0500)]
vulkan: Update topk_moe fusion to handle gpt's late softmax (#16656)

* vulkan: Update topk_moe fusion to handle gpt's late softmax

Based on #16649.

* Add ggml_check_edges

* Add sync logging to show fusion effects

* handle clamp added in #16655

* Update ggml/src/ggml-impl.h

Co-authored-by: Diego Devesa <redacted>
4 weeks ago  Vulkan MMQ Integer Dot Refactor and K-Quant support (#16536)
Ruben Ortlam [Wed, 29 Oct 2025 13:39:03 +0000 (14:39 +0100)]
Vulkan MMQ Integer Dot Refactor and K-Quant support (#16536)

* vulkan: add mmq q2_k integer dot support

* Refactor mmq caching

* Reduce mmq register use

* Load 4 quant blocks into shared memory in one step

* Pack q2_k blocks into caches of 32

* Use 32-bit accumulators for integer dot matmul

* Add q4_k mmq

* Add q3_k mmq

* Add q5_k mmq

* Add q6_k mmq

* Add mxfp4 mmq, enable MMQ MUL_MAT_ID

* Fix mmv dm loads

4 weeks ago  Hexagon Op queue & dispatch optimizations (#16820)
Max Krasnyansky [Wed, 29 Oct 2025 13:29:12 +0000 (06:29 -0700)]
Hexagon Op queue & dispatch optimizations (#16820)

* hexagon: remove dspqueue callbacks and do all read processing inplace

* hexagon: there is no need to ref/deref the buffers at this point

We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.

* hexagon: bump the thread count in the adb wrapper scripts

We can use more CPU cores now that the dedicated dspqueue polling threads are not used (ie no contention).
Also enable more aggressive polling for now since we still map Flash Attention (and a few other kernels) to
the CPU and those dspqueue threads were keeping the CPU cores at higher clock freqs.

* hexagon: add lhez as the second code owner

4 weeks ago  CUDA: use fastdiv in set-rows (#16834)
Aman Gupta [Wed, 29 Oct 2025 13:11:53 +0000 (21:11 +0800)]
CUDA: use fastdiv in set-rows (#16834)

* CUDA: use fastdiv in set-rows

* add assert about value fitting in u32

4 weeks ago  vendor : sync minja (#16500)
Sigbjørn Skjæret [Wed, 29 Oct 2025 13:09:50 +0000 (14:09 +0100)]
vendor : sync minja (#16500)

* sync minja.hpp

Adds Call/EndCall support, used in MiniCPM3 and MiniCPM4-MCP.

* remove spurious semicolon

* sync from ochafik/minja

4 weeks ago  vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (#16793)
Jeff Bolz [Wed, 29 Oct 2025 08:53:04 +0000 (03:53 -0500)]
vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (#16793)

This lets the copy to the destination device use the host-visible
vidmem optimization.

4 weeks ago  CUDA: Fix bug in topk-moe for gpt-oss (#16821)
Aman Gupta [Wed, 29 Oct 2025 07:55:06 +0000 (15:55 +0800)]
CUDA: Fix bug in topk-moe for gpt-oss (#16821)

* CUDA: Fix bug in topk-moe for gpt-oss

When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but fusion does not actually happen in the real gpt-oss graph

* fix for qwen3 too

* change ifndef to ifdef

4 weeks ago  sycl: add RMS_NORM_BACK operation support (#16808)
YaelLogic [Wed, 29 Oct 2025 06:14:39 +0000 (08:14 +0200)]
sycl: add RMS_NORM_BACK operation support (#16808)

* sycl: add RMS_NORM_BACK operation support

* sycl: rms_norm_back: add dual reduction paths (FP64 and FP32) and savepoint before further changes

* sycl: add RMS_NORM_BACK support

Implement RMS_NORM_BACK for the SYCL backend using FP32 compensated parallel reduction. Minimal docs updates (ops.md / SYCL.csv).

* revert: restore .gitignore and tools/run/CMakeLists.txt to upstream

* revert: restore tests/CMakeLists.txt to upstream

* sycl: optimize rms_norm_back

* fix: restore SYCL.csv to correct state with RMS_NORM_BACK support

* Update ggml/src/ggml-sycl/norm.cpp

Co-authored-by: Neo Zhang Jianyu <redacted>
* fix: remove trailing whitespace and add missing newline (EditorConfig)

---------

Co-authored-by: Neo Zhang Jianyu <redacted>
4 weeks ago  cuda: add SET operation support (#16804)
YaelGitAccount [Tue, 28 Oct 2025 19:10:28 +0000 (21:10 +0200)]
cuda: add SET operation support (#16804)

* feat(cuda): add GGML_OP_SET support

Implement CUDA kernel for SET operation with f32 support.

All tests passing (14598/14598).

* cuda(set): add I32 support; keep F32

* refactor(cuda): use ggml_cuda_cpy to unify SET operator logic and remove code duplication

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-cuda/set.cu

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks ago  memory : remove KV cache size padding (#16812)
Georgi Gerganov [Tue, 28 Oct 2025 18:19:44 +0000 (20:19 +0200)]
memory : remove KV cache size padding (#16812)

* memory : remove KV cache size padding

* cont : restore padding for n_kv tensor shape

* server : use slot context size instead of training context size

* server : simplify context limit logic

4 weeks ago  llama-bench : clarify benchmarked parts of the computation (#16823)
Georgi Gerganov [Tue, 28 Oct 2025 17:41:43 +0000 (19:41 +0200)]
llama-bench : clarify benchmarked parts of the computation (#16823)

4 weeks ago  initialise buffer.device in ggml_hexagon_session (#16816)
l3utterfly [Tue, 28 Oct 2025 15:16:20 +0000 (23:16 +0800)]
initialise buffer.device in ggml_hexagon_session (#16816)

5 weeks ago  embedding: add raw option for --embd-output-format (#16541)
Sam Malayek [Tue, 28 Oct 2025 10:51:41 +0000 (03:51 -0700)]
embedding: add raw option for --embd-output-format (#16541)

* Add --embd-output-format raw for plain numeric embedding output

This new option outputs embeddings as raw space-separated floats, without JSON or 'embedding N:' prefixes. Useful for downstream vector pipelines and scripting.

* Move raw output handling into format handling section

* Move raw output handling into else-if block with other format handlers

* Use LOG instead of printf for raw embedding output

* docs: document 'raw' embedding output format in arg.cpp and README
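The raw format described above (plain space-separated floats, one vector per line, no JSON wrapper or "embedding N:" prefix) can be sketched like this. The fixed-point formatting is an illustrative choice, not necessarily what llama-embedding emits:

```python
# Sketch of a "raw" embedding output formatter: each embedding vector becomes
# one line of space-separated floats, convenient for downstream pipelines.

def format_raw(embeddings):
    return "\n".join(
        " ".join(f"{x:.6f}" for x in vec)  # plain numbers, no prefixes
        for vec in embeddings
    )
```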

5 weeks ago
llama: consistent ctx <-> buf order for KV cache (#16746)
Johannes Gäßler [Tue, 28 Oct 2025 10:23:54 +0000 (11:23 +0100)]
llama: consistent ctx <-> buf order for KV cache (#16746)

5 weeks ago
grammar : support array references in json schema (#16792)
Aldehir Rojas [Tue, 28 Oct 2025 08:37:52 +0000 (03:37 -0500)]
grammar : support array references in json schema (#16792)

* grammar : support array references in json schema

* Update json-schema-to-grammar.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* grammar : improve regex when naming ref derived rules

* grammar : replace non-conformant definitions array with anyOf test case

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
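
For context, a `$ref` used inside an array's `items` is the kind of schema this change covers. A minimal illustrative example (not taken from the test suite; uses the 2020-12 `$defs` keyword):

```json
{
  "type": "array",
  "items": { "$ref": "#/$defs/item" },
  "$defs": {
    "item": { "type": "string" }
  }
}
```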
5 weeks ago
CANN: Improve device ID handling and aclnnArange checks (#16752)
Chenguang Li [Tue, 28 Oct 2025 02:54:53 +0000 (10:54 +0800)]
CANN: Improve device ID handling and aclnnArange checks (#16752)

* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var

5 weeks ago
CUDA: add unused vars to mmvf and mmvq (#16807)
Aman Gupta [Tue, 28 Oct 2025 02:31:21 +0000 (10:31 +0800)]
CUDA: add unused vars to mmvf and mmvq (#16807)

5 weeks ago
sycl: add SSM_CONV operation support (#16800)
tamarPal [Tue, 28 Oct 2025 01:50:33 +0000 (03:50 +0200)]
sycl: add SSM_CONV operation support (#16800)

* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* SSM_CONV SYCL implementation: full CPU parity

- Numerical results match the CPU backend bit-for-bit
- Parallel SYCL kernel with efficient work distribution
- Handles all tensor strides and layouts correctly
- Assertions and validation for error handling
- All 14,490 backend operation tests pass

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <redacted>
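
For reference, SSM_CONV is a per-channel causal sliding-window (depthwise) 1-D convolution, as used in state-space models. A minimal Python sketch of the semantics (illustrative only; it ignores ggml's actual tensor layout, strides, and batching):

```python
def ssm_conv_ref(x, w):
    """Per-channel sliding-window 1-D convolution.

    x: list of channels, each of length seq_len + d_conv - 1
       (already left-padded with the previous conv state)
    w: list of per-channel kernels, each of length d_conv
    """
    d_conv = len(w[0])
    out = []
    for ch, wc in zip(x, w):
        # slide the window one step at a time and take the dot product
        row = [sum(ch[t + k] * wc[k] for k in range(d_conv))
               for t in range(len(ch) - d_conv + 1)]
        out.append(row)
    return out
```

The SYCL kernel parallelizes this loop nest over channels and output positions.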
5 weeks ago
chat: Add LFM2 tool handling (#16763)
Yuri Khrustalev [Mon, 27 Oct 2025 22:54:01 +0000 (18:54 -0400)]
chat: Add LFM2 tool handling (#16763)

* Add LFM2 tool handling

* fmt

* Apply suggestion from @ykhrustalev

5 weeks ago
mtmd : fix idefics3 preprocessing (#16806)
Xuan-Son Nguyen [Mon, 27 Oct 2025 22:12:16 +0000 (23:12 +0100)]
mtmd : fix idefics3 preprocessing (#16806)

* mtmd : fix idefics3 preprocessing

* disable granite test

* fix test for granite

5 weeks ago
llama : disable pipeline parallelism if compute buffer allocation fails (#16748)
Diego Devesa [Mon, 27 Oct 2025 20:51:28 +0000 (13:51 -0700)]
llama : disable pipeline parallelism if compute buffer allocation fails (#16748)

5 weeks ago
ggml : fix interpolate with align-corners and ne=1 (#16700)
Acly [Mon, 27 Oct 2025 20:50:22 +0000 (21:50 +0100)]
ggml : fix interpolate with align-corners and ne=1 (#16700)

* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1
* cpu, cuda, opencl returned correct result anyway due to clamp
* vulkan didn't clamp for align-corners so results were broken

* fix clang warning
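
The underlying issue: with align-corners, the source coordinate is `dst * (n_src - 1) / (n_dst - 1)`, which divides by zero when the output dimension is 1. A hedged Python sketch of the guarded mapping (illustrative, not the exact ggml code):

```python
def src_coord_align_corners(dst_i, n_src, n_dst):
    # align-corners maps output endpoints onto input endpoints; the
    # scale (n_src - 1) / (n_dst - 1) is undefined for n_dst == 1,
    # so a single output element simply samples coordinate 0
    if n_dst <= 1:
        return 0.0
    return dst_i * (n_src - 1) / (n_dst - 1)
```

Backends that clamp the computed coordinate happened to return a correct result anyway; Vulkan did not clamp, which is why its output was broken.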

5 weeks ago
HIP: fix AMDGPU_TARGETS, update documentation (#16803)
Johannes Gäßler [Mon, 27 Oct 2025 20:39:49 +0000 (21:39 +0100)]
HIP: fix AMDGPU_TARGETS, update documentation (#16803)

5 weeks ago
model : add LightOnOCR-1B model (#16764)
Xuan-Son Nguyen [Mon, 27 Oct 2025 15:02:58 +0000 (16:02 +0100)]
model : add LightOnOCR-1B model (#16764)

* model : add LightOnOCR-1B model

* add test

5 weeks ago
llama: fix leaked buffers for mmap + split files (#16765)
Johannes Gäßler [Mon, 27 Oct 2025 08:17:31 +0000 (09:17 +0100)]
llama: fix leaked buffers for mmap + split files (#16765)

5 weeks ago
test-backend-ops: print failed tests at the end (#16785)
Aman Gupta [Mon, 27 Oct 2025 01:25:10 +0000 (09:25 +0800)]
test-backend-ops: print failed tests at the end (#16785)

5 weeks ago
sycl: add ROLL operation support (#16665)
tamarPal [Mon, 27 Oct 2025 01:20:24 +0000 (03:20 +0200)]
sycl: add ROLL operation support (#16665)

* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <redacted>
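
ROLL cyclically shifts a tensor along each axis; the key detail in the message above is shift normalization, which maps any (possibly negative or oversized) shift into [0, n). A 1-D Python sketch (illustrative only):

```python
def roll1d(a, shift):
    """Cyclically shift a list right by `shift` positions (numpy.roll style)."""
    n = len(a)
    if n == 0:
        return []
    # double-mod normalization mirrors C semantics, where % can be negative
    s = ((shift % n) + n) % n
    return a[-s:] + a[:-s] if s else list(a)
```

The SYCL kernel applies this normalized shift independently on all four dimensions.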
5 weeks ago
sycl: add REPEAT_BACK operation support (#16734)
shani-f [Mon, 27 Oct 2025 01:19:50 +0000 (03:19 +0200)]
sycl: add REPEAT_BACK operation support (#16734)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
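
REPEAT_BACK is the gradient of REPEAT: contributions from every repeated tile are summed back onto the original shape. A 1-D Python sketch of that reduction (illustrative; the real operator handles all four tensor dimensions):

```python
def repeat_back_1d(grad, n_orig):
    # accumulate each repeated copy's gradient back onto its source element
    out = [0.0] * n_orig
    for i, g in enumerate(grad):
        out[i % n_orig] += g
    return out
```

E.g. a gradient of `[1, 2, 3, 4]` for an original of length 2 reduces to `[4, 6]`.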
5 weeks ago
CUDA: support for weight clamp in top-k norm (#16702)
Aman Gupta [Mon, 27 Oct 2025 01:06:16 +0000 (09:06 +0800)]
CUDA: support for weight clamp in top-k norm (#16702)

5 weeks ago
ggml-alloc : make gallocr prefer chunks that allow memory reuse (#16788)
Acly [Sun, 26 Oct 2025 22:19:03 +0000 (23:19 +0100)]
ggml-alloc : make gallocr prefer chunks that allow memory reuse (#16788)

5 weeks ago
cuda : use fast copy when src and dst are of different type and contiguous (#16789)
Sigbjørn Skjæret [Sun, 26 Oct 2025 20:31:41 +0000 (21:31 +0100)]
cuda : use fast copy when src and dst are of different type and contiguous (#16789)

* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape

5 weeks ago
ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (#16744)
leejet [Sun, 26 Oct 2025 18:13:31 +0000 (02:13 +0800)]
ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (#16744)

* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <redacted>
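
The general pattern behind a fix like this: a 1-D CUDA launch sizes its grid by ceil division over the element count, and the kernel bounds-checks its thread index; for very large batches the grid computation must cover every pointer rather than assuming it fits in one block. A small Python sketch of the ceil-division part (illustrative, not the actual kernel code):

```python
def launch_blocks(n_ptrs, block_size=256):
    # ceil(n_ptrs / block_size): number of 1-D blocks needed so every
    # batched pointer gets a thread; the kernel must still guard its index
    return (n_ptrs + block_size - 1) // block_size
```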
5 weeks ago
convert : enable expert group selection for all models with it (#16691)
Sigbjørn Skjæret [Sun, 26 Oct 2025 16:21:23 +0000 (17:21 +0100)]
convert : enable expert group selection for all models with it (#16691)

5 weeks ago
graph : add clamping to ffn_moe_weights_sum to avoid div-by-zero (#16655)
Sigbjørn Skjæret [Sun, 26 Oct 2025 16:20:32 +0000 (17:20 +0100)]
graph : add clamping to ffn_moe_weights_sum to avoid div-by-zero (#16655)

* add missing norm topk bias

* use clamping instead, update number and add comment
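
The clamping described above can be sketched as: bound the denominator before normalizing expert weights, so an all-zero row cannot divide by zero (illustrative Python; the epsilon value is hypothetical, not the one used in llama.cpp):

```python
def normalize_moe_weights(weights, eps=1e-9):
    # clamp the sum so an all-zero selection does not divide by zero
    s = max(sum(weights), eps)
    return [w / s for w in weights]
```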

5 weeks ago
model : set res->t_embd in SmallThinker models (#16782)
Sigbjørn Skjæret [Sun, 26 Oct 2025 15:08:52 +0000 (16:08 +0100)]
model : set res->t_embd in SmallThinker models (#16782)

5 weeks ago
docs : add Jamba to Text-only models list (#16778)
amirai21 [Sun, 26 Oct 2025 12:01:20 +0000 (14:01 +0200)]
docs : add Jamba to Text-only models list (#16778)

5 weeks ago
CUDA: General GEMV fusion (#16715)
Aman Gupta [Sun, 26 Oct 2025 11:28:04 +0000 (19:28 +0800)]
CUDA: General GEMV fusion (#16715)