git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
3 weeks agovulkan: Use spec constants for conv2d s/d/p and kernel W/H (#16978)
Jeff Bolz [Sat, 8 Nov 2025 19:24:29 +0000 (13:24 -0600)]
vulkan: Use spec constants for conv2d s/d/p and kernel W/H (#16978)

* vulkan: Use spec constants for conv2d s/d/p and kernel W/H

Also add some additional unroll hints, which seems to help.

* lock around map lookup

3 weeks agoserver: fix correct time_ms calculation in prompt_progress (#17093)
Aidan [Sat, 8 Nov 2025 13:12:11 +0000 (13:12 +0000)]
server: fix correct time_ms calculation in prompt_progress (#17093)

* fix: correct time_ms calculation in send_partial_response

The time_ms field was incorrectly calculated: the division was happening
before the subtraction, leading to incorrect values (see the sketch below).

Before: (ggml_time_us() - slot.t_start_process_prompt / 1000)
After:  (ggml_time_us() - slot.t_start_process_prompt) / 1000
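
The precedence issue can be reproduced in a few lines of standalone C++ (a minimal
sketch with made-up timestamp values standing in for ggml_time_us() and
slot.t_start_process_prompt, not the actual server code):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // illustrative timestamps in microseconds
        int64_t t_now_us   = 2000000;
        int64_t t_start_us = 1500000;

        int64_t wrong_ms = t_now_us - t_start_us / 1000;   // division binds first: 2000000 - 1500
        int64_t right_ms = (t_now_us - t_start_us) / 1000; // intended result: 500 ms

        printf("wrong=%lld right=%lld\n", (long long) wrong_ms, (long long) right_ms);
        return 0;
    }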

* docs : document time_ms field in prompt_progress

3 weeks agoRevert "CUDA: add expert reduce kernel (#16857)" (#17100)
Aman Gupta [Sat, 8 Nov 2025 13:05:19 +0000 (21:05 +0800)]
Revert "CUDA: add expert reduce kernel (#16857)" (#17100)

3 weeks agoCUDA: skip fusion for repeating adds in bias (#17080)
Aman Gupta [Sat, 8 Nov 2025 08:58:05 +0000 (16:58 +0800)]
CUDA: skip fusion for repeating adds in bias (#17080)

3 weeks agovulkan: Increase BK to 32; use BK/4 for non-CM mul_mm.comp (#16636)
SavicStefan [Sat, 8 Nov 2025 08:28:22 +0000 (09:28 +0100)]
vulkan: Increase BK to 32; use BK/4 for non-CM mul_mm.comp (#16636)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>
3 weeks agoggml: disable vxe for cross-compilation by default (#16966)
Aleksei Nikiforov [Sat, 8 Nov 2025 08:00:20 +0000 (09:00 +0100)]
ggml: disable vxe for cross-compilation by default (#16966)

Otherwise compilation fails because -mvx and -mzvector are enabled
without setting the corresponding -march options.

3 weeks agovulkan: fuse rms_norm + mul + rope (+ view + set_rows) (#16977)
Jeff Bolz [Sat, 8 Nov 2025 07:52:15 +0000 (01:52 -0600)]
vulkan: fuse rms_norm + mul + rope (+ view + set_rows) (#16977)

This change combines the rms_norm+mul and rope+view+set_rows fusions to
allow fusing the whole sequence together. This comes up in Qwen3, Bailing,
and some other models.

3 weeks agovulkan: Fix test-thread-safety crashes (#17024)
Jeff Bolz [Sat, 8 Nov 2025 07:39:45 +0000 (01:39 -0600)]
vulkan: Fix test-thread-safety crashes (#17024)

The std::map pipeline_flash_attn_f32_f16 could be searched and inserted into at the
same time, so the lookup needs to hold the lock (see the sketch below). To be safe,
hold the lock for all of ggml_vk_load_shaders.
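
A minimal sketch of the locking pattern (illustrative names, not the actual Vulkan
backend code): a std::map that can be searched and inserted into concurrently must
be guarded by the same mutex for both operations.

    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    struct pipeline {}; // placeholder for the real pipeline object

    static std::mutex                                       cache_mutex;
    static std::map<std::string, std::shared_ptr<pipeline>> pipeline_cache;

    // hold the lock across the lookup *and* the insert, so no other thread
    // can mutate the map while it is being searched
    static std::shared_ptr<pipeline> get_or_create(const std::string & key) {
        std::lock_guard<std::mutex> lock(cache_mutex);
        auto it = pipeline_cache.find(key);
        if (it == pipeline_cache.end()) {
            it = pipeline_cache.emplace(key, std::make_shared<pipeline>()).first;
        }
        return it->second;
    }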

3 weeks agoCUDA: fix MMQ stream-k fixup ne1 indices (#17089)
Johannes Gäßler [Sat, 8 Nov 2025 07:26:18 +0000 (08:26 +0100)]
CUDA: fix MMQ stream-k fixup ne1 indices (#17089)

3 weeks agoggml webgpu: faster matrix multiplication/matrix-vector multiplication (#17031)
Reese Levine [Sat, 8 Nov 2025 03:27:20 +0000 (19:27 -0800)]
ggml webgpu: faster matrix multiplication/matrix-vector multiplication (#17031)

* Faster tensors (#8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings

3 weeks agoCUDA: properly handle nb00=nb02 case for cpy (#17081)
bssrdf [Fri, 7 Nov 2025 22:41:58 +0000 (17:41 -0500)]
CUDA: properly handle nb00=nb02 case for cpy (#17081)

3 weeks agovulkan : refactor buffer handling in vk_op_f32 (#16840)
Acly [Fri, 7 Nov 2025 20:08:50 +0000 (21:08 +0100)]
vulkan : refactor buffer handling in vk_op_f32 (#16840)

* vulkan : refactor/simplify buffer handling in vk_op_* functions

* Combine UMA handling into ggml_vk_tensor_subbuffer

3 weeks agoCUDA: fix should_use_mmvf for ne11 == 1 (#17085)
Johannes Gäßler [Fri, 7 Nov 2025 19:53:14 +0000 (20:53 +0100)]
CUDA: fix should_use_mmvf for ne11 == 1 (#17085)

* CUDA: fix should_use_mmvf for ne11 == 1

* Apply suggestion from @am17an

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
3 weeks agobench : cache the llama_context state at computed depth (#16944)
Georgi Gerganov [Fri, 7 Nov 2025 19:23:11 +0000 (21:23 +0200)]
bench : cache the llama_context state at computed depth (#16944)

* bench : cache llama_context state at depth

* cont : handle failures to restore the old state

* cont : print information when the state is being reused
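
A rough sketch of the state-caching approach described above, using the public
llama.h state API (simplified; error handling and the actual llama-bench plumbing
are omitted):

    #include "llama.h"

    #include <cstdint>
    #include <vector>

    // snapshot the context state once the prompt at the desired depth has been
    // computed, so later runs can start from it instead of reprocessing
    static std::vector<uint8_t> save_state(llama_context * ctx) {
        std::vector<uint8_t> buf(llama_state_get_size(ctx));
        llama_state_get_data(ctx, buf.data(), buf.size());
        return buf;
    }

    // returns false if the restore failed, so the caller can fall back to
    // recomputing the prompt from scratch
    static bool restore_state(llama_context * ctx, const std::vector<uint8_t> & buf) {
        return llama_state_set_data(ctx, buf.data(), buf.size()) > 0;
    }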

3 weeks agohparams : add n_embd_inp() to support extended embed (#16928)
Sigbjørn Skjæret [Fri, 7 Nov 2025 18:27:58 +0000 (19:27 +0100)]
hparams : add n_embd_inp() to support extended embed (#16928)

* add n_embd_full to support extended embed

* don't change output

* rename to n_embd_inp

* restore n_embd where applicable

3 weeks agokv-cache : pad the cache size to 256 for performance (#17046)
Georgi Gerganov [Fri, 7 Nov 2025 18:03:25 +0000 (20:03 +0200)]
kv-cache : pad the cache size to 256 for performance (#17046)

* kv-cache : pad the size of the small SWA cache for performance

* context : pad the total context to 256

* cont : future-proof the swa pad

* server : adjust test params to new logic

3 weeks agoRevert "ggml-cpu: detect correct cpu flags for arm64 (#16229) (#16239)" (#17084)
Adrien Gallouët [Fri, 7 Nov 2025 16:34:05 +0000 (17:34 +0100)]
Revert "ggml-cpu: detect correct cpu flags for arm64 (#16229) (#16239)" (#17084)

This reverts commit 7c23f3f0d4b9f5d6ea140756eb694b562d5acebb.

3 weeks agoggml-cpu: detect correct cpu flags for arm64 (#16229) (#16239)
iron [Fri, 7 Nov 2025 16:18:14 +0000 (00:18 +0800)]
ggml-cpu: detect correct cpu flags for arm64 (#16229) (#16239)

When using GCC 9 and GCC 12 on the arm64 platform of Ubuntu 20.04,
the command "gcc -mcpu=native -E -v -" fails to detect the correct CPU flags,
which results in compilation failures for certain extended instructions,
but the correct CPU flags can be obtained by using gcc -march.

Signed-off-by: lizhenneng <redacted>
Co-authored-by: lizhenneng <redacted>
3 weeks agoserver : print the samplers chain for each request (#17070)
Georgi Gerganov [Fri, 7 Nov 2025 10:24:47 +0000 (12:24 +0200)]
server : print the samplers chain for each request (#17070)

3 weeks agocommon: move download functions to download.(cpp|h) (#17059)
Xuan-Son Nguyen [Fri, 7 Nov 2025 10:23:34 +0000 (11:23 +0100)]
common: move download functions to download.(cpp|h) (#17059)

* common: move download functions to download.(cpp|h)

* rm unused includes

* minor cleanup

---------

Co-authored-by: Georgi Gerganov <redacted>
3 weeks agoggml-cpu : optimize RVV q2_k and q3_k kernels (#16887)
xctan [Thu, 6 Nov 2025 16:12:45 +0000 (00:12 +0800)]
ggml-cpu : optimize RVV q2_k and q3_k kernels (#16887)

3 weeks agoCUDA: fix crash on uneven context without FA (#16988)
Johannes Gäßler [Thu, 6 Nov 2025 13:05:47 +0000 (14:05 +0100)]
CUDA: fix crash on uneven context without FA (#16988)

3 weeks agometal : initial Metal4 tensor API support (#16634)
Georgi Gerganov [Thu, 6 Nov 2025 12:45:10 +0000 (14:45 +0200)]
metal : initial Metal4 tensor API support (#16634)

* metal : rework mat-mat multiplication

* metal : initial Metal4 support

* cont

* metal : detect tensor support

* cont : better ifdefs

* metal : support tensors in mul_mm_id

* metal : add env for disabling tensor API

* tests : restore

* metal : remove unused constants

* metal : fix check for bfloat tensor support

* cont : handle API incompatibilities

* cont : handle even more incompatibilities

* metal : use tensor API only on M5 and later

3 weeks agoserver : disable checkpoints with mtmd (#17045)
Georgi Gerganov [Thu, 6 Nov 2025 10:09:29 +0000 (12:09 +0200)]
server : disable checkpoints with mtmd (#17045)

3 weeks agoclip: implement minicpm-v sinusoidal embd using GGML (#17036)
Xuan-Son Nguyen [Thu, 6 Nov 2025 10:02:54 +0000 (11:02 +0100)]
clip: implement minicpm-v sinusoidal embd using GGML (#17036)

* clip: implement minicpm-v sinusoidal embd using GGML

* fix repeat op

3 weeks agosycl: add CONCAT operator support (#16047)
YehuditE [Thu, 6 Nov 2025 10:02:33 +0000 (12:02 +0200)]
sycl: add CONCAT operator support (#16047)

* sycl: add CONCAT operator support

* cleanup: remove stray lines added by mistake

* fix: code format issues in concat.cpp and tests/test-backend-ops.cpp

* chore: fix editorconfig violations

* cleanup: drop unnecessary i16 type support

* docs: update sycl-csv and regenerate ops.md

* update docs/ops.md

* fix: adapt to upstream master changes after rebase

* fix: remove empty files

* fix: drop whitespace

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks agodocs: explain CUDA 11 compilation [no ci] (#16824)
Johannes Gäßler [Thu, 6 Nov 2025 07:14:35 +0000 (08:14 +0100)]
docs: explain CUDA 11 compilation [no ci] (#16824)

3 weeks agoggml-hexagon: graceful fallback for older socs where rpcmem_alloc2 and FASTRPC_GET_URI is unsupported (#16987)
l3utterfly [Thu, 6 Nov 2025 05:46:38 +0000 (13:46 +0800)]
ggml-hexagon: graceful fallback for older socs where rpcmem_alloc2 and FASTRPC_GET_URI is unsupported (#16987)

* support older socs where FASTRPC_GET_URI is unsupported

* added graceful fallback when FASTRPC_GET_URI call fails

* use weak symbols instead of loading libcdsprpc.so dynamically

* Add weak pragma for rpcmem_alloc2

* Remove weak declaration for rpcmem_alloc2 in ggml-hexagon.cpp

Removed weak declaration for rpcmem_alloc2.

* Enforce ndev to 1 for archs below v75

Force ndev to 1 for SoC architectures lower than v75.

3 weeks agoimprove CUDA cpy memory bandwidth when copying transposed tensor (#16841)
bssrdf [Wed, 5 Nov 2025 20:55:04 +0000 (15:55 -0500)]
improve CUDA cpy memory bandwidth when copying transposed tensor  (#16841)

* WIP

* added a cpy kernel specific to transposed tensors which uses smem to avoid uncoalesced access; test cases also added showing improved memory bandwidth

* added BF16 support

* more strict check to make sure src0 is a transpose

* reformulated to handle more complicated transpose cases

* bring back 2D transpose for higher performance

* allow build on windows

* transpose copy more shapes

* minor tweak

* final clean up

* restore some test cases

* keep only the kernel for the true transposed case; updated with review suggestions

* make CI happy

* remove headers not needed

* reduced bank conflicts for fp16 and bf16

* add missing const*

* now bank conflicts free

* use padding instead of swizzling

---------

Co-authored-by: bssrdf <redacted>
3 weeks agovulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (#16919)
Jeff Bolz [Wed, 5 Nov 2025 18:51:03 +0000 (12:51 -0600)]
vulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (#16919)

3 weeks agoexamples(gguf): GGUF example outputs (#17025)
Gabe Goodhart [Wed, 5 Nov 2025 17:58:16 +0000 (10:58 -0700)]
examples(gguf): GGUF example outputs (#17025)

* feat(llama-gguf): Print out the tensor type in llama-gguf r

Branch: Mamba2Perf

Signed-off-by: Gabe Goodhart <redacted>
* feat(off-topic): print the number of elements in tensors with llama-gguf

Branch: Mamba2SSD

Signed-off-by: Gabe Goodhart <redacted>
* style: valign

Branch: GGUFToolOutputs

Signed-off-by: Gabe Goodhart <redacted>
* Update examples/gguf/gguf.cpp

---------

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
3 weeks agomtmd: allow QwenVL to process larger image by default (#17020)
Xuan-Son Nguyen [Wed, 5 Nov 2025 13:26:49 +0000 (14:26 +0100)]
mtmd: allow QwenVL to process larger image by default (#17020)

3 weeks agoserver : do not default to multiple slots with speculative decoding (#17017)
Georgi Gerganov [Wed, 5 Nov 2025 12:32:55 +0000 (14:32 +0200)]
server : do not default to multiple slots with speculative decoding (#17017)

* server : do not default to multiple slots with speculative decoding

* cont : fix

3 weeks agomtmd: improve struct initialization (#16981)
Xuan-Son Nguyen [Wed, 5 Nov 2025 10:26:37 +0000 (11:26 +0100)]
mtmd: improve struct initialization (#16981)

3 weeks agodocs: Clarify the endpoint that webui uses (#17001)
손희준 [Wed, 5 Nov 2025 10:20:28 +0000 (19:20 +0900)]
docs: Clarify the endpoint that webui uses (#17001)

3 weeks agomodel : add openPangu-Embedded (#16941)
Li Pengzhan [Wed, 5 Nov 2025 09:28:58 +0000 (17:28 +0800)]
model : add openPangu-Embedded (#16941)

* Model: add openPangu-Embedded

* fixed according to reviewer's comments

* fixed the chat template check condition

* Apply suggestions from code review

change the chat-template check condition and some formatting issue

Co-authored-by: Sigbjørn Skjæret <redacted>
* whitespace cleanup

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 weeks agoggml webgpu: minor set rows optimization (#16810)
Reese Levine [Wed, 5 Nov 2025 09:27:42 +0000 (01:27 -0800)]
ggml webgpu: minor set rows optimization (#16810)

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Minor set_rows optimization (#4)

* updated optimization, fixed errors

* non-vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Comment on dawn toggles

* Remove some comments

* Implement overlap binary operators

* Revert "Implement overlap binary operators"

This reverts commit ed710b36f51ab3f53fa13db15c1685dc8678a32a.

* Disable support for non-contiguous binary_op tensors and leave note for future support

---------

Co-authored-by: neha-ha <redacted>
Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Neha Abbas <redacted>
3 weeks agosync : ggml
Georgi Gerganov [Tue, 4 Nov 2025 18:44:18 +0000 (20:44 +0200)]
sync : ggml

3 weeks agoggml : fix conv2d_dw SVE path (ggml/1380)
Georgi Gerganov [Tue, 4 Nov 2025 18:40:52 +0000 (20:40 +0200)]
ggml : fix conv2d_dw SVE path (ggml/1380)

* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length

The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.

Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.
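
For illustration, a self-contained loop written against the runtime vector length
(a hypothetical helper, not the ggml kernel itself): svcntw() reports the number of
32-bit lanes on the current hardware, so the same code is correct for any SVE width.

    // build with an SVE-enabled target, e.g. -march=armv8-a+sve
    #include <arm_sve.h>

    #include <cstdint>

    // scale a float buffer using the runtime SVE vector length instead of a
    // hardcoded element count such as GGML_F32_EPR
    void scale_f32_sve(float * dst, const float * src, float s, int64_t n) {
        const int64_t step = svcntw(); // 32-bit lanes per vector, known only at runtime
        for (int64_t i = 0; i < n; i += step) {
            svbool_t    pg = svwhilelt_b32(i, n); // predicate also covers the tail
            svfloat32_t v  = svld1_f32(pg, src + i);
            v = svmul_n_f32_x(pg, v, s);
            svst1_f32(pg, dst + i, v);
        }
    }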

Co-authored-by: ggerganov <redacted>
* ci : reduce sam score threshold

* ci : update bbox checks for sam test

---------

Co-authored-by: copilot-swe-agent[bot] <redacted>
Co-authored-by: ggerganov <redacted>
3 weeks agoCUDA: update ops.md (#17005)
mnehete32 [Wed, 5 Nov 2025 03:01:15 +0000 (08:31 +0530)]
CUDA: update ops.md (#17005)

3 weeks agoopencl: update doc (#17011)
lhez [Wed, 5 Nov 2025 00:02:36 +0000 (16:02 -0800)]
opencl: update doc (#17011)

* opencl: update docs

* opencl: update docs

* opencl: fix link

* opencl: update doc

3 weeks agorefactor: replace sprintf with snprintf for safer string handling in dump functions (#16913)
nullname [Tue, 4 Nov 2025 20:25:39 +0000 (04:25 +0800)]
refactor: replace sprintf with snprintf for safer string handling in dump functions (#16913)

3 weeks agovulkan: remove the need for the dryrun (#16826)
Jeff Bolz [Tue, 4 Nov 2025 19:28:17 +0000 (13:28 -0600)]
vulkan: remove the need for the dryrun (#16826)

* vulkan: remove the need for the dryrun

Allocate pipelines and descriptor sets when requested.

Reallocate the prealloc buffers when needed, and flush any pending work
before reallocating.

For rms_partials and total_mul_mat_bytes, use the sizes computed the last time
the graph was executed.

* remove dryrun parameters

3 weeks agoserver : do context shift only while generating (#17000)
Georgi Gerganov [Tue, 4 Nov 2025 17:21:36 +0000 (19:21 +0200)]
server : do context shift only while generating (#17000)

3 weeks agoreadme : update hot topics (#17002)
Georgi Gerganov [Tue, 4 Nov 2025 15:21:31 +0000 (17:21 +0200)]
readme : update hot topics (#17002)

3 weeks agoggml-cpu : bicubic interpolation (#16891)
Acly [Tue, 4 Nov 2025 12:12:20 +0000 (13:12 +0100)]
ggml-cpu : bicubic interpolation (#16891)

4 weeks agoci : apply model label to models (#16994)
Sigbjørn Skjæret [Tue, 4 Nov 2025 11:29:39 +0000 (12:29 +0100)]
ci : apply model label to models (#16994)

4 weeks agochore : fix models indent after refactor (#16992)
Sigbjørn Skjæret [Tue, 4 Nov 2025 11:29:15 +0000 (12:29 +0100)]
chore : fix models indent after refactor (#16992)

4 weeks agoFix garbled output with REPACK at high thread counts (#16956)
Noah [Tue, 4 Nov 2025 05:04:59 +0000 (05:04 +0000)]
Fix garbled output with REPACK at high thread counts (#16956)

* Fix garbled output with REPACK at high thread counts

Fixed a race condition in the REPACK matrix multiplication code that caused garbled
output when using 26+ threads (model-dependent threshold).

The issue occurred because with high thread counts the code forced the chunk count to
equal the thread count, creating many small chunks. After aligning these chunks to
NB_COLS boundaries, adjacent chunks could overlap, causing data corruption and race
conditions.

The fix enforces minimum chunk sizes based on NB_COLS and caps the maximum chunk count
to prevent creating too many tiny chunks, ensuring proper alignment without overlaps
(see the sketch below).
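
A simplified sketch of the chunk-sizing rule (the constant and formula are
illustrative, not the exact repack.cpp logic): rounding every chunk up to a multiple
of the interleave width means aligned chunk boundaries can never overlap between
threads.

    #include <algorithm>
    #include <cstdint>

    constexpr int64_t NB_COLS = 8; // interleave width, assumed here for illustration

    // split nrows across up to nth threads; every chunk is a multiple of NB_COLS
    // and at least NB_COLS wide, so adjacent chunks cannot overlap after alignment
    static int64_t chunk_size(int64_t nrows, int64_t nth) {
        int64_t per_chunk = (nrows + nth - 1) / nth;                 // naive split
        per_chunk = ((per_chunk + NB_COLS - 1) / NB_COLS) * NB_COLS; // round up to NB_COLS
        return std::max<int64_t>(per_chunk, NB_COLS);                // enforce a minimum size
    }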

* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks agoCUDA: avoid mul + bias fusion when doing fusion (#16935)
Aman Gupta [Tue, 4 Nov 2025 02:53:48 +0000 (10:53 +0800)]
CUDA: avoid mul + bias fusion when doing fusion (#16935)

4 weeks agoopencl: support imrope (#16914)
lhez [Mon, 3 Nov 2025 19:47:57 +0000 (11:47 -0800)]
opencl: support imrope (#16914)

* opencl: support imrope

* opencl: fix whitespace

4 weeks agofix: Viewing multiple PDF attachments (#16974)
Aleksander Grygier [Mon, 3 Nov 2025 17:53:26 +0000 (18:53 +0100)]
fix: Viewing multiple PDF attachments (#16974)

4 weeks agomodel-conversion : pass config to from_pretrained (#16963)
Daniel Bevenius [Mon, 3 Nov 2025 17:01:59 +0000 (18:01 +0100)]
model-conversion : pass config to from_pretrained (#16963)

This commit modifies the script `run-org-model.py` to ensure that the
model configuration is explicitly passed to the `from_pretrained` method
when loading the model. It also removes a duplicate configuration
loading step, which was a mistake.

The motivation for this change is that it enables the config object to be
modified and then passed to the model loading function, which can be
useful when testing new models.

4 weeks agoserver : add props.model_alias (#16943)
Georgi Gerganov [Mon, 3 Nov 2025 13:38:23 +0000 (15:38 +0200)]
server : add props.model_alias (#16943)

* server : add props.model_alias

* webui : npm run format

4 weeks agoggml: CUDA: add head size 72 for flash-attn (#16962)
theo77186 [Mon, 3 Nov 2025 13:29:11 +0000 (14:29 +0100)]
ggml: CUDA: add head size 72 for flash-attn (#16962)

4 weeks agomtmd: add --image-min/max-tokens (#16921)
Xuan-Son Nguyen [Mon, 3 Nov 2025 10:11:18 +0000 (11:11 +0100)]
mtmd: add --image-min/max-tokens (#16921)

4 weeks agomtmd: pad mask for qwen2.5vl (#16954)
Xuan-Son Nguyen [Mon, 3 Nov 2025 09:25:55 +0000 (10:25 +0100)]
mtmd: pad mask for qwen2.5vl (#16954)

* mtmd: pad mask for qwen2.5vl

* improve

4 weeks agoggml : LoongArch fixes (#16958)
Jinyang He [Mon, 3 Nov 2025 06:40:02 +0000 (14:40 +0800)]
ggml : LoongArch fixes (#16958)

* Fix test-quantize-fns f16 and q4_0 failed when use LSX

* Fix LoongArch set float intrinsic when use LSX/LASX

4 weeks agosync: minja (glm 4.6 & minmax m2 templates) (#16949)
Olivier Chafik [Mon, 3 Nov 2025 05:33:56 +0000 (05:33 +0000)]
sync: minja (glm 4.6 & minmax m2 templates) (#16949)

* sync: minja

* Sync https://github.com/ochafik/minja/pull/7 (MinMax M2)

4 weeks agoSYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) (#16869)
shani-f [Mon, 3 Nov 2025 01:35:33 +0000 (03:35 +0200)]
SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) (#16869)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* SYCL: optimize repeat_back kernel

* Remove Hebrew comment from repeat_back.cpp

* Remove comments for code clarity

Removed comments to clean up the code.

* Fix formatting in ggml-sycl.cpp

* Formatted lambda according to legacy style. No logic changes

* Remove blank line in repeat_back.cpp

Remove unnecessary blank line before assigning acc to dst_dd.

4 weeks agofeat(webui): improve LaTeX rendering with currency detection (#16508)
Sascha Rogmann [Sun, 2 Nov 2025 23:41:08 +0000 (00:41 +0100)]
feat(webui): improve LaTeX rendering with currency detection (#16508)

* webui : Revised LaTeX formula recognition

* webui : Further examples containing amounts

* webui : vitest for maskInlineLaTeX

* webui: Moved preprocessLaTeX to lib/utils

* webui: LaTeX in table-cells

* chore: update webui build output (use theirs)

* webui: backslash in LaTeX-preprocessing

* chore: update webui build output

* webui: look-behind backslash-check

* chore: update webui build output

* Apply suggestions from code review

Code maintenance (variable names, code formatting, string handling)

Co-authored-by: Aleksander Grygier <redacted>
* webui: Moved constants to lib/constants.

* webui: package woff2 inside base64 data

* webui: LaTeX-line-break in display formula

* chore: update webui build output

* webui: Bugfix (font embedding)

* webui: Bugfix (font embedding)

* webui: vite embeds assets

* webui: don't suppress 404 (fonts)

* refactor: KaTeX integration with SCSS

Moves KaTeX styling to SCSS for better customization and font embedding.

This change includes:
- Adding `sass` as a dev dependency.
- Introducing a custom SCSS file to override KaTeX variables and disable TTF/WOFF fonts, relying solely on WOFF2 for embedding.
- Adjusting the Vite configuration to resolve `katex-fonts` alias and inject SCSS variables.

* fix: LaTeX processing within blockquotes

* webui: update webui build output

---------

Co-authored-by: Aleksander Grygier <redacted>
4 weeks agotest-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (#16936)
Shagun Bera [Sun, 2 Nov 2025 23:10:30 +0000 (04:40 +0530)]
test-backend-ops : fix segfault in moe-expert-reduce test in support mode and coverage (#16936)

* tests: fix segfault in moe-expert-reduce test in support mode and --show-coverage

* tests: init gf and filter out fusion tests for support mode

* tests: filter out fusion cases before calling eval_support

* tests: filter out fusion cases from show_test_coverage as well, fix lint

4 weeks agoci : disable failing riscv cross build (#16952)
Sigbjørn Skjæret [Sun, 2 Nov 2025 22:11:21 +0000 (23:11 +0100)]
ci : disable failing riscv cross build (#16952)

4 weeks agomodel: add Janus Pro for image understanding (#16906)
Zhiyong Wang [Sun, 2 Nov 2025 21:08:04 +0000 (13:08 -0800)]
model: add Janus Pro for image understanding (#16906)

* Add support for Janus Pro

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Address reviewer suggestions

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add JANUS_PRO constant

* Update clip model handling

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update tools/mtmd/clip.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Refactor JANUS_PRO handling in clip.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Update tools/mtmd/clip.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* em whitespace

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
4 weeks agoclip : use FA (#16837)
Georgi Gerganov [Sun, 2 Nov 2025 20:21:48 +0000 (22:21 +0200)]
clip : use FA (#16837)

* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <redacted>
4 weeks agoserver : support unified cache across slots (#16736)
Georgi Gerganov [Sun, 2 Nov 2025 16:14:04 +0000 (18:14 +0200)]
server : support unified cache across slots (#16736)

* server : support unified context across slots

* cont : fix speculative decoding initialization

* context : fix n_ctx_per_seq computation

* server : purge slots one by one

* tests : add unified cache server tests

* llama : update per-seq context computation

* test-thread-safety : handle tiny training context of the input model

* server : fix server_tokens clear()

* server : use 4 slots + unified KV by default

* llama : add note about context size queries

* cont : update todos [no ci]

* context : do not cap the size of the context

* tests : adjust parameters to be CI friendlier

* context : add warning

4 weeks agocommon : move gpt-oss reasoning processing to init params (#16937)
Aldehir Rojas [Sun, 2 Nov 2025 14:56:28 +0000 (08:56 -0600)]
common : move gpt-oss reasoning processing to init params (#16937)

4 weeks agodocs: remove llama_sampler_accept reference in sampling sample usage (#16920)
Adrian Lundberg [Sun, 2 Nov 2025 09:28:37 +0000 (10:28 +0100)]
docs: remove llama_sampler_accept reference in sampling sample usage (#16920)

commit 5fb5e24811cb01d48b482c15a974bfbd9f433e1d (llama : minor
sampling refactor (2) (#9386)) moved the llama_sampler_accept call
into llama_sampler_sample, but the sample usage shown for sampling in
llama.h was not updated accordingly (see the sketch below).
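
The updated usage is roughly the following (a minimal sketch against llama.h;
llama_sampler_sample() now accepts the sampled token internally, so no separate
llama_sampler_accept() call is needed):

    #include "llama.h"

    // sample the next token from the last set of logits (idx = -1);
    // no explicit llama_sampler_accept() call is required anymore
    static llama_token sample_next(llama_sampler * smpl, llama_context * ctx) {
        return llama_sampler_sample(smpl, ctx, -1);
    }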

4 weeks agoCUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (#16917)
mnehete32 [Sun, 2 Nov 2025 03:12:57 +0000 (08:42 +0530)]
CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (#16917)

4 weeks agodevops: fix failing s390x docker build (#16918)
Aaron Teo [Sun, 2 Nov 2025 00:48:46 +0000 (08:48 +0800)]
devops: fix failing s390x docker build (#16918)

4 weeks agoggml: add s390x cpu-feats (#16774)
Aaron Teo [Sun, 2 Nov 2025 00:48:23 +0000 (08:48 +0800)]
ggml: add s390x cpu-feats (#16774)

4 weeks agoscripts : add script to bench models (#16894)
Georgi Gerganov [Sat, 1 Nov 2025 22:15:31 +0000 (00:15 +0200)]
scripts : add script to bench models (#16894)

4 weeks agowebui: auto-refresh /props on inference start to resync model metadata (#16784)
Pascal [Sat, 1 Nov 2025 18:49:51 +0000 (19:49 +0100)]
webui: auto-refresh /props on inference start to resync model metadata (#16784)

* webui: auto-refresh /props on inference start to resync model metadata

- Add no-cache headers to /props and /slots
- Throttle slot checks to 30s
- Prevent concurrent fetches with promise guard
- Trigger refresh from chat streaming for legacy and ModelSelector
- Show dynamic serverWarning when using cached data

* fix: restore proper legacy behavior in webui by using unified /props refresh

Updated assistant message bubbles to show each message's stored model when available,
falling back to the current server model only when the per-message value is missing

When the model selector is disabled, the webui now fetches /props and prioritizes that
model name over chunk metadata, then persists it with the streamed message so legacy
mode properly reflects the backend configuration

* fix: detect first valid SSE chunk and refresh server props once

* fix: removed the slots availability throttle constant and state

* webui: purge ai-generated cruft

* chore: update webui static build

4 weeks agowebui: add HTML/JS preview support to MarkdownContent with sandboxed iframe (#16757)
Pascal [Sat, 1 Nov 2025 16:14:54 +0000 (17:14 +0100)]
webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe (#16757)

* webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe dialog

Extended MarkdownContent to flag previewable code languages,
add a preview button alongside copy controls, manage preview
dialog state, and share styling for the new button group

Introduced CodePreviewDialog.svelte, a sandboxed iframe modal
for rendering HTML/JS previews with consistent dialog controls

* webui: fullscreen HTML preview dialog using bits-ui

* Update tools/server/webui/src/lib/components/app/misc/CodePreviewDialog.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/misc/MarkdownContent.svelte

Co-authored-by: Aleksander Grygier <redacted>
* webui: pedantic style tweak for CodePreviewDialog close button

* webui: remove overengineered preview language logic

* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <redacted>
4 weeks agovendor : update cpp-httplib to 0.27.0 (#16846)
Adrien Gallouët [Sat, 1 Nov 2025 15:52:17 +0000 (16:52 +0100)]
vendor : update cpp-httplib to 0.27.0 (#16846)

Signed-off-by: Adrien Gallouët <redacted>
4 weeks agomtmd: refactor preprocessing + support max/min pixels (#16878)
Xuan-Son Nguyen [Sat, 1 Nov 2025 14:51:36 +0000 (15:51 +0100)]
mtmd: refactor preprocessing + support max/min pixels (#16878)

* mtmd: refactor preprocessing + support max/min pixels

* fix mlp type

* implement min/max pixels

* improve hparams

* better image preproc for qwen

* fix

* fix out of bound composite

* fix (2)

* fix token calculation

* get_merge_kernel_size()

* fix llama4 and lfm2

* gonna fix them all

* use simple resize for qwen

* qwen: increase min tokens

* no resize if dst size == src size

* restore to initial min/max tokens value for qwen

4 weeks agoAdd a setting to display message generation statistics (#16901)
Aleksander Grygier [Sat, 1 Nov 2025 14:35:57 +0000 (15:35 +0100)]
Add a setting to display message generation statistics (#16901)

* feat: Add setting to display message generation statistics

* chore: build static webui output

4 weeks agowebui: recognize AsciiDoc files as valid text files (#16850)
Jaromír Hradílek [Sat, 1 Nov 2025 14:02:57 +0000 (15:02 +0100)]
webui: recognize AsciiDoc files as valid text files (#16850)

* webui: recognize AsciiDoc files as valid text files

* webui: add an updated static webui build

* webui: add the updated dependency list

* webui: re-add an updated static webui build

This also reverts commit 742dbb837939c176a813868c268d28ebd3fafb7c.

4 weeks agocommon : allow --system-prompt-file for diffusion-cli (#16903)
Sigbjørn Skjæret [Sat, 1 Nov 2025 10:01:42 +0000 (11:01 +0100)]
common : allow --system-prompt-file for diffusion-cli (#16903)

4 weeks agocodeowners : update after refactor (#16905)
Sigbjørn Skjæret [Sat, 1 Nov 2025 07:55:25 +0000 (08:55 +0100)]
codeowners : update after refactor (#16905)

4 weeks agovulkan: Fix multi_add invalid descriptor usage (#16899)
Jeff Bolz [Sat, 1 Nov 2025 05:52:14 +0000 (00:52 -0500)]
vulkan: Fix multi_add invalid descriptor usage (#16899)

4 weeks agovulkan: fuse mul_mat+add and mul_mat_id+add_id (#16868)
Jeff Bolz [Sat, 1 Nov 2025 05:45:28 +0000 (00:45 -0500)]
vulkan: fuse mul_mat+add and mul_mat_id+add_id (#16868)

* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agoCUDA: Remove unneded bias/gate dims in fused mmvq (#16858)
Oliver Simons [Sat, 1 Nov 2025 05:13:26 +0000 (06:13 +0100)]
CUDA: Remove unneded bias/gate dims in fused mmvq (#16858)

* CUDA: Remove unneded bias/gate dims in fused mmvq

It was pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target col per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks agorefactor : llama-model.cpp (#16252)
Piotr Wilkin (ilintar) [Fri, 31 Oct 2025 22:40:23 +0000 (23:40 +0100)]
refactor : llama-model.cpp (#16252)

* Sqashed: llama-model.cpp refactoring

* Fix formatting of attn / ffn / ffn_moe calls

* Fix import regression / unify spacing in models.h

* totally DID NOT miss those!

* Add missing qwen3vl(moe) models

* Add missing new .cpp files to build

* Remove extra semicolons

* Editor checker

* Update src/models/models.h

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agomodel : Minimax M2 (#16831)
Piotr Wilkin (ilintar) [Fri, 31 Oct 2025 20:20:47 +0000 (21:20 +0100)]
model : Minimax M2 (#16831)

* Model: Minimax M2

* Cleanup

* Cleanup pt. 2

* Cleanup pt. 3

* Update convert_hf_to_gguf_update.py - merge catch blocks

Co-authored-by: Sigbjørn Skjæret <redacted>
* Remove vocab models and test

* Remove all redundant hparam settings covered by TextModel

* Move super to start, don't set block_count

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/constants.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agomodel : add Granite Hybrid nano types (#16896)
Giuseppe Scrivano [Fri, 31 Oct 2025 20:20:07 +0000 (21:20 +0100)]
model : add Granite Hybrid nano types (#16896)

Signed-off-by: Giuseppe Scrivano <redacted>
4 weeks agoCUDA: Volta tensor core support for MMF (#16843)
Johannes Gäßler [Fri, 31 Oct 2025 14:57:19 +0000 (15:57 +0100)]
CUDA: Volta tensor core support for MMF (#16843)

* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
4 weeks agosync : ggml
Georgi Gerganov [Fri, 31 Oct 2025 14:25:50 +0000 (16:25 +0200)]
sync : ggml

4 weeks agoCUDA: add expert reduce kernel (#16857)
Aman Gupta [Fri, 31 Oct 2025 12:05:07 +0000 (20:05 +0800)]
CUDA: add expert reduce kernel (#16857)

* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
4 weeks agobatch : fix consistency checks for the input positions (#16890)
Georgi Gerganov [Fri, 31 Oct 2025 11:50:33 +0000 (13:50 +0200)]
batch : fix consistency checks for the input positions (#16890)

4 weeks agoserver : don't print user inputs to console (#16871)
Georgi Gerganov [Fri, 31 Oct 2025 08:54:19 +0000 (10:54 +0200)]
server : don't print user inputs to console (#16871)

4 weeks agoserver : fix typos in server.cpp comments [no ci] (#16883)
Daniel Bevenius [Fri, 31 Oct 2025 08:51:26 +0000 (09:51 +0100)]
server : fix typos in server.cpp comments [no ci] (#16883)

4 weeks agovulkan: disable spirv-opt for rope shaders (#16872)
Jeff Bolz [Fri, 31 Oct 2025 07:34:47 +0000 (02:34 -0500)]
vulkan: disable spirv-opt for rope shaders (#16872)

4 weeks agovulkan: Fix crash when FP16 mul_mat accumulation is not supported (#16796)
Masato Nakasaka [Fri, 31 Oct 2025 07:18:59 +0000 (16:18 +0900)]
vulkan: Fix crash when FP16 mul_mat accumulation is not supported (#16796)

* Experimenting crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines

4 weeks agovulkan: fix shmem overrun in mmq id shader (#16873)
Ruben Ortlam [Fri, 31 Oct 2025 07:14:49 +0000 (08:14 +0100)]
vulkan: fix shmem overrun in mmq id shader (#16873)

* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <redacted>
4 weeks agoggml-hexagon: respect input size when getting/setting tensor data (#16836)
l3utterfly [Fri, 31 Oct 2025 04:46:31 +0000 (12:46 +0800)]
ggml-hexagon: respect input size when getting/setting tensor data (#16836)

* respect input size when getting/setting tensor data

allows partial repacking/copying when the requested size is smaller than the actual tensor (see the sketch below)
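
Conceptually the change clamps the copy to the caller-provided size, along these
lines (an illustrative helper, not the actual ggml-hexagon code):

    #include <algorithm>
    #include <cstddef>
    #include <cstring>

    // copy at most `size` bytes starting at `offset`, never more than the tensor holds
    static void get_tensor_data(const void * tensor_base, size_t tensor_nbytes,
                                void * dst, size_t offset, size_t size) {
        if (offset >= tensor_nbytes) {
            return;
        }
        const size_t n = std::min(size, tensor_nbytes - offset);
        std::memcpy(dst, (const char *) tensor_base + offset, n);
    }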

* Removed duplicate repack_mxfp4_mxfp4x4x2 function

4 weeks agoci : enable free-disk-space on cuda docker build (#16877)
Sigbjørn Skjæret [Thu, 30 Oct 2025 23:34:27 +0000 (00:34 +0100)]
ci : enable free-disk-space on cuda docker build (#16877)

4 weeks agoopencl: fix boundary handling for mul_mm (#16875)
lhez [Thu, 30 Oct 2025 23:00:20 +0000 (16:00 -0700)]
opencl: fix boundary handling for mul_mm (#16875)

4 weeks agoconvert : update transformers requirements (#16866)
RodriMora [Thu, 30 Oct 2025 22:15:03 +0000 (23:15 +0100)]
convert : update transformers requirements (#16866)

* Update requirements-convert_legacy_llama.txt

Updated requirements to support Qwen3-VL in transformers version 4.57.1

* Update requirements/requirements-convert_legacy_llama.txt

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
4 weeks agoserver : bump request URI max length to 32768 (#16862)
chansikpark [Thu, 30 Oct 2025 18:22:23 +0000 (14:22 -0400)]
server : bump request URI max length to 32768 (#16862)