git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
3 months ago readme: update bindings (#16651)
Ron Evans [Mon, 20 Oct 2025 08:20:04 +0000 (10:20 +0200)]
readme: update bindings (#16651)

Signed-off-by: deadprogram <redacted>
3 months ago SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (#16613)
safranowith [Mon, 20 Oct 2025 08:08:32 +0000 (11:08 +0300)]
SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (#16613)

* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN

3 months ago llama-context: only warn on pooling_type when user specified (#16674)
takuya kodama [Mon, 20 Oct 2025 07:44:21 +0000 (15:44 +0800)]
llama-context: only warn on pooling_type when user specified (#16674)

The unexpected pooling_type warning was incorrectly shown when users did not
specify the --pooling-type parameter. In this case, the parameter
defaults to `LLAMA_POOLING_TYPE_UNSPECIFIED (-1)`, and the code
automatically applies the model's default pooling type.

Example of spurious warning:
```
$ llama-embedding -hf ggml-org/bge-m3-Q8_0-GGUF -p "hello"
...
llama_init_from_model: model default pooling_type is [2], but [-1] was specified
...
```

This fix ensures the warning only appears when users explicitly specify
a pooling type that differs from the model's default (e.g., using
--pooling-type mean on a model that expects CLS pooling).
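
A minimal sketch of the fixed condition (variable and macro names here are illustrative, not the exact code):

```cpp
// Warn only when the user explicitly chose a pooling type (anything other
// than LLAMA_POOLING_TYPE_UNSPECIFIED) that differs from the model default.
if (params.pooling_type != LLAMA_POOLING_TYPE_UNSPECIFIED &&
    params.pooling_type != hparams.pooling_type) {
    LLAMA_LOG_WARN("%s: model default pooling_type is [%d], but [%d] was specified\n",
                   __func__, hparams.pooling_type, params.pooling_type);
}
```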

3 months ago model : add Granite Hybrid types (#16635)
Giuseppe Scrivano [Sun, 19 Oct 2025 21:54:31 +0000 (23:54 +0200)]
model : add Granite Hybrid types (#16635)

Add Granite 4 models, mapping their embedding dimensions to the number of
parameters.

Information taken from https://huggingface.co/ibm-granite/granite-4.0-h-tiny

Signed-off-by: Giuseppe Scrivano <redacted>
3 months ago ci : fix binaries release failure for s390x (binaries may not work yet) (#16664)
Aaron Teo [Sun, 19 Oct 2025 21:06:39 +0000 (05:06 +0800)]
ci : fix binaries release failure for s390x (binaries may not work yet) (#16664)

* devops: initial patch

Signed-off-by: Aaron Teo <redacted>
* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: rm baseline version

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
3 months ago ci : avoid manual updates of docs/ops.md (#16663)
Sigbjørn Skjæret [Sun, 19 Oct 2025 12:03:25 +0000 (14:03 +0200)]
ci : avoid manual updates of docs/ops.md (#16663)

3 months ago ci: include s390x release binaries (#16648)
Aaron Teo [Sun, 19 Oct 2025 10:37:47 +0000 (18:37 +0800)]
ci: include s390x release binaries (#16648)

Signed-off-by: Aaron Teo <redacted>
3 months ago CODEOWNERS: update for ggml-cuda/mmf (#16660)
Aman Gupta [Sun, 19 Oct 2025 07:37:12 +0000 (15:37 +0800)]
CODEOWNERS: update for ggml-cuda/mmf (#16660)

3 months ago HIP: fix GPU_TARGETS (#16642)
Johannes Gäßler [Sat, 18 Oct 2025 12:47:32 +0000 (14:47 +0200)]
HIP: fix GPU_TARGETS (#16642)

3 months ago vulkan: Implement topk_moe fused shader, ported from CUDA (#16641)
Jeff Bolz [Sat, 18 Oct 2025 10:22:57 +0000 (05:22 -0500)]
vulkan: Implement topk_moe fused shader, ported from CUDA (#16641)

This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.

3 months ago CUDA: use registers instead of smem in topk-moe (#16647)
Aman Gupta [Sat, 18 Oct 2025 09:52:53 +0000 (17:52 +0800)]
CUDA: use registers instead of smem in topk-moe (#16647)

Uses the technique used in the vulkan PR #16641. Neat trick!

3 months ago opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (#16602)
Shawn Gu [Sat, 18 Oct 2025 00:55:32 +0000 (17:55 -0700)]
opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (#16602)

* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings

3 months ago llama-model: fix inconsistent ctxs <-> bufs order (#16581)
Johannes Gäßler [Fri, 17 Oct 2025 15:41:09 +0000 (17:41 +0200)]
llama-model: fix inconsistent ctxs <-> bufs order (#16581)

3 months ago rpc : report actual free memory (#16616)
Radoslav Gerganov [Fri, 17 Oct 2025 15:02:52 +0000 (18:02 +0300)]
rpc : report actual free memory (#16616)

* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server

3 months ago vulkan: Add State Space Model (SSM) Operations Support (#16463)
Giuseppe Scrivano [Fri, 17 Oct 2025 12:23:47 +0000 (14:23 +0200)]
vulkan: Add State Space Model (SSM) Operations Support (#16463)

* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
---------

Signed-off-by: Giuseppe Scrivano <redacted>
3 months ago ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629)
muggle-stack [Fri, 17 Oct 2025 10:01:23 +0000 (18:01 +0800)]
ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629)

Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)  New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.
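
The arithmetic above, as a small self-contained sketch (names follow the commit message; the surrounding kernel code is omitted):

```cpp
#include <cstdio>

int main() {
    const int gemm_m       = 30;
    const int block_size_m = 4;
    // blocks of rows per GEMM: ceil(30 / 4) = 8
    const int per_gemm_block_count_m = (gemm_m + block_size_m - 1) / block_size_m;

    const int compute_idx = 4;
    const int old_gemm_idx = compute_idx / block_size_m;           // 1: past batch_feature=1
    const int new_gemm_idx = compute_idx / per_gemm_block_count_m; // 0: correct batch index
    std::printf("old=%d new=%d\n", old_gemm_idx, new_gemm_idx);
}
```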

Co-authored-by: muggle <redacted>
3 months ago webui: reorganize settings layout (#16607)
Pascal [Fri, 17 Oct 2025 08:35:03 +0000 (10:35 +0200)]
webui: reorganize settings layout (#16607)

* webui: reorganize settings layout

* chore: update webui build output

* fix: remove unused variable

* chore: update webui build output

3 months ago vulkan: fix debug build (add_rms_len/data not found) (#16624)
Jeff Bolz [Fri, 17 Oct 2025 07:31:04 +0000 (02:31 -0500)]
vulkan: fix debug build (add_rms_len/data not found) (#16624)

3 months ago metal : add `CONV_TRANSPOSE_2D` (#16542)
Ilia Ilmer [Fri, 17 Oct 2025 06:33:58 +0000 (02:33 -0400)]
metal : add `CONV_TRANSPOSE_2D` (#16542)

* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months ago grammar : use int64_t to avoid int overflows in int schema to grammar conversion logic (#16626)
Olivier Chafik [Fri, 17 Oct 2025 05:59:31 +0000 (06:59 +0100)]
grammar : use int64_t to avoid int overflows in int schema to grammar conversion logic (#16626)

3 months ago SYCL SET operator optimized for F32 tensors (#16350)
GittyBurstein [Fri, 17 Oct 2025 02:36:40 +0000 (05:36 +0300)]
SYCL SET operator optimized for F32 tensors (#16350)

* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <redacted>
3 months ago mtmd : support home-cooked Mistral Small Omni (#14928)
Xuan-Son Nguyen [Thu, 16 Oct 2025 17:00:31 +0000 (19:00 +0200)]
mtmd : support home-cooked Mistral Small Omni (#14928)

3 months ago fix: added a normalization step for MathJax-style \[\] and \(\) delimiters (#16599)
Pascal [Thu, 16 Oct 2025 14:28:41 +0000 (16:28 +0200)]
fix: added a normalization step for MathJax-style \[\] and \(\) delimiters (#16599)

* fix: added a normalization step for MathJax-style \[\] and \(\) delimiters

So inline and block equations are converted before KaTeX rendering,
enabling proper display of model-generated LaTeX in the WebUI

* chore: update webui build output

3 months ago sycl : add ARANGE operator (#16362)
GittyBurstein [Thu, 16 Oct 2025 13:26:21 +0000 (16:26 +0300)]
sycl : add ARANGE operator (#16362)

* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <redacted>
3 months ago CANN: format code using .clang-format (#15863)
Chenguang Li [Thu, 16 Oct 2025 08:41:11 +0000 (16:41 +0800)]
CANN: format code using .clang-format (#15863)

This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <redacted>
3 months ago common : Update the docs on -t --threads (#16236)
takasurazeem [Thu, 16 Oct 2025 05:11:33 +0000 (01:11 -0400)]
common : Update the docs on -t --threads (#16236)

* Update the docs on -t --threads

* Revert "Update the docs on -t --threads"

This reverts commit eba97345e2c88d8ca510abec87d00bf6b9b0e0c2.

* docs: clarify -t/--threads parameter uses CPU threads and defaults to all available cores

* Update arg.cpp

3 months ago ggml-cpu: replace putenv with setenv for const-correctness (#16573)
takuya kodama [Thu, 16 Oct 2025 05:10:32 +0000 (13:10 +0800)]
ggml-cpu: replace putenv with setenv for const-correctness (#16573)

## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set
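
The shape of the change, for illustration (the real call site is in ggml-cpu.c; on POSIX systems setenv is declared in <stdlib.h>):

```cpp
#include <cstdlib>

static void set_kmp_blocktime_default() {
    // before: putenv("KMP_BLOCKTIME=200"); // warns: string literal is const
    // after: setenv copies both strings; the trailing 0 keeps an existing value
    setenv("KMP_BLOCKTIME", "200", 0); // 200ms, only if not already set
}
```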

3 months ago SYCL: Add GGML_OP_MEAN operator support (#16009)
yael-works [Thu, 16 Oct 2025 04:21:28 +0000 (07:21 +0300)]
SYCL: Add GGML_OP_MEAN operator support (#16009)

* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
3 months ago gguf-py : add support for endian conversion of BF16 data (#16594)
Aleksei Nikiforov [Wed, 15 Oct 2025 20:43:08 +0000 (22:43 +0200)]
gguf-py : add support for endian conversion of BF16 data (#16594)

BF16 requires special handling in this script:
it is 2-byte data, but the view is 1-byte by default.
Switch to the correct view before attempting byteswapping.

With this change, correctly byteswapping models like
Meta-Llama-3-8B-Instruct-bf16-GGUF
should be possible.
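
The underlying idea, sketched in C++ (the actual script is Python and works on numpy views):

```cpp
#include <cstdint>
#include <cstddef>

// BF16 values are 2-byte units, so endian conversion must swap within each
// 16-bit word instead of treating the tensor as a flat byte array.
static void byteswap_bf16(uint16_t * data, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        data[i] = (uint16_t) ((data[i] >> 8) | (data[i] << 8));
    }
}
```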

3 months ago cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (#16083)
safranowith [Wed, 15 Oct 2025 19:24:51 +0000 (22:24 +0300)]
cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (#16083)

* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py

3 months ago opencl: add q8_0 mm support (#16469)
lhez [Wed, 15 Oct 2025 17:51:04 +0000 (10:51 -0700)]
opencl: add q8_0 mm support (#16469)

* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path

3 months ago opencl: fix FA for f32 (#16584)
lhez [Wed, 15 Oct 2025 17:48:28 +0000 (10:48 -0700)]
opencl: fix FA for f32 (#16584)

3 months ago Add server-driven parameter defaults and syncing (#16515)
Aleksander Grygier [Wed, 15 Oct 2025 14:22:20 +0000 (16:22 +0200)]
Add server-driven parameter defaults and syncing (#16515)

3 months ago metal: optimise `GGML_OP_SUM` (#16559)
Sam/Samuel [Wed, 15 Oct 2025 14:05:56 +0000 (23:05 +0900)]
metal: optimise `GGML_OP_SUM` (#16559)

* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months ago server : fix img token logs (#16595)
Georgi Gerganov [Wed, 15 Oct 2025 13:53:12 +0000 (16:53 +0300)]
server : fix img token logs (#16595)

3 months ago llama-quant: add support for mmproj (#16592)
Xuan-Son Nguyen [Wed, 15 Oct 2025 12:48:08 +0000 (14:48 +0200)]
llama-quant: add support for mmproj (#16592)

* llama-quant: add support for mmproj

* Update src/llama.cpp

Co-authored-by: Georgi Gerganov <redacted>
* check prefix instead

* small fix

---------

Co-authored-by: Georgi Gerganov <redacted>
3 months ago CUDA: Changing the CUDA scheduling strategy to spin (#16585)
Julius Tischbein [Wed, 15 Oct 2025 11:54:15 +0000 (13:54 +0200)]
CUDA: Changing the CUDA scheduling strategy to spin (#16585)

* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <redacted>
* Remove empty line

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
3 months ago server : fix mtmd checkpoints (#16591)
Georgi Gerganov [Wed, 15 Oct 2025 09:51:27 +0000 (12:51 +0300)]
server : fix mtmd checkpoints (#16591)

3 months ago metal : avoid using Metal's gpuAddress property (#16576)
Georgi Gerganov [Tue, 14 Oct 2025 17:33:05 +0000 (20:33 +0300)]
metal : avoid using Metal's gpuAddress property (#16576)

* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check

3 months agovulkan: Add ACC_TYPE_VEC2 implementation (#16203) upstream/0.0.6764
SavicStefan [Tue, 14 Oct 2025 17:18:05 +0000 (19:18 +0200)]
vulkan: Add ACC_TYPE_VEC2 implementation (#16203)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>
3 months ago CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (#16577)
Aman Gupta [Tue, 14 Oct 2025 14:48:08 +0000 (22:48 +0800)]
CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (#16577)

3 months ago vulkan: Support FA with K/V in F32 (#16543)
Jeff Bolz [Tue, 14 Oct 2025 13:53:37 +0000 (08:53 -0500)]
vulkan: Support FA with K/V in F32 (#16543)

3 months ago vulkan: Improve build time for MSVC (#16545)
Jeff Bolz [Tue, 14 Oct 2025 12:51:36 +0000 (07:51 -0500)]
vulkan: Improve build time for MSVC (#16545)

Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.

3 months ago CUDA: enable FA for FP32 KV cache (#16546)
Johannes Gäßler [Tue, 14 Oct 2025 12:22:47 +0000 (14:22 +0200)]
CUDA: enable FA for FP32 KV cache (#16546)

3 months ago CUDA: use fastdiv + ggml_cuda_mad for mmvf (#16557)
Aman Gupta [Tue, 14 Oct 2025 11:16:21 +0000 (19:16 +0800)]
CUDA: use fastdiv + ggml_cuda_mad for mmvf (#16557)

* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code
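
For reference, a sketch of the fastdiv technique named in the title: precompute a magic multiplier so each division becomes a multiply-high plus shift. This is the textbook construction, not necessarily the exact kernel code:

```cpp
#include <cstdint>

struct fastdiv_vals { uint32_t mp; uint32_t l; };

// Precompute the magic values for a fixed divisor d > 0.
static fastdiv_vals fastdiv_init(uint32_t d) {
    uint32_t l = 0;
    while ((uint64_t(1) << l) < d) ++l; // l = ceil(log2(d))
    const uint32_t mp = (uint32_t)
        ((((uint64_t(1) << 32) * ((uint64_t(1) << l) - d)) / d) + 1);
    return { mp, l };
}

// n / d without a hardware divide in the hot loop.
static uint32_t fastdiv(uint32_t n, fastdiv_vals v) {
    const uint32_t hi = (uint32_t) (((uint64_t) n * v.mp) >> 32);
    return (uint32_t) (((uint64_t) hi + n) >> v.l);
}
```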

3 months ago CUDA: add fp kernel for larger batch size MoE (#16512)
Aman Gupta [Tue, 14 Oct 2025 11:15:15 +0000 (19:15 +0800)]
CUDA: add fp kernel for larger batch size MoE (#16512)

* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks

3 months ago cuda : remove legacy copy-op pointer indirection code (#16485)
Anav Prasad [Tue, 14 Oct 2025 09:53:49 +0000 (09:53 +0000)]
cuda : remove legacy copy-op pointer indirection code (#16485)

* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function

3 months ago server : dynamic token limit for prompt cache (#16560)
Georgi Gerganov [Tue, 14 Oct 2025 05:48:50 +0000 (08:48 +0300)]
server : dynamic token limit for prompt cache (#16560)

* server : dynamic token limit for prompt cache

* cont : print estimated token limit

3 months ago metal : FA support F32 K and V and head size = 32 (#16531)
Georgi Gerganov [Mon, 13 Oct 2025 20:07:57 +0000 (23:07 +0300)]
metal : FA support F32 K and V and head size = 32 (#16531)

* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]

3 months ago graph : support cacheless embeddings with FA and iSWA (#16528)
Georgi Gerganov [Mon, 13 Oct 2025 19:42:37 +0000 (22:42 +0300)]
graph : support cacheless embeddings with FA and iSWA (#16528)

* graph : support cacheless embeddings with FA and iSWA

* cont : deduplicate mask creation

* cont : fix name

3 months ago opencl: fix build targeting CL 2 (#16554)
lhez [Mon, 13 Oct 2025 18:50:37 +0000 (11:50 -0700)]
opencl: fix build targeting CL 2 (#16554)

3 months ago CUDA: fix numerical issues in tile FA kernel (#16540)
Johannes Gäßler [Mon, 13 Oct 2025 14:29:45 +0000 (16:29 +0200)]
CUDA: fix numerical issues in tile FA kernel (#16540)

3 months ago ggml : fix build broken with -march=armv9-a on MacOS (#16520)
Jie Fu (傅杰) [Mon, 13 Oct 2025 12:48:47 +0000 (20:48 +0800)]
ggml : fix build broken with -march=armv9-a on MacOS (#16520)

* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <redacted>
* Add #pragma message

Signed-off-by: Jie Fu <redacted>
* Address review comment.

Signed-off-by: Jie Fu <redacted>
* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <redacted>
Co-authored-by: Diego Devesa <redacted>
3 months ago CANN: fix CPU memory leak in CANN backend (#16549)
Chenguang Li [Mon, 13 Oct 2025 09:01:24 +0000 (17:01 +0800)]
CANN: fix CPU memory leak in CANN backend (#16549)

This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.
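
A hedged sketch of the pattern being fixed (ACL API names as referenced in the commit message; header paths and error handling vary by toolchain):

```cpp
#include <acl/acl.h> // Ascend CANN headers (assumed available)

// Every aclTensorList created for an op invocation must be destroyed after
// execution, otherwise host-side metadata accumulates on each repeated call.
void run_op_with_cleanup(const aclTensor * const * srcs, uint64_t n_src) {
    aclTensorList * src_list = aclCreateTensorList(srcs, n_src);
    // ... execute the CANN operator (e.g. FlashAttention) using src_list ...
    aclDestroyTensorList(src_list); // the release logic this fix adds
}
```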

3 months ago fix: add remark plugin to render raw HTML as literal text (#16505)
Pascal [Mon, 13 Oct 2025 08:55:32 +0000 (10:55 +0200)]
fix: add remark plugin to render raw HTML as literal text (#16505)

* fix: add remark plugin to render raw HTML as literal text

Implemented a missing MDAST stage to neutralize raw HTML, as major LLM WebUIs
do, ensuring consistent and safe Markdown rendering

Introduced 'remarkLiteralHtml', a plugin that converts raw HTML nodes in the
Markdown AST into plain-text equivalents while preserving indentation and
line breaks. This ensures consistent rendering and prevents unintended HTML
execution, without altering valid Markdown structure

Kept 'remarkRehype' in the pipeline since it performs the required conversion
from MDAST to HAST for KaTeX, syntax highlighting, and HTML serialization

Refined the link-enhancement logic to skip unnecessary DOM rewrites,
fixing a subtle bug where extra paragraphs were injected after the first
line due to full innerHTML reconstruction, and ensuring links open in new
tabs only when required

Final pipeline: remarkGfm -> remarkMath -> remarkBreaks -> remarkLiteralHtml
-> remarkRehype -> rehypeKatex -> rehypeHighlight -> rehypeStringify

* fix: address review feedback from allozaur

* chore: update webui build output

3 months ago metal: add support for opt_step_sgd (#16539)
Sam/Samuel [Mon, 13 Oct 2025 08:25:02 +0000 (16:25 +0800)]
metal: add support for opt_step_sgd (#16539)

* metal: add support for opt_step_sgd

* add newline to pass EditorConfig check

3 months ago ggml : fix scalar path for computing norm (#16558)
Georgi Gerganov [Mon, 13 Oct 2025 08:22:27 +0000 (11:22 +0300)]
ggml : fix scalar path for computing norm (#16558)

3 months ago CANN: Update several operators to support FP16 data format (#16251)
hipudding [Mon, 13 Oct 2025 00:52:22 +0000 (08:52 +0800)]
CANN: Update several operators to support FP16 data format (#16251)

Many Ascend operators internally use FP16 precision for computation.
If input data is in FP32, it must first be cast to FP16 before
computation, and then cast back to FP32 after computation, which
introduces unnecessary cast operations. Moreover, FP16 computation
requires significantly less work than FP32, leading to noticeable
efficiency improvements.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5b model shows
correct accuracy and about 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <redacted>
3 months ago metal : add opt_step_adamw and op_sum (#16529)
Sam/Samuel [Sun, 12 Oct 2025 18:43:14 +0000 (02:43 +0800)]
metal : add opt_step_adamw and op_sum (#16529)

* scaffold to support opt step adamw on metal (not written so far)

* add opt-step-adamw kernel for metal

* pass op->src[4] as a separate buffer to the pipeline

* add bounds check to opt-step-adamw kernel

* complete scaffold for GGML_OP_SUM

* naive GGML_OP_SUM kernel

* remove unwanted comment

* change OP_SUM capability gate

* Add has_simdgroup_reduction to both ops to pass CI

3 months ago webui: remove client-side context pre-check and rely on backend for limits (#16506)
Pascal [Sun, 12 Oct 2025 16:06:41 +0000 (18:06 +0200)]
webui: remove client-side context pre-check and rely on backend for limits (#16506)

* fix: make SSE client robust to premature [DONE] in agentic proxy chains

* webui: remove client-side context pre-check and rely on backend for limits

Removed the client-side context window pre-check and now simply sends messages
while keeping the dialog imports limited to core components, eliminating the
maximum context alert path

Simplified streaming and non-streaming chat error handling to surface a generic
'No response received from server' error whenever the backend returns no content

Removed the obsolete maxContextError plumbing from the chat store so state
management now focuses on the core message flow without special context-limit cases

* webui: cosmetic rename of error messages

* Update tools/server/webui/src/lib/stores/chat.svelte.ts

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/stores/chat.svelte.ts

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/chat/ChatScreen/ChatScreen.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/chat/ChatScreen/ChatScreen.svelte

Co-authored-by: Aleksander Grygier <redacted>
* chore: update webui build output

---------

Co-authored-by: Aleksander Grygier <redacted>
3 months ago [SYCL] fix UT fault cases: count-equal, argsort, pad OPs (#16521)
Neo Zhang Jianyu [Sun, 12 Oct 2025 13:53:35 +0000 (21:53 +0800)]
[SYCL] fix UT fault cases: count-equal, argsort, pad OPs (#16521)

* fix/refactor OP argsort, pad

* fix count-equal op

* update SYCL OP list

* fix format issue

---------

Co-authored-by: Zhang Jianyu <redacted>
3 months ago ci : add Vulkan on Ubuntu with default packages build (#16532)
Mathieu Baudier [Sun, 12 Oct 2025 13:48:03 +0000 (15:48 +0200)]
ci : add Vulkan on Ubuntu with default packages build (#16532)

* ci: build Vulkan on Ubuntu with default packages

* ci: disable tests in Vulkan build with default Ubuntu packages

3 months ago common : handle unicode during partial json parsing (#16526)
Aldehir Rojas [Sun, 12 Oct 2025 13:18:47 +0000 (08:18 -0500)]
common : handle unicode during partial json parsing (#16526)

* common : handle unicode during partial json parsing

* common : set missing `ensure_ascii = true` during json dump
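
For context, llama.cpp's common code uses nlohmann::json; a sketch of the `ensure_ascii` dump mentioned above:

```cpp
#include <nlohmann/json.hpp>
#include <string>

// Escaping all non-ASCII on output means a partially received multi-byte
// UTF-8 sequence can never leak through as invalid bytes.
static std::string dump_ascii(const nlohmann::json & j) {
    return j.dump(/*indent=*/-1, /*indent_char=*/' ', /*ensure_ascii=*/true);
}
```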

3 months ago common : update presets (#16504)
Georgi Gerganov [Sun, 12 Oct 2025 06:29:13 +0000 (09:29 +0300)]
common : update presets (#16504)

* presets : add --embd-gemma-default and remove old embedding presets

* presets : add gpt-oss presets

* presets : add vision presets

* cont : remove reasoning overrides [no ci]

* cont : fix batch size for embedding gemma [no ci]

3 months ago ggml : Fix FP16 ELU positive branch (#16519)
sirus20x6 [Sun, 12 Oct 2025 05:25:37 +0000 (00:25 -0500)]
ggml : Fix FP16 ELU positive branch (#16519)

Co-authored-by: Aaron <redacted>
3 months ago hparams : add check for layer index in is_recurrent (#16511)
Daniel Bevenius [Sun, 12 Oct 2025 05:19:06 +0000 (07:19 +0200)]
hparams : add check for layer index in is_recurrent (#16511)

* hparams : add check for layer index in is_recurrent

This commit adds a check in the is_recurrent method to ensure that the
provided layer index is within the valid range.

The motivation for this change is to prevent potential out-of-bounds
access and to be consistent with other methods in the class that perform
similar checks, like is_swa.
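
A sketch of the added check, mirroring the pattern the message attributes to is_swa (member names assumed from context):

```cpp
#include "llama-hparams.h" // internal header

// Abort on an out-of-range layer index instead of reading past the array.
bool llama_hparams::is_recurrent(uint32_t il) const {
    if (il < n_layer) {
        return recurrent_layer_arr[il];
    }
    GGML_ABORT("%s: il (%u) out of bounds (n_layer: %u)\n", __func__, il, n_layer);
}
```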

3 months ago ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (#16518)
sirus20x6 [Sun, 12 Oct 2025 05:15:00 +0000 (00:15 -0500)]
ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (#16518)

The previous SVE implementation for `ggml_vec_dot_f16_unroll` contained a bug due to a copy-paste error. The wrong variable was used in an FMA instruction, leading to incorrect results. This commit corrects the variable usage and improves the clarity of the code by renaming variables to avoid confusion.

Co-authored-by: Aaron <redacted>
3 months ago CUDA: faster tile FA, add oob checks, more HSs (#16492)
Johannes Gäßler [Sat, 11 Oct 2025 18:54:32 +0000 (20:54 +0200)]
CUDA: faster tile FA, add oob checks, more HSs (#16492)

3 months ago metal : fix mul-mm condition + fix mul-mv permuted kernels (#16494)
Georgi Gerganov [Sat, 11 Oct 2025 13:54:10 +0000 (16:54 +0300)]
metal : fix mul-mm condition + fix mul-mv permuted kernels (#16494)

3 months ago feat: render user content as markdown option (#16358)
Pascal [Sat, 11 Oct 2025 13:50:49 +0000 (15:50 +0200)]
feat: render user content as markdown option (#16358)

* feat: render user content as markdown option
- Add a persisted 'renderUserContentAsMarkdown' preference to the settings defaults and info metadata so the choice survives reloads like other options
- Surface the new 'Render user content as Markdown' checkbox in the General section of the chat settings dialog, beneath the PDF toggle
- Render user chat messages with 'MarkdownContent' when the new setting is enabled, matching assistant formatting while preserving the existing card styling otherwise
- chore: update webui build output

* chore: update webui build output

3 months ago server / ranking : add sorting and management of top_n (#16403)
Yann Follet [Sat, 11 Oct 2025 13:39:04 +0000 (21:39 +0800)]
server / ranking : add sorting and management of top_n (#16403)

* server / ranking : add sorting and management of top_n

* Make it retro-compatible: if no top_n is provided, return
all results

here is a script to run some tests

```sh

URL=${1:-http://127.0.0.1:8181}

curl "$URL/v1/rerank" -H "Content-Type: application/json" \
 -d '{ "model": "M", "query": "What is the recipe to make bread ?",
 "return_text" : true,
 "texts" : true,
 "top_n": 6,
 "documents": [
 "voici la recette pour faire du pain, il faut de la farine de l eau et du levain et du sel",
 "it is a bear",
 "bread recipe : floor, water, yest, salt",
 "The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.",
 "here is the ingedients to bake bread : 500g floor, 350g water, 120g fresh refresh yest, 15g salt",
 "recipe to make cookies : floor, eggs, water, chocolat",
 "here is the recipe to make bread : 500g floor, 350g water, 120g fresh refresh yest, 15g salt",
 "il fait tres beau aujourd hui",
 "je n ai pas faim, je ne veux pas manger",
 "je suis a paris"
 ] }' | jq
```

* use resize() instead of a for(...) loop

* simplify top_n init since there is no need to return an error

result of the test:

./tests.sh unit/test_rerank.py -v -x
==================================================== test session starts =====================================================
platform linux -- Python 3.12.3, pytest-8.3.5, pluggy-1.6.0 -- /home/yann/dev/yann/llama.cpp/tools/server/tests/test/bin/python3
cachedir: .pytest_cache
rootdir: /home/yann/dev/yann/llama.cpp/tools/server/tests
configfile: pytest.ini
plugins: anyio-4.11.0
collected 8 items

unit/test_rerank.py::test_rerank PASSED                                                                                [ 12%]
unit/test_rerank.py::test_rerank_tei_format PASSED                                                                     [ 25%]
unit/test_rerank.py::test_invalid_rerank_req[documents0] PASSED                                                        [ 37%]
unit/test_rerank.py::test_invalid_rerank_req[None] PASSED                                                              [ 50%]
unit/test_rerank.py::test_invalid_rerank_req[123] PASSED                                                               [ 62%]
unit/test_rerank.py::test_invalid_rerank_req[documents3] PASSED                                                        [ 75%]
unit/test_rerank.py::test_rerank_usage[Machine learning is-A machine-Learning is-19] PASSED                            [ 87%]
unit/test_rerank.py::test_rerank_usage[Which city?-Machine learning is -Paris, capitale de la-26] PASSED               [100%]

===================================================== 8 passed in 4.31s ======================================================

* add rerank top_n unit test

here is the result:

./tests.sh unit/test_rerank.py -v -x
=================================================================== test session starts ===================================================================
platform linux -- Python 3.12.3, pytest-8.3.5, pluggy-1.6.0 -- /home/yann/dev/yann/llama.cpp/tools/server/tests/test/bin/python3
cachedir: .pytest_cache
rootdir: /home/yann/dev/yann/llama.cpp/tools/server/tests
configfile: pytest.ini
plugins: anyio-4.11.0
collected 16 items

unit/test_rerank.py::test_rerank PASSED                                                                                                             [  6%]
unit/test_rerank.py::test_rerank_tei_format PASSED                                                                                                  [ 12%]
unit/test_rerank.py::test_invalid_rerank_req[documents0] PASSED                                                                                     [ 18%]
unit/test_rerank.py::test_invalid_rerank_req[None] PASSED                                                                                           [ 25%]
unit/test_rerank.py::test_invalid_rerank_req[123] PASSED                                                                                            [ 31%]
unit/test_rerank.py::test_invalid_rerank_req[documents3] PASSED                                                                                     [ 37%]
unit/test_rerank.py::test_rerank_usage[Machine learning is-A machine-Learning is-19] PASSED                                                         [ 43%]
unit/test_rerank.py::test_rerank_usage[Which city?-Machine learning is -Paris, capitale de la-26] PASSED                                            [ 50%]
unit/test_rerank.py::test_rerank_top_n[None-4] PASSED                                                                                               [ 56%]
unit/test_rerank.py::test_rerank_top_n[2-2] PASSED                                                                                                  [ 62%]
unit/test_rerank.py::test_rerank_top_n[4-4] PASSED                                                                                                  [ 68%]
unit/test_rerank.py::test_rerank_top_n[99-4] PASSED                                                                                                 [ 75%]
unit/test_rerank.py::test_rerank_tei_top_n[None-4] PASSED                                                                                           [ 81%]
unit/test_rerank.py::test_rerank_tei_top_n[2-2] PASSED                                                                                              [ 87%]
unit/test_rerank.py::test_rerank_tei_top_n[4-4] PASSED                                                                                              [ 93%]
unit/test_rerank.py::test_rerank_tei_top_n[99-4] PASSED                                                                                             [100%]

=================================================================== 16 passed in 8.84s ===================================================================

* editor config check fix
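
The backward-compatible top_n handling described above, as a sketch (container type hypothetical; results are assumed already sorted by score):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// When top_n is absent the caller passes results.size(), preserving the old
// behavior of returning everything; otherwise clamp and truncate.
template <typename T>
static void apply_top_n(std::vector<T> & ranked, size_t top_n) {
    ranked.resize(std::min(top_n, ranked.size())); // resize() instead of for(...)
}
```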

3 months ago cuda : avoid initializing unused devices (#16510)
Diego Devesa [Sat, 11 Oct 2025 11:02:26 +0000 (04:02 -0700)]
cuda : avoid initializing unused devices (#16510)

3 months ago convert : correctly handle LLaMA tokenizer for Jamba (#16470)
amirai21 [Sat, 11 Oct 2025 08:33:41 +0000 (11:33 +0300)]
convert : correctly handle LLaMA tokenizer for Jamba (#16470)

* fix: convert_hf_to_gguf - change Jamba non-sentencepiece mode (tokenizer.json) vocab construction

* fix: convert_hf_to_gguf - jamba non-sentencepiece tokenizer to use _set_vocab_llama_hf func

* fix: convert_hf_to_gguf - removed get_vocab_base_pre from jamba

3 months ago server : fix division by zero when reporting stats (#16501)
Georgi Gerganov [Fri, 10 Oct 2025 19:15:05 +0000 (22:15 +0300)]
server : fix division by zero when reporting stats (#16501)

3 months ago vocab : mark EOT token for Granite models (#16499)
Georgi Gerganov [Fri, 10 Oct 2025 14:17:31 +0000 (17:17 +0300)]
vocab : mark EOT token for Granite models (#16499)

* vocab : mark EOT token for Granite models

* sampling : fallback to EOS when EOT is not found
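
A sketch of the fallback using the public vocab accessors (treat as illustrative):

```cpp
#include "llama.h"

// Prefer the end-of-turn token; fall back to end-of-sentence when the
// vocab does not define EOT (as with the Granite models fixed here).
static llama_token stop_token(const llama_vocab * vocab) {
    llama_token tok = llama_vocab_eot(vocab);
    if (tok == LLAMA_TOKEN_NULL) {
        tok = llama_vocab_eos(vocab);
    }
    return tok;
}
```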

3 months ago server : return HTTP 400 if prompt exceeds context length (#16486)
Radoslav Gerganov [Fri, 10 Oct 2025 14:11:07 +0000 (17:11 +0300)]
server : return HTTP 400 if prompt exceeds context length (#16486)

In streaming mode when prompt exceeds context length, the server returns
HTTP 200 status code with a JSON error in the body.  This is very
confusing and inconsistent with all other inference engines which return
HTTP 4xx error in this case.

This patch fixes this problem and makes the server return HTTP 400 in
such cases.
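
The shape of the fix, sketched with hypothetical helper names:

```cpp
// Validate before streaming begins, so clients get a proper 4xx instead of
// an HTTP 200 whose body carries a JSON error.
if (n_prompt_tokens > n_ctx_slot) {
    send_error(request, 400 /* Bad Request */,
               "the request exceeds the available context size");
    return;
}
```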

3 months ago server : log requests to /v1/completions (#16495)
Radoslav Gerganov [Fri, 10 Oct 2025 10:22:27 +0000 (13:22 +0300)]
server : log requests to /v1/completions (#16495)

3 months ago cmake : Don't define XOPENSOURCE on AIX (#16481)
Prajwal B Mehendarkar [Fri, 10 Oct 2025 08:15:46 +0000 (13:45 +0530)]
cmake : Don't define XOPENSOURCE on AIX (#16481)

3 months ago webui: updated the chat service to only include max_tokens in the req… (#16489)
Pascal [Thu, 9 Oct 2025 20:54:57 +0000 (22:54 +0200)]
webui: updated the chat service to only include max_tokens in the req… (#16489)

* webui: updated the chat service to only include max_tokens in the request payload when the setting is explicitly provided, while still mapping explicit zero or null values to the infinite-token sentinel

* chore: update webui build output

3 months ago cpu : optimize the ggml NORM operation (#15953)
duduta [Thu, 9 Oct 2025 19:11:15 +0000 (22:11 +0300)]
cpu : optimize the ggml NORM operation (#15953)

* ggml-cpu: optimize norm operation to use intrinsics or Accelerate

* rename function

* add endif macro comment

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Aaron Teo <redacted>
* implement s390x SIMD suggested by @taronaeo

* add TODO comment

* tidy up spaces

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Aaron Teo <redacted>
3 months ago server : host-memory prompt caching (#16391)
Georgi Gerganov [Thu, 9 Oct 2025 15:54:51 +0000 (18:54 +0300)]
server : host-memory prompt caching (#16391)

* minor : code style

* server : fix prompt similarity calculation

* server : initial host-memory prompt caching

* cont

* server : refactor

* cont

* cont : make the server task of the slot const

* cont : minor [no ci]

* server : cache prompts and checkpoints only for completion tasks

* server : improve prompt caching logic

* cont : fix check for number of cached prompts [no ci]

* server : improve caching logic, add -cram CLI arg

* server : print prompt mismatch info

* cont : better naming [no ci]

* server : improve prompt cache loading logic

* server : add option to debug the slot contents (#16482)

* server : add option to debug the slot contents

* Update tools/server/server.cpp

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
* server : add option to disable prompt cache

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
3 months ago No markdown in cot (#16483)
Pascal [Thu, 9 Oct 2025 15:36:29 +0000 (17:36 +0200)]
No markdown in cot (#16483)

* fix: let the model think in plaintext

* chore: npm run format + npm run build

3 months ago model-conversion : add support for SentenceTransformers (#16387)
Daniel Bevenius [Thu, 9 Oct 2025 12:35:22 +0000 (14:35 +0200)]
model-conversion : add support for SentenceTransformers (#16387)

* model-conversion : add support for SentenceTransformers

This commit adds support for models that use SentenceTransformer layers.

The motivation for this is that if a converted model includes any of the
numbered layers specified in the original model's repository, then these
changes enable those models to be used and verified. Currently, the
model-conversion example only supports the base model output without any
of the additional transformation layers.

Usage:
Convert the model that also includes the SentenceTransformer layers:
```console
(venv) $ export EMBEDDING_MODEL_PATH="~/google/embeddinggemma-300M"
(venv) make embedding-convert-model
```

Verify the produced embeddings from the converted model against the
original model embeddings:
```console
(venv) make embedding-verify-logits-st
```

The original model can be run using SentenceTransformer:
```console
(venv) make embedding-run-original-model-st
```

Run the converted model using "SentenceTransformer" layers, which
enable pooling and normalization:
```console
(venv) make embedding-run-converted-model-st
```

* add model-conversion example requirements

* add support for -st flag in embedding model conversion

This commit adds support for the -st flag in the embedding model
conversion script. This enables models to be converted using
SentenceTransformers dense layers.

3 months ago ci: add ARM64 Kleidiai build and test support (#16462)
sudhiarm [Thu, 9 Oct 2025 08:13:18 +0000 (09:13 +0100)]
ci: add ARM64 Kleidiai build and test support (#16462)

3 months ago CANN: Improve ACL graph matching (#16166)
Chenguang Li [Thu, 9 Oct 2025 07:50:25 +0000 (15:50 +0800)]
CANN: Improve ACL graph matching (#16166)

* CANN: improve ACL graph matching

Record `ne` and `nb` information for src tensors and include them in the
graph matching check. This enhances the robustness of ACL graph matching
by preventing incorrect matches when src tensors share the same data
address but differ in shape or stride.

* CANN: add op_params match
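
A hedged sketch of the stricter match (the ggml_tensor fields are real; the recorded-node struct is illustrative):

```cpp
#include "ggml.h"

// Shape/stride info recorded for each src tensor when the graph is captured.
struct recorded_src {
    void *  data;
    int64_t ne[GGML_MAX_DIMS];
    size_t  nb[GGML_MAX_DIMS];
};

// A src matches only if address, shape (ne) and stride (nb) all agree,
// preventing false hits when tensors share a data pointer.
static bool src_matches(const ggml_tensor * t, const recorded_src & r) {
    if (t->data != r.data) {
        return false;
    }
    for (int i = 0; i < GGML_MAX_DIMS; ++i) {
        if (t->ne[i] != r.ne[i] || t->nb[i] != r.nb[i]) {
            return false;
        }
    }
    return true;
}
```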

3 months ago kleidiai: kernel interface refactoring (#16460)
Charles Xu [Thu, 9 Oct 2025 07:29:17 +0000 (09:29 +0200)]
kleidiai: kernel interface refactoring (#16460)

3 months ago [SYCL] refactor soft_max, add soft_max_back (#16472)
Neo Zhang Jianyu [Thu, 9 Oct 2025 07:25:11 +0000 (15:25 +0800)]
[SYCL] refactor soft_max, add soft_max_back (#16472)

* refactor to support soft_max_ext

* fix error and support soft_max_back

* rm unused functions

* fix format issue

---------

Co-authored-by: Zhang Jianyu <redacted>
3 months ago model: EmbeddingGemma Adding Support for SentenceTransformers Dense Modules (#16367)
Saba Fallah [Thu, 9 Oct 2025 06:39:18 +0000 (08:39 +0200)]
model: EmbeddingGemma Adding Support for SentenceTransformers Dense Modules (#16367)

* model: EmbeddingGemma sentence-transformers dense linear projections support

* model: add support for EmbeddingGemma SentenceTransformers dense linear projections

Adding support for the Dense modules used in EmbeddingGemma models.
EmbeddingGemma is a SentenceTransformers model with additional modules beyond the base Transformer backbone.

See: https://developers.googleblog.com/en/gemma-explained-embeddinggemma-architecture-and-recipe/

* model: add support for EmbeddingGemma SentenceTransformers dense linear projections

- converting model with dense-layers is optional
- introduced dense config params

* Update convert_hf_to_gguf.py

Co-authored-by: Daniel Bevenius <redacted>
* fixed formatting issues

* Update src/llama-graph.cpp

Co-authored-by: Georgi Gerganov <redacted>
* - removed pooling_type_opt, always allow overriding pooling_type
- asserts checking dense features dims

* fix python lint

* fix ubuntu gcc build warning

* - fixed thread-safety test
- moved asserts to load_hparams

* - tidying up code
- simplifying graph-context expecting both dense weights

* minor : add TODO

---------

Co-authored-by: Daniel Bevenius <redacted>
Co-authored-by: Georgi Gerganov <redacted>
3 months ago refactor: centralize CoT parsing in backend for streaming mode (#16394)
Pascal [Wed, 8 Oct 2025 20:18:41 +0000 (22:18 +0200)]
refactor: centralize CoT parsing in backend for streaming mode (#16394)

* refactor: unify reasoning handling via backend reasoning_content, drop frontend tag parsing

- Updated the chat message component to surface backend-supplied reasoning via message.thinking while showing the raw assistant content without inline tag scrubbing
- Simplified chat streaming to append content chunks directly, stream reasoning into the message model, and persist any partial reasoning when generation stops
- Refactored the chat service SSE handler to rely on server-provided reasoning_content, removing legacy <think> parsing logic
- Refreshed Storybook data and streaming flows to populate the thinking field explicitly for static and streaming assistant messages

* refactor: implement streaming-aware universal reasoning parser

Remove the streaming mode limitation from --reasoning-format by refactoring
try_parse_reasoning() to handle incremental parsing of <think> tags across
all formats.

- Rework try_parse_reasoning() to track whitespace, partial tags, and
  multiple reasoning segments, allowing proper separation of reasoning_content
  and content in streaming mode
- Parse reasoning tags before tool call handling in content-only and Llama 3.x
  formats to ensure inline <think> blocks are captured correctly
- Change default reasoning_format from 'auto' to 'deepseek' for consistent
  behavior
- Add 'deepseek-legacy' option to preserve old inline behavior when needed
- Update CLI help and documentation to reflect streaming support
- Add parser tests for inline <think>...</think> segments

The parser now continues processing content after </think> closes instead of
stopping, enabling proper message.reasoning_content and message.content
separation in both streaming and non-streaming modes.

Fixes the issue where streaming responses would dump everything (including
post-thinking content) into reasoning_content while leaving content empty.

* refactor: address review feedback from allozaur

- Passed the assistant message content directly to ChatMessageAssistant to drop the redundant derived state in the chat message component
- Simplified chat streaming updates by removing unused partial-thinking handling and persisting partial responses straight from currentResponse
- Refreshed the ChatMessage stories to cover standard and reasoning scenarios without the old THINK-tag parsing examples

Co-authored-by: Aleksander Grygier <redacted>
* refactor: restore forced reasoning prefix to pass test-chat ([chat] All tests passed)

- store the exact sequence seen on input when 'thinking_forced_open' enforces a reasoning block
- inject this prefix before the first accumulated segment in 'reasoning_content', then clear it to avoid duplication
- repeat the capture on every new 'start_think' detection to properly handle partial/streaming flows

* refactor: address review feedback from ngxson

* debug: say goodbye to curl -N, hello one-click raw stream

- adds a new checkbox in the WebUI to display raw LLM output without backend parsing or frontend Markdown rendering

* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessage.svelte

Co-authored-by: Aleksander Grygier <redacted>
* webui: add Storybook example for raw LLM output and scope reasoning format toggle per story

- Added a Storybook example that showcases the chat message component in raw LLM output mode with the provided trace sample
- Updated every ChatMessage story to toggle the disableReasoningFormat setting so the raw-output rendering remains scoped to its own example

* npm run format

* chat-parser: address review feedback from ngxson

Co-authored-by: Xuan Son Nguyen <redacted>
---------

Co-authored-by: Aleksander Grygier <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
3 months ago Disable CUDA host buffers on integrated GPUs (#16308)
ai-fonsi [Wed, 8 Oct 2025 18:21:46 +0000 (20:21 +0200)]
Disable CUDA host buffers on integrated GPUs (#16308)

3 months ago server : fix cancel pending task (#16467)
issixx [Wed, 8 Oct 2025 08:20:18 +0000 (17:20 +0900)]
server : fix cancel pending task (#16467)

Co-authored-by: DevAI <redacted>
3 months ago metal : mark FA blocks (#16372)
Georgi Gerganov [Wed, 8 Oct 2025 07:57:53 +0000 (10:57 +0300)]
metal : mark FA blocks (#16372)

* metal : better unroll in the FA kernels

* metal : index FA blocks

* tests : restore [no ci]

* metal : prevent division by zero in FA kernels

* metal : fix -INF detection logic

3 months ago server : improve context checkpoint logic (#16440)
Georgi Gerganov [Wed, 8 Oct 2025 07:57:29 +0000 (10:57 +0300)]
server : improve context checkpoint logic (#16440)

3 months ago ggml webgpu: profiling, CI updates, reworking of command submission (#16452)
Reese Levine [Tue, 7 Oct 2025 20:48:56 +0000 (13:48 -0700)]
ggml webgpu: profiling, CI updates, reworking of command submission (#16452)

* Add profiling

* More detailed profiling

* Rework command submission to avoid global locks

* Update wait handling

* try new method of waiting on futures

* Add serializing of command submission in some cases

* Add new pool for timestamp queries and clean up logging

* Serialize command submission in CI and leave a TODO note

* Update webgpu CI

* Add myself as WebGPU codeowner

* Deadlock avoidance

* Leave WebGPU/Vulkan CI serialized

* Fix divide by 0

* Fix logic in division by inflight_threads

* Update CODEOWNERS and remove serialize submit option

3 months ago llama : support LiquidAI LFM2-MoE hybrid model (#16464)
Tarek Dakhran [Tue, 7 Oct 2025 18:03:35 +0000 (20:03 +0200)]
llama : support LiquidAI LFM2-MoE hybrid model (#16464)

* llama : support LiquidAI LFM2-MoE hybrid model

Add support for [LiquidAI/LFM2-8B-A1B](https://huggingface.co/LiquidAI/LFM2-8B-A1B) model.
For more information about models, please read [the blog post](https://www.liquid.ai/company/news).

[HF PR](https://github.com/huggingface/transformers/pull/41401)
[GGUFs](https://huggingface.co/LiquidAI/LFM2-8B-A1B-GGUF)

* Do not use defaultdict

* Address PR feedback

3 months ago server : add `/v1/health` endpoint (#16461)
Georgi Gerganov [Tue, 7 Oct 2025 12:57:14 +0000 (15:57 +0300)]
server : add `/v1/health` endpoint (#16461)

* server : add /v1/health endpoint

* cont : update readme

3 months ago webui : added download action (#13552) (#16282)
Sascha Rogmann [Tue, 7 Oct 2025 09:11:08 +0000 (11:11 +0200)]
webui : added download action (#13552) (#16282)

* webui : added download action (#13552)

* webui : import and export (for all conversations)

* webui : fixed download-format, import of one conversation

* webui : add ExportedConversations type for chat import/export

* feat: Update naming & order

* chore: Linting

* webui : Updated static build output

---------

Co-authored-by: Aleksander Grygier <redacted>
3 months ago presets : fix pooling param for embedding models (#16455)
Georgi Gerganov [Tue, 7 Oct 2025 07:32:32 +0000 (10:32 +0300)]
presets : fix pooling param for embedding models (#16455)

3 months ago rpc : update documentation (#16441)
Radoslav Gerganov [Tue, 7 Oct 2025 06:59:13 +0000 (09:59 +0300)]
rpc : update documentation (#16441)

Update the README file to match the newly added functionality of
exposing multiple devices from a single server.

Co-authored-by: Diego Devesa <redacted>
3 months ago memory : use sequential equal splits for recurrent modules (#16442)
Georgi Gerganov [Tue, 7 Oct 2025 05:24:17 +0000 (08:24 +0300)]
memory : use sequential equal splits for recurrent modules (#16442)