git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Eve [Sat, 4 Oct 2025 20:04:27 +0000 (20:04 +0000)]
vulkan: use a more appropriate amount of threads when generating shaders (#16418)
* use a more flexible amount of threads
* fix windows compile and 0 thread case
* nominmax
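A minimal sketch of the clamp implied by the "0 thread case" bullet, assuming `std::thread::hardware_concurrency()` supplies the count (it may legitimately return 0 when the value is not computable):

```cpp
#include <algorithm>
#include <thread>

// hardware_concurrency() can return 0; always keep at least one worker
// for compiling shaders.
const uint32_t n_threads = std::max(1u, std::thread::hardware_concurrency());
```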
Radoslav Gerganov [Sat, 4 Oct 2025 13:22:45 +0000 (16:22 +0300)]
rpc : check src buffer when copying tensor (#16421)
Only the dst buffer is guaranteed to be an RPC buffer. Add a check for the
src one.
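A minimal sketch of the guard described above, assuming the backend's copy hook can return false so ggml falls back to a generic host-side copy (function names are illustrative):

```cpp
// dst is known to be an RPC buffer; src must be checked before serializing
// a remote copy.
static bool rpc_buffer_cpy_tensor(ggml_backend_buffer_t buffer,
                                  const ggml_tensor * src, ggml_tensor * dst) {
    if (!ggml_backend_buffer_is_rpc(src->buffer)) {
        return false; // not an RPC buffer: caller falls back to a get/set copy
    }
    // ... issue the remote-to-remote copy as before ...
    return true;
}
```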
Radoslav Gerganov [Sat, 4 Oct 2025 09:49:16 +0000 (12:49 +0300)]
rpc : add support for multiple devices (#16276)
* rpc : add support for multiple devices
Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.
closes: #15210
* fixes
* use ggml_backend_reg_t
* address review comments
* fix llama-bench backend report
* address review comments, change device naming
* fix cmd order
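A hypothetical illustration of the wire-format change (struct and field names assumed from the description, not copied from the code): device-scoped requests now carry a device index, so one endpoint can expose several devices.

```cpp
#include <cstdint>

// Sketch only: before, an endpoint implied a single device; after, each
// request that acts on a device selects it explicitly by index.
struct rpc_msg_alloc_buffer_req {
    uint32_t device; // index of the device exposed by this endpoint
    uint64_t size;   // allocation size in bytes
};
```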
Acly [Sat, 4 Oct 2025 09:42:56 +0000 (11:42 +0200)]
vulkan : incremental shader builds (#16341)
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times
* support dep-files so shaders are recompiled if their included files change
* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled
* vulkan : only write embedded shader .hpp/.cpp when they change (see the sketch after this entry)
* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier
* fix hang in vulkan-shaders-gen when there are compilation errors
* early out did not decrement compile_count
* clean up
* fix glslc integer dot product test
* unconditionally write the embedded shader cpp output
* replace output filepath in generated dep-files to match output in CMakeLists
---------
Co-authored-by: Jeff Bolz <redacted>
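The "only write embedded shader .hpp/.cpp when they change" step (later relaxed to always write the .cpp output) amounts to a compare-before-write; a minimal sketch:

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Skip the write when the generated content is unchanged, so the file's
// timestamp stays old and CMake does not rebuild ggml-vulkan.cpp needlessly.
static void write_if_changed(const std::string & path, const std::string & content) {
    std::ifstream in(path, std::ios::binary);
    if (in) {
        std::string old((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());
        if (old == content) {
            return; // unchanged: keep the old timestamp
        }
    }
    std::ofstream(path, std::ios::binary) << content;
}
```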
Pascal [Fri, 3 Oct 2025 18:51:48 +0000 (20:51 +0200)]
chat : support Magistral thinking (#16413)
* feat: added a dedicated Magistral chat format that preserves [THINK] spans and parses reasoning before tool calls
* feat: new flow in the chat template test suite for Magistral
ddh0 [Fri, 3 Oct 2025 18:34:51 +0000 (13:34 -0500)]
server : context checkpointing for hybrid and recurrent models (#16382)
* initial commit for branch 3
* generalize `swa_checkpoint` to `ctx_checkpoint`
this extends `llama-server`'s SWA checkpointing logic to include
hybrid/recurrent models such as Jamba and Granite
* oops
* disable debug prints
* keep backwards compat with `--swa-checkpoints`
Co-authored-by: Georgi Gerganov <redacted>
* update prompt re-processing message
* fix off-by-one error per GG
* keep `seq_rm` log per GG
Co-authored-by: Georgi Gerganov <redacted>
* server : fix checkpoint logic to support recurrent caches
* server : cleanup and fixes
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 3 Oct 2025 16:18:56 +0000 (19:18 +0300)]
metal : fix loop bound in ggml_mem_ranges (#16412)
Sigbjørn Skjæret [Fri, 3 Oct 2025 12:40:25 +0000 (14:40 +0200)]
llama : fix shapes for bert/mpt q/k norm (#16409)
Acly [Fri, 3 Oct 2025 11:49:08 +0000 (13:49 +0200)]
ggml : fix graph reallocation with multiple chunks (#16396)
Reallocation is needed if a single chunk grows in size,
even if the total allocation size stays the same or decreases.
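A minimal sketch of the per-chunk test this implies (names hypothetical): compare chunk by chunk instead of only comparing totals.

```cpp
#include <cstddef>

// Reallocate when any individual chunk must grow, even if the summed
// allocation size stays the same or shrinks.
static bool needs_realloc(const size_t * cur, const size_t * req, int n_chunks) {
    for (int i = 0; i < n_chunks; ++i) {
        if (req[i] > cur[i]) {
            return true;
        }
    }
    return false;
}
```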
Aleksander Grygier [Fri, 3 Oct 2025 10:51:40 +0000 (12:51 +0200)]
Fix missing messages on sibling navigation (#16408)
* fix: resolve message disappearing issue when navigating between regenerated siblings by using current leaf nodes instead of cached sibling IDs
* chore: update webui build output
* chore: update webui build output
Jeff Bolz [Fri, 3 Oct 2025 10:50:46 +0000 (05:50 -0500)]
vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (#16354)
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE
Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.
For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.
Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.
With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.
* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
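Host-side, both checks reduce to clamping against real `VkPhysicalDeviceLimits` fields; a sketch of that plus the env override named in the last bullet (surrounding plumbing assumed):

```cpp
#include <vulkan/vulkan.h>
#include <algorithm>
#include <cstdlib>
#include <string>

// Descriptor ranges must not exceed maxStorageBufferRange, so compute the
// range explicitly instead of passing VK_WHOLE_SIZE.
VkDescriptorBufferInfo make_binding(VkBuffer buf, VkDeviceSize offset, VkDeviceSize size,
                                    const VkPhysicalDeviceLimits & limits) {
    VkDescriptorBufferInfo info{};
    info.buffer = buf;
    info.offset = offset;
    info.range  = std::min<VkDeviceSize>(size, limits.maxStorageBufferRange);
    return info;
}

// Buffer-size cap, overridable via the env var from the follow-up commit;
// stoull is used so values above 4 GiB parse correctly.
VkDeviceSize get_max_buffer_size(VkDeviceSize device_max) {
    if (const char * s = std::getenv("GGML_VK_FORCE_MAX_BUFFER_SIZE")) {
        return (VkDeviceSize) std::stoull(s);
    }
    return device_max;
}
```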
Jeff Bolz [Fri, 3 Oct 2025 09:52:46 +0000 (04:52 -0500)]
vulkan: Fix FA coopmat1 invalid array indexing (#16365)
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
Daniel Bevenius [Fri, 3 Oct 2025 09:45:16 +0000 (11:45 +0200)]
ci : change macos-13 to macos-15-intel (#16401)
This commit updates the macos-13 runners to macos-15-intel.
The motivation for this change is that the macos-13 runners are scheduled
to be retired on 2025-12-04.
Refs: https://github.blog/changelog/2025-09-19-github-actions-macos-13-runner-image-is-closing-down/
Aleksander Grygier [Fri, 3 Oct 2025 09:30:39 +0000 (11:30 +0200)]
Capture model name only after first token (streaming) or completed request (#16405)
* feat: Capture model name only after first token (streaming) or completed request (non-streaming)
* chore: update webui build output
* chore: update webui build output
Jeff Bolz [Fri, 3 Oct 2025 08:33:08 +0000 (03:33 -0500)]
vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (#16316)
Aleksander Grygier [Fri, 3 Oct 2025 07:11:34 +0000 (09:11 +0200)]
webui : Fix messages payload sent to chat completions (#16402)
* fix: Include just the currently active message branches instead of all in chat completions request
* chore: Build webui static output
* chore: Formatting
* chore: update webui build output
Pascal [Fri, 3 Oct 2025 06:01:31 +0000 (08:01 +0200)]
fix: track viewportHeight via window.innerHeight to avoid unwanted scrolling (#16356)
Use <svelte:window bind:innerHeight> instead of manual resize listener
Co-authored-by: Aleksander Grygier <redacted>
Sigbjørn Skjæret [Thu, 2 Oct 2025 18:10:12 +0000 (20:10 +0200)]
test-barrier : do not use more threads than physically available (#16389)
* do not use more threads than physically available
* ensure n_threads > 0
Co-authored-by: Jeff Bolz <redacted>
---------
Co-authored-by: Jeff Bolz <redacted>
Reese Levine [Thu, 2 Oct 2025 18:00:31 +0000 (11:00 -0700)]
ggml webgpu: add support for soft_max, optimize rms_norm (#16357)
* Add inplace softmax
* Move rms_norm to split row approach
* Update debug for supports_op
* clean up debug statements
* Update tests/test-backend-ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Piotr Wilkin (ilintar) [Thu, 2 Oct 2025 17:43:22 +0000 (19:43 +0200)]
model : Apertus model implementation (#15852)
* First attempt
* No permute during convert (fixes qk tensors), proper norm application.
* RoPE = NeoX
* Coherence!
* Migrate xielu params from tensors to hyperparameters
* Simple CUDA kernel
* Revert stupid LLM refactorings
* Chat template support
* configchecker / flake8 errors
* Reorder unary.cu
* I do conclude that LLMs are, in fact, stupid.
* Fix after merge
* Final newline
* Make xIELU an UNARY_OP
* Final newline
* Correctly account for parameter shift
* Argh.
* Update ggml/src/ggml-cpu/unary-ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Refactor: remove unused methods, inline and factorize softplus, add const modifiers
* Revert CUDA changes, implement xIELU as a separate OP
* Pesky newline
* Add float2half / half2float for F16 inputs/outputs
* CUDA variants, attempt 2
* Actually, attempt 3
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <redacted>
* Missing convert header
* Proper formula and reference for xIELU in the comments.
* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tensor mappings for Apertus to global list instead
* Fix lazy on scalars
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <redacted>
* Add comment about the constraints on positive/negative alpha
* Change `softplus` to `ggml_softplus`
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
R0CKSTAR [Thu, 2 Oct 2025 13:29:56 +0000 (21:29 +0800)]
musa: update compile flags (#16265)
Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Thu, 2 Oct 2025 11:51:36 +0000 (13:51 +0200)]
ci : fix ubuntu-latest-cmake-rpc (disable ccache) (#16388)
Eve [Thu, 2 Oct 2025 08:10:07 +0000 (08:10 +0000)]
ci: update vulkan ci (#16294)
Georgi Gerganov [Thu, 2 Oct 2025 07:35:43 +0000 (10:35 +0300)]
ci : fix clean-up of old logs (#16381)
Neo Zhang Jianyu [Thu, 2 Oct 2025 07:16:25 +0000 (15:16 +0800)]
SYCL: Update to oneAPI 2025.2 (#16371)
* update oneapi to 2025.2, use deep-learning-essentials to replace base-tool
* update to 2025.2 use deeplearn essi to replace base toolkit
* add missed dll
* add deep learning essentials
* add sycl-ls
---------
Co-authored-by: Zhang Jianyu <redacted>
uvos [Thu, 2 Oct 2025 03:52:59 +0000 (05:52 +0200)]
HIP: add IMbackK to codeowner (#16375)
uvos [Wed, 1 Oct 2025 21:32:39 +0000 (23:32 +0200)]
CI: reenable cdna in rocm docker builds (#16376)
uvos [Wed, 1 Oct 2025 21:09:25 +0000 (23:09 +0200)]
HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (#16221)
* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0
rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA
* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
Shunta Saito [Wed, 1 Oct 2025 21:08:15 +0000 (06:08 +0900)]
llama : parameter conversion and loading fixes for PLaMo2 variants (#16075)
* Fix to use hidden_size_per_head
* Fix num heads
* Fix array
* Fix loading weights
* Support old GGUF converted by the previous version of llama.cpp
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Move shared parameter definitions to the outside of loop
* Do not calculate n_embd_head_k/v from n_embd / n_head
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
uvos [Wed, 1 Oct 2025 18:18:03 +0000 (20:18 +0200)]
ci: Properly install rocwmma for hip builds (#16305)
* CI: Properly install rocwmma for hip builds
on Windows we now install rocwmma from Ubuntu packages
* CI: update linux rocm docker build to use rocm 7.0
Adrien Gallouët [Wed, 1 Oct 2025 17:22:18 +0000 (19:22 +0200)]
common: introduce http.h for httplib-based client (#16373)
* common: introduce http.h for httplib-based client
This change moves cpp-httplib based URL parsing and client setup into
a new header `common/http.h`, and integrates it in `arg.cpp` and `run.cpp`.
It is an iteration towards removing libcurl, while intentionally
minimizing changes to existing code to guarantee the same behavior when
`LLAMA_CURL` is used.
Signed-off-by: Adrien Gallouët <redacted>
* tools : add missing WIN32_LEAN_AND_MEAN
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Signed-off-by: Adrien Gallouët <redacted>
Aleksander Grygier [Wed, 1 Oct 2025 16:18:10 +0000 (18:18 +0200)]
Conversation action dialogs as singletons from Chat Sidebar + apply conditional rendering for Actions Dropdown for Chat Conversation Items (#16369)
* fix: Render Conversation action dialogs as singletons from Chat Sidebar level
* chore: update webui build output
* fix: Render Actions Dropdown conditionally only when user hovers conversation item + remove unused markup
* chore: Update webui static build
* fix: Always truncate conversation names
* chore: Update webui static build
Aleksander Grygier [Wed, 1 Oct 2025 13:54:42 +0000 (15:54 +0200)]
Improve code block color theming (#16325)
* feat: Improve code block theming
* chore: update webui build output
* chore: Update webui static build
Sigbjørn Skjæret [Wed, 1 Oct 2025 12:09:52 +0000 (14:09 +0200)]
ci : use registry cache for docker builds (#16366)
Aleksander Grygier [Wed, 1 Oct 2025 10:08:16 +0000 (12:08 +0200)]
Add optional setting for showing "Model used:" information (#16337)
* feat: Add a setting to include model name used to generate the message
* feat: UI improvements
* feat: Save model info along with the database message entry creation
* chore: Build webui static output
Eve [Wed, 1 Oct 2025 07:56:36 +0000 (07:56 +0000)]
vulkan: make ggml_vk_default_dispatcher support older vulkan headers (#16345)
* make ggml_vk_default_dispatcher support older vulkan headers
* simplify with using
Aleksander Grygier [Wed, 1 Oct 2025 05:40:26 +0000 (07:40 +0200)]
webui: Remove running `llama-server` within WebUI `dev.sh` script (#16363)
Bartowski [Tue, 30 Sep 2025 20:24:36 +0000 (16:24 -0400)]
model : support GLM 4.6 (make a few NextN/MTP tensors not required) (#16359)
* Make a few GLM tensors not required
layer.nextn.shared_head_head and layer.nextn.embed_tokens are both excluded from GLM 4.6, resulting in the model not loading after conversion/quantization. This marks those tensors as not required, which makes it work (see the sketch after this entry).
* Update llama-model.cpp
layer.nextn.shared_head_norm also not required in case of future models
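For illustration, the loader-side pattern this relies on (tensor enum name and shape assumed, not copied from the code): the not-required flag makes `create_tensor` return null instead of aborting when the GGUF omits the tensor.

```cpp
// Sketch: mark a NextN/MTP tensor as optional so GLM 4.6 GGUFs that omit it
// still load; a null result is simply left unused at inference time.
layer.nextn.embed_tokens = create_tensor(tn(LLM_TENSOR_NEXTN_EMBED_TOKENS, "weight", i),
                                         {n_embd, n_vocab},
                                         llama_model_loader::TENSOR_NOT_REQUIRED);
```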
Sigbjørn Skjæret [Tue, 30 Sep 2025 19:41:42 +0000 (21:41 +0200)]
ci : fix ccache key for ubuntu-cpu-cmake (#16355)
* fix ccache key for ubuntu-cpu-cmake
* set it for release as well [no ci]
Adrien Gallouët [Tue, 30 Sep 2025 17:52:41 +0000 (19:52 +0200)]
common : disable progress bar without a tty (#16352)
* common : disable progress bar without a tty
Signed-off-by: Adrien Gallouët <redacted>
* Add missing headers
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
lhez [Tue, 30 Sep 2025 17:45:45 +0000 (10:45 -0700)]
opencl: support pad_ext (#15888)
Pascal [Tue, 30 Sep 2025 17:18:54 +0000 (19:18 +0200)]
Chatapi ignore empty sampling (#16330)
* fix: skip empty sampling fields instead of coercing to 0 in chat API options
* chore: update webui build output
Reese Levine [Tue, 30 Sep 2025 16:57:51 +0000 (09:57 -0700)]
ggml webgpu: support for rope,div,sub,glu,scale,cont operators (#16187)
* Work on rope
* Simplify inplace operation generation and combine mul/add generation
* Work on rope variants
* implement neox rope
* rope complete
* Add sub,div,glu operators
* implement scale op
* Update cpy shader to handle cont/more types
* formatting
* Update test vars printing for rope,rms_norm
* Avoid ROPE hardcoded constants
* Add TODO to change ROPE constants to enum
Co-authored-by: Georgi Gerganov <redacted>
* fix TODO comment
---------
Co-authored-by: Georgi Gerganov <redacted>
lhez [Tue, 30 Sep 2025 16:55:13 +0000 (09:55 -0700)]
opencl: support ne3 in get_rows (#15866)
Adrien Gallouët [Tue, 30 Sep 2025 14:39:44 +0000 (16:39 +0200)]
common : remove common_has_curl() (#16351)
`test-arg-parser.cpp` has been updated to work consistently,
regardless of whether CURL or SSL support is available, and
now always points to `ggml.ai`.
The previous timeout test has been removed, but it can be
added back by providing a dedicated URL under `ggml.ai`.
Signed-off-by: Adrien Gallouët <redacted>
Sigbjørn Skjæret [Tue, 30 Sep 2025 13:38:01 +0000 (15:38 +0200)]
ci : disable ccache for android (#16348)
Georgi Gerganov [Tue, 30 Sep 2025 10:42:39 +0000 (13:42 +0300)]
ggml : bump version to 0.9.4 (ggml/1363)
anavp-nvidia [Tue, 30 Sep 2025 08:13:22 +0000 (08:13 +0000)]
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (#16328)
* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs
* fix to ensure test-backend-ops check passes
Georgi Gerganov [Tue, 30 Sep 2025 08:03:23 +0000 (11:03 +0300)]
metal : dynamic simdgroups for MV kernels (#16340)
* metal : dynamic simdgroups for MV kernels
* cont : minor
Adrien Gallouët [Tue, 30 Sep 2025 07:36:33 +0000 (09:36 +0200)]
common : simplify etag tracking by removing json (#16342)
The JSON parser is temporarily kept only for backward compatibility. It
reads the etag from old .json files to prevent unnecessary re-downloads
for existing users.
This legacy code can be removed in a future version.
Signed-off-by: Adrien Gallouët <redacted>
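A sketch of the resulting lookup, assuming the new format is a plain one-line etag file with the legacy `<file>.json` consulted only as a fallback (the helper below is hypothetical):

```cpp
#include <fstream>
#include <string>

// Prefer the plain-text etag file; fall back to the legacy JSON metadata so
// existing caches are not re-downloaded.
static std::string read_etag(const std::string & model_path) {
    std::ifstream f(model_path + ".etag");
    std::string etag;
    if (f && std::getline(f, etag)) {
        return etag;
    }
    return read_etag_from_legacy_json(model_path + ".json"); // hypothetical helper
}
```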
Charles Xu [Tue, 30 Sep 2025 07:07:20 +0000 (09:07 +0200)]
kleidiai : fix work size and threads sync for fp16 (#16246)
lhez [Tue, 30 Sep 2025 05:30:16 +0000 (22:30 -0700)]
codeowners: add codeowners for opencl backend (#16344)
Jeff Bolz [Tue, 30 Sep 2025 00:26:34 +0000 (19:26 -0500)]
tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences (#16295)
* tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences
* apply similar error bounds to test_cpy
Pascal [Mon, 29 Sep 2025 16:49:47 +0000 (18:49 +0200)]
Fix thinking blocks with quotes + add handling `[THINK]...[/THINK]` blocks (#16326)
* fix: prevent reasoning blocks with quotes from being truncated
* chore: update webui build output
* feat: Improve thinking content parsing
* test: Adds ChatMessage component stories for different thinking blocks
* chore: update webui build output
* fix: ChatMessage story fix
---------
Co-authored-by: Aleksander Grygier <redacted>
Georgi Gerganov [Mon, 29 Sep 2025 14:51:48 +0000 (17:51 +0300)]
ci : add AMD runners and workflows (#16249)
* ci : add AMD runners and workflows
* ci : move AMD jobs to separate workflow
* cont : fix paths
alex-spacemit [Mon, 29 Sep 2025 14:50:44 +0000 (22:50 +0800)]
ggml: riscv: add riscv spacemit backend (#15288)
* ggml: add spacemit backend
Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23
* add new line at end of file
Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2
* add riscv zba extension limit
Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce
* fixed for review comments, file renamed and format
Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce
* fixed for code format, after clang-format
Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2
* use _Float16 instead of __fp16
Change-Id: I039fb02bb95270e641bc4442204e658735859d43
* add ci for riscv64-spacemit-ime-native
Change-Id: I711c1033061df1a289ea77891b2997599dfe8279
* update debian-13-riscv64-spacemit-ime-native ci label
Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a
* remove license comment for spacemit ime
Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3
* upgrade binutils for gcc ime
Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45
* add spacemit ime cross jobs
Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6
* remove native compile for riscv64-spacemit-ime
Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e
* ci : add caching for spacemit ime cross toolchain
Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de
* ci: bug fixed for cache path and env
Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a
* Update .github/workflows/build-linux-cross.yml for cache path
Co-authored-by: Sigbjørn Skjæret <redacted>
* bugfixed for build-linux-cross.yml, syntax error
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: cailinxi <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Georgi Gerganov [Mon, 29 Sep 2025 13:50:52 +0000 (16:50 +0300)]
sync : ggml
Georgi Gerganov [Mon, 29 Sep 2025 13:49:11 +0000 (16:49 +0300)]
sync : whisper.cpp (ggml/1359)
* ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (whisper/3426)
* sync : whisper.cpp
Daniel Bevenius [Fri, 26 Sep 2025 15:34:42 +0000 (17:34 +0200)]
ggml : remove -dev suffix from release version (ggml/1355)
This commit removes the `-dev` suffix from the version string in
CMakeLists.txt and the release script. The version will now be
formatted simply as `MAJOR.MINOR.PATCH`.
Daniel Bevenius [Thu, 25 Sep 2025 12:39:05 +0000 (14:39 +0200)]
ggml : bump version to 0.9.3 (ggml/1353)
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : prepare for development of 0.9.2-dev
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : bump version to 0.9.1
Rafal Lewczuk [Mon, 29 Sep 2025 11:17:09 +0000 (13:17 +0200)]
ggml-backend : add root cause in error message if loading backend library fails (#16172)
This PR adds additional information to the error message shown when loading a backend library via ld_load_library() fails. This helps with spotting why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
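A sketch of the kind of detail this adds, on the POSIX path (the real loader also covers Windows and uses ggml's own logging):

```cpp
#include <dlfcn.h>
#include <cstdio>

void * load_backend_library(const char * path) {
    void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (handle == nullptr) {
        // dlerror() carries the root cause: missing file, missing
        // dependency, unresolved symbol, etc.
        fprintf(stderr, "failed to load backend library %s: %s\n", path, dlerror());
    }
    return handle;
}
```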
Sigbjørn Skjæret [Mon, 29 Sep 2025 09:09:00 +0000 (11:09 +0200)]
ggml : check cuda and metal argsort limits and add test (#16323)
* check cuda argsort limits and add test
* add metal check
Aleksander Grygier [Mon, 29 Sep 2025 08:37:20 +0000 (10:37 +0200)]
Improve Mobile UI for dialogs and action dropdowns (#16222)
* fix: Always show conversation item actions
* feat: Improve Alert Dialog and Dialog mobile UI
* feat: Add settings reset to default confirmation
* fix: Close Edit dialog on save
* chore: update webui build output
* webui: implement proper z-index system and scroll management
- Add CSS variable for centralized z-index control
- Fix dropdown positioning with Settings dialog conflicts
- Prevent external scroll interference with proper event handling
- Clean up hardcoded z-index values for maintainable architecture
* webui: ensured the settings dialog enforces dynamic viewport height on mobile while retaining existing desktop sizing overrides
* feat: Use `dvh` instead of computed px height for dialogs max height on mobile
* chore: update webui build output
* feat: Improve Settings fields UI
* chore: update webui build output
* chore: update webui build output
---------
Co-authored-by: Pascal <redacted>
Pascal [Mon, 29 Sep 2025 07:08:41 +0000 (09:08 +0200)]
fix: preserved zero values in chat settings inputs and textareas by switching to nullish coalescing for field values and default placeholders (#16312)
Vinkal [Mon, 29 Sep 2025 07:03:12 +0000 (12:33 +0530)]
llama-cli: prevent spurious assistant token (#16202)
* tools/main: llama-cli: prevent spurious assistant token (#13402)
During prompt ingestion, prompt tokens are accepted into the sampler history (for repetition penalties). The conversation-mode path then appended `common_sampler_last(smpl)` to `assistant_ss` before any new token was sampled. At that point, "last" was a prompt-side token (e.g., an input prefix), so the assistant chat message began with an extra piece.
Fix: append to `assistant_ss` only for a newly sampled (non-EOG) token; see the sketch after this entry. This affects only chat message assembly (`assistant_ss` / `chat_msgs` / `common_chat_format_single`); terminal stdout is unchanged. Sampling order/logits are unchanged.
Fixes #13402.
Signed-off-by: Vinkal Chudgar <redacted>
* Update tools/main/main.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* tools/main: remove outdated comment
Signed-off-by: Vinkal Chudgar <redacted>
---------
Signed-off-by: Vinkal Chudgar <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
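A minimal sketch of the sampling-loop shape after the fix (helper names are from llama.cpp's common API; control flow abridged):

```cpp
// Sample first; only a genuinely new, non-EOG token is appended to the
// assistant message, so prompt-side tokens can no longer leak in.
const llama_token id = common_sampler_sample(smpl, ctx, -1);
common_sampler_accept(smpl, id, /*accept_grammar=*/true);
if (!llama_vocab_is_eog(vocab, id)) {
    assistant_ss << common_token_to_piece(ctx, id);
}
```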
ddh0 [Mon, 29 Sep 2025 06:30:45 +0000 (01:30 -0500)]
perplexity : show more kl-divergence data (#16321)
Adds additional percentile data to the output of `llama-perplexity --kl-divergence`:
- Added 95th percentile (mirroring the existing 5th percentile)
- Added 0.1 percentile (mirroring the existing 99.9 percentile)
Georgi Gerganov [Mon, 29 Sep 2025 05:41:28 +0000 (08:41 +0300)]
ggml : fix dependencies for ggml_set_rows (#16318)
Jeff Bolz [Mon, 29 Sep 2025 04:50:37 +0000 (23:50 -0500)]
vulkan: Fix validation failure in quantized flash attention (#16292)
Sigbjørn Skjæret [Sun, 28 Sep 2025 21:15:03 +0000 (23:15 +0200)]
ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (#16307)
* fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32
* add test that fails on simd
crat0z [Sun, 28 Sep 2025 18:13:50 +0000 (14:13 -0400)]
common : fix reasoning before forced tool call via tool_choice = required (#16264)
* common : fix reasoning before forced tool call via tool_choice = required
* common : improve reasoning and commentary handling when tool_choice is required
(cherry picked from commit c746984956d6882c2de73d53ae2bb3bdf889e475)
---------
Co-authored-by: Alde Rojas <redacted>
R0CKSTAR [Sun, 28 Sep 2025 14:38:15 +0000 (22:38 +0800)]
ci : fix musa docker build (#16306)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Sun, 28 Sep 2025 11:25:58 +0000 (19:25 +0800)]
devops: switch to using ubuntu-22.04-s390x image (#16302)
Signed-off-by: Aaron Teo <redacted>
Imad Saddik [Sun, 28 Sep 2025 11:04:46 +0000 (12:04 +0100)]
Fixed a few typos in the README of the LLaMA.cpp HTTP Server [no ci] (#16297)
Jeff Bolz [Sun, 28 Sep 2025 06:38:37 +0000 (01:38 -0500)]
vulkan: 64-bit im2col (#16135)
* vulkan: 64-bit im2col
Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.
* fix validation error for large im2col
Georgi Gerganov [Sun, 28 Sep 2025 06:34:44 +0000 (09:34 +0300)]
metal : extend mat-mat multiplication support (#16225)
* metal : support mul_mm with src1->type == GGML_TYPE_F16
* metal : support mul_mm_id with src1->type == GGML_TYPE_F16
[no ci]
* metal : mul_mm support ne00 % 32 != 0
* metal : support mul_mm_id with ne00 % 32 != 0
* cont : remove unnecessary unrolls
* cont : simplify data loading
* metal : optimize mul_mm when output bounds checks are not needed
Georgi Gerganov [Sun, 28 Sep 2025 06:34:05 +0000 (09:34 +0300)]
metal : fuse non-sequential nodes (#16102)
* metal : fuse non-sequential nodes
* cont : add comment
* cont : simplify bounds checks
Jeff Bolz [Sun, 28 Sep 2025 01:36:34 +0000 (20:36 -0500)]
vulkan: handle mat_mul with A matrix > 4GB (#16176)
* vulkan: handle mat_mul with A matrix > 4GB
This change splits mat_mul operations with huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has very large M.
Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so stride should be set after.
* build fixes
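A sketch of the M-dimension chunking idea (all names hypothetical; the real splitting happens inside the Vulkan backend's dispatch logic):

```cpp
// Split a mul_mat whose A matrix exceeds the allocation limit into row
// slices, multiplying slice by slice into the matching rows of C.
const uint64_t max_rows = max_alloc_bytes / (K * sizeof(float));
for (uint64_t m0 = 0; m0 < M; m0 += max_rows) {
    const uint64_t mc = std::min<uint64_t>(max_rows, M - m0);
    // dispatch: C[m0 .. m0+mc) = A[m0 .. m0+mc) * B
}
```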
Jeff Bolz [Sat, 27 Sep 2025 20:43:39 +0000 (16:43 -0400)]
vulkan: support arbitrary KV dimension in flash attention (#16160)
The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).
Acly [Sat, 27 Sep 2025 20:41:03 +0000 (22:41 +0200)]
vulkan : make the vulkan.hpp dynamic dispatcher instance private (#16224)
* don't use VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE, which can cause conflicts if the application or other libraries do the same
Aleksander Grygier [Sat, 27 Sep 2025 17:56:40 +0000 (19:56 +0200)]
Show message actions by default (#16289)
Aman Gupta [Sat, 27 Sep 2025 16:49:32 +0000 (00:49 +0800)]
CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (#16277)
* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32
This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.
My tests on an RTX 3090 show that this is faster than the cuBLAS fallback
for f16 up to bs=64, and for f32 up to bs=32
* Review: refactor if statement
Johannes Gäßler [Sat, 27 Sep 2025 16:45:07 +0000 (18:45 +0200)]
CUDA: refactor and deduplicate vector FA kernels (#16208)
* CUDA: refactor and deduplicate vector FA kernels
Dmytro Minochkin [Sat, 27 Sep 2025 16:26:46 +0000 (19:26 +0300)]
vulkan: throw system error instead of SIGABRT during init on older devices (#16156)
* Throw system error on old Vulkan driver rather than SIGABRT
* Optionally handle any potential error in vulkan init
Adrien Gallouët [Sat, 27 Sep 2025 16:17:08 +0000 (18:17 +0200)]
server : remove old LLAMA_SERVER_SSL (#16290)
Signed-off-by: Adrien Gallouët <redacted>
Jeff Bolz [Sat, 27 Sep 2025 10:36:11 +0000 (06:36 -0400)]
vulkan: support GET_ROWS for k-quants (#16235)
The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.
Adrien Gallouët [Sat, 27 Sep 2025 09:12:46 +0000 (11:12 +0200)]
build : add LLAMA_OPENSSL option (#16287)
Introduce a new `LLAMA_OPENSSL` option, enabled by default.
This preserves the previous default (libcurl first, OpenSSL as fallback),
while allowing OpenSSL to be disabled if desired.
Signed-off-by: Adrien Gallouët <redacted>
Vinkal [Fri, 26 Sep 2025 21:28:29 +0000 (02:58 +0530)]
model : make minicpm embedding_scale, residual_scale and logit_scale optional with legacy defaults (#16273)
* minicpm: make GGUF scaling keys optional with legacy defaults
Older MiniCPM GGUFs do not include the scaling metadata keys (minicpm.embedding_scale, minicpm.residual_scale, minicpm.logit_scale). The loader currently treats these as required, so quantization fails with:
key not found in model: minicpm.embedding_scale
This change restores backward compatibility by treating these keys as optional in the loader and using the older MiniCPM scaling values:
embedding_scale = 12.0f
residual_scale = 1.4f / sqrt(n_layer)
logit_scale = 256.0f / n_embd
When the GGUF provides the keys, their values override the defaults; otherwise the legacy defaults are used (see the sketch after this entry). Newer GGUFs that already include these keys are unaffected.
Fixes: #16192
Signed-off-by: Vinkal Chudgar <redacted>
* Update src/llama-model.cpp
Committed as suggested. Thanks!
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Signed-off-by: Vinkal Chudgar <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
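A sketch of the optional-key pattern the fix uses (key enum names assumed to follow the existing LLM_KV_* scaling keys): set the legacy defaults first, then let GGUF values override them when present.

```cpp
// Legacy MiniCPM defaults, used when the GGUF lacks the scaling keys.
hparams.f_embedding_scale = 12.0f;
hparams.f_residual_scale  = 1.4f / sqrtf((float) hparams.n_layer);
hparams.f_logit_scale     = 256.0f / (float) hparams.n_embd;

// required=false: only overwrite a default if the key exists in the GGUF.
ml.get_key(LLM_KV_EMBEDDING_SCALE, hparams.f_embedding_scale, false);
ml.get_key(LLM_KV_RESIDUAL_SCALE,  hparams.f_residual_scale,  false);
ml.get_key(LLM_KV_LOGIT_SCALE,     hparams.f_logit_scale,     false);
```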
Aaron Teo [Fri, 26 Sep 2025 18:03:33 +0000 (02:03 +0800)]
devops: add s390x & ppc64le CI (#15925)
* devops: move s390x and ppc64le ci build
we have access to ubuntu-24.04-s390x and ppc64le images now
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le for now since they have compiler errors
Signed-off-by: Aaron Teo <redacted>
* devops: stop warnings as errors
Signed-off-by: Aaron Teo <redacted>
* devops: switch to non-macro flag
Signed-off-by: Aaron Teo <redacted>
* devops: going the llama macro route
Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian gguf test models
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le to test s390x, check test build
Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.inp files for big-endian tests
Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.out files for big-endian too
Signed-off-by: Aaron Teo <redacted>
* devops: add python setup and endian byteswap
Signed-off-by: Aaron Teo <redacted>
* devops: poor thing does not have s390x python3
Signed-off-by: Aaron Teo <redacted>
* devops: add missing rust compiler for s390x
Signed-off-by: Aaron Teo <redacted>
* devops: try rust actions runner
Signed-off-by: Aaron Teo <redacted>
* Revert "devops: try rust actions runner"
This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.
Signed-off-by: Aaron Teo <redacted>
* devops: try a different path for rust
Signed-off-by: Aaron Teo <redacted>
* devops: dump home directory and user info
Signed-off-by: Aaron Teo <redacted>
* devops: install gguf-py only
Signed-off-by: Aaron Teo <redacted>
* devops: missed relative path
Signed-off-by: Aaron Teo <redacted>
* devops: remove big-endian files since local swapping is working
Signed-off-by: Aaron Teo <redacted>
* devops: revert test-tokenizer-0 cmakelists
Signed-off-by: Aaron Teo <redacted>
* Fix unicode flags conversion from and to uint16_t
Bitfields are allocated in different order on s390x
Signed-off-by: Aaron Teo <redacted>
* Simplify byteswap command
Signed-off-by: Aaron Teo <redacted>
* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs
Signed-off-by: Aaron Teo <redacted>
* Fix endianness detection in vocab loader
Signed-off-by: Aaron Teo <redacted>
* Disable test-thread-safety on s390x
In this test a model is downloaded,
then immediately loaded to check if more downloads are needed,
and then used for the test.
There is no clean way to separate all those steps
to add byteswapping between them, so just skip this test.
Signed-off-by: Aaron Teo <redacted>
* Fix q8_0 test in test-quantize-fns
vec_signed uses unexpected rounding mode.
Explicitly use different rounding function.
Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian stories260K
Signed-off-by: Aaron Teo <redacted>
* devops: add s390x test-eval-callback
Signed-off-by: Aaron Teo <redacted>
* devops: fix test does not exist
Signed-off-by: Aaron Teo <redacted>
* devops: fix model not found llama-eval-callback
Signed-off-by: Aaron Teo <redacted>
* Fix q3_K dot product error in test-quantize-fns on s390x
Array q8bytes had only 4 elements allocated, but 8 elements were accessed.
This led to out-of-bounds writes, later out-of-bounds reads of the
overwritten values, and an incorrect result.
Signed-off-by: Aaron Teo <redacted>
* devops: re-enable ppc64le for testing
Signed-off-by: Aaron Teo <redacted>
* devops: activate test-thread-safety for s390x
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le tests
for some reason it keeps failing test-thread-safety tests and I do not
have a machine that is able to replicate the tests.
Signed-off-by: Aaron Teo <redacted>
* devops: LLAMA_FATAL_WARNINGS=ON
Signed-off-by: Aaron Teo <redacted>
* Correct repository URL for s390x for test-thread-safety model
Signed-off-by: Aaron Teo <redacted>
* Fix fs_get_cache_directory
Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers.
Signed-off-by: Aaron Teo <redacted>
* Re-enable CI for ppc64le
Signed-off-by: Aaron Teo <redacted>
* Fortify ggml_rope_impl
Only memcpy data from the sections argument if it is non-NULL.
Signed-off-by: Aaron Teo <redacted>
* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way
* Update URL for big-endian model
* Update .github/workflows/build.yml
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update remaining mentions of BE models to ggml-org/models repo
---------
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Aleksander Grygier [Fri, 26 Sep 2025 17:25:29 +0000 (19:25 +0200)]
Enhance text file detection logic for file attachments (#16199)
* feat: Enhances text file detection logic
* chore: Build static `webui` output
* chore: update webui build output
Aleksander Grygier [Fri, 26 Sep 2025 16:35:42 +0000 (18:35 +0200)]
Allow viewing conversations even when llama server is down (#16255)
* webui: allow viewing conversations and sending messages even if llama-server is down
- Cached llama.cpp server properties in browser localStorage on startup, persisting successful fetches and reloading them when refresh attempts fail so the chat UI continues to render while the backend is unavailable.
- Cleared the stored server properties when resetting the store to prevent stale capability data after cache-backed operation.
- Kept the original error-splash behavior when no cached props exist so fresh installs still surface a clear failure state instead of rendering stale data.
* feat: Add UI for `props` endpoint unavailable + cleanup logic
* webui: extend cached props fallback to offline errors
Treat connection failures (refused, DNS, timeout, fetch) the same way as
server 5xx so the warning banner shows up when cache is available, instead
of falling back to a full error screen.
* webui: Left the chat form enabled when a server warning is present so operators can keep sending messages
e.g., to restart the backend over llama-swap, even while cached /props data is in use
* chore: update webui build output
---------
Co-authored-by: Pascal <redacted>
Isaac McFadyen [Fri, 26 Sep 2025 15:36:48 +0000 (11:36 -0400)]
webui: switch to hash-based routing (alternative of #16079) (#16157)
* Switched web UI to hash-based routing
* Added hash to missed goto function call
* Removed outdated SPA handling code
* Fixed broken sidebar home link
Aleksander Grygier [Fri, 26 Sep 2025 13:59:07 +0000 (15:59 +0200)]
Always show message actions for mobile UI + improvements for user message sizing (#16076)
Radoslav Gerganov [Fri, 26 Sep 2025 13:09:34 +0000 (16:09 +0300)]
codeowners : add rgerganov as owner of RPC [no ci] (#16279)
Aleksei Nikiforov [Fri, 26 Sep 2025 13:00:44 +0000 (15:00 +0200)]
mtmd : fix uninitialized variable in bicubic_resize (#16275)
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aaron Teo <redacted>
Georgi Gerganov [Fri, 26 Sep 2025 11:14:28 +0000 (14:14 +0300)]
metal : report OOM errors (#16274)
Adrien Gallouët [Fri, 26 Sep 2025 11:12:19 +0000 (13:12 +0200)]
common : use cpp-httplib as a cURL alternative for downloads (#16185)
* vendor : update httplib
Signed-off-by: Adrien Gallouët <redacted>
* common : use cpp-httplib as a cURL alternative for downloads
The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option.
Signed-off-by: Adrien Gallouët <redacted>
* ggml : Bump to Windows 10
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Adrien Gallouët [Fri, 26 Sep 2025 10:39:35 +0000 (12:39 +0200)]
build : fix build-ios-device (#16257)
Signed-off-by: Adrien Gallouët <redacted>
Aaron Teo [Fri, 26 Sep 2025 10:27:25 +0000 (18:27 +0800)]
ggml-cpu: implement MXFP4 SIMD for s390x (#16193)
* ggml-cpu: impl mxfp4 s390x
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing s = sumf
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect kval_mxfp4 type
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework mxfp4
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing delta calc
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo for vec_splats
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: expand to 2 blocks per loop
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add unroll to boost perf
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: back to 1 block per loop to test perf
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: back to 1 block per loop to test perf"
This reverts commit 1fe55724e2dc295701101bf838bdd4a512237492.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rm unroll from single block
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>