git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
alex-spacemit [Mon, 29 Sep 2025 14:50:44 +0000 (22:50 +0800)]
ggml: riscv: add riscv spacemit backend (#15288)
* ggml: add spacemit backend
Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23
* add new line at end of file
Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2
* add riscv zba extension limit
Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce
* fixed for review comments, file renamed and format
Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce
* fixed for code format, after clang-format
Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2
* use _Float16 instead of __fp16
Change-Id: I039fb02bb95270e641bc4442204e658735859d43
* add ci for riscv64-spacemit-ime-native
Change-Id: I711c1033061df1a289ea77891b2997599dfe8279
* update debian-13-riscv64-spacemit-ime-native ci label
Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a
* remove license comment for spacemit ime
Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3
* upgrade binutils for gcc ime
Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45
* add spacemit ime cross jobs
Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6
* remove native compile for riscv64-spacemit-ime
Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e
* ci : add caching for spacemit ime cross toolchain
Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de
* ci: bug fixed for cache path and env
Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a
* Update .github/workflows/build-linux-cross.yml for cache path
Co-authored-by: Sigbjørn Skjæret <redacted>
* bugfixed for build-linux-cross.yml, syntax error
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: cailinxi <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Georgi Gerganov [Mon, 29 Sep 2025 13:50:52 +0000 (16:50 +0300)]
sync : ggml
Georgi Gerganov [Mon, 29 Sep 2025 13:49:11 +0000 (16:49 +0300)]
sync : whisper.cpp (ggml/1359)
* ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (whisper/3426)
* sync : whisper.cpp
Daniel Bevenius [Fri, 26 Sep 2025 15:34:42 +0000 (17:34 +0200)]
ggml : remove -dev suffix from release version (ggml/1355)
This commit removes the `-dev` suffix from the version string in
CMakeLists.txt and the release script. The version will now be
formatted as `MAJOR.MINOR.PATCH`.
Daniel Bevenius [Thu, 25 Sep 2025 12:39:05 +0000 (14:39 +0200)]
ggml : bump version to 0.9.3 (ggml/1353)
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : prepare for development of 0.9.2-dev
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : bump version to 0.9.1
Rafal Lewczuk [Mon, 29 Sep 2025 11:17:09 +0000 (13:17 +0200)]
ggml-backend : add root cause in error message if loading backend library fails (#16172)
This PR adds additional information to the error message emitted when loading a backend library via ld_load_library() fails. This helps spot why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
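A minimal sketch of the idea, assuming a POSIX dlopen-based loader (names are illustrative, not the actual ggml-backend code):
```
#include <dlfcn.h>
#include <cstdio>

void * load_backend_library(const char * path) {
    void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (handle == nullptr) {
        // dlerror() reports the root cause: missing library, missing
        // dependency, unresolved symbol, etc.
        fprintf(stderr, "failed to load backend '%s': %s\n", path, dlerror());
    }
    return handle;
}
```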
Sigbjørn Skjæret [Mon, 29 Sep 2025 09:09:00 +0000 (11:09 +0200)]
ggml : check cuda and metal argsort limits and add test (#16323)
* check cuda argsort limits and add test
* add metal check
Aleksander Grygier [Mon, 29 Sep 2025 08:37:20 +0000 (10:37 +0200)]
Improve Mobile UI for dialogs and action dropdowns (#16222)
* fix: Always show conversation item actions
* feat: Improve Alert Dialog and Dialog mobile UI
* feat: Add settings reset to default confirmation
* fix: Close Edit dialog on save
* chore: update webui build output
* webui: implement proper z-index system and scroll management
- Add CSS variable for centralized z-index control
- Fix dropdown positioning with Settings dialog conflicts
- Prevent external scroll interference with proper event handling
- Clean up hardcoded z-index values for maintainable architecture
* webui: ensured the settings dialog enforces dynamic viewport height on mobile while retaining existing desktop sizing overrides
* feat: Use `dvh` instead of computed px height for dialogs max height on mobile
* chore: update webui build output
* feat: Improve Settings fields UI
* chore: update webui build output
* chore: update webui build output
---------
Co-authored-by: Pascal <redacted>
Pascal [Mon, 29 Sep 2025 07:08:41 +0000 (09:08 +0200)]
fix: preserved zero values in chat settings inputs and textareas by switching to nullish coalescing for field values and default placeholders (#16312)
Vinkal [Mon, 29 Sep 2025 07:03:12 +0000 (12:33 +0530)]
llama-cli: prevent spurious assistant token (#16202)
* tools/main: llama-cli: prevent spurious assistant token (#13402)
During prompt ingestion, prompt tokens are accepted into the sampler history (for repetition penalties). The conversation-mode path then appended `common_sampler_last(smpl)` to `assistant_ss` before any new token was sampled. At that point, "last" was a prompt-side token (e.g., an input prefix), so the assistant chat message began with an extra piece.
Fix: append to `assistant_ss` only for a newly sampled (non-EOG) token. This affects only chat message assembly (`assistant_ss` / `chat_msgs` / `common_chat_format_single`); terminal stdout is unchanged. Sampling order/logits are unchanged. A sketch of the fix follows this entry.
Fixes #13402.
Signed-off-by: Vinkal Chudgar <redacted>
* Update tools/main/main.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* tools/main: remove outdated comment
Signed-off-by: Vinkal Chudgar <redacted>
---------
Signed-off-by: Vinkal Chudgar <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
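A sketch of the fix described above, using the common-sampler API from llama.cpp's common library (the surrounding variable names are illustrative):
```
// only a newly sampled, non-EOG token is appended to the assistant message;
// prompt-side tokens accepted into the sampler history no longer leak in
const llama_token id = common_sampler_sample(smpl, ctx, -1);
common_sampler_accept(smpl, id, /* accept_grammar = */ true);

if (params.conversation_mode && !llama_vocab_is_eog(vocab, id)) {
    assistant_ss << common_token_to_piece(ctx, id);
}
```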
ddh0 [Mon, 29 Sep 2025 06:30:45 +0000 (01:30 -0500)]
perplexity : show more kl-divergence data (#16321)
Adds additional percentile data displayed in the output of `llama-perplexity --kl-divergence`:
- Added the 95th percentile (mirroring the existing 5th percentile)
- Added the 0.1 percentile (mirroring the existing 99.9 percentile)
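For reference, a nearest-rank percentile helper in the spirit of the change (a sketch assuming a non-empty vector, not the code in tools/perplexity):
```
#include <algorithm>
#include <vector>

static double percentile(std::vector<float> v, double p) {   // p in [0, 100]
    std::sort(v.begin(), v.end());
    const size_t idx = (size_t) (p / 100.0 * (double) (v.size() - 1));
    return v[idx];
}

// reported: percentile(kld, 0.1), percentile(kld, 5),
//           percentile(kld, 95),  percentile(kld, 99.9)
```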
Georgi Gerganov [Mon, 29 Sep 2025 05:41:28 +0000 (08:41 +0300)]
ggml : fix dependencies for ggml_set_rows (#16318)
Jeff Bolz [Mon, 29 Sep 2025 04:50:37 +0000 (23:50 -0500)]
vulkan: Fix validation failure in quantized flash attention (#16292)
Sigbjørn Skjæret [Sun, 28 Sep 2025 21:15:03 +0000 (23:15 +0200)]
ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (#16307)
* fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32
* add test that fails on simd
crat0z [Sun, 28 Sep 2025 18:13:50 +0000 (14:13 -0400)]
common : fix reasoning before forced tool call via tool_choice = required (#16264)
* common : fix reasoning before forced tool call via tool_choice = required
* common : improve reasoning and commentary handling when tool_choice is required
(cherry picked from commit c746984956d6882c2de73d53ae2bb3bdf889e475)
---------
Co-authored-by: Alde Rojas <redacted>
R0CKSTAR [Sun, 28 Sep 2025 14:38:15 +0000 (22:38 +0800)]
ci : fix musa docker build (#16306)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Sun, 28 Sep 2025 11:25:58 +0000 (19:25 +0800)]
devops: switch to using ubuntu-22.04-s390x image (#16302)
Signed-off-by: Aaron Teo <redacted>
Imad Saddik [Sun, 28 Sep 2025 11:04:46 +0000 (12:04 +0100)]
Fixed a few typos in the README of the LLaMA.cpp HTTP Server [no ci] (#16297)
Jeff Bolz [Sun, 28 Sep 2025 06:38:37 +0000 (01:38 -0500)]
vulkan: 64-bit im2col (#16135)
* vulkan: 64-bit im2col
Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.
* fix validation error for large im2col
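A worked example of why 64-bit addressing is needed (the sizes are illustrative):
```
#include <cstdint>
#include <cstdio>

int main() {
    // a large convolution: 2048x2048 input, 512 channels, 3x3 kernel
    const uint64_t rows = 2048ull * 2048;   // output pixels
    const uint64_t cols = 512ull * 3 * 3;   // channels * kernel elements
    // ~1.9e10 elements -- far beyond the 2^31 range of a 32-bit index,
    // so the shader must compute offsets in 64 bits
    printf("im2col elements: %llu\n", (unsigned long long) (rows * cols));
    return 0;
}
```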
Georgi Gerganov [Sun, 28 Sep 2025 06:34:44 +0000 (09:34 +0300)]
metal : extend mat-mat multiplication support (#16225)
* metal : support mul_mm with src1->type == GGML_TYPE_F16
* metal : support mul_mm_id with src1->type == GGML_TYPE_F16
[no ci]
* metal : mul_mm support ne00 % 32 != 0
* metal : support mul_mm_id with ne00 % 32 != 0
* cont : remove unnecessary unrolls
* cont : simplify data loading
* metal : optimize mul_mm when output bounds checks are not needed
Georgi Gerganov [Sun, 28 Sep 2025 06:34:05 +0000 (09:34 +0300)]
metal : fuse non-sequential nodes (#16102)
* metal : fuse non-sequential nodes
* cont : add comment
* cont : simplify bounds checks
Jeff Bolz [Sun, 28 Sep 2025 01:36:34 +0000 (20:36 -0500)]
vulkan: handle mat_mul with A matrix > 4GB (#16176)
* vulkan: handle mat_mul with A matrix > 4GB
This change splits mat_mul operations with huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has very large M.
Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so stride should be set after.
* build fixes
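A host-side sketch of the M-dimension chunking (the 4 GB budget and names are illustrative, not the actual Vulkan backend code):
```
#include <algorithm>
#include <cstdint>

void mat_mul_chunked(uint64_t M, uint64_t K, uint64_t elem_size) {
    const uint64_t max_bytes      = 4ull << 30;   // per-dispatch budget
    const uint64_t rows_per_chunk = std::max<uint64_t>(1, max_bytes / (K * elem_size));
    for (uint64_t m0 = 0; m0 < M; m0 += rows_per_chunk) {
        const uint64_t m_cnt = std::min(rows_per_chunk, M - m0);
        // dispatch mul_mm on rows [m0, m0 + m_cnt) of A, writing the
        // corresponding rows of the output
        (void) m_cnt;
    }
}
```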
Jeff Bolz [Sat, 27 Sep 2025 20:43:39 +0000 (16:43 -0400)]
vulkan: support arbitrary KV dimension in flash attention (#16160)
The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).
Acly [Sat, 27 Sep 2025 20:41:03 +0000 (22:41 +0200)]
vulkan : make the vulkan.hpp dynamic dispatcher instance private (#16224)
* don't use VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE which can cause conflicts if application or other libraries do the same
Aleksander Grygier [Sat, 27 Sep 2025 17:56:40 +0000 (19:56 +0200)]
Show message actions by default (#16289)
Aman Gupta [Sat, 27 Sep 2025 16:49:32 +0000 (00:49 +0800)]
CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (#16277)
* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32
This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.
My tests on an RTX 3090 show that this is faster than the cuBLAS fallback
for f16 up to bs=64, and for f32 up to bs=32.
* Review: refactor if statement
Johannes Gäßler [Sat, 27 Sep 2025 16:45:07 +0000 (18:45 +0200)]
CUDA: refactor and deduplicate vector FA kernels (#16208)
* CUDA: refactor and deduplicate vector FA kernels
Dmytro Minochkin [Sat, 27 Sep 2025 16:26:46 +0000 (19:26 +0300)]
vulkan: throw system error instead of SIGABRT during init on older devices (#16156)
* Throw system error on old Vulkan driver rather than SIGABRT
* Optionally handle any potential error in vulkan init
Adrien Gallouët [Sat, 27 Sep 2025 16:17:08 +0000 (18:17 +0200)]
server : remove old LLAMA_SERVER_SSL (#16290)
Signed-off-by: Adrien Gallouët <redacted>
Jeff Bolz [Sat, 27 Sep 2025 10:36:11 +0000 (06:36 -0400)]
vulkan: support GET_ROWS for k-quants (#16235)
The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.
Adrien Gallouët [Sat, 27 Sep 2025 09:12:46 +0000 (11:12 +0200)]
build : add LLAMA_OPENSSL option (#16287)
Introduce a new `LLAMA_OPENSSL` option, enabled by default.
This preserves the previous default (libcurl first, OpenSSL as fallback),
while allowing OpenSSL to be disabled if desired.
Signed-off-by: Adrien Gallouët <redacted>
Vinkal [Fri, 26 Sep 2025 21:28:29 +0000 (02:58 +0530)]
model : make minicpm embedding_scale, residual_scale and logit_scale optional with legacy defaults (#16273)
* minicpm: make GGUF scaling keys optional with legacy defaults
Older MiniCPM GGUFs do not include the scaling metadata keys (minicpm.embedding_scale, minicpm.residual_scale, minicpm.logit_scale). The loader currently treats these as required, so quantization fails with:
key not found in model: minicpm.embedding_scale
This change restores backward compatibility by treating these keys as optional in the loader and using the older MiniCPM scaling values:
embedding_scale = 12.0f
residual_scale = 1.4f / sqrt(n_layer)
logit_scale = 256.0f / n_embd
When the GGUF provides the keys, their values override the defaults; otherwise the legacy defaults are used. Newer GGUFs that already include these keys are unaffected. A sketch of the loader change follows this entry.
Fixes: #16192
Signed-off-by: Vinkal Chudgar <redacted>
* Update src/llama-model.cpp
Committed as suggested. Thanks!
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Signed-off-by: Vinkal Chudgar <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
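A sketch of the loader change, following llama.cpp's optional-key pattern (the LLM_KV_* names are assumed from memory):
```
// legacy defaults first; GGUF keys, when present, override them
hparams.f_embedding_scale = 12.0f;
hparams.f_residual_scale  = 1.4f / sqrtf((float) hparams.n_layer);
hparams.f_logit_scale     = 256.0f / (float) hparams.n_embd;

ml.get_key(LLM_KV_EMBEDDING_SCALE, hparams.f_embedding_scale, /* required = */ false);
ml.get_key(LLM_KV_RESIDUAL_SCALE,  hparams.f_residual_scale,  /* required = */ false);
ml.get_key(LLM_KV_LOGIT_SCALE,     hparams.f_logit_scale,     /* required = */ false);
```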
Aaron Teo [Fri, 26 Sep 2025 18:03:33 +0000 (02:03 +0800)]
devops: add s390x & ppc64le CI (#15925)
* devops: move s390x and ppc64le ci build
we have access to ubuntu-24.04-s390x and ppc64le images now
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le for now since they have compiler errors
Signed-off-by: Aaron Teo <redacted>
* devops: stop warnings as errors
Signed-off-by: Aaron Teo <redacted>
* devops: switch to non-macro flag
Signed-off-by: Aaron Teo <redacted>
* devops: going the llama macro route
Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian gguf test models
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le to test s390x, check test build
Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.inp files for big-endian tests
Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.out files for big-endian too
Signed-off-by: Aaron Teo <redacted>
* devops: add python setup and endian byteswap
Signed-off-by: Aaron Teo <redacted>
* devops: poor thing does not have s390x python3
Signed-off-by: Aaron Teo <redacted>
* devops: add missing rust compiler for s390x
Signed-off-by: Aaron Teo <redacted>
* devops: try rust actions runner
Signed-off-by: Aaron Teo <redacted>
* Revert "devops: try rust actions runner"
This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.
Signed-off-by: Aaron Teo <redacted>
* devops: try a different path for rust
Signed-off-by: Aaron Teo <redacted>
* devops: dump home directory and user info
Signed-off-by: Aaron Teo <redacted>
* devops: install gguf-py only
Signed-off-by: Aaron Teo <redacted>
* devops: missed relative path
Signed-off-by: Aaron Teo <redacted>
* devops: remove big-endian files since local swapping is working
Signed-off-by: Aaron Teo <redacted>
* devops: revert test-tokenizer-0 cmakelists
Signed-off-by: Aaron Teo <redacted>
* Fix unicode flags conversion from and to uint16_t
Bitfields are allocated in a different order on s390x; an endian-independent sketch follows this entry.
Signed-off-by: Aaron Teo <redacted>
* Simplify byteswap command
Signed-off-by: Aaron Teo <redacted>
* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs
Signed-off-by: Aaron Teo <redacted>
* Fix endianness detection in vocab loader
Signed-off-by: Aaron Teo <redacted>
* Disable test-thread-safety on s390x
In this test a model is downloaded, then immediately loaded to check if more downloads are needed, and then used for the test. There is no clean way to separate all those steps to add byteswapping between them, so just skip this test.
Signed-off-by: Aaron Teo <redacted>
* Fix q8_0 test in test-quantize-fns
vec_signed uses unexpected rounding mode.
Explicitly use different rounding function.
Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian stories260K
Signed-off-by: Aaron Teo <redacted>
* devops: add s390x test-eval-callback
Signed-off-by: Aaron Teo <redacted>
* devops: fix test does not exist
Signed-off-by: Aaron Teo <redacted>
* devops: fix model not found llama-eval-callback
Signed-off-by: Aaron Teo <redacted>
* Fix q3_K dot product error in test-quantize-fns on s390x
Array q8bytes had only 4 elements allocated, but 8 elements were accessed. This led to an out-of-bounds write, a later out-of-bounds read of the overwritten values, and an incorrect result.
Signed-off-by: Aaron Teo <redacted>
* devops: re-enable ppc64le for testing
Signed-off-by: Aaron Teo <redacted>
* devops: activate test-thread-safety for s390x
Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le tests
For some reason it keeps failing the test-thread-safety tests, and I do not
have a machine that is able to replicate the failures.
Signed-off-by: Aaron Teo <redacted>
* devops: LLAMA_FATAL_WARNINGS=ON
Signed-off-by: Aaron Teo <redacted>
* Correct repository URL for s390x for test-thread-safety model
Signed-off-by: Aaron Teo <redacted>
* Fix fs_get_cache_directory
Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers.
Signed-off-by: Aaron Teo <redacted>
* Re-enable CI for ppc64le
Signed-off-by: Aaron Teo <redacted>
* Fortify ggml_rope_impl
Only memcpy data from sections argument if it's non-NULL.
Signed-off-by: Aaron Teo <redacted>
* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way
* Update URL for big-endian model
* Update .github/workflows/build.yml
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update remaining mentions of BE models to ggml-org/models repo
---------
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
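On the unicode flags bitfield issue mentioned above: C++ bitfield allocation order is implementation-defined, so round-tripping the struct through a raw uint16_t scrambles the flags on big-endian s390x. A sketch of an endian-independent layout (field names abridged, not the real struct):
```
#include <cstdint>

struct unicode_cpt_flags {
    // explicit masks instead of bitfields: bit positions are fixed
    // regardless of the target's bitfield allocation order
    enum : uint16_t { UNDEFINED = 1 << 0, NUMBER = 1 << 1, LETTER = 1 << 2 };
    uint16_t flags = 0;

    bool     is_number() const { return flags & NUMBER; }
    uint16_t as_uint()   const { return flags; }   // same value on both endiannesses
};
```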
Aleksander Grygier [Fri, 26 Sep 2025 17:25:29 +0000 (19:25 +0200)]
Enhance text file detection logic for file attachments (#16199)
* feat: Enhances text file detection logic
* chore: Build static `webui` output
* chore: update webui build output
Aleksander Grygier [Fri, 26 Sep 2025 16:35:42 +0000 (18:35 +0200)]
Allow viewing conversations even when llama server is down (#16255)
* webui: allow viewing conversations and sending messages even if llama-server is down
- Cached llama.cpp server properties in browser localStorage on startup, persisting successful fetches and reloading them when refresh attempts fail so the chat UI continues to render while the backend is unavailable.
- Cleared the stored server properties when resetting the store to prevent stale capability data after cache-backed operation.
- Kept the original error-splash behavior when no cached props exist so fresh installs still surface a clear failure state instead of rendering stale data.
* feat: Add UI for `props` endpoint unavailable + cleanup logic
* webui: extend cached props fallback to offline errors
Treat connection failures (refused, DNS, timeout, fetch) the same way as
server 5xx so the warning banner shows up when cache is available, instead
of falling back to a full error screen.
* webui: Left the chat form enabled when a server warning is present so operators can keep sending messages
e.g., to restart the backend over llama-swap, even while cached /props data is in use
* chore: update webui build output
---------
Co-authored-by: Pascal <redacted>
Isaac McFadyen [Fri, 26 Sep 2025 15:36:48 +0000 (11:36 -0400)]
webui: switch to hash-based routing (alternative of #16079) (#16157)
* Switched web UI to hash-based routing
* Added hash to missed goto function call
* Removed outdated SPA handling code
* Fixed broken sidebar home link
Aleksander Grygier [Fri, 26 Sep 2025 13:59:07 +0000 (15:59 +0200)]
Always show message actions for mobile UI + improvements for user message sizing (#16076)
Radoslav Gerganov [Fri, 26 Sep 2025 13:09:34 +0000 (16:09 +0300)]
codeowners : add rgerganov as owner of RPC [no ci] (#16279)
Aleksei Nikiforov [Fri, 26 Sep 2025 13:00:44 +0000 (15:00 +0200)]
mtmd : fix uninitialized variable in bicubic_resize (#16275)
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aaron Teo <redacted>
Georgi Gerganov [Fri, 26 Sep 2025 11:14:28 +0000 (14:14 +0300)]
metal : report OOM errors (#16274)
Adrien Gallouët [Fri, 26 Sep 2025 11:12:19 +0000 (13:12 +0200)]
common : use cpp-httplib as a cURL alternative for downloads (#16185)
* vendor : update httplib
Signed-off-by: Adrien Gallouët <redacted>
* common : use cpp-httplib as a cURL alternative for downloads
The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option. A usage sketch of the vendored
library follows at the end of this entry.
Signed-off-by: Adrien Gallouët <redacted>
* ggml : Bump to Windows 10
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
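A minimal download with the vendored library's API (the URL is illustrative; HTTPS requires building with CPPHTTPLIB_OPENSSL_SUPPORT):
```
#define CPPHTTPLIB_OPENSSL_SUPPORT
#include "httplib.h"
#include <fstream>

int main() {
    httplib::Client cli("https://huggingface.co");
    cli.set_follow_location(true);   // model downloads are often redirected
    auto res = cli.Get("/ggml-org/models/resolve/main/README.md");
    if (res && res->status == 200) {
        std::ofstream("README.md", std::ios::binary) << res->body;
    }
    return 0;
}
```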
Adrien Gallouët [Fri, 26 Sep 2025 10:39:35 +0000 (12:39 +0200)]
build : fix build-ios-device (#16257)
Signed-off-by: Adrien Gallouët <redacted>
Aaron Teo [Fri, 26 Sep 2025 10:27:25 +0000 (18:27 +0800)]
ggml-cpu: implement MXFP4 SIMD for s390x (#16193)
* ggml-cpu: impl mxfp4 s390x
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing s = sumf
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect kval_mxfp4 type
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework mxfp4
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing delta calc
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo for vec_splats
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: expand to 2 blocks per loop
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add unroll to boost perf
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: back to 1 block per loop to test perf
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: back to 1 block per loop to test perf"
This reverts commit 1fe55724e2dc295701101bf838bdd4a512237492.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rm unroll from single block
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
Radoslav Gerganov [Fri, 26 Sep 2025 10:19:23 +0000 (13:19 +0300)]
ci : create git tags for released docker images (#16008)
* ci : create git tags for released docker images
When releasing a docker image for build number X, we should also create
the corresponding git tag. This allows users to easily check out the
corresponding source tree for a given docker image.
* Update .github/workflows/docker.yml
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update .github/workflows/docker.yml
Co-authored-by: Sigbjørn Skjæret <redacted>
* Apply suggestion from @CISC
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Daniel Bevenius [Fri, 26 Sep 2025 05:53:36 +0000 (07:53 +0200)]
codeowners : add danbev as owner of build-xcframework.sh [no ci] (#16268)
R0CKSTAR [Fri, 26 Sep 2025 00:56:38 +0000 (08:56 +0800)]
musa: upgrade musa sdk to 4.3.0 (#16240)
Signed-off-by: Xiaodong Ye <redacted>
R0CKSTAR [Fri, 26 Sep 2025 00:56:10 +0000 (08:56 +0800)]
musa: fix build warnings (#15611)
Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Thu, 25 Sep 2025 17:50:28 +0000 (19:50 +0200)]
model : add GroveMoE support (#15510)
* add GroveMoE support
* remove constexpr that fails on certain compilers
* revert crude scalar div implementation, use cast
* build_attn_inp_kv_unified -> build_attn_inp_kv
* fix build_attn
* re-apply ffn_exps regex changes
Aaron Teo [Thu, 25 Sep 2025 15:38:10 +0000 (23:38 +0800)]
vendors: update miniaudio version (#16212)
* vendor: update miniaudio.h
Signed-off-by: Aaron Teo <redacted>
* vendor: update miniaudio.h
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
rtaluyev [Thu, 25 Sep 2025 15:20:34 +0000 (18:20 +0300)]
readme : update bindings (#16144)
Link to Java JNA bindings to llama.cpp native libraries
Aman Gupta [Thu, 25 Sep 2025 14:35:05 +0000 (22:35 +0800)]
CUDA: add a fused top-K MoE kernel (#16130)
* CUDA: add a fused top-K MoE kernel
This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory
It is intended as a fusion of the softmax->top-k->get_rows pipeline for MoE models; a reference sketch follows this entry.
* Refactor into ggml_cuda_should_use_topk_moe
* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before
* Review: format + micro-optimizations
* Fix bug: fix tie breakers
* Add optional norm + clean-up code
* Use smem for final write
* Add bounds check
* Use better memory pattern for writeback
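A plain C++ reference of the fused pipeline for a single token; the CUDA kernel performs these steps in one pass, this is only a sketch of the math:
```
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

void topk_moe_ref(const float * logits, int n_experts, int k,
                  std::vector<float> & weights, std::vector<int> & ids) {
    // 1. softmax over the expert logits
    const float mx = *std::max_element(logits, logits + n_experts);
    std::vector<float> p(n_experts);
    float sum = 0.0f;
    for (int i = 0; i < n_experts; i++) { p[i] = std::exp(logits[i] - mx); sum += p[i]; }
    for (float & x : p) x /= sum;
    // 2. select the k largest probabilities, keeping expert ids
    std::vector<int> idx(n_experts);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](int a, int b) { return p[a] > p[b]; });
    // 3. write weights + ids
    weights.resize(k); ids.resize(k);
    for (int i = 0; i < k; i++) { ids[i] = idx[i]; weights[i] = p[idx[i]]; }
}
```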
Daniel Bevenius [Thu, 25 Sep 2025 10:02:36 +0000 (12:02 +0200)]
model-conversion : add embedding prompt file support (#15871)
This commit adds support for passing a prompt file to the model
conversion targets/scripts. It also updates the logits.cpp to print out
embedding information in the same format as when running the original
embedding model.
The motivation for this is that it allows us to pass files of different
sizes when running the converted models and validating the logits.
This can be particularly important when testing the sliding window
functionality of models where the sequence length needs to exceed a
certain number of tokens to trigger the sliding window logic.
Daniel Bevenius [Thu, 25 Sep 2025 09:36:47 +0000 (11:36 +0200)]
server : add support for external server for tests (#16243)
This commit adds support for using an externally started llama-server
instance for the server tests. This can be enabled by setting the
DEBUG_EXTERNAL environment variable.
The motivation for this is to allow debugging of the server itself
when investigating a test failure. Instructions for how to do this are
added to the README.md file in the tests directory.
junchao-zhao [Thu, 25 Sep 2025 09:22:55 +0000 (17:22 +0800)]
ggml : fix loongarch lsx compilation error (#15864)
Johannes Gäßler [Thu, 25 Sep 2025 09:12:27 +0000 (11:12 +0200)]
docs: fix typo [no ci] (#16244)
Douglas Hanley [Thu, 25 Sep 2025 08:53:09 +0000 (03:53 -0500)]
llama : add support for qwen3 reranker (#15824)
Georgi Gerganov [Thu, 25 Sep 2025 08:30:16 +0000 (11:30 +0300)]
metal : fuse NORM + MUL + ADD, support non-multiples of 4 (#16220)
* metal : fuse NORM + MUL + ADD
* metal : support norms of non-multiple of 4
* cont : fix comment [no ci]
Georgi Gerganov [Thu, 25 Sep 2025 08:29:42 +0000 (11:29 +0300)]
metal : relax reorder conditions (#16216)
Georgi Gerganov [Thu, 25 Sep 2025 08:29:08 +0000 (11:29 +0300)]
metal : restore im2col perf (#16219)
Radoslav Gerganov [Thu, 25 Sep 2025 07:20:02 +0000 (10:20 +0300)]
rpc : use ggml logging facilities
Use RPC_DEBUG environment variable to enable debug messages.
Add helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.
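A sketch of the shape of that macro (illustrative, not the actual RPC backend code):
```
#include <cstdlib>

static bool rpc_debug_enabled() {
    // the environment variable is checked once and cached
    static const bool enabled = std::getenv("RPC_DEBUG") != nullptr;
    return enabled;
}

#define LOG_DBG(...) \
    do { if (rpc_debug_enabled()) GGML_LOG_DEBUG(__VA_ARGS__); } while (0)
```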
Aaron Teo [Thu, 25 Sep 2025 05:06:30 +0000 (13:06 +0800)]
codeowners: add ownership of zdnn backend [no ci] (#16232)
add @Andreas-Krebbel to owners of zDNN backend
Signed-off-by: Aaron Teo <redacted>
Eve [Thu, 25 Sep 2025 05:06:06 +0000 (05:06 +0000)]
ci: run the x64 and arm ci on the github machines instead (#16183)
* run the x64 ci on regular machines
* set up the same thing for arm
fix test-quantize-perf just like #12306
* try to disable sve
* add another sve run
Aaron Teo [Thu, 25 Sep 2025 03:36:30 +0000 (11:36 +0800)]
devops: fix s390x docker release failure (#16231)
Aaron Teo [Wed, 24 Sep 2025 16:25:04 +0000 (00:25 +0800)]
codeowners: add ownership of zdnn backend [no ci] (#16229)
add @AlekseiNikiforovIBM to owners of zDNN backend
Signed-off-by: Aaron Teo <redacted>
Johannes Gäßler [Wed, 24 Sep 2025 14:53:48 +0000 (16:53 +0200)]
llama: print memory breakdown on exit (#15860)
* llama: print memory breakdown on exit
Acly [Wed, 24 Sep 2025 14:17:49 +0000 (16:17 +0200)]
ggml : split graph allocations according to backend max buffer size (#15815)
* ggml : make gallocr respect the backend's max buffer size
* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers (a simplified sketch follows this entry)
* vulkan: report the actual max allocation size in buffer type interface
* fix missing newline, apple-clang warning
* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to use suballocation_block_size as max chunk size for vulkan.
* track (chunk, offset) pairs instead of "global" offsets through gallocr.
* simpler, don't need loops to map between local/global offsets
* touches more code
* fix dyn_tallocr_max_size and initialization
* fix memory leak when buffers are reused due to same buffer type appearing multiple times
* make vbuffer allocation follow the same logic as backend_buffer did before
* continue to use leftover unallocated space of previous chunks after a new one has been created
* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size
* refactor: move adding new free block and new chunk into separate functions
* allocate chunks individually with a separate free-blocks list for each one
* needs a bit more memory/allocations/indirections, but code is simpler
* fix warnings (missing static) & debug checks
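A much-simplified sketch of the chunked-allocator idea (the real implementation is C, in ggml-alloc.c, with a free-block list per chunk; names are illustrative):
```
#include <cstddef>
#include <utility>
#include <vector>

struct chunk { size_t size, used; };

struct dyn_tallocr {
    size_t max_chunk_size;   // the backend's max buffer size
    std::vector<chunk> chunks;

    // returns a (chunk index, offset) pair instead of a single global offset
    std::pair<size_t, size_t> alloc(size_t sz) {
        for (size_t i = 0; i < chunks.size(); i++) {
            if (chunks[i].size - chunks[i].used >= sz) {
                const size_t off = chunks[i].used;
                chunks[i].used += sz;
                return {i, off};
            }
        }
        // no room: create a new chunk on demand (the last one may exceed the max)
        chunks.push_back({sz > max_chunk_size ? sz : max_chunk_size, sz});
        return {chunks.size() - 1, 0};
    }
};
```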
Tarek Dakhran [Wed, 24 Sep 2025 11:42:26 +0000 (13:42 +0200)]
model : add label for LiquidAI LFM2-2.6B model (#16204)
* model : add label for LiquidAI LFM2-2.6B model
HF link: [LiquidAI/LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B).
Support for GGUF conversion and inference is added in #14620.
However, due to similar `n_embd`, it identifies as a 1.2B model.
Fix the label by using `n_ff` to identify the model instead (a sketch follows this entry).
Output of `llama-bench`:
```
| model | size | params | backend | threads | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2 1.2B F16 | 2.18 GiB | 1.17 B | CPU | 10 | pp512 | 223.97 ± 5.32 |
| lfm2 2.6B F16 | 4.79 GiB | 2.57 B | CPU | 10 | pp512 | 92.53 ± 4.14 |
| lfm2 350M F16 | 676.25 MiB | 354.48 M | CPU | 10 | pp512 | 725.52 ± 11.70 |
| lfm2 700M F16 | 1.38 GiB | 742.49 M | CPU | 10 | pp512 | 336.22 ± 12.93 |
```
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
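The fix amounts to keying the size label on the FFN width rather than the embedding width; a sketch (the n_ff values and type names here are assumptions, the real mapping is in src/llama-model.cpp):
```
switch (hparams.n_ff()) {
    case  8192: type = LLM_TYPE_1_2B; break;
    case 12288: type = LLM_TYPE_2_6B; break;   // similar n_embd, different n_ff
    default:    type = LLM_TYPE_UNKNOWN;
}
```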
Jie Fu (傅杰) [Wed, 24 Sep 2025 08:25:26 +0000 (16:25 +0800)]
model-conversion : fix causal-verify-logits failing with model names containing "." (#16215)
Signed-off-by: Jie Fu <redacted>
Uilian Ries [Wed, 24 Sep 2025 06:53:47 +0000 (08:53 +0200)]
common : add missing chrono header for common.cpp (#16211)
Signed-off-by: Uilian Ries <redacted>
Sigbjørn Skjæret [Wed, 24 Sep 2025 06:53:20 +0000 (08:53 +0200)]
codeowners : match all requirements files (#16214)
Jie Fu (傅杰) [Wed, 24 Sep 2025 06:46:52 +0000 (14:46 +0800)]
model-conversion : run-org-model.py fails to run on mac m1 (#16213)
Signed-off-by: Jie Fu <redacted>
Daniel Bevenius [Wed, 24 Sep 2025 06:10:09 +0000 (08:10 +0200)]
codeowners : use slash prefix for root files [no ci] (#16210)
This commit adds a leading slash to the paths of root-level files
in the CODEOWNERS file.
The motivation for this is that these entries might otherwise match files
in subdirectories, where other/additional owners would override them.
Refs: https://github.com/ggml-org/llama.cpp/pull/16209#issuecomment-3326434274
Jie Fu (傅杰) [Wed, 24 Sep 2025 04:19:23 +0000 (12:19 +0800)]
model-conversion : fix the make targets in the README.md (#16209)
Fix two incorrect make targets in the readme.
Signed-off-by: Jie Fu <redacted>
Georgi Gerganov [Tue, 23 Sep 2025 17:41:40 +0000 (20:41 +0300)]
ci : disable AMD workflows + update NVIDIA workflows (#16200)
* ci : disable AMD workflows + update NVIDIA workflows
* cont : fixes
* cont : update nvidia vulkan workflows
Georgi Gerganov [Tue, 23 Sep 2025 10:44:25 +0000 (13:44 +0300)]
ci : enable Vulkan workflow on Mac (#16194)
Xiangyan Sun [Tue, 23 Sep 2025 08:58:12 +0000 (01:58 -0700)]
ggml-cpu: Respect cpumask settings (#16164)
Sigbjørn Skjæret [Tue, 23 Sep 2025 08:25:20 +0000 (10:25 +0200)]
ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928)
* fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl
* change initialization to true
Aaron Teo [Tue, 23 Sep 2025 06:53:05 +0000 (14:53 +0800)]
zdnn: refactor codebase + add docs (#16178)
* zdnn: initial matmul refactor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm static from funcs
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update ggml-zdnn.h
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: change header files to hpp
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to common.hpp
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move mulmat forward around
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm inline from utils
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup
Signed-off-by: Aaron Teo <redacted>
* docs: add zDNN docs
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
Daniel Bevenius [Tue, 23 Sep 2025 06:13:22 +0000 (08:13 +0200)]
codeowners : add @danbev to model-conversion example [no ci] (#16190)
This commit adds examples/model-conversion/ to the CODEOWNERS file and
assigns myself (@danbev) as the code owner for this directory.
Aaron Teo [Tue, 23 Sep 2025 05:59:34 +0000 (13:59 +0800)]
devops: add s390x containers (#15915)
* devops: add s390x dockerfile
Signed-off-by: Aaron Teo <redacted>
* devops: add missing ninja
Signed-off-by: Aaron Teo <redacted>
* devops: move s390x docker into cpu docker
Signed-off-by: Aaron Teo <redacted>
* devops: rework s390x docker
Signed-off-by: Aaron Teo <redacted>
* devops: copy more tools
Signed-off-by: Aaron Teo <redacted>
* devops: add server build step
Signed-off-by: Aaron Teo <redacted>
* devops: remove apt clean steps as distroless misses it
Signed-off-by: Aaron Teo <redacted>
* devops: remove apt commands from distroless
Signed-off-by: Aaron Teo <redacted>
* devops: fix shared libs in distroless
Signed-off-by: Aaron Teo <redacted>
* devops: use correct libs path
Signed-off-by: Aaron Teo <redacted>
* devops: fix shared libs
Signed-off-by: Aaron Teo <redacted>
* devops: add collector stage
Signed-off-by: Aaron Teo <redacted>
* devops: fix missing stage ref
Signed-off-by: Aaron Teo <redacted>
* devops: fix permission issue
Signed-off-by: Aaron Teo <redacted>
* devops: fix unknown model loading failures
Signed-off-by: Aaron Teo <redacted>
* devops: attempt at fixing model loading failure
Signed-off-by: Aaron Teo <redacted>
* devops: fix missing ggml shared object
failure to load model
Signed-off-by: Aaron Teo <redacted>
* devops: remove move shared objects
Signed-off-by: Aaron Teo <redacted>
* devops: move libggml-cpu and blas into bin
Signed-off-by: Aaron Teo <redacted>
* devops: finalise hardened server stage
Signed-off-by: Aaron Teo <redacted>
* devops: add cli target
Signed-off-by: Aaron Teo <redacted>
* devops: fix typos
Signed-off-by: Aaron Teo <redacted>
* devops: fix missing shared libraries in base
Signed-off-by: Aaron Teo <redacted>
* devops: update debian target
Signed-off-by: Aaron Teo <redacted>
* devops: formalise llama.cpp loc
Signed-off-by: Aaron Teo <redacted>
* Revert "devops: formalise llama.cpp loc"
This reverts commit 0a7664af8466a15f318ff209e02ac3c4e551cc18.
Signed-off-by: Aaron Teo <redacted>
* devops: formalise llama.cpp loc
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 0a7664af8466a15f318ff209e02ac3c4e551cc18)
Signed-off-by: Aaron Teo <redacted>
* devops: attempt at fixing missing dir
Signed-off-by: Aaron Teo <redacted>
* devops: attempt at making it cache the build
Signed-off-by: Aaron Teo <redacted>
* devops: fix copying process
Signed-off-by: Aaron Teo <redacted>
* devops: make build dir an argument
Signed-off-by: Aaron Teo <redacted>
* Revert "devops: make build dir an argument"
This reverts commit 438698976b8a5181c1e8179600527cfd5a50cc23.
Signed-off-by: Aaron Teo <redacted>
* devops: add build stage for gguf-py
Signed-off-by: Aaron Teo <redacted>
* devops: move gguf-py installation into build stage
Signed-off-by: Aaron Teo <redacted>
* devops: break system packages?
Signed-off-by: Aaron Teo <redacted>
* devops: add rust compiler installer
Signed-off-by: Aaron Teo <redacted>
* devops: fix rustc not found
Signed-off-by: Aaron Teo <redacted>
* devops: remove cache mount to allow rustc to persist
Signed-off-by: Aaron Teo <redacted>
* devops: move rustc installation to another layer
Signed-off-by: Aaron Teo <redacted>
* devops: move gguf-py installation to full stage, fix copying
Signed-off-by: Aaron Teo <redacted>
* devops: remove rustc installation in build
Signed-off-by: Aaron Teo <redacted>
* devops: disable full target for now
Signed-off-by: Aaron Teo <redacted>
* devops: attempting static build
Signed-off-by: Aaron Teo <redacted>
* devops: merge s390x dockerfile into cpu for now
Signed-off-by: Aaron Teo <redacted>
* devops: switch to gcc image for build step
Signed-off-by: Aaron Teo <redacted>
* devops: remove build essentials
Signed-off-by: Aaron Teo <redacted>
* devops: install openblas into base target
Signed-off-by: Aaron Teo <redacted>
* devops: go back to s390x dockerfile
Signed-off-by: Aaron Teo <redacted>
* devops: remove libggml and libblas
Signed-off-by: Aaron Teo <redacted>
* devops: add full target
Signed-off-by: Aaron Teo <redacted>
* devops: add break system packages
Signed-off-by: Aaron Teo <redacted>
* devops: add libjpeg
Signed-off-by: Aaron Teo <redacted>
* devops: add missing cmake dep
Signed-off-by: Aaron Teo <redacted>
* devops: finalise docker images for s390x
Signed-off-by: Aaron Teo <redacted>
* devops: add custom openblas patch
Signed-off-by: Aaron Teo <redacted>
* devops: use libopenblas-dev instead of libopenblas-openmp-dev
Signed-off-by: Aaron Teo <redacted>
* devops: add s390x docker build
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
Daniel Bevenius [Tue, 23 Sep 2025 03:59:03 +0000 (05:59 +0200)]
ggml-cpu : fix typo in gemm comments [no ci] (#16189)
Gabe Goodhart [Mon, 22 Sep 2025 18:40:10 +0000 (12:40 -0600)]
feat: Add conversion support in GraniteHybrid for non-hybrid (all attn) (#16177)
This is a configuration of the hparams in the GraniteHybrid architecture
that devolves to the Granite (or GraniteMoe) architecture (i.e. Granite 3.x).
It may be used for some models in the Granite 4 family, with the
GraniteHybrid architecture acting as a superset arch. Rather than support
it directly in the C++ graph, we simply coerce the architecture flag back
to the correct "granite" or "granitemoe" architecture.
Branch: gabe-l-hart/GraniteNonHybridConversion
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Haiyue Wang [Mon, 22 Sep 2025 17:57:46 +0000 (01:57 +0800)]
clang-tidy : disable warning about performance enum size (#16127)
Disable 'performance-enum-size' checking:
Enum 'llama_token_type' uses a larger base type ('unsigned int', size: 4 bytes)
than necessary for its value set, consider using 'std::uint8_t' (1 byte) as the
base type to reduce its size.
Sigbjørn Skjæret [Mon, 22 Sep 2025 17:13:00 +0000 (19:13 +0200)]
ggml : implement set_rows with i32 index (#16159)
* implement set_rows with i32 index
* template fix
* test quantized path
warnings--
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* forgotten name change
* deduplicate cuda/sycl and test-fix
* indent++
* vulkan: support set_rows with i32 index type (#16162)
* disable i32 index for webgpu for now
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jeff Bolz <redacted>
Georgi Gerganov [Mon, 22 Sep 2025 15:20:21 +0000 (18:20 +0300)]
codeowners : update + cleanup (#16174)
---------
Co-authored-by: slaren <redacted>
Adrien Gallouët [Mon, 22 Sep 2025 12:13:51 +0000 (14:13 +0200)]
common : enable `--offline` mode without curl support (#16137)
* common : use the json parser
Signed-off-by: Adrien Gallouët <redacted>
* common : enable --offline mode without CURL support
This change refactors the download logic to properly support offline mode
even when the project is built without CURL.
Without this commit, using `--offline` would give the following error:
error: built without CURL, cannot download model from the internet
even if all the files are already cached.
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
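A sketch of the cache-first logic that lets `--offline` work without CURL (paths and names are illustrative):
```
#include <filesystem>
#include <stdexcept>
#include <string>

std::string resolve_file(const std::string & cached_path, bool offline) {
    if (std::filesystem::exists(cached_path)) {
        return cached_path;   // cache hit: no network access required
    }
    if (offline) {
        throw std::runtime_error("--offline: file is not in the cache: " + cached_path);
    }
    // otherwise fall through to the regular download path (curl or httplib)
    return cached_path;
}
```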
Quentin Bramas [Mon, 22 Sep 2025 08:53:13 +0000 (10:53 +0200)]
webui : fix handling incomplete chunks (#16107)
GideonSerf [Mon, 22 Sep 2025 08:49:58 +0000 (10:49 +0200)]
embedding : fix typos in README (#16171)
Haiyue Wang [Mon, 22 Sep 2025 08:48:42 +0000 (16:48 +0800)]
common : remove unused local variables (#16140)
These two local variables 'arg' and 'arg_prefix' have been overridden by:
```
1. for (const auto & arg : opt.args)

2. for (int i = 1; i < argc; i++) {
       const std::string arg_prefix = "--";
       std::string arg = argv[i];
```
Georgi Gerganov [Mon, 22 Sep 2025 08:12:37 +0000 (11:12 +0300)]
ggml : extend ggml_can_fuse to work with non-sequential nodes (#16123)
* ggml : extend ggml_can_fuse to work with non-sequential nodes in the graph
* cont : fix wrong bounds check condition
* cont : remove unnecessary overload
Georgi Gerganov [Mon, 22 Sep 2025 08:12:09 +0000 (11:12 +0300)]
ggml : add ggml_op_is_empty (#16122)
* ggml : add ggml_op_is_empty
* ggml : move to ggml-impl.h
Xuan-Son Nguyen [Mon, 22 Sep 2025 08:10:58 +0000 (15:10 +0700)]
codeowners : update ownership for @ngxson and @allozuar (#16128)
Shin-myoung-serp [Mon, 22 Sep 2025 08:04:01 +0000 (17:04 +0900)]
Vulkan: add conv_transpose_2d operation (#16022)
* Vulkan: add conv_transpose_2d operation
* Vulkan: fix typo in conv_transpose_2d shader (s0mp, s0L, s1mp, s1L)
* Vulkan: fix incorrect indentation in conv_transpose_2d shader
* Vulkan: add checking the push constants size limit and reuse conv2d_mm.comp for conv_transpose_2d operation
* Vulkan: revert the order of the index calculation and bound check in conv_2d shader
* Vulkan: explicity check push constants limit in supports_op() for conv_transpose_2d operation.
* Vulkan: remove unnecessary lower bound checks for H/W_idx in the conv_2d shader.
Sigbjørn Skjæret [Mon, 22 Sep 2025 07:59:05 +0000 (09:59 +0200)]
codeowners : claim responsibility for ci, models, gguf-py and convert (#16124)
* claim responsibility for ci, gguf-py and convert
* add myself to various src/llama- files
Georgi Gerganov [Mon, 22 Sep 2025 07:58:02 +0000 (10:58 +0300)]
contrib : update roles (#16113)
* contrib : update roles
* contrib : merge PR sections + add link to CI instructions
Updated pull request guidelines for contributors and collaborators, and clarified merging practices for maintainers.
Georgi Gerganov [Mon, 22 Sep 2025 07:16:05 +0000 (10:16 +0300)]
ci : remove vulkaninfo calls (#16169)
Georgi Gerganov [Mon, 22 Sep 2025 06:11:39 +0000 (09:11 +0300)]
ci : use smaller model (#16168)
* ci : switch from gemma to qwen3 0.6b
* ci : use smaller model for some tests
Jeff Bolz [Mon, 22 Sep 2025 05:37:17 +0000 (00:37 -0500)]
vulkan: add RTE variants of exp shader (#16165)
This fixes some failures on Turing where "round to zero" rounds to the max f16
value but the CPU reference value is infinite.
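The failure is a rounding edge case at the f16 overflow boundary; a worked example (values approximate):
```
#include <cmath>
#include <cstdio>

int main() {
    const float f16_max = 65504.0f;    // largest finite f16 value
    const float y = std::exp(11.1f);   // ~66171, just past f16_max
    // converting y to f16 with round-to-zero clamps to the max finite value
    // (65504), while round-to-nearest-even rounds to +inf -- which is what
    // the f32 CPU reference produces
    printf("exp(11.1) = %.0f, f16 max = %.0f\n", y, f16_max);
    return 0;
}
```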