git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Adrien Gallouët [Sat, 27 Sep 2025 16:17:08 +0000 (18:17 +0200)]
server : remove old LLAMA_SERVER_SSL (#16290)

Signed-off-by: Adrien Gallouët <redacted>
Jeff Bolz [Sat, 27 Sep 2025 10:36:11 +0000 (06:36 -0400)]
vulkan: support GET_ROWS for k-quants (#16235)

The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.

Adrien Gallouët [Sat, 27 Sep 2025 09:12:46 +0000 (11:12 +0200)]
build : add LLAMA_OPENSSL option (#16287)

Introduce a new `LLAMA_OPENSSL` option, enabled by default.

This preserves the previous default (libcurl first, OpenSSL as fallback),
while allowing OpenSSL to be disabled if desired.

Signed-off-by: Adrien Gallouët <redacted>
Vinkal [Fri, 26 Sep 2025 21:28:29 +0000 (02:58 +0530)]
model : make minicpm embedding_scale, residual_scale and logit_scale optional with legacy defaults (#16273)

* minicpm: make GGUF scaling keys optional with legacy defaults

Older MiniCPM GGUFs do not include the scaling metadata keys (minicpm.embedding_scale, minicpm.residual_scale, minicpm.logit_scale). The loader currently treats these as required, so quantization fails with:

    key not found in model: minicpm.embedding_scale

This change restores backward compatibility by treating these keys as optional in the loader and using the older MiniCPM scaling values:

    embedding_scale = 12.0f
    residual_scale  = 1.4f / sqrt(n_layer)
    logit_scale     = 256.0f / n_embd

When the GGUF provides the keys, their values override the defaults; otherwise the legacy defaults are used. Newer GGUFs that already include these keys are unaffected.
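
A minimal sketch of this fallback behavior, assuming the loader exposes each optional key as a `std::optional<float>`; the helper names here are illustrative, not the actual llama-model.cpp code:

```cpp
#include <cmath>
#include <optional>

// Resolve a scaling value from optional GGUF metadata, falling back
// to the legacy default when the key is absent (hedged sketch).
static float resolve_scale(const std::optional<float> & gguf_value, float legacy_default) {
    return gguf_value.value_or(legacy_default);
}

static void apply_minicpm_scales(const std::optional<float> & emb,
                                 const std::optional<float> & res,
                                 const std::optional<float> & logit,
                                 int n_layer, int n_embd,
                                 float & embedding_scale,
                                 float & residual_scale,
                                 float & logit_scale) {
    // legacy defaults from the commit message above
    embedding_scale = resolve_scale(emb,   12.0f);
    residual_scale  = resolve_scale(res,   1.4f / std::sqrt((float) n_layer));
    logit_scale     = resolve_scale(logit, 256.0f / (float) n_embd);
}
```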

Fixes: #16192
Signed-off-by: Vinkal Chudgar <redacted>
* Update src/llama-model.cpp

Committed as suggested. Thanks!

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Signed-off-by: Vinkal Chudgar <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Aaron Teo [Fri, 26 Sep 2025 18:03:33 +0000 (02:03 +0800)]
devops: add s390x & ppc64le CI (#15925)

* devops: move s390x and ppc64le ci build

we have access to ubuntu-24.04-s390x and ppc64le images now

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le for now since they have compiler errors

Signed-off-by: Aaron Teo <redacted>
* devops: stop warnings as errors

Signed-off-by: Aaron Teo <redacted>
* devops: switch to non-macro flag

Signed-off-by: Aaron Teo <redacted>
* devops: going the llama macro route

Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian gguf test models

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le to test s390x, check test build

Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.inp files for big-endian tests

Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.out files for big-endian too

Signed-off-by: Aaron Teo <redacted>
* devops: add python setup and endian byteswap

Signed-off-by: Aaron Teo <redacted>
* devops: poor thing does not have s390x python3

Signed-off-by: Aaron Teo <redacted>
* devops: add missing rust compiler for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: try rust actions runner

Signed-off-by: Aaron Teo <redacted>
* Revert "devops: try rust actions runner"

This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.

Signed-off-by: Aaron Teo <redacted>
* devops: try a different path for rust

Signed-off-by: Aaron Teo <redacted>
* devops: dump home directory and user info

Signed-off-by: Aaron Teo <redacted>
* devops: install gguf-py only

Signed-off-by: Aaron Teo <redacted>
* devops: missed relative path

Signed-off-by: Aaron Teo <redacted>
* devops: remove big-endian files since local swapping is working

Signed-off-by: Aaron Teo <redacted>
* devops: revert test-tokenizer-0 cmakelists

Signed-off-by: Aaron Teo <redacted>
* Fix unicode flags conversion from and to uint16_t

Bitfields are allocated in different order on s390x

Signed-off-by: Aaron Teo <redacted>
* Simplify byteswap command

Signed-off-by: Aaron Teo <redacted>
* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs

Signed-off-by: Aaron Teo <redacted>
* Fix endianness detection in vocab loader

Signed-off-by: Aaron Teo <redacted>
* Disable test-thread-safety on s390x

In this test a model is downloaded,
then immediately loaded to check if more downloads are needed,
and then used for test.

There is no clean way to separate all those steps to add byteswapping
between them, so just skip this test.

Signed-off-by: Aaron Teo <redacted>
* Fix q8_0 test in test-quantize-fns

vec_signed uses unexpected rounding mode.
Explicitly use different rounding function.

Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian stories260K

Signed-off-by: Aaron Teo <redacted>
* devops: add s390x test-eval-callback

Signed-off-by: Aaron Teo <redacted>
* devops: fix test does not exist

Signed-off-by: Aaron Teo <redacted>
* devops: fix model not found llama-eval-callback

Signed-off-by: Aaron Teo <redacted>
* Fix q3_K dot product error in test-quantize-fns on s390x

Array q8bytes had only 4 elements allocated, but 8 elements were accessed.
This led to out-of-bounds writes, later out-of-bounds reads of the
overwritten values, and an incorrect result.

Signed-off-by: Aaron Teo <redacted>
* devops: re-enable ppc64le for testing

Signed-off-by: Aaron Teo <redacted>
* devops: activate test-thread-safety for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le tests

for some reason it keeps failing the test-thread-safety tests and I do
not have a machine that is able to replicate the failures.

Signed-off-by: Aaron Teo <redacted>
* devops: LLAMA_FATAL_WARNINGS=ON

Signed-off-by: Aaron Teo <redacted>
* Correct repository URL for s390x for test-thread-safety model

Signed-off-by: Aaron Teo <redacted>
* Fix fs_get_cache_directory

Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers (a sketch of the fallback order follows
at the end of this entry).

Signed-off-by: Aaron Teo <redacted>
* Re-enable CI for ppc64le

Signed-off-by: Aaron Teo <redacted>
* Fortify ggml_rope_impl

Only memcpy data from sections argument if it's non-NULL.

Signed-off-by: Aaron Teo <redacted>
* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way

* Update URL for big-endian model

* Update .github/workflows/build.yml

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update remaining mentions of BE models to ggml-org/models repo

---------

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
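
As referenced in the fs_get_cache_directory item above, a minimal sketch of the fallback order, assuming an illustrative last-resort path (not the actual implementation):

```cpp
#include <cstdlib>
#include <string>

// Prefer XDG_CACHE_HOME, then HOME, and still return a usable path
// when both are unset (e.g. in minimal containers). Hedged sketch.
static std::string get_cache_directory() {
    if (const char * xdg = std::getenv("XDG_CACHE_HOME")) {
        return std::string(xdg) + "/llama.cpp/";
    }
    if (const char * home = std::getenv("HOME")) {
        return std::string(home) + "/.cache/llama.cpp/";
    }
    return "/tmp/llama.cpp/"; // illustrative last resort, not the real default
}
```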
Aleksander Grygier [Fri, 26 Sep 2025 17:25:29 +0000 (19:25 +0200)]
Enhance text file detection logic for file attachments (#16199)

* feat: Enhances text file detection logic

* chore: Build static `webui` output

* chore: update webui build output

Aleksander Grygier [Fri, 26 Sep 2025 16:35:42 +0000 (18:35 +0200)]
Allow viewing conversations even when llama server is down (#16255)

* webui: allow viewing conversations and sending messages even if llama-server is down

- Cached llama.cpp server properties in browser localStorage on startup, persisting successful fetches and reloading them when refresh attempts fail so the chat UI continues to render while the backend is unavailable.
- Cleared the stored server properties when resetting the store to prevent stale capability data after cache-backed operation.
- Kept the original error-splash behavior when no cached props exist so fresh installs still surface a clear failure state instead of rendering stale data.

* feat: Add UI for `props` endpoint unavailable + cleanup logic

* webui: extend cached props fallback to offline errors

Treat connection failures (refused, DNS, timeout, fetch) the same way as
server 5xx so the warning banner shows up when cache is available, instead
of falling back to a full error screen.

* webui: Left the chat form enabled when a server warning is present so operators can keep sending messages

e.g., to restart the backend over llama-swap, even while cached /props data is in use

* chore: update webui build output

---------

Co-authored-by: Pascal <redacted>
Isaac McFadyen [Fri, 26 Sep 2025 15:36:48 +0000 (11:36 -0400)]
webui: switch to hash-based routing (alternative of #16079) (#16157)

* Switched web UI to hash-based routing

* Added hash to missed goto function call

* Removed outdated SPA handling code

* Fixed broken sidebar home link

Aleksander Grygier [Fri, 26 Sep 2025 13:59:07 +0000 (15:59 +0200)]
Always show message actions for mobile UI + improvements for user message sizing (#16076)

Radoslav Gerganov [Fri, 26 Sep 2025 13:09:34 +0000 (16:09 +0300)]
codeowners : add rgerganov as owner of RPC [no ci] (#16279)

Aleksei Nikiforov [Fri, 26 Sep 2025 13:00:44 +0000 (15:00 +0200)]
mtmd : fix uninitialized variable in bicubic_resize (#16275)

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aaron Teo <redacted>
Georgi Gerganov [Fri, 26 Sep 2025 11:14:28 +0000 (14:14 +0300)]
metal : report OOM errors (#16274)

Adrien Gallouët [Fri, 26 Sep 2025 11:12:19 +0000 (13:12 +0200)]
common : use cpp-httplib as a cURL alternative for downloads (#16185)

* vendor : update httplib

Signed-off-by: Adrien Gallouët <redacted>
* common : use cpp-httplib as a cURL alternative for downloads

The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option. A minimal sketch of the httplib
download path follows at the end of this entry.

Signed-off-by: Adrien Gallouët <redacted>
* ggml : Bump to Windows 10

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
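
As referenced above, a minimal sketch of what a cpp-httplib download path can look like, assuming the vendored httplib.h header; the host, path, and buffering strategy are illustrative (the real implementation is more involved):

```cpp
#define CPPHTTPLIB_OPENSSL_SUPPORT // required for https:// URLs
#include "httplib.h"               // vendored cpp-httplib

#include <fstream>
#include <string>

// Fetch one file over HTTP(S) and write it to disk (hedged sketch).
static bool download_to_file(const std::string & host, const std::string & path,
                             const std::string & out) {
    httplib::Client cli(host);     // e.g. "https://example.com" (illustrative)
    cli.set_follow_location(true); // model hosts commonly redirect
    auto res = cli.Get(path);
    if (!res || res->status != 200) {
        return false;
    }
    std::ofstream f(out, std::ios::binary);
    f.write(res->body.data(), (std::streamsize) res->body.size());
    return f.good();
}
```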
Adrien Gallouët [Fri, 26 Sep 2025 10:39:35 +0000 (12:39 +0200)]
build : fix build-ios-device (#16257)

Signed-off-by: Adrien Gallouët <redacted>
Aaron Teo [Fri, 26 Sep 2025 10:27:25 +0000 (18:27 +0800)]
ggml-cpu: implement MXFP4 SIMD for s390x (#16193)

* ggml-cpu: impl mxfp4 s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing s = sumf

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect kval_mxfp4 type

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework mxfp4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing delta calc

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo for vec_splats

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: expand to 2 blocks per loop

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add unroll to boost perf

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: back to 1 block per loop to test perf

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: back to 1 block per loop to test perf"

This reverts commit 1fe55724e2dc295701101bf838bdd4a512237492.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rm unroll from single block

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Radoslav Gerganov [Fri, 26 Sep 2025 10:19:23 +0000 (13:19 +0300)]
ci : create git tags for released docker images (#16008)

* ci : create git tags for released docker images

When releasing a docker image for build number X, we should also create
the corresponding git tag. This allows users to easily check out the
corresponding source tree for a given docker image.

* Update .github/workflows/docker.yml

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update .github/workflows/docker.yml

Co-authored-by: Sigbjørn Skjæret <redacted>
* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Daniel Bevenius [Fri, 26 Sep 2025 05:53:36 +0000 (07:53 +0200)]
codeowners : add danbev as owner of build-xcframework.sh [no ci] (#16268)

R0CKSTAR [Fri, 26 Sep 2025 00:56:38 +0000 (08:56 +0800)]
musa: upgrade musa sdk to 4.3.0 (#16240)

Signed-off-by: Xiaodong Ye <redacted>
R0CKSTAR [Fri, 26 Sep 2025 00:56:10 +0000 (08:56 +0800)]
musa: fix build warnings (#15611)

Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Thu, 25 Sep 2025 17:50:28 +0000 (19:50 +0200)]
model : add GroveMoE support (#15510)

* add GroveMoE support

* remove constexpr that fails on certain compilers

* revert crude scalar div implementation, use cast

* build_attn_inp_kv_unified -> build_attn_inp_kv

* fix build_attn

* re-apply ffn_exps regex changes

Aaron Teo [Thu, 25 Sep 2025 15:38:10 +0000 (23:38 +0800)]
vendors: update miniaudio version (#16212)

* vendor: update miniaudio.h

Signed-off-by: Aaron Teo <redacted>
* vendor: update miniaudio.h

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
rtaluyev [Thu, 25 Sep 2025 15:20:34 +0000 (18:20 +0300)]
readme : update bindings (#16144)

Link to Java JNA bindings to llama.cpp native libraries

Aman Gupta [Thu, 25 Sep 2025 14:35:05 +0000 (22:35 +0800)]
CUDA: add a fused top-K MoE kernel (#16130)

* CUDA: add a fused top-K MoE kernel

This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory

It is intended as a fusion of the softmax->top-k->get_rows pipeline for
MoE models; a CPU reference sketch of these semantics follows at the end
of this entry.

* Refactor into ggml_cuda_should_use_topk_moe

* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before

* Review: format + micro-optimizations

* Fix bug: fix tie breakers

* Add optional norm + clean-up code

* Use smem for final write

* Add bounds check

* Use better memory pattern for writeback
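
As referenced in the first item above, a CPU reference sketch of the semantics the kernel fuses (softmax over one token's expert logits, then top-k weights + ids); this illustrates the pipeline, not the CUDA code:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Softmax over one token's expert logits, then select the top-k
// experts, writing their weights and ids (hedged reference sketch).
static void topk_moe_reference(const std::vector<float> & logits, int n_expert_used,
                               std::vector<float> & weights, std::vector<int> & ids) {
    const int n_experts = (int) logits.size();
    // numerically stable softmax
    const float mx = *std::max_element(logits.begin(), logits.end());
    std::vector<float> p(n_experts);
    float sum = 0.0f;
    for (int i = 0; i < n_experts; ++i) {
        p[i] = std::exp(logits[i] - mx);
        sum += p[i];
    }
    for (float & v : p) {
        v /= sum;
    }
    // top-k expert indices by probability
    std::vector<int> idx(n_experts);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + n_expert_used, idx.end(),
                      [&](int a, int b) { return p[a] > p[b]; });
    weights.assign(n_expert_used, 0.0f);
    ids.assign(n_expert_used, 0);
    for (int i = 0; i < n_expert_used; ++i) {
        ids[i]     = idx[i];
        weights[i] = p[idx[i]];
    }
}
```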

Daniel Bevenius [Thu, 25 Sep 2025 10:02:36 +0000 (12:02 +0200)]
model-conversion : add embedding prompt file support (#15871)

This commit adds support for passing a prompt file to the model
conversion targets/scripts. It also updates the logits.cpp to print out
embedding information in the same format as when running the original
embedding model.

The motivation for this is that it allows us to pass files of different
sizes when running the converted models and validating the logits.

This can be particularly important when testing the sliding window
functionality of models where the sequence length needs to exceed a
certain number of tokens to trigger the sliding window logic.

Daniel Bevenius [Thu, 25 Sep 2025 09:36:47 +0000 (11:36 +0200)]
server : add support for external server for tests (#16243)

This commit adds support for using an externally started llama-server
instance for the server tests. This can be enabled by setting the
DEBUG_EXTERNAL environment variable.

The motivation for this is to allow debugging of the server itself
when investigating a test failure. Instructions for how to do this are
added to the README.md file in the tests directory.

junchao-zhao [Thu, 25 Sep 2025 09:22:55 +0000 (17:22 +0800)]
ggml : fix loongarch lsx compilation error (#15864)

Johannes Gäßler [Thu, 25 Sep 2025 09:12:27 +0000 (11:12 +0200)]
docs: fix typo [no ci] (#16244)

Douglas Hanley [Thu, 25 Sep 2025 08:53:09 +0000 (03:53 -0500)]
llama : add support for qwen3 reranker (#15824)

Georgi Gerganov [Thu, 25 Sep 2025 08:30:16 +0000 (11:30 +0300)]
metal : fuse NORM + MUL + ADD, support non-multiples of 4 (#16220)

* metal : fuse NORM + MUL + ADD

* metal : support norms of non-multiple of 4

* cont : fix comment [no ci]

Georgi Gerganov [Thu, 25 Sep 2025 08:29:42 +0000 (11:29 +0300)]
metal : relax reorder conditions (#16216)

Georgi Gerganov [Thu, 25 Sep 2025 08:29:08 +0000 (11:29 +0300)]
metal : restore im2col perf (#16219)

Radoslav Gerganov [Thu, 25 Sep 2025 07:20:02 +0000 (10:20 +0300)]
rpc : use ggml logging facilities

Use RPC_DEBUG environment variable to enable debug messages.
Add helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.
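
A minimal sketch of such an early-out macro, assuming GGML_LOG_DEBUG from ggml's logging facilities; names and structure are illustrative:

```cpp
#include <cstdlib>

// Read RPC_DEBUG once; any setting enables debug output (hedged sketch).
static bool rpc_debug_enabled() {
    static const bool enabled = std::getenv("RPC_DEBUG") != nullptr;
    return enabled;
}

// GGML_LOG_DEBUG is assumed to be ggml's debug logging macro.
#define LOG_DBG(...) \
    do { if (rpc_debug_enabled()) GGML_LOG_DEBUG(__VA_ARGS__); } while (0)
```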

Aaron Teo [Thu, 25 Sep 2025 05:06:30 +0000 (13:06 +0800)]
codeowners: add ownership of zdnn backend [no ci] (#16232)

add @Andreas-Krebbel to owners of zDNN backend

Signed-off-by: Aaron Teo <redacted>
Eve [Thu, 25 Sep 2025 05:06:06 +0000 (05:06 +0000)]
ci: run the x64 and arm ci on the github machines instead (#16183)

* run the x64 ci on regular machines

* set up the same thing for arm

fix test-quantize-perf just like #12306

* try to disable sve

* add another sve run

Aaron Teo [Thu, 25 Sep 2025 03:36:30 +0000 (11:36 +0800)]
devops: fix s390x docker release failure (#16231)

Aaron Teo [Wed, 24 Sep 2025 16:25:04 +0000 (00:25 +0800)]
codeowners: add ownership of zdnn backend [no ci] (#16229)

add @AlekseiNikiforovIBM to owners of zDNN backend

Signed-off-by: Aaron Teo <redacted>
Johannes Gäßler [Wed, 24 Sep 2025 14:53:48 +0000 (16:53 +0200)]
llama: print memory breakdown on exit (#15860)

* llama: print memory breakdown on exit

Acly [Wed, 24 Sep 2025 14:17:49 +0000 (16:17 +0200)]
ggml : split graph allocations according to backend max buffer size (#15815)

* ggml : make gallocr respect the backend's max buffer size

* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
* vulkan: report the actual max allocation size in the buffer type interface

* fix missing newline, apple-clang warning

* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to use suballocation_block_size as max chunk size for vulkan.

* track (chunk, offset) pairs instead of "global" offsets through gallocr.

* simpler, don't need loops to map between local/global offsets
* touches more code

* fix dyn_tallocr_max_size and initialization

* fix memory leak when buffers are reused due to same buffer type appearing multiple times

* make vbuffer allocation follow the same logic as backend_buffer did before

* continue to use leftover unallocated space of previous chunks after a new one has been created

* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size

* refactor: move adding new free block and new chunk into separate functions

* allocate chunks individually with a separate free-blocks list for each one

* needs a bit more memory/allocations/indirections, but code is simpler

* fix warnings (missing static) & debug checks
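
A rough sketch of the allocator state the list above describes; field and type names are illustrative, not the actual ggml-alloc code:

```cpp
#include <cstddef>
#include <vector>

// One contiguous free region inside a chunk.
struct free_block {
    size_t offset;
    size_t size;
};

// Each chunk is allocated individually and keeps its own free list.
struct tallocr_chunk {
    size_t max_size;                     // capacity of this chunk
    std::vector<free_block> free_blocks; // free sub-ranges of this chunk
};

// Allocations are addressed by (chunk, offset) instead of a global offset.
struct buffer_address {
    int    chunk;
    size_t offset;
};

struct dyn_tallocr {
    size_t max_chunk_size;               // the backend's max buffer size
    std::vector<tallocr_chunk> chunks;   // starts empty, grows on demand
};
```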

Tarek Dakhran [Wed, 24 Sep 2025 11:42:26 +0000 (13:42 +0200)]
model : add label for LiquidAI LFM2-2.6B model (#16204)

* model : add label for LiquidAI LFM2-2.6B model

HF link: [LiquidAI/LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B).

Support for GGUF conversion and inference is added in #14620.

However, due to a similar `n_embd`, it is mislabeled as a 1.2B model.
Fix the label by using `n_ff` to identify the model instead.

Output of `llama-bench`:
```
| model                          |       size |     params | backend    | threads |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2 1.2B F16                  |   2.18 GiB |     1.17 B | CPU        |      10 |           pp512 |        223.97 ± 5.32 |
| lfm2 2.6B F16                  |   4.79 GiB |     2.57 B | CPU        |      10 |           pp512 |         92.53 ± 4.14 |
| lfm2 350M F16                  | 676.25 MiB |   354.48 M | CPU        |      10 |           pp512 |       725.52 ± 11.70 |
| lfm2 700M F16                  |   1.38 GiB |   742.49 M | CPU        |      10 |           pp512 |       336.22 ± 12.93 |
```

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Jie Fu (傅杰) [Wed, 24 Sep 2025 08:25:26 +0000 (16:25 +0800)]
model-conversion : make causal-verify-logits fails with model names containing "." (#16215)

Signed-off-by: Jie Fu <redacted>
Uilian Ries [Wed, 24 Sep 2025 06:53:47 +0000 (08:53 +0200)]
common : add missing chrono header for common.cpp (#16211)

Signed-off-by: Uilian Ries <redacted>
Sigbjørn Skjæret [Wed, 24 Sep 2025 06:53:20 +0000 (08:53 +0200)]
codeowners : match all requirements files (#16214)

Jie Fu (傅杰) [Wed, 24 Sep 2025 06:46:52 +0000 (14:46 +0800)]
model-conversion : run-org-model.py fails to run on mac m1 (#16213)

Signed-off-by: Jie Fu <redacted>
Daniel Bevenius [Wed, 24 Sep 2025 06:10:09 +0000 (08:10 +0200)]
codeowners : use slash prefix for root files [no ci] (#16210)

This commit adds a leading slash to the paths of root-level files
in the CODEOWNERS file.

The motivation for this is that, without the slash, these patterns might
otherwise match files in subdirectories and override the other/additional
owners defined for them.

Refs: https://github.com/ggml-org/llama.cpp/pull/16209#issuecomment-3326434274

Jie Fu (傅杰) [Wed, 24 Sep 2025 04:19:23 +0000 (12:19 +0800)]
model-conversion : fix the make targets in the README.md (#16209)

Fix two incorrect make targets in the readme.

Signed-off-by: Jie Fu <redacted>
Georgi Gerganov [Tue, 23 Sep 2025 17:41:40 +0000 (20:41 +0300)]
ci : disable AMD workflows + update NVIDIA workflows (#16200)

* ci : disable AMD workflows + update NVIDIA workflows

* cont : fixes

* cont : update nvidia vulkan workflows

Georgi Gerganov [Tue, 23 Sep 2025 10:44:25 +0000 (13:44 +0300)]
ci : enable Vulkan workflow on Mac (#16194)

Xiangyan Sun [Tue, 23 Sep 2025 08:58:12 +0000 (01:58 -0700)]
ggml-cpu: Respect cpumask settings (#16164)

Sigbjørn Skjæret [Tue, 23 Sep 2025 08:25:20 +0000 (10:25 +0200)]
ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928)

* fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl

* change initialization to true

Aaron Teo [Tue, 23 Sep 2025 06:53:05 +0000 (14:53 +0800)]
zdnn: refactor codebase + add docs (#16178)

* zdnn: initial matmul refactor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm static from funcs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update ggml-zdnn.h

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: change header files to hpp

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to common.hpp

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move mulmat forward around

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm inline from utils

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* docs: add zDNN docs

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Daniel Bevenius [Tue, 23 Sep 2025 06:13:22 +0000 (08:13 +0200)]
codeowners : add @danbev to model-conversion example [no ci] (#16190)

This commit adds examples/model-conversion/ to the CODEOWNERS file and
assigns myself (@danbev) as the code owner for this directory.

Aaron Teo [Tue, 23 Sep 2025 05:59:34 +0000 (13:59 +0800)]
devops: add s390x containers (#15915)

* devops: add s390x dockerfile

Signed-off-by: Aaron Teo <redacted>
* devops: add missing ninja

Signed-off-by: Aaron Teo <redacted>
* devops: move s390x docker into cpu docker

Signed-off-by: Aaron Teo <redacted>
* devops: rework s390x docker

Signed-off-by: Aaron Teo <redacted>
* devops: copy more tools

Signed-off-by: Aaron Teo <redacted>
* devops: add server build step

Signed-off-by: Aaron Teo <redacted>
* devops: remove apt clean steps as distroless misses it

Signed-off-by: Aaron Teo <redacted>
* devops: remove apt commands from distroless

Signed-off-by: Aaron Teo <redacted>
* devops: fix shared libs in distroless

Signed-off-by: Aaron Teo <redacted>
* devops: use correct libs path

Signed-off-by: Aaron Teo <redacted>
* devops: fix shared libs

Signed-off-by: Aaron Teo <redacted>
* devops: add collector stage

Signed-off-by: Aaron Teo <redacted>
* devops: fix missing stage ref

Signed-off-by: Aaron Teo <redacted>
* devops: fix permission issue

Signed-off-by: Aaron Teo <redacted>
* devops: fix unknown model loading failures

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at fixing model loading failure

Signed-off-by: Aaron Teo <redacted>
* devops: fix missing ggml shared object

failure to load model

Signed-off-by: Aaron Teo <redacted>
* devops: remove move shared objects

Signed-off-by: Aaron Teo <redacted>
* devops: move libggml-cpu and blas into bin

Signed-off-by: Aaron Teo <redacted>
* devops: finalise hardened server stage

Signed-off-by: Aaron Teo <redacted>
* devops: add cli target

Signed-off-by: Aaron Teo <redacted>
* devops: fix typos

Signed-off-by: Aaron Teo <redacted>
* devops: fix missing shared libraries in base

Signed-off-by: Aaron Teo <redacted>
* devops: update debian target

Signed-off-by: Aaron Teo <redacted>
* devops: formalise llama.cpp loc

Signed-off-by: Aaron Teo <redacted>
* Revert "devops: formalise llama.cpp loc"

This reverts commit 0a7664af8466a15f318ff209e02ac3c4e551cc18.

Signed-off-by: Aaron Teo <redacted>
* devops: formalise llama.cpp loc

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 0a7664af8466a15f318ff209e02ac3c4e551cc18)
Signed-off-by: Aaron Teo <redacted>
* devops: attempt at fixing missing dir

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at making it cache the build

Signed-off-by: Aaron Teo <redacted>
* devops: fix copying process

Signed-off-by: Aaron Teo <redacted>
* devops: make build dir an argument

Signed-off-by: Aaron Teo <redacted>
* Revert "devops: make build dir an argument"

This reverts commit 438698976b8a5181c1e8179600527cfd5a50cc23.

Signed-off-by: Aaron Teo <redacted>
* devops: add build stage for gguf-py

Signed-off-by: Aaron Teo <redacted>
* devops: move gguf-py installation into build stage

Signed-off-by: Aaron Teo <redacted>
* devops: break system packages?

Signed-off-by: Aaron Teo <redacted>
* devops: add rust compiler installer

Signed-off-by: Aaron Teo <redacted>
* devops: fix rustc not found

Signed-off-by: Aaron Teo <redacted>
* devops: remove cache mount to allow rustc to persist

Signed-off-by: Aaron Teo <redacted>
* devops: move rustc installation to another layer

Signed-off-by: Aaron Teo <redacted>
* devops: move gguf-py installation to full stage, fix copying

Signed-off-by: Aaron Teo <redacted>
* devops: remove rustc installation in build

Signed-off-by: Aaron Teo <redacted>
* devops: disable full target for now

Signed-off-by: Aaron Teo <redacted>
* devops: attempting static build

Signed-off-by: Aaron Teo <redacted>
* devops: merge s390x dockerfile into cpu for now

Signed-off-by: Aaron Teo <redacted>
* devops: switch to gcc image for build step

Signed-off-by: Aaron Teo <redacted>
* devops: remove build essentials

Signed-off-by: Aaron Teo <redacted>
* devops: install openblas into base target

Signed-off-by: Aaron Teo <redacted>
* devops: go back to s390x dockerfile

Signed-off-by: Aaron Teo <redacted>
* devops: remove libggml and libblas

Signed-off-by: Aaron Teo <redacted>
* devops: add full target

Signed-off-by: Aaron Teo <redacted>
* devops: add break system packages

Signed-off-by: Aaron Teo <redacted>
* devops: add libjpeg

Signed-off-by: Aaron Teo <redacted>
* devops: add missing cmake dep

Signed-off-by: Aaron Teo <redacted>
* devops: finalise docker images for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: add custom openblas patch

Signed-off-by: Aaron Teo <redacted>
* devops: use libopenblas-dev instead of libopenblas-openmp-dev

Signed-off-by: Aaron Teo <redacted>
* devops: add s390x docker build

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Daniel Bevenius [Tue, 23 Sep 2025 03:59:03 +0000 (05:59 +0200)]
ggml-cpu : fix typo in gemm comments [no ci] (#16189)

Gabe Goodhart [Mon, 22 Sep 2025 18:40:10 +0000 (12:40 -0600)]
feat: Add conversion support in GraniteHybrid for non-hybrid (all attn) (#16177)

This is a configuration of the hparams in the GraniteHybrid architecture
that devolves to the Granite (or GraniteMoe) architecture (i.e. Granite 3.x).
It may be used for some models in the Granite 4 family, with the
GraniteHybrid architecture acting as a superset arch. Rather than support
it directly in the C++ graph, we simply coerce the architecture flag back
to the correct "granite" or "granitemoe" architecture.

Branch: gabe-l-hart/GraniteNonHybridConversion

Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Haiyue Wang [Mon, 22 Sep 2025 17:57:46 +0000 (01:57 +0800)]
clang-tidy : disable warning about performance enum size (#16127)

Disable 'performance-enum-size' checking:

Enum 'llama_token_type' uses a larger base type ('unsigned int', size: 4 bytes)
than necessary for its value set, consider using 'std::uint8_t' (1 byte) as the
base type to reduce its size.

Sigbjørn Skjæret [Mon, 22 Sep 2025 17:13:00 +0000 (19:13 +0200)]
ggml : implement set_rows with i32 index (#16159)

* implement set_rows with i32 index

* template fix

* test quantized path

warnings--

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* forgotten name change

* deduplicate cuda/sycl and test-fix

* indent++

* vulkan: support set_rows with i32 index type (#16162)

* disable i32 index for webgpu for now

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jeff Bolz <redacted>
Georgi Gerganov [Mon, 22 Sep 2025 15:20:21 +0000 (18:20 +0300)]
codeowners : update + cleanup (#16174)

---------

Co-authored-by: slaren <redacted>
Adrien Gallouët [Mon, 22 Sep 2025 12:13:51 +0000 (14:13 +0200)]
common : enable `--offline` mode without curl support (#16137)

* common : use the json parser

Signed-off-by: Adrien Gallouët <redacted>
* common : enable --offline mode without CURL support

This change refactors the download logic to properly support offline mode
even when the project is built without CURL.

Without this commit, using `--offline` would give the following error:

    error: built without CURL, cannot download model from the internet

even if all the files are already cached. A sketch of the intended
decision logic follows at the end of this entry.

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
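
As referenced above, a sketch of the intended decision logic; cache_path_for() and download() are hypothetical stand-ins for the real helpers:

```cpp
#include <filesystem>
#include <stdexcept>
#include <string>

// Hypothetical helpers (declarations only, for illustration).
std::string cache_path_for(const std::string & url);
std::string download(const std::string & url, const std::string & dest);

// With --offline, a cached file is used and the network backend is
// never consulted, so a CURL-less build no longer has to fail here.
static std::string resolve_model_file(const std::string & url, bool offline) {
    const std::string cached = cache_path_for(url);
    if (std::filesystem::exists(cached)) {
        return cached;                // cache hit: works without CURL
    }
    if (offline) {
        // same error with or without CURL, since no download is attempted
        throw std::runtime_error("--offline: model not found in cache");
    }
    return download(url, cached);     // only here is a network backend needed
}
```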
Quentin Bramas [Mon, 22 Sep 2025 08:53:13 +0000 (10:53 +0200)]
webui : fix handling incomplete chunks (#16107)

GideonSerf [Mon, 22 Sep 2025 08:49:58 +0000 (10:49 +0200)]
embedding : fix typos in README (#16171)

Haiyue Wang [Mon, 22 Sep 2025 08:48:42 +0000 (16:48 +0800)]
common : remove unused local variables (#16140)

These two local variables 'arg' and 'arg_prefix' have been overridden by:

  1. for (const auto & arg : opt.args)

  2. for (int i = 1; i < argc; i++) {
        const std::string arg_prefix = "--";

        std::string arg = argv[i];

Georgi Gerganov [Mon, 22 Sep 2025 08:12:37 +0000 (11:12 +0300)]
ggml : extend ggml_can_fuse to work with non-sequential nodes (#16123)

* ggml : extend ggml_can_fuse to work with non-sequential nodes in the graph

* cont : fix wrong bounds check condition

* cont : remove unnecessary overload

Georgi Gerganov [Mon, 22 Sep 2025 08:12:09 +0000 (11:12 +0300)]
ggml : add ggml_op_is_empty (#16122)

* ggml : add ggml_op_is_empty

* ggml : move to ggml-impl.h

Xuan-Son Nguyen [Mon, 22 Sep 2025 08:10:58 +0000 (15:10 +0700)]
codeowners : update ownership for @ngxson and @allozuar (#16128)

Shin-myoung-serp [Mon, 22 Sep 2025 08:04:01 +0000 (17:04 +0900)]
Vulkan: add conv_transpose_2d operation (#16022)

* Vulkan: add conv_transpose_2d operation

* Vulkan: fix typo in conv_transpose_2d shader (s0mp, s0L, s1mp, s1L)

* Vulkan: fix incorrect indentation in conv_transpose_2d shader

* Vulkan: add checking the push constants size limit and reuse conv2d_mm.comp for conv_transpose_2d operation

* Vulkan: revert the order of the index calculation and bound check in conv_2d shader

* Vulkan: explicitly check push constants limit in supports_op() for conv_transpose_2d operation.

* Vulkan: remove unnecessary lower bound checks for H/W_idx in the conv_2d shader.

Sigbjørn Skjæret [Mon, 22 Sep 2025 07:59:05 +0000 (09:59 +0200)]
codeowners : claim responsibility for ci, models, gguf-py and convert (#16124)

* claim responsibility for ci, gguf-py and convert

* add myself to various src/llama- files

Georgi Gerganov [Mon, 22 Sep 2025 07:58:02 +0000 (10:58 +0300)]
contrib : update roles (#16113)

* contrib : update roles

* contrib : merge PR sections + add link to CI instructions

Updated pull request guidelines for contributors and collaborators, and clarified merging practices for maintainers.

Georgi Gerganov [Mon, 22 Sep 2025 07:16:05 +0000 (10:16 +0300)]
ci : remove vulkaninfo calls (#16169)

Georgi Gerganov [Mon, 22 Sep 2025 06:11:39 +0000 (09:11 +0300)]
ci : use smaller model (#16168)

* ci : switch from gemma to qwen3 0.6b

* ci : use smaller model for some tests

Jeff Bolz [Mon, 22 Sep 2025 05:37:17 +0000 (00:37 -0500)]
vulkan: add RTE variants of exp shader (#16165)

This fixes some failures on Turing where "round to zero" rounds to the max f16
value but the CPU reference value is infinite.

Georgi Gerganov [Mon, 22 Sep 2025 05:31:40 +0000 (08:31 +0300)]
ci : adjust params for less runtime (#16167)

* ci : adjust params for less runtime

* ci : gate BF16 on some hardware

* ci : move extra tests to Arm runner

Ruben Ortlam [Mon, 22 Sep 2025 05:22:43 +0000 (07:22 +0200)]
vulkan: vec dot matrix multiplication fix (#16151)

* vulkan: fix matrix multiplication index calculation for odd m/n and odd k in combination with batching

* add odd m/n + odd k test with batching

lhez [Sun, 21 Sep 2025 23:42:10 +0000 (16:42 -0700)]
opencl: fix concat crash on win arm64 with Adreno (#15944)

lhez [Sun, 21 Sep 2025 21:48:44 +0000 (14:48 -0700)]
opencl: initial `q8_0` mv support (#15732)

Georgi Gerganov [Sun, 21 Sep 2025 16:00:27 +0000 (19:00 +0300)]
ci : add label for the RISC-V runner (#16150)

Georgi Gerganov [Sun, 21 Sep 2025 13:50:45 +0000 (16:50 +0300)]
ci : migrate ggml ci to self-hosted runners (#16116)

* ci : migrate ggml ci to a self-hosted runners

* ci : add T4 runner

* ci : add instructions for adding self-hosted runners

* ci : disable test-backend-ops from debug builds due to slowness

* ci : add AMD V710 runner (vulkan)

* cont : add ROCM workflow

* ci : switch to qwen3 0.6b model

* cont : fix the context size

Giuseppe Scrivano [Sun, 21 Sep 2025 06:31:55 +0000 (08:31 +0200)]
vulkan: optimize UMA buffer operations and fix driver hangs (#16059)

* vulkan: optimize UMA buffer operations and fix driver hangs

The previous implementation was blocking the GPU for extended periods,
causing the i915 driver to reset the context due to the hangcheck
protection.

[32628.443070] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:1:85dffffb, in llama-server [194114]
[32628.443091] i915 0000:00:02.0: [drm] llama-server[194114] context reset due to GPU hang

* vulkan: implement deferred_memset on UMA

---------

Signed-off-by: Giuseppe Scrivano <redacted>
Jeff Bolz [Sun, 21 Sep 2025 06:23:37 +0000 (01:23 -0500)]
vulkan: fix validation error about VK_PIPELINE_CREATE_CAPTURE_STATISTICS_BIT_KHR (#16086)

Georgi Gerganov [Sat, 20 Sep 2025 09:55:47 +0000 (12:55 +0300)]
sync : ggml

Daniel Bevenius [Tue, 16 Sep 2025 04:16:52 +0000 (06:16 +0200)]
ggml : introduce semantic versioning (ggml/1336)

* ggml : introduce semantic versioning

This commit introduces semantic versioning for the GGML library.

The motivation for this is that the current versioning, using build
numbers, makes it difficult to track changes and releases for projects
that use ggml.

The release steps are the following:
1. Sync the changes from llama.cpp using sync-llama-am.sh and after the
   PR has been approved and merged move to step 2.
2. Run scripts/release.sh and specify the type of release, major, minor,
   or patch. This script will handle incrementing the version
   (major|minor|patch), create a new commit with the version change,
   create a tag for the version, and prepare for the next development
   iteration.
3. Inspect the commits/tag and push to master. This will trigger the
   github release workflow which is triggered for new tags which will
   then publish a new release on github.

Example usage:
```console
$ ./scripts/release.sh major --dry-run
[dry-run] - No changes will be made

Step 1: Reading current version...
Current version: 0.9.0-dev
New release version: 1.0.0

Step 2: Updating version in ggml/CMakeLists.txt...
  [dry-run] Would update GGML_VERSION_MAJOR to 1
  [dry-run] Would update GGML_VERSION_MINOR to 0
  [dry-run] Would update GGML_VERSION_PATCH to 0
  [dry-run] Would remove -dev suffix

Step 3: Committing version bump...
  [dry-run] Would commit: 'ggml : bump version to 1.0.0'

Step 4: Creating git tag...
  [dry-run] Would create tag: v1.0.0 with message 'Release version 1.0.0'

Step 5: Preparing for next development cycle...
  [dry-run] Would update GGML_VERSION_MINOR to 1
  [dry-run] Would add -dev suffix back

Step 6: Committing development version...
  [dry-run] Would commit: 'ggml : prepare for development of 1.1.0-dev'

[dry-run] Summary (no changes were made):
  • Would have released version: 1.0.0
  • Would have created tag: v1.0.0
  • Would have set next development version: 1.1.0-dev
```

Refs: https://github.com/ggml-org/ggml/issues/1333

* ggml: create branch for release candidate and check master

* ggml : sign the git tag

Gregor Jasny [Wed, 10 Sep 2025 15:21:11 +0000 (17:21 +0200)]
CUDA : conditionally add cuda architectures (ggml/1341)

Ruben Ortlam [Sat, 20 Sep 2025 08:42:56 +0000 (10:42 +0200)]
vulkan: use vec dot for matrix matrix multiplications (#16056)

* vulkan: Change the mul_mm shared memory and register caching system to use vec2 instead of scalars, to enable using dot2 instructions

* use fma instead of dot to fix Nvidia and Apple performance issues

Benni [Sat, 20 Sep 2025 05:56:30 +0000 (07:56 +0200)]
server: fix SSE and OpenAI compatibility for error messages when streaming (#16109)

* server: fix SSE and OpenAI compatibility for error messages when streaming

* server: remove obsolete event parameter and use required data fieldname instead

ssweens [Fri, 19 Sep 2025 22:15:21 +0000 (15:15 -0700)]
llama-bench: add --devices and --list-devices support (#16039)

* llama-bench: add --devices support
  - Support --devices same as llama-server
  - Provide for benchmarking different device combinations
  - Include --list-devices like llama-server for convenience

* fix: field display ordering restored

* fix: integrated the rpc devices
- aimed to mimic the server as much as possible

* cleanup: defaults for list-devices
- handle dup device listing with RPC

* cleanup: remove dup device load calls

* docs: update llama-bench
- added the recently added n-cpu-moe option to the docs while in there

* llama-bench: rpc device simplification
* rpc servers unify with other devices earlier, simplifying code
* --list-devices made stateless and simpler
* various cleanup

shun095 [Fri, 19 Sep 2025 15:57:30 +0000 (00:57 +0900)]
chat: Fix streaming parser for granite models (#15682)

* fix(chat): fix streaming parser for granite models

* tests: add test cases for Granite models chat parser

Aleksander Grygier [Fri, 19 Sep 2025 07:52:27 +0000 (09:52 +0200)]
feat: Improve mobile UI for Settings Dialog (#16084)

* feat: Improve mobile UI for Settings Dialog

* chore: update webui build output

* fix: Linting errors

* chore: update webui build output

Xuan-Son Nguyen [Fri, 19 Sep 2025 06:02:51 +0000 (13:02 +0700)]
chat : fix build on arm64 (#16101)

Xuan-Son Nguyen [Fri, 19 Sep 2025 04:31:56 +0000 (11:31 +0700)]
ggml : refactor forward_dup for cpu backend (#16062)

* ggml : refactor forward_dup for cpu backend

* clean up a bit

* add quant/dequant perf test

Adrien Gallouët [Thu, 18 Sep 2025 21:07:26 +0000 (23:07 +0200)]
ggml-amx : fix ggml_amx_init() on generic Linux (#16049)

Generalize Linux check to `__linux__` to support non-glibc systems (like musl).
Also, return `false` on unknown/untested OS.
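
A simplified sketch of such a guard (illustrative; the real init also probes CPU features):

```cpp
// Report AMX OS support only where it has been tested (hedged sketch).
static bool ggml_amx_os_supported() {
#if defined(__linux__)
    return true;   // covers glibc and musl based systems alike
#else
    return false;  // unknown/untested OS: report no support
#endif
}
```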

Without this commit, the code compiles (with warnings) but fails:

    register_backend: registered backend CPU (1 devices)
    register_device: registered device CPU (Intel(R) Xeon(R) Platinum 8488C)
    build: 6487 (51c4cac6) with x86_64-linux-musl-gcc (GCC) 15.1.0 for x86_64-linux-musl (debug)
    system info: n_threads = 8, n_threads_batch = 8, total_threads = 16
    ....
    print_info: n_ctx_orig_yarn  = 262144
    print_info: rope_finetuned   = unknown
    print_info: model type       = 4B
    Illegal instruction (core dumped)

Signed-off-by: Adrien Gallouët <redacted>
Adrien Gallouët [Thu, 18 Sep 2025 21:07:18 +0000 (23:07 +0200)]
cmake : fix static linking for OpenMP on Unix-like systems (#16031)

When compiling with GGML_STATIC=ON, the build process would produce a
binary that was still dynamically linked to OpenMP. This defeats the
purpose of a static build:

    $ cmake -B build \
            -DBUILD_SHARED_LIBS=OFF \
            -DLLAMA_CURL=OFF \
            -DGGML_CCACHE=OFF \
            -DGGML_NATIVE=OFF \
            -DGGML_STATIC=ON

    $ ldd llama-server
            linux-vdso.so.1 (0x0000e1a434e3b000)
            libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000e1a4345a0000)
            libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000e1a434300000)
            libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000e1a434240000)
            libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000e1a434200000)
            libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000e1a434030000)
            /lib/ld-linux-aarch64.so.1 (0x0000e1a434df0000)

This commit resolves the issue by modifying `CMAKE_FIND_LIBRARY_SUFFIXES`
to prioritize `.a` files, forcing CMake to link the static version of
the library.

Signed-off-by: Adrien Gallouët <redacted>
Shawn Gu [Thu, 18 Sep 2025 19:03:34 +0000 (12:03 -0700)]
opencl: optimize mxfp4 kernels (#16037)

- flatten mxfp4 and packed fp4->fp16 bit-wise convert function (replace lut)
- MoE kernel optimizations

---------

Co-authored-by: Li He <redacted>
Jeff Bolz [Thu, 18 Sep 2025 18:46:17 +0000 (13:46 -0500)]
rename optimize_graph to graph_optimize (#16082)

Bowen Han [Thu, 18 Sep 2025 18:26:03 +0000 (11:26 -0700)]
CUDA: Optimize PAD_REFLECT_1D (#15957)

* CUDA: Optimize PAD_REFLECT_1D
feat: add more test cases for PAD_REFLECT_1D

* use fast_div to improve performance

* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <redacted>
* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <redacted>
* optimize

* use a concise expression to further speedup the cuda kernel

---------

Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Thu, 18 Sep 2025 17:28:32 +0000 (19:28 +0200)]
CUDA: fix compilation on CC 6.0 (#16091)

Eric Curtin [Thu, 18 Sep 2025 15:22:50 +0000 (16:22 +0100)]
Add resumable downloads for llama-server model loading (#15963)

- Implement resumable downloads in common_download_file_single function
- Add detection of partial download files (.downloadInProgress)
- Check server support for HTTP Range requests via Accept-Ranges header
- Implement HTTP Range request with "bytes=<start>-" header
- Open files in append mode when resuming vs create mode for new downloads
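
A minimal sketch of the resume decision described above; the function name is illustrative and POSIX stat() is assumed:

```cpp
#include <cstdio>
#include <string>
#include <sys/stat.h>

// If a partial file exists, build a Range header for the remaining
// bytes and open in append mode; otherwise start a fresh download.
static FILE * open_download_target(const std::string & final_path,
                                   std::string & range_header) {
    const std::string partial = final_path + ".downloadInProgress";
    struct stat st = {};
    if (stat(partial.c_str(), &st) == 0 && st.st_size > 0) {
        range_header = "bytes=" + std::to_string(st.st_size) + "-"; // resume point
        return std::fopen(partial.c_str(), "ab");                   // append
    }
    range_header.clear();                                           // full download
    return std::fopen(partial.c_str(), "wb");
}
```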

Signed-off-by: Eric Curtin <redacted>
Georgi Gerganov [Thu, 18 Sep 2025 13:28:41 +0000 (16:28 +0300)]
metal : use function constants for mul_mv_ext kernels (#16074)

* metal : use function constants for mul_mv_ext kernels

ggml-ci

* metal : remove NW template argument

ggml-ci

* metal : adjust constants

ggml-ci

Sigbjørn Skjæret [Thu, 18 Sep 2025 11:28:22 +0000 (13:28 +0200)]
cuda : add missing F32<->I32 entries in ggml_cuda_cpy_fn (#16060)

Radoslav Gerganov [Thu, 18 Sep 2025 10:36:57 +0000 (13:36 +0300)]
server : include usage statistics only when user request them (#16052)

* server : include usage statistics only when user request them

When serving the OpenAI-compatible API, we should check whether
{"stream_options": {"include_usage": true}} is set in the request when
deciding whether to send usage statistics; a minimal sketch of this
check follows at the end of this entry.

closes: #16048

* add unit test
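
A minimal sketch of the check, assuming the request is parsed with nlohmann::json (as the server does); the function name is illustrative:

```cpp
#include <nlohmann/json.hpp>

// Only report usage statistics when the client explicitly asks via
// {"stream_options": {"include_usage": true}} (hedged sketch).
static bool should_include_usage(const nlohmann::json & request) {
    const auto it = request.find("stream_options");
    if (it == request.end() || !it->is_object()) {
        return false;
    }
    return it->value("include_usage", false);
}
```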

Georgi Gerganov [Thu, 18 Sep 2025 09:47:56 +0000 (12:47 +0300)]
llama : bump max seq limit from 64 to 256 (#15916)

ggml-ci

Georgi Gerganov [Thu, 18 Sep 2025 09:33:45 +0000 (12:33 +0300)]
metal : improve F32, F16 and BF16 mat-vec multiplication (#16057)

* metal : improve F32, F16 and BF16 mat-vec multiplication

ggml-ci

* metal : make the NSG a function constant in mul_mv kernels

ggml-ci