git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
6 weeks ago model : support vision LiquidAI LFM2-VL family (#15347)
Tarek Dakhran [Sat, 16 Aug 2025 21:33:54 +0000 (23:33 +0200)]
model : support vision LiquidAI LFM2-VL family (#15347)

* wip lfm2 vision model

* Fix conv weight

* Implement dynamic resolution

* Fix cuda

* support LFM2-VL-450M

* happy CI

* Remove extra `ggml_conv` and put others into the right place

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago vulkan: fuse adds (#15252)
Jeff Bolz [Sat, 16 Aug 2025 16:48:22 +0000 (11:48 -0500)]
vulkan: fuse adds (#15252)

* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch (a chain of 6 adds has 7 inputs and 1
output). This limit could be raised.

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug

6 weeks ago vulkan: Support mul_mat_id with f32 accumulators (#15337)
Jeff Bolz [Sat, 16 Aug 2025 09:18:31 +0000 (04:18 -0500)]
vulkan: Support mul_mat_id with f32 accumulators (#15337)

* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).

6 weeks ago vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)
Jeff Bolz [Sat, 16 Aug 2025 08:58:38 +0000 (03:58 -0500)]
vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)

6 weeks ago OpenCL: add initial FA support (#14987)
rmatif [Sat, 16 Aug 2025 08:05:55 +0000 (10:05 +0200)]
OpenCL: add initial FA support (#14987)

* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops

7 weeks ago common : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)
Daniel Bevenius [Fri, 15 Aug 2025 17:50:52 +0000 (19:50 +0200)]
common : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)

This commit updates common_chat_templates_apply_jinja to use the
add_bos and add_eos parameters from the chat template instead of
the inputs.

The motivation for this is that if the `add_bos` and `add_eos` from
the input parameters are used, there can be a mismatch between the
model and the chat template, which can prevent the removal of
duplicate BOS/EOS tokens in chat.cpp `apply` and lead to two BOS
tokens being added to the template.

7 weeks ago opencl: add initial mxfp4 support via mv (#15270)
lhez [Fri, 15 Aug 2025 16:52:14 +0000 (00:52 +0800)]
opencl: add initial mxfp4 support via mv (#15270)

* opencl: add reference `mul_mv_mxfp4_f32`

* opencl: add reference `mul_mv_id` for mxfp4

* Q4_0 transpose fix for Adreno

---------

Co-authored-by: shawngu-quic <redacted>
7 weeks ago vulkan : fix out-of-bounds access in argmax kernel (#15342)
Georgi Gerganov [Fri, 15 Aug 2025 14:16:36 +0000 (17:16 +0300)]
vulkan : fix out-of-bounds access in argmax kernel (#15342)

ggml-ci

7 weeks ago vulkan : fix compile warnings on macos (#15340)
Georgi Gerganov [Fri, 15 Aug 2025 13:28:28 +0000 (16:28 +0300)]
vulkan : fix compile warnings on macos (#15340)

ggml-ci

7 weeks ago ggml: initial IBM zDNN backend (#14975)
Aaron Teo [Fri, 15 Aug 2025 13:11:22 +0000 (21:11 +0800)]
ggml: initial IBM zDNN backend (#14975)

* ggml-zdnn: initial backend impl

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: temp change z17 to arch15

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix build bugs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tensor->extra logging check

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add layout name mapping, ztensor information

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: separate logging into its own line

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add shape comparison

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add ggml_tensor shape log

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix incorrect shape logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add output buffer check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: run compute and store into tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more loggers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update set_tensor logging to check only for matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: last working matmul version

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add comments to prevent accidentally deleting lines

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: support op out_prod

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update op out_prod to use tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite the backend implementation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix new impl

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler warnings and bugfixes

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: test ztensor finding in init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: implement at least 1 op to test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: assign tensor->extra to buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add check for view tensors to prevent init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rework init_tensor to create new buffers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to std vector instead of array

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch buffers back and set to arbitrary number

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update supports_op matmul matrix

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler error missing type

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing data transform call

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tighten memory usage, change string allocation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias ztensor and data free

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias data transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more debug info for extra buffer transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logger to check if mat mul ops go through set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: activate bias transform in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move weights transform into mulmat

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more safeguards in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sequencing of transforms

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix transform ztensor vs origtensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: figure out why sigtrap is happening

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move everything back to local declaration

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move bias data to local also

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring back working matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite into mre

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import in header

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix invalid ztensor buffer release

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logging to debug free buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: remove free_buffer debug info

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add parmblkformat detections

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add nnpa installed detection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add zdnn_init call for static libs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing invalid buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to using deque to fix pointer deref problem

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add weights logging to check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to use unique ptr

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add tensor to pre_tfm_desc logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add inputs logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable op_none initialisation for testing

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing return from init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: load ztensors in cgraph exec

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: work on moving output ztensor as well

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable logging and breakpoints for full test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at manually changing the layout

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at using default nwhc format instead

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable global load ztensor for now

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix erroneous output load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add guards to prevent loading ztensor if transformed

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring load ztensor back to init routine

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix ztensor deallocation abort

stabilise ggml <-> zdnn api

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up matmul selection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up project structure

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update documentation, prepare for upstream

Signed-off-by: Aaron Teo <redacted>
* chore: add codeowners

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable batched matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing tensor views during matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: deny all view tensors directly

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix pr comments

Signed-off-by: Aaron Teo <redacted>
* docs: update ops docs for zdnn

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: redo test-backend-ops for ops.md

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix typo in build-s390x.md

Signed-off-by: Aaron Teo <redacted>
* codeowners: remove taronaeo for now

Signed-off-by: Aaron Teo <redacted>
* Revert "codeowners: remove taronaeo for now"

This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.

* ggml-zdnn: remove unused ggml_zdnn macro

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
7 weeks ago ci : fix ios-xcode-build (#15324)
Sigbjørn Skjæret [Fri, 15 Aug 2025 12:02:39 +0000 (14:02 +0200)]
ci : fix ios-xcode-build (#15324)

* fix ios-xcode-build

* use xcode-select with fixed version

* switch to macos-15 to get xcode 16.4

7 weeks ago ci : move ccache action to ggml-org fork (#15328)
Diego Devesa [Fri, 15 Aug 2025 10:27:02 +0000 (03:27 -0700)]
ci : move ccache action to ggml-org fork (#15328)

7 weeks ago test-opt: fix backend support check (#15317)
Johannes Gäßler [Fri, 15 Aug 2025 09:23:17 +0000 (11:23 +0200)]
test-opt: fix backend support check (#15317)

* test-opt: fix backend support check

* Update tests/test-opt.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago CUDA: fix negative KV_max values in FA (#15321)
Johannes Gäßler [Thu, 14 Aug 2025 21:21:24 +0000 (23:21 +0200)]
CUDA: fix negative KV_max values in FA (#15321)

7 weeks ago eval-callback : stop on first NaN (#15320)
Georgi Gerganov [Thu, 14 Aug 2025 19:10:51 +0000 (22:10 +0300)]
eval-callback : stop on first NaN (#15320)

* eval-callback : stop on first NaN

* cont : log error

7 weeks ago chat : include kwargs in template example (#15309)
Diego Devesa [Thu, 14 Aug 2025 17:28:29 +0000 (10:28 -0700)]
chat : include kwargs in template example (#15309)

7 weeks ago llama : add 18-layer model type for Gemma 3-270m (#15319)
Daniel Bevenius [Thu, 14 Aug 2025 15:56:26 +0000 (17:56 +0200)]
llama : add 18-layer model type for Gemma 3-270m (#15319)

This commit adds support for the 18-layer model type in the Gemma3
series, which is the size of the Gemma3-270m model.

The motivation for this commit is that this was the only change
required for Gemma3-270m to be converted to GGUF format and used
with llama.cpp.

Once the model has been converted and uploaded to Hugging Face, it
can be used like this:
```console
$ ./build/bin/llama-cli -hf ggml-org/gemma-3-270m-GGUF:Q8_0
```

7 weeks ago devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)
simevo [Thu, 14 Aug 2025 15:45:27 +0000 (17:45 +0200)]
devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)

fixes #15004

Co-authored-by: Paolo Greppi <redacted>
7 weeks ago HIP: Cleanup hipification header (#15285)
uvos [Thu, 14 Aug 2025 14:23:56 +0000 (16:23 +0200)]
HIP: Cleanup hipification header (#15285)

Add explicit conversion operator to support older versions of ROCm
Switch over to hip_bf16 from legacy hip_bfloat16
Simplify RDNA3 define
Lower the switchover to the new hipBLAS API to ROCm 6.5, as this version is used for the ROCm 7.0 previews

---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago gpt-oss: implement harmony parsing (#15181) upstream/0.0.6164
Aldehir Rojas [Thu, 14 Aug 2025 14:23:11 +0000 (09:23 -0500)]
gpt-oss: implement harmony parsing (#15181)

* model : add harmony parser for gpt-oss

* gpt-oss : fix grammar trigger from causing empty stack

* gpt-oss: tweak the grammar trigger again

* gpt-oss : add support for recipient in role header

* gpt-oss : fix ungrouped tool calls in grammar

* gpt-oss : loosen function name matching during parse

* gpt-oss : clean up workarounds

* gpt-oss : add template tests

* gpt-oss : simulate thinking and tool call tags

* gpt-oss : undo think tags when reasoning_format is none

* gpt-oss : set special tokens back to user defined

* gpt-oss : update openai-gpt-oss template

* server : filter out harmony thought messages

* gpt-oss : simplify parsing

7 weeks ago docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)
Christian Kastner [Thu, 14 Aug 2025 14:22:58 +0000 (16:22 +0200)]
docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)

7 weeks ago readme : update hot topics (#15315)
Georgi Gerganov [Thu, 14 Aug 2025 14:16:03 +0000 (17:16 +0300)]
readme : update hot topics (#15315)

7 weeks ago vulkan: perf_logger improvements (#15246)
Jeff Bolz [Thu, 14 Aug 2025 13:38:10 +0000 (08:38 -0500)]
vulkan: perf_logger improvements (#15246)

* vulkan: perf_logger improvements

- Account for batch dimension in flops calculation.
- Fix how "_VEC" is detected for mat_mul_id.
- Fix "n" dimension for mat_mul_id (in case of broadcasting).
- Include a->type in name.

* use <=mul_mat_vec_max_cols rather than ==1

7 weeks ago server : add SWA checkpoints (#15293)
Georgi Gerganov [Thu, 14 Aug 2025 11:59:50 +0000 (14:59 +0300)]
server : add SWA checkpoints (#15293)

* server : add SWA checkpoints

ggml-ci

* cont : server clean-up

* server : handle state restore fails

* llama : add extended llama_state_seq_ API

* server : do not make checkpoints if --swa-full

ggml-ci

* llama : remove flags value for NONE

* server : configure number of SWA checkpoints with CLI arg

ggml-ci

* args : fix scope of new argument

7 weeks ago sync : ggml
Georgi Gerganov [Thu, 14 Aug 2025 11:19:23 +0000 (14:19 +0300)]
sync : ggml

ggml-ci

7 weeks ago ggml: fix ggml_conv_1d_dw bug (ggml/1323)
Jason Ni [Thu, 14 Aug 2025 11:17:51 +0000 (19:17 +0800)]
ggml: fix ggml_conv_1d_dw bug (ggml/1323)

* ggml: fix ggml_conv_1d_dw bug

* Fixed conv1d_dw weight tensor dimension.

7 weeks ago tests : remove unused includes (ggml/0)
Georgi Gerganov [Thu, 14 Aug 2025 10:41:03 +0000 (13:41 +0300)]
tests : remove unused includes (ggml/0)

7 weeks ago perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304)
kallewoof [Thu, 14 Aug 2025 11:03:30 +0000 (20:03 +0900)]
perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304)

When running llama-perplexity on certain tasks that have coupled sequences, a cryptic error is printed that does not tell you the fix, which is to set the -kvu flag. This adds a hint about that.
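
For illustration, a hypothetical invocation with the suggested flag (the model and task file paths are placeholders; only the -kvu flag comes from this change):

```console
$ ./build/bin/llama-perplexity -m model.gguf -f task.txt -kvu
```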

7 weeks ago cuda : fix GGML_CUDA_GRAPHS=OFF (#15300)
Sigbjørn Skjæret [Thu, 14 Aug 2025 10:22:07 +0000 (12:22 +0200)]
cuda : fix GGML_CUDA_GRAPHS=OFF (#15300)

* fix USE_CUDA_GRAPH=OFF

ggml-ci

* check capture status

* completely disable capturing check instead

7 weeks ago finetune: SGD optimizer, more CLI args (#13873)
Jonathan Graehl [Thu, 14 Aug 2025 10:03:57 +0000 (03:03 -0700)]
finetune: SGD optimizer, more CLI args (#13873)

* examples/finetune -opt SGD (stochastic gradient descent) memory opt

add unit tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating
m, v tensors.

support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)

Llama 3.2-1B-F32 result: observed 11 GB GPU RAM (41 sec/epoch)
when using SGD instead of 19 GB (55 sec/epoch) using AdamW
(Wikipedia 100-line finetune).

(using the same GPU memory, AdamW can only handle 512 batch/context
before OOM, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD is superior, though it converges more slowly, with a max of 1728
batch/context before OOM (esp. note the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

The -lr-half (half-life) option is useful for SGD to avoid oscillation
or very slow underdamped learning (it makes setting -lr more forgiving).
The terminal -lr is for now set by -lr-halvings, i.e. if you want at
most 1/8 the initial -lr you set -lr-halvings 3.

note: objective loss not directly comparable between adamw, sgd? -
check perplexity or accuracy or consider relative improvements
for convergence

new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)
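
Putting the new options together, a hypothetical invocation (the binary name and the model/data paths are assumptions; the flags are those discussed above):

```console
$ ./build/bin/llama-finetune -m model.gguf -f train.txt \
    -opt sgd -lr 1e-4 -lr-halvings 3 -wd 1e-9 -epochs 2
```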

cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)

since opt. memory is pre-allocated, the ggml_opt_get_optimizer_params
would probably be able to change between SGD and AdamW with each epoch
but would need to use adamw for the first (unconfirmed - no cmdline arg
to set such a policy yet)

test-opt checks adamw as before and now sgd (except for a few disabled
tests for sgd only; probably just needs logging values and adding
alternate reference values);  tolerance on the 'regression'
test is broader for sgd (so we don't need many more epochs)

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op param store weight-decay and not 1-alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <redacted>
Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago perplexity: give more information about constraints on failure (#15303)
kallewoof [Thu, 14 Aug 2025 06:16:32 +0000 (15:16 +0900)]
perplexity: give more information about constraints on failure (#15303)

* perplexity: give more information about constraints on failure

This checks whether -np is insufficient vs context, and provides clues as to how much is needed for each.

* log formatting

* log error and return instead of storing max_seq_exceeded int

* check if s0 is zero for -np check

7 weeks ago HIP: bump requirement to rocm 6.1 (#15296)
uvos [Wed, 13 Aug 2025 18:44:30 +0000 (20:44 +0200)]
HIP: bump requirement to rocm 6.1 (#15296)

7 weeks ago fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
Bas Nijholt [Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)]
fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to llama-cpp.cachix.org cache with a comment
claiming it's 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.

Users can still manually add the llama-cpp cache if it becomes functional in the future.

7 weeks ago server : enable -td and -tbd parameters (#15172)
Sigbjørn Skjæret [Wed, 13 Aug 2025 13:43:00 +0000 (15:43 +0200)]
server : enable -td and -tbd parameters (#15172)
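
A hypothetical invocation (model paths and the -md draft-model flag are assumptions; -td and -tbd are the draft thread-count parameters named in the title, values illustrative):

```console
$ ./build/bin/llama-server -m target.gguf -md draft.gguf -td 4 -tbd 4
```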

7 weeks ago ggml : update `ggml_rope_multi` (#12665)
Judd [Wed, 13 Aug 2025 10:45:15 +0000 (18:45 +0800)]
ggml : update `ggml_rope_multi` (#12665)

* update `rope_multi`:

1. add `ggml_rope_multi_inplace`;
1. use `GGML_MROPE_SECTIONS` instead of 4.

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft parameters (#15191)
Copilot [Wed, 13 Aug 2025 10:44:40 +0000 (12:44 +0200)]
common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft parameters (#15191)
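
A hypothetical invocation of one of the new parameters (model paths and the -md flag are assumptions; the parameter name comes from the title, the value is illustrative):

```console
$ ./build/bin/llama-server -m target.gguf -md draft.gguf --n-cpu-moe-draft 8
```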

* Checkpoint from VS Code for coding agent session

* Initial plan

* Fix typo in --override-tensor-draft flag implementation

* Add null termination for speculative tensor buffer overrides

* Apply suggestions from code review

* Apply suggestions from code review

* Extract tensor override parsing logic to common function (addresses @slaren's feedback)

* Apply suggestions from code review

* Apply suggestions

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Diego Devesa <redacted>
7 weeks ago server : filter out harmony thought messages (#15278)
Aldehir Rojas [Wed, 13 Aug 2025 10:28:21 +0000 (05:28 -0500)]
server : filter out harmony thought messages (#15278)

7 weeks ago ci : Added CI with RISC-V RVV1.0 Hardware (#14439)
Ali Tariq [Wed, 13 Aug 2025 10:14:44 +0000 (15:14 +0500)]
ci : Added CI with RISC-V RVV1.0 Hardware (#14439)

* Changed the CI file to hw

* Changed the CI file to hw

* Added to sudoers for apt

* Removed the clone command and used checkout

* Added libcurl

* Added gcc-14

* Checking gcc --version

* added gcc-14 symlink

* added CC and C++ variables

* Added the gguf weight

* Changed the weights path

* Added system specification

* Removed white spaces

* ci: Replace Jenkins riscv native build Cloud-V pipeline with GitHub Actions workflow

Removed the legacy .devops/cloud-v-pipeline Jenkins CI configuration and introduced .github/workflows/build-riscv-native.yml for native RISC-V builds using GitHub Actions.

* removed trailing whitespaces

---------

Co-authored-by: Akif Ejaz <redacted>
7 weeks ago ci : add more python requirements to copilot-setup-steps (#15289)
Sigbjørn Skjæret [Wed, 13 Aug 2025 09:30:45 +0000 (11:30 +0200)]
ci : add more python requirements to copilot-setup-steps (#15289)

* ci : add flake8 and pyright to copilot-setup-steps.yml

* add tools/server/tests/requirements.txt

7 weeks ago ggml : repack block_iq4_nlx8 (#14904)
Georgi Gerganov [Wed, 13 Aug 2025 08:09:39 +0000 (11:09 +0300)]
ggml : repack block_iq4_nlx8 (#14904)

ggml-ci

7 weeks ago CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132)
Oliver Simons [Wed, 13 Aug 2025 08:04:46 +0000 (10:04 +0200)]
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132)

* Factor out `reduce_rows_f32` from common.cuh

This increases iteration cycle speed by not having to recompile
every kernel all the time

* Hide memory-latency by loop unrolling in reduce_rows_f32

* Further optimizations to `reduce_rows_f32`

1. Increase threadblock size to better hide latency of memory requests.
   As a consequence of bigger threadblocks, do 2-step summation, using
   shared memory to communicate results between invocations
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect bigger threadblock
4. Improve default block_dims, increase support for more block_dims

* Add perf tests for `reduce_rows_f32` kernel

* Add heuristic to toggle 128/512 threads based on sm count

Break even point was the minimum of the following multiples.

| GPU Model                    | Nrow SM Count Multiple |
| ---------------------------- | ---------------------- |
| RTX 4000 SFF ADA             | 2.0x                   |
| RTX 6000 ADA                 | 2.5x                   |
| RTX PRO 6000 Blackwell Max-Q | 3.04x                  |
| RTX PRO 4500 Blackwell       | 3.15x                  |

* Ensure perf gains also for small ncols and large nrows

Alternative to this, one could have also made the number of unrollings
template-able, but that would require compiling the kernel multiple
times, increasing binary size unnecessarily

* Modify perf and unit-tests

* Apply auto-formatting by clang

* Fix CI build failure

See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with VS generator worked though.

* Remove sm_count property from `ggml_backend_cuda_context`

Requested by @JohannesGaessler, and should fix remaining CI issues as a
side-effect

* Add CUB-based implementation for GGML_OP_MEAN

Currently this branch is only executed for nrows==1

* Add heuristics to execute CUB branch only when it brings perf

Heuristics were determined on the following HW:

* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell

* Add unit-test for CUB-based mean

Tests should run with CUDA Graphs enabled by default on NVGPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

Suggested by @JohannesGaessler

* Unindent Preprocessor directives

See
https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506

7 weeks ago ci : add copilot-setup-steps.yml (#15214)
Sigbjørn Skjæret [Wed, 13 Aug 2025 07:07:13 +0000 (09:07 +0200)]
ci : add copilot-setup-steps.yml (#15214)

7 weeks ago ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (#15188)
Tak-RS [Wed, 13 Aug 2025 05:54:30 +0000 (14:54 +0900)]
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (#15188)

* ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others); see the sketch below. Fixes #15055

* ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv

* rpc: drop n==0 special case in send_data(); retry in loop per review

* rpc: remove trailing whitespace in send_data()
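
A minimal sketch of the chunked-send pattern described above, assuming POSIX sockets; `send_data` and `MAX_CHUNK_SIZE` are named in the commit, everything else is illustrative:

```cpp
#include <sys/socket.h>
#include <algorithm>
#include <cstddef>

static const size_t MAX_CHUNK_SIZE = 1 << 20; // illustrative cap, not the actual value

// send `size` bytes in chunks no larger than MAX_CHUNK_SIZE so that a
// single oversized send() cannot fail with EINVAL; retry per chunk
static bool send_data(int sockfd, const void * data, size_t size) {
    size_t sent = 0;
    while (sent < size) {
        const size_t  chunk = std::min(size - sent, MAX_CHUNK_SIZE);
        const ssize_t n     = send(sockfd, (const char *) data + sent, chunk, 0);
        if (n < 0) {
            return false; // the real backend reports this via GGML_LOG_ERROR
        }
        sent += (size_t) n;
    }
    return true;
}
```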

---------

Co-authored-by: Shinnosuke Takagi <redacted>
7 weeks ago HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (#15273)
uvos [Tue, 12 Aug 2025 20:15:12 +0000 (22:15 +0200)]
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (#15273)

7 weeks ago sycl: Fix and disable more configurations of mul_mat (#15151)
Romain Biessy [Tue, 12 Aug 2025 11:58:22 +0000 (13:58 +0200)]
sycl: Fix and disable more configurations of mul_mat (#15151)

* sycl: Fix and disable more configurations of mul_mat

* Disable more configurations

7 weeks ago opencl: allow mixed f16/f32 `add` (#15140)
rmatif [Tue, 12 Aug 2025 09:42:41 +0000 (11:42 +0200)]
opencl: allow mixed f16/f32 `add` (#15140)

7 weeks ago CUDA cmake: add `-lineinfo` for easier debug (#15260)
Aman Gupta [Tue, 12 Aug 2025 09:21:45 +0000 (17:21 +0800)]
CUDA cmake: add `-lineinfo` for easier debug (#15260)

7 weeks ago CANN: GGML_OP_CPY optimization (#15070)
Chenguang Li [Tue, 12 Aug 2025 08:12:13 +0000 (16:12 +0800)]
CANN: GGML_OP_CPY optimization (#15070)

Signed-off-by: noemotiovon <redacted>
7 weeks ago musa: fix failures in test-backend-ops for mul_mat_id op (#15236)
R0CKSTAR [Tue, 12 Aug 2025 02:02:51 +0000 (10:02 +0800)]
musa: fix failures in test-backend-ops for mul_mat_id op (#15236)

* musa: fix failures in test-backend-ops for mul_mat_id op

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
7 weeks ago CANN: Add broadcast for softmax and FA (#15208)
hipudding [Mon, 11 Aug 2025 14:50:31 +0000 (22:50 +0800)]
CANN: Add broadcast for softmax and FA (#15208)

* refactor softmax

* fix fa

* fix mask shape

* format

* add comments

* Remove whitespace

7 weeks ago mtmd : Fix MinicpmV model converter and clip to avoid using hardcoded values. (#14750)
rainred [Mon, 11 Aug 2025 14:12:12 +0000 (22:12 +0800)]
mtmd : Fix MinicpmV model converter and clip to avoid using hardcoded values. (#14750)

* Fix MinicpmV model converter and clip to avoid using hardcoded values.

* Code update for pr/14750

* Remove unused field, update script path in docs.

* Add version 5 for fallback code.

---------

Co-authored-by: lzhang <redacted>
7 weeks ago chat : hotfix gpt-oss jinja raising an exception (#15243)
Xuan-Son Nguyen [Mon, 11 Aug 2025 13:31:35 +0000 (15:31 +0200)]
chat : hotfix gpt-oss jinja raising an exception (#15243)

* chat : hotfix gpt-oss jinja raising an exception

* fix

7 weeks ago server : allow specifying reasoning_format in HTTP request (#15238)
Xuan-Son Nguyen [Mon, 11 Aug 2025 12:48:41 +0000 (14:48 +0200)]
server : allow specifying reasoning_format in HTTP request (#15238)
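
A hypothetical request showing the new field (endpoint per the server's OpenAI-compatible API; the "none" value appears elsewhere in this log, the rest is illustrative):

```console
$ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "reasoning_format": "none"
  }'
```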

7 weeks ago readme : update infra list (#15234)
Zagaj [Mon, 11 Aug 2025 12:27:54 +0000 (14:27 +0200)]
readme : update infra list (#15234)

7 weeks ago kv-cache : fix seq_rm with seq_id == -1 (#15226)
Georgi Gerganov [Mon, 11 Aug 2025 10:58:24 +0000 (13:58 +0300)]
kv-cache : fix seq_rm with seq_id == -1 (#15226)

* kv-cache : fix seq_rm with seq_id == -1

ggml-ci

* cont : iterate over streams

ggml-ci

7 weeks ago kv-cache : log (debug) all streams in find_slot (#15176)
Daniel Bevenius [Mon, 11 Aug 2025 09:21:19 +0000 (11:21 +0200)]
kv-cache : log (debug) all streams in find_slot (#15176)

This commit updates `llama_kv_cache_unified::find_slot` to log
information for all streams when debug is enabled.

The motivation for this change is that currently if a non-unified
kv-cache is used, then only one stream will be logged because the
code currently uses `seq_to_stream[1]`.

7 weeks ago convert : fix merge conflicts (#15229)
Sigbjørn Skjæret [Mon, 11 Aug 2025 09:15:44 +0000 (11:15 +0200)]
convert : fix merge conflicts (#15229)

7 weeks ago perplexity : update comments/error msg to use decode [no ci] (#15227)
Daniel Bevenius [Mon, 11 Aug 2025 08:21:24 +0000 (10:21 +0200)]
perplexity : update comments/error msg to use decode [no ci] (#15227)

This commit updates comments and error messages to use "decode" instead
of "eval" in perplexity.cpp.

The motivation for this is that `llama_eval` was renamed to
`llama_decode` a while ago, but the comments and error messages
still referred to "eval". This change ensures consistency and clarity.

7 weeks ago convert : improve Mistral models integration (#14737)
Julien Denize [Mon, 11 Aug 2025 08:07:49 +0000 (10:07 +0200)]
convert : improve Mistral models integration (#14737)

* Improve Mistral models integration with llama.cpp

* Revert changes and fix gguf

* Revert change

* refactor convert_mistral_to_gguf.py in convert_hf_to_gguf.py

* Revert collateral

* Rename model name

* refactor

* revert

* remove duplicate

* Remove duplication code

* Fixes

* Fix flake issues

* Apply comments

* Apply comments

* Apply comments

* Fix remote

* add default chat template

* Revert

* nit

7 weeks ago kleidiai: fix unsigned overflow bug (#15150)
Charles Xu [Mon, 11 Aug 2025 07:59:26 +0000 (09:59 +0200)]
kleidiai: fix unsigned overflow bug (#15150)

* kleidiai: fix unsigned overflow bug

* address review comments

7 weeks ago cuda: refactored ssm_scan and use CUB (#13291)
David Zhao [Sat, 9 Aug 2025 18:29:43 +0000 (13:29 -0500)]
cuda: refactored ssm_scan and use CUB (#13291)

* cuda: refactored ssm_scan to use CUB

* fixed compilation error when not using CUB

* assign L to constant and use size_t instead of int

* deduplicated functions

* change min blocks per mp to 1

* Use cub load and store warp transpose

* suppress clang warning

7 weeks ago CUDA: add attention sinks for tile and wmma (#15178)
Aman Gupta [Sat, 9 Aug 2025 12:00:24 +0000 (20:00 +0800)]
CUDA: add attention sinks for tile and wmma (#15178)

* CUDA: add attention sinks for tile and wmma

* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma

8 weeks ago gguf-py : add Numpy MXFP4 de/quantization support (#15111)
compilade [Fri, 8 Aug 2025 21:48:26 +0000 (17:48 -0400)]
gguf-py : add Numpy MXFP4 de/quantization support (#15111)

* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4

8 weeks ago server-bench: external OAI servers, sqlite (#15179)
Johannes Gäßler [Fri, 8 Aug 2025 21:04:36 +0000 (23:04 +0200)]
server-bench: external OAI servers, sqlite (#15179)

* server-bench: external OAI servers, sqlite

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* raise_for_status

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
8 weeks ago ggml : fix field name when new ggml_backend (#14944)
AN Long [Fri, 8 Aug 2025 12:37:22 +0000 (21:37 +0900)]
ggml : fix field name when new ggml_backend (#14944)

8 weeks ago vendor: sync minja (#15161)
Olivier Chafik [Fri, 8 Aug 2025 09:45:18 +0000 (10:45 +0100)]
vendor: sync minja (#15161)

* vendor: sync minja

* Update minja.hpp

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
8 weeks ago CUDA: attention sinks for mma FlashAttention (#15157)
Johannes Gäßler [Fri, 8 Aug 2025 06:19:58 +0000 (08:19 +0200)]
CUDA: attention sinks for mma FlashAttention (#15157)

8 weeks ago opencl: support sink in `soft_max` (attn sinks) (#15152)
lhez [Fri, 8 Aug 2025 04:47:03 +0000 (13:47 +0900)]
opencl: support sink in `soft_max` (attn sinks) (#15152)

8 weeks ago convert : support non-mxfp4 HF model (#15153)
Xuan-Son Nguyen [Thu, 7 Aug 2025 21:26:03 +0000 (23:26 +0200)]
convert : support non-mxfp4 HF model (#15153)

* convert : support non-mxfp4 HF model

* rm redundant check

* disable debug check

8 weeks ago vulkan: support fattn sinks (#15126)
Jeff Bolz [Thu, 7 Aug 2025 20:44:20 +0000 (15:44 -0500)]
vulkan: support fattn sinks (#15126)

8 weeks ago vulkan: Add env var to disable host visible vidmem (#15109)
Jeff Bolz [Thu, 7 Aug 2025 20:07:11 +0000 (15:07 -0500)]
vulkan: Add env var to disable host visible vidmem (#15109)

8 weeks ago llama : Support intern-s1 (#14875)
RunningLeon [Thu, 7 Aug 2025 16:20:40 +0000 (00:20 +0800)]
llama : Support intern-s1 (#14875)

* support internvl

* support interns1

* resolve comments

* put interns1 in tensor mapping

* resolve comment

* move tokenizer changes to sub class

8 weeks ago HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103)
uvos [Thu, 7 Aug 2025 14:44:14 +0000 (16:44 +0200)]
HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103)

8 weeks ago ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
Christian Kastner [Thu, 7 Aug 2025 11:45:41 +0000 (13:45 +0200)]
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)

Any available libraries are found and loaded dynamically at runtime.

8 weeks ago CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131)
Johannes Gäßler [Thu, 7 Aug 2025 08:53:21 +0000 (10:53 +0200)]
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131)

* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16

8 weeks ago scripts: fix crash when --tool is not set (#15133)
Johannes Gäßler [Thu, 7 Aug 2025 06:50:30 +0000 (08:50 +0200)]
scripts: fix crash when --tool is not set (#15133)

8 weeks ago requirements : fix PyTorch uint64 compatibility (#15134)
Daniel Bevenius [Thu, 7 Aug 2025 03:31:48 +0000 (05:31 +0200)]
requirements : fix PyTorch uint64 compatibility (#15134)

This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```

This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x only provides limited support for
unsigned types beyond uint8 it seems. The torch.uint64 dtype exists but
is not exposed in the standard torch namespace
(see pytorch/pytorch#58734).

PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to 0.19.0 for compatibility.
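
The resulting pins would look roughly like this (the exact version specifiers in the requirements file are assumptions):

```
torch~=2.4.0
torchvision~=0.19.0
```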

Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734

8 weeks ago ggml: Add basic SET_ROWS support in WebGPU (#15137)
Reese Levine [Wed, 6 Aug 2025 22:14:40 +0000 (15:14 -0700)]
ggml: Add basic SET_ROWS support in WebGPU (#15137)

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

8 weeks ago fix profiling crash (#15072)
rmatif [Wed, 6 Aug 2025 21:17:51 +0000 (23:17 +0200)]
fix profiling crash (#15072)

8 weeks ago opencl: add `swiglu_oai` and `add_id` (#15121)
lhez [Wed, 6 Aug 2025 19:12:17 +0000 (04:12 +0900)]
opencl: add `swiglu_oai` and `add_id` (#15121)

* opencl: add `swiglu-oai`

* opencl: add `add_id`

* opencl: add missing `add_id.cl`

8 weeks ago chat : support Granite model reasoning and tool call (#14864)
Sachin Desai [Wed, 6 Aug 2025 18:27:30 +0000 (11:27 -0700)]
chat : support Granite model reasoning and tool call (#14864)

8 weeks ago Fixed name `-override-tensors` to `-override-tensor` (#15129)
Juk Armstrong [Wed, 6 Aug 2025 16:28:48 +0000 (17:28 +0100)]
Fixed name `-override-tensors` to `-override-tensor` (#15129)

8 weeks ago ggml : fix fallback to CPU for unsupported ops (#15118)
Diego Devesa [Wed, 6 Aug 2025 12:37:35 +0000 (05:37 -0700)]
ggml : fix fallback to CPU for unsupported ops (#15118)

8 weeks ago chat : fix yandex chat template (#15116)
Sigbjørn Skjæret [Wed, 6 Aug 2025 11:26:49 +0000 (13:26 +0200)]
chat : fix yandex chat template (#15116)

8 weeks ago chat : fix hunyuan auto-detection (#15114)
stevenkuang [Wed, 6 Aug 2025 09:48:30 +0000 (17:48 +0800)]
chat : fix hunyuan auto-detection (#15114)

Signed-off-by: stevenkuang <redacted>
8 weeks ago CANN: add support for ACL Graph (#15065)
Chenguang Li [Wed, 6 Aug 2025 06:12:42 +0000 (14:12 +0800)]
CANN: add support for ACL Graph (#15065)

* feat(cann): add optional support for ACL Graph execution

This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:

    -DUSE_CANN_GRAPH=ON

By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.
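
A hypothetical full configure command (the GGML_CANN toggle for the CANN backend is an assumption; USE_CANN_GRAPH comes from this commit and is later renamed USE_ACL_GRAPH):

```console
$ cmake -B build -DGGML_CANN=ON -DUSE_CANN_GRAPH=ON
$ cmake --build build -j
```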

Key additions:
- CMake option  to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
  is unset or invalid

This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.

Signed-off-by: noemotiovon <redacted>
* Fix review comments

Signed-off-by: noemotiovon <redacted>
* rename USE_CANN_GRAPH to USE_ACL_GRAPH

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
8 weeks ago ggml: WebGPU disable SET_ROWS for now (#15078)
Reese Levine [Tue, 5 Aug 2025 23:26:38 +0000 (16:26 -0700)]
ggml: WebGPU disable SET_ROWS for now (#15078)

* Add parameter buffer pool, batching of submissions, refactor command building/submission

* Add header for linux builds

* Free staged parameter buffers at once

* Format with clang-format

* Fix thread-safe implementation

* Use device implicit synchronization

* Update workflow to use custom release

* Remove testing branch workflow

* Disable set_rows until it's implemented

* Fix potential issue around empty queue submission

* Try synchronous submission

* Try waiting on all futures explicitly

* Add debug

* Add more debug messages

* Work on getting ssh access for debugging

* Debug on failure

* Disable other tests

* Remove extra if

* Try more locking

* maybe passes?

* test

* Some cleanups

* Restore build file

* Remove extra testing branch ci

8 weeks ago llama : add gpt-oss (#15091)
Georgi Gerganov [Tue, 5 Aug 2025 19:10:36 +0000 (22:10 +0300)]
llama : add gpt-oss (#15091)

* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (#7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (#1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return

* ggml : add fused swiglu_oai op (#11)

* ggml : add fused swiglu_oai op

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: slaren <redacted>
* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf

Co-authored-by: Diego Devesa <redacted>
change kvalues_mxfp4 table to match e2m1 (#6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant

* ggml : add ggml_add_id (#13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: slaren <redacted>
8 weeks ago chat : only remove double bos/eos if added (#15086)
Sigbjørn Skjæret [Tue, 5 Aug 2025 18:43:36 +0000 (20:43 +0200)]
chat : only remove double bos/eos if added (#15086)

* only remove double bos/eos if added

* fix tests

8 weeks ago readme : update hot topics (#15097)
Georgi Gerganov [Tue, 5 Aug 2025 17:19:33 +0000 (20:19 +0300)]
readme : update hot topics (#15097)

8 weeks ago sycl: fix mul_mat selection (#15092)
Romain Biessy [Tue, 5 Aug 2025 16:39:55 +0000 (18:39 +0200)]
sycl: fix mul_mat selection (#15092)

8 weeks ago Fix `glm4moe` bug (#15088)
Juk Armstrong [Tue, 5 Aug 2025 12:56:44 +0000 (13:56 +0100)]
Fix `glm4moe` bug (#15088)

8 weeks ago webui: fix markdown table (#15081)
Alex Wu [Tue, 5 Aug 2025 11:56:44 +0000 (19:56 +0800)]
webui: fix markdown table (#15081)

* webui: fix markdown table

* webui: fix table display with themes

8 weeks ago context : fix index overflow on huge outputs (#15080)
compilade [Tue, 5 Aug 2025 09:27:45 +0000 (05:27 -0400)]
context : fix index overflow on huge outputs (#15080)

* context : fix overflow when re-ordering huge outputs

* context : fix logits size overflow for huge batches

8 weeks ago llama : add --n-cpu-moe option (#15077)
Diego Devesa [Mon, 4 Aug 2025 23:05:36 +0000 (16:05 -0700)]
llama : add --n-cpu-moe option (#15077)

* llama : add --n-cpu-moe option

Keeps the MoE weights of the first N layers in the CPU
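
A hypothetical invocation (model path and layer count are illustrative; -ngl offloads layers to the GPU while the new flag keeps the first N layers' MoE weights on the CPU):

```console
$ ./build/bin/llama-cli -m model.gguf -ngl 99 --n-cpu-moe 10
```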

8 weeks ago imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
compilade [Mon, 4 Aug 2025 21:26:52 +0000 (17:26 -0400)]
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)

* imatrix : add warning when suffix is not .gguf for GGUF imatrix

* imatrix : only warn about suffix when output format is unspecified

8 weeks ago cmake: Add GGML_BACKEND_DIR option (#15074)
Christian Kastner [Mon, 4 Aug 2025 19:29:14 +0000 (21:29 +0200)]
cmake: Add GGML_BACKEND_DIR option (#15074)

* cmake: Add GGML_BACKEND_DIR option

This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.
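
For example, a distribution build might configure (the install path is illustrative):

```console
$ cmake -B build -DGGML_BACKEND_DL=ON -DGGML_BACKEND_DIR=/usr/lib/ggml
```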

* Fix phrasing

8 weeks ago gguf-py : add --chat-template-file to gguf_new_metadata (#15075)
Sigbjørn Skjæret [Mon, 4 Aug 2025 19:01:48 +0000 (21:01 +0200)]
gguf-py : add --chat-template-file to gguf_new_metadata (#15075)
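
A hypothetical invocation (the gguf-new-metadata entry point and file names are assumptions; the flag comes from the title):

```console
$ gguf-new-metadata input.gguf output.gguf --chat-template-file template.jinja
```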

8 weeks ago model: support GLM 4.5 family of models (#14939)
Sam [Mon, 4 Aug 2025 18:29:25 +0000 (04:29 +1000)]
model: support GLM 4.5 family of models (#14939)

* model: Add GLM 4.5 (#14921)

Co-authored-by: Sigbjørn Skjæret <redacted>
* Merge in PR suggestions

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: Add GLM 4.5 family of models (#14921)

1. Updated tensor_mapping.py with NextN tensor mappings

- Added proper tensor mappings for all NextN/MTP tensors in /Users/samm/git/llama.cpp/gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm

2. Added num_nextn_predict_layers configuration

- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method
- Updated conversion script to extract and write this parameter from HuggingFace config

3. Added FIM tokens for GLM4_MOE

- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
  - <|code_prefix|> for FIM_PRE
  - <|code_suffix|> for FIM_SUF
  - <|code_middle|> for FIM_MID

4. Removed manual NextN tensor handling

- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system

* glm 4.5 update tensor names

* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review

* Apply suggestions from code review

* patch broken chat template

* typings fix

* add TENSOR_SKIP flag

Co-authored-by: Diego Devesa <redacted>
* Update src/llama-model-loader.h

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Diego Devesa <redacted>
8 weeks ago quantize : fix confusing error message if ftype is invalid (#15071)
Sigbjørn Skjæret [Mon, 4 Aug 2025 16:11:02 +0000 (18:11 +0200)]
quantize : fix confusing error message if ftype is invalid (#15071)