git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
6 weeks ago hexagon: Q4_0 and MXFP4 repack fixes (#20527)
Max Krasnyansky [Sat, 14 Mar 2026 18:09:08 +0000 (11:09 -0700)]
hexagon: Q4_0 and MXFP4 repack fixes (#20527)

* hexagon: fix tail corruption with row sizes not a multiple of 256

* hexagon: use different stride for repacking partial blocks

* hex-mm: update repack and kernels to avoid shuffles for full 256-element blocks

The previous commit changed the repacking to use even:odd (0:1,2:3,..) packing
instead of the original (0:128,1:129,...) packing in order to fix tail corruption.
Since the mm kernels already deal with partial tails, we can use even:odd
packing only for the last block.
This avoids the performance penalty of having to shuffle to zip the elements
in the common case.
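
For illustration, a minimal sketch of the two pairings for a full 256-element block (hypothetical helpers, not the actual hexagon repack code):

```cpp
#include <stdint.h>

// original packing: element i is zipped with element i + 128
static void pack_halves(const uint8_t *src, uint8_t *dst) {
    for (int i = 0; i < 128; i++) {
        dst[i] = (uint8_t)((src[i] & 0x0F) | ((src[i + 128] & 0x0F) << 4));
    }
}

// even:odd packing: adjacent elements are zipped (0:1, 2:3, ...), so a
// partial tail of n elements never reads past src[n - 1]
static void pack_even_odd(const uint8_t *src, uint8_t *dst) {
    for (int i = 0; i < 128; i++) {
        dst[i] = (uint8_t)((src[2*i] & 0x0F) | ((src[2*i + 1] & 0x0F) << 4));
    }
}
```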

* hex-mm: update rmpy x8 for better optimizations

* hex-mm: tighten supported MUL_MAT checks to avoid spurious failures

* hex-mm: use vzero to init accumulators

* hex-mm: properly call partial rmpy_x8

6 weeks ago ci : reduce webgpu tests timeout to 900s (#20538)
Georgi Gerganov [Sat, 14 Mar 2026 15:08:26 +0000 (17:08 +0200)]
ci : reduce webgpu tests timeout to 900s (#20538)

[no ci]

6 weeks ago mtmd: add llama-mtmd-debug binary (#20508)
Xuan-Son Nguyen [Sat, 14 Mar 2026 14:52:29 +0000 (15:52 +0100)]
mtmd: add llama-mtmd-debug binary (#20508)

* mtmd: add llama-mtmd-debug binary

* adapt

* fixes

* fix compile error

* fix windows compile error

* rm legacy clip_debug_encode()

* add MTMD_API to fix build

6 weeks ago add op gated_delta_net (#20455)
Neo Zhang [Sat, 14 Mar 2026 14:01:57 +0000 (22:01 +0800)]
add op gated_delta_net (#20455)

6 weeks ago webui: restore code preview iframe origin isolation (#20477)
Chedrian07 [Sat, 14 Mar 2026 10:28:28 +0000 (19:28 +0900)]
webui: restore code preview iframe origin isolation (#20477)

6 weeks ago scripts : remove get-wikitext-103.sh (#20543)
Adrien Gallouët [Sat, 14 Mar 2026 10:22:04 +0000 (11:22 +0100)]
scripts : remove get-wikitext-103.sh (#20543)

It doesn't work and no one seems to use it.

    $ wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: unspecified
    ERROR: Redirection (301) without location.

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago scripts : update get-hellaswag.sh and get-winogrande.sh (#20542)
Adrien Gallouët [Sat, 14 Mar 2026 10:21:50 +0000 (11:21 +0100)]
scripts : update get-hellaswag.sh and get-winogrande.sh (#20542)

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago ggml : add native AVX512-FP16 support for F16 operations (#20529)
Adrien Gallouët [Sat, 14 Mar 2026 09:06:14 +0000 (10:06 +0100)]
ggml : add native AVX512-FP16 support for F16 operations (#20529)

The overall benchmark speed remains almost the same because the CPU is
now calculating faster than the RAM can deliver the data. (See perf stat
results below showing 2.7 billion fewer instructions).

Also note that this path will only be enabled for native builds or with
custom flags.

now:
```
 Performance counter stats for 'build/bin/llama-bench -m Qwen3-0.6B-f16.gguf -p 512 -n 128':

        189,073.52 msec task-clock                       #   14.658 CPUs utilized
               404      context-switches                 #    2.137 /sec
                19      cpu-migrations                   #    0.100 /sec
           372,390      page-faults                      #    1.970 K/sec
   310,877,195,595      instructions                     #    0.54  insn per cycle
   581,071,530,602      cycles                           #    3.073 GHz
    19,352,107,994      branches                         #  102.352 M/sec
        48,304,438      branch-misses                    #    0.25% of all branches
    84,998,431,152      L1-dcache-loads                  #  449.552 M/sec
    12,186,410,279      L1-dcache-load-misses            #   14.34% of all L1-dcache accesses

      12.899358742 seconds time elapsed

     187.823044000 seconds user
       1.253416000 seconds sys
```

before:
```
 Performance counter stats for 'build/bin/llama-bench -m Qwen3-0.6B-f16.gguf -p 512 -n 128':

        190,594.56 msec task-clock                       #   14.652 CPUs utilized
               436      context-switches                 #    2.288 /sec
                22      cpu-migrations                   #    0.115 /sec
           372,782      page-faults                      #    1.956 K/sec
   313,574,921,966      instructions                     #    0.54  insn per cycle
   586,064,970,425      cycles                           #    3.075 GHz
    19,585,778,563      branches                         #  102.761 M/sec
        48,437,488      branch-misses                    #    0.25% of all branches
    86,219,336,628      L1-dcache-loads                  #  452.370 M/sec
    12,232,085,771      L1-dcache-load-misses            #   14.19% of all L1-dcache accesses

      13.007923164 seconds time elapsed

     189.395316000 seconds user
       1.202612000 seconds sys
```
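
For reference, a minimal sketch of the kind of F16 dot product this path enables, assuming AVX512-FP16 intrinsics and n being a multiple of 32 (not the actual ggml kernel):

```cpp
#include <immintrin.h>

// build with e.g. -mavx512fp16; accumulates natively in FP16 instead of
// widening each half to F32 first
static float dot_f16(const _Float16 *x, const _Float16 *y, int n) {
    __m512h acc = _mm512_setzero_ph();
    for (int i = 0; i < n; i += 32) {          // 32 halves per 512-bit register
        __m512h vx = _mm512_loadu_ph(x + i);
        __m512h vy = _mm512_loadu_ph(y + i);
        acc = _mm512_fmadd_ph(vx, vy, acc);    // single FP16 fused multiply-add
    }
    return (float) _mm512_reduce_add_ph(acc);  // horizontal sum
}
```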

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago Use fp32 in cuBLAS V100 to avoid overflows, env variables to override cuBLAS compute type (#19959)
Wallentri [Sat, 14 Mar 2026 07:43:13 +0000 (10:43 +0300)]
Use fp32 in cuBLAS V100 to avoid overflows, env variables to override cuBLAS compute type (#19959)

* Update ggml-cuda.cu

* Update ggml-cuda.cu

* Update build.md

* Update build.md

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* Update ggml-cuda.cu

* Update build.md

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* Update build.md

* Update ggml-cuda.cu

* Update ggml-cuda.cu

---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago ggml : add OpenVINO backend (#15307)
Zijun Yu [Sat, 14 Mar 2026 05:56:55 +0000 (13:56 +0800)]
ggml : add OpenVINO backend (#15307)

* Update build doc

* Add cgraph tensor output name to OV op name

* Update openvino build instructions

* Add initial NPU support

* draft NPU support version 2: prefill + kvcache

* NPU support version 2: prefill + kvcache

* Change due to ggml cgraph changes, not correct yet

* Change due to ggml cgraph changes, llama-3.2 CPU work

* Add AMD64 to CMakeLists

* Change due to ggml cgraph changes, all device work

* Refactor: clean, fix warning

* Update clang-format

* Stateful transformation for CPU and GPU

* Add SwiGLU

* Fuse to SDPA

* Replace Concat with Broadcast in MulMat for GQA

* Pull out indices creation for kv cache update

* Refactor: remove past_token_len from extra_inputs

* Fix Phi3 SwiGLU and SoftMax

* Pull out sin cos from rope

* Reduce memory: free ov weights node after graph conversion

* Fix CPY due to cgraph change

* Added OpenVINO CI/CD. Updated docs

* Fix llama-cli

* Fix Phi3 ROPE; Add test-backend-ops

* Fix NPU

* Fix llama-bench; Clang-format

* Fix llama-perplexity

* temp. changes for mark decomp

* matmul in fp32

* mulmat input conversion fix

* mulmat type conversion update

* add mark decomp pass

* Revert changes in fuse_to_sdpa

* Update build.md

* Fix test-backend-ops

* Skip test-thread-safety; Run ctest only in ci/run.sh

* Use CiD for NPU

* Optimize tensor conversion, improve TTFT

* Support op SET_ROWS

* Fix NPU

* Remove CPY

* Fix test-backend-ops

* Minor updates for raising PR

* Perf: RMS fused to OV internal RMS op

* Fix after rebasing

- Layout of cache k and cache v are unified: [seq, n_head, head_size]
- Add CPY and FLASH_ATTN_EXT, flash attn is not used yet
- Skip test-backend-ops due to flash attn test crash
- Add mutex around graph conversion to avoid test-thread-safety failure in the future
- Update NPU config
- Update GPU config to disable SDPA opt to make phi-3 run

* Change openvino device_type to GPU; Enable flash_attn

* Update supports_buft and supports_op for quantized models

* Add quant weight conversion functions from genai gguf reader

* Quant models run with accuracy issue

* Fix accuracy: disable cpu_repack

* Fix CI; Disable test-backend-ops

* Fix Q4_1

* Fix test-backend-ops: Treat quantized tensors as weights

* Add NPU Q4_0 support

* NPU perf: eliminate zp

* Dequantize q4_1 q4_k q6_k for NPU

* Add custom quant type: q8_1_c, q4_0_128

* Set m_is_static=false as default in decoder

* Simplify translation of get_rows

* Fix after rebasing

* Improve debug util; Eliminate nop ReshapeReshape

* STYLE: make get_types_to_requant a function

* Support BF16 model

* Fix NPU compile

* WA for npu 1st token acc issue

* Apply EliminateZP only for npu

* Add GeGLU

* Fix Hunyuan

* Support iSWA

* Fix NPU accuracy

* Fix ROPE accuracy when freq_scale != 1

* Minor: not add attention_size_swa for non-swa model

* Minor refactor

* Add Q5_K to support phi-3-q4_k_m

* Requantize Q6_K (gs16) to gs32 on GPU

* Fix after rebasing

* Always apply Eliminate_ZP to fix GPU compile issue on some platforms

* kvcachefusion support

* env variable GGML_OPENVINO_DISABLE_SDPA_OPTIMIZATION added

* Fix for Phi3

* Fix llama-cli (need to run with --no-warmup)

* Fix add_sliced_mask; Revert mulmat, softmax; Remove input attention_size, iSWA model not working

* fix after rebasing

* Fix llama-3-8b and phi3-mini q4_0 NPU

* Update to OV-2025.3 and CMakeLists.txt

* Add OV CI cache

* Apply CISC review and update CI to OV2025.3

* Update CI to run OV dep install before build

* Update OV dockerfile to use OV2025.3 and update build docs

* Style: use switch in supports_ops

* Style: middle ptr and ref align, omit optional struct keyword

* NPU Unify PD (#14)

* Stateless. Fix llama-cli llama-server

* Simplify broadcast op in attention

* Replace get_output_tensor+memcpy with set_output_tensor

* NPU unify PD. Unify dynamic and static dims

* Clean placeholders in ggml-openvino.cpp

* NPU unify PD (handled internally)

* change graph to 4d, support multi sequences

* Fix llama-bench

* Fix NPU

* Update ggml-decoder.cpp

Hitting an error while compiling on Windows:

error C3861: 'unsetenv': identifier not found

Reason: unsetenv() is a POSIX function; it doesn’t exist on Windows. Visual Studio (MSVC) won’t recognize it.

Proposed fix: Use _putenv_s() (Windows equivalent)
This is supported by MSVC and achieves the same effect: it removes the environment variable from the process environment.

This keeps cross-platform compatibility.
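
A minimal sketch of the pattern (hypothetical wrapper name, not the exact change):

```cpp
#include <cstdlib>

static void env_unset(const char *name) {
#ifdef _WIN32
    _putenv_s(name, "");   // MSVC: an empty value removes the variable
#else
    unsetenv(name);        // POSIX
#endif
}
```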

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Remove the second decoder for node. Moving the function into the model decoder

* Fix error for naive

* NPU prefill chunking

* NPU fix llama-bench

* fallback naive run with accuracy issue

* NPU support llama-perplexity -b 512 --no-warmup

* Refactor: split ov_graph_compute for dynamic and static

* remove unused API GgmlOvDecoder::get_output_stride(const std::string & name)

* minor update due to ov 2025.4

* remove unused API GgmlOvDecoder::get_output_names()

* remove unused API get_output_shape(const std::string & name)

* Modified API GgmlOvDecoder::get_output_type(const std::string & name)

* Removed API GgmlOvDecoder::get_output_op_params(const std::string & name)

* Removed API get_output_ggml_tensor(const std::string & name)

* Removed API m_outputs

* Removed m_output_names

* Removed API GgmlOvDecoder::get_input_names()

* Removed API GgmlOvDecoder::get_input_stride(const std::string& name)

* Removed API get_input_type

* Removed API get_input_type

* Removed API GgmlOvDecoder::get_input_shape(const std::string & name)

* Removed API GgmlOvDecoder::get_input_op_params(const std::string & name)

* Fix error for decoder cache

* Reuse cached decoder

* GPU remove Q6_K requantization

* NPU fix wrong model output shape

* NPU fix q4 perf regression

* Remove unused variable nodes

* Fix decoder can_reuse for llama-bench

* Update build.md for Windows

* backend buffer: allocate on host

* Use shared_buffer for GPU NPU; Refactor

* Add ov_backend_host_buffer; Use cached remote context

* Put kvcache on GPU

* Use ggml_aligned_malloc

* only use remote tensor for kvcache

* only use remote tensor for kvcache for GPU

* FIX: use remote tensor from singleton

* Update build.md to include OpenCL

* NPU always requant to q4_0_128

* Optimize symmetric quant weight extraction: use single zp

* Use Q8_0_C in token embd, lm_head, and for 5 and 6 bits quant

* Update build.md

* Support -ctk f32

* Initial stateful graph support

* Update ggml/src/ggml-openvino/ggml-decoder.cpp

Co-authored-by: Yamini Nimmagadda <redacted>
* code cleanup

* npu perf fix

* requant to f16 for Q6 embed on NPU

* Update ggml/src/ggml-openvino/ggml-decoder.cpp

* Update ggml/src/ggml-openvino/ggml-openvino-extra.cpp

* Create OPENVINO.md in llama.cpp backend docs

* Update OPENVINO.md

* Update OPENVINO.md

* Update OPENVINO.md

* Update build.md

* Update OPENVINO.md

* Update OPENVINO.md

* Update OPENVINO.md

* kq_mask naming fix

* Syntax correction for workflows build file

* Change ov backend buffer is_host to false

* Fix llama-bench -p -n where p<=256

* Fix --direct-io 0

* Don't put kvcache on GPU in stateful mode

* Remove hardcode names

* Fix stateful shapes

* Simplification for stateful and update output shape processing

* Remove hardcode names

* Avoid re-compilation in llama-bench

* Extract zp directly instead of bias

* Refactor weight tensor processing

* create_weight_node accept non-ov backend buffer

* remove changes in llama-graph.cpp

* stateful masking fix (#38)

Fix for stateful accuracy issues and cl_out_of_resources error in stateful GPU with larger context sizes.

* Fix test-backend-ops crash in glu, get_rows, scale, rms_norm, add

* hardcoded name handling for rope_freqs.weight

* Suppress logging and add error handling to allow test-backend-ops to complete

* Fix MUL_MAT with broadcast; Add unsupported MUL_MAT FLASH_ATTN cases

* Use bias instead of zp in test-backend-ops

* Update OV in CI, Add OV CI Tests in GH Actions

* Temp fix for multithreading bug

* Update OV CI, fix review suggestions.

* fix editorconfig-checker, update docs

* Fix tabs to spaces for editorconfig-checker

* fix editorconfig-checker

* Update docs

* updated model link to be GGUF model links

* Remove GGML_CPU_REPACK=OFF

* Skip permuted ADD and MUL

* Removed static variables from utils.cpp

* Removed initializing non-existing variable

* Remove unused structs

* Fix test-backend-ops for OV GPU

* unify api calling

* Update utils.cpp

* When the dim is dynamic, throw an error; it needs to be static first

* Add interface compute_model_outputs(), which gets the model outputs by computing node use counts & status in the cgraph, avoiding the use of a flag

* No need to return

* Fix test-backend-ops for OV GPU LNL

* Fix test-thread-safety

* use the shape from the infer request's output tensor to avoid issues

* fix dynamic output shape issue

* fix issue for the unused node in tests

* Remove unused lock

* Add comment

* Update openvino docs

* update to OV release version 2026.0

* add ci ov-gpu self hosted runner

* fix editorconfig

* Fix perplexity

* Rewrite the model inputs finding mechanism (#54)

* Rewrite the model inputs finding logic

* Put stateful shape handling in get input shape

* Put the iteration logic in a func

* Added ggml-ci-intel-openvino-gpu and doc update

* .hpp files converted to .h

* fix ggml-ci-x64-intel-openvino-gpu

* Fix for stateful execution bug in llama-bench

* Minor updates after stateful llama-bench fix

* Update ggml/src/ggml-openvino/utils.cpp

Co-authored-by: Yamini Nimmagadda <redacted>
* Remove multiple get_shape calls

* Bring back mutex into compute

* Fix VIEW op, which slices the input node

* Added token_len_per_seq existence check before slicing masks and moved node retrieval inside guarded block to prevent missing-key access

* Temp. fix for test requant errors

* Update OV ggml-ci to low-perf

* ci : temporary disable "test-llama-archs"

* ci : cache v4 -> v5, checkout v4 -> v6, fix runner tag

* docs : update url

* Fix OV link in docker and Update docs

---------

Co-authored-by: Ravi Panchumarthy <redacted>
Co-authored-by: Cavus Mustafa <redacted>
Co-authored-by: Arshath <redacted>
Co-authored-by: XuejunZhai <redacted>
Co-authored-by: Yamini Nimmagadda <redacted>
Co-authored-by: Xuejun Zhai <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago vendor : update cpp-httplib to 0.37.2 (#20484)
Adrien Gallouët [Sat, 14 Mar 2026 05:51:02 +0000 (06:51 +0100)]
vendor : update cpp-httplib to 0.37.2 (#20484)

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago Fix data race in CUDA's "cpy" kernel (influences GGML's DUP, CONT operations). (#20507)
Rail Chabdarov [Sat, 14 Mar 2026 05:19:44 +0000 (06:19 +0100)]
Fix data race in CUDA's "cpy" kernel (influences GGML's DUP, CONT operations). (#20507)

* Fix data race in CUDA's "cpy" kernel.

* Remove extra barrier by using more of shared memory.

6 weeks ago opencl: fix l2_norm (#20480)
lhez [Sat, 14 Mar 2026 05:18:52 +0000 (22:18 -0700)]
opencl: fix l2_norm (#20480)

6 weeks ago tools : enable kvu in perplexity for hellaswag, winogrande, multiple-choice (#19954)
Adrien Gallouët [Fri, 13 Mar 2026 20:25:57 +0000 (21:25 +0100)]
tools : enable kvu in perplexity for hellaswag, winogrande, multiple-choice (#19954)

llama-perplexity -hf unsloth/Qwen3-0.6B-GGUF:Q4_K_M -f winogrande-debiased-eval.csv --winogrande

    winogrande_score : tokenizing selected tasks
    winogrande_score : calculating winogrande score over selected tasks.
    split_equal: sequential split is not supported when there are coupled sequences in the input batch (you may need to use the -kvu flag)
    decode: failed to find a memory slot for batch of size 46
    failed to decode the batch, n_batch = 2048, ret = 1
    winogrande_score: llama_decode() failed

same for hellaswag:

    split_equal: sequential split is not supported when there are coupled sequences in the input batch (you may need to use the -kvu flag)
    decode: failed to find a memory slot for batch of size 99
    failed to decode the batch, n_batch = 2048, ret = 1
    hellaswag_score: llama_decode() failed

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago graph : remove redundant GDN state transposes (#20443)
Georgi Gerganov [Fri, 13 Mar 2026 20:12:54 +0000 (22:12 +0200)]
graph : remove redundant GDN state transposes (#20443)

* ggml : transpose fused GDN state access for coalesced memory reads (#20436)

The fused Gated Delta Net kernel accessed the [S_v, S_v] state matrix
column-wise on row-major storage, causing strided reads (stride S_v =
128 floats = 512 bytes) that waste GPU cache bandwidth. This produced a
39% regression on Qwen3.5-9B (Metal, M4 Max) compared to the unfused
path.

Transpose the state indexing so threads read contiguously:
- Metal: s_ptr[is*S_v] -> s_ptr[is] (stride 1 vs S_v)
- CUDA:  curr_state[i*S_v+col] -> curr_state[col*S_v+i] (coalesced)
- CPU:   restructured loops for row-wise transposed access

Also add --fused-gdn [on|off|auto] CLI flag (mirrors --flash-attn) so
users can control fused GDN independently of auto-detection.

All GATED_DELTA_NET backend-ops tests pass.
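
A simplified sketch of the indexing change on the row-major [S_v, S_v] state (illustrative only, not the actual kernels):

```cpp
constexpr int S_v = 128;

// before: consecutive iterations (and threads) step by S_v floats
static float load_strided(const float *curr_state, int i, int col) {
    return curr_state[i * S_v + col];   // 512-byte stride, wastes cache bandwidth
}

// after: consecutive iterations read adjacent floats (coalesced)
static float load_coalesced(const float *curr_state, int i, int col) {
    return curr_state[col * S_v + i];   // stride 1
}
```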

Co-Authored-By: Claude Opus 4.6 <redacted>
* ggml : use SIMD dot products in CPU GDN kernel, couple AR/chunked fused flags

- Replace scalar inner loops with ggml_vec_dot_f32 for SIMD-optimized
  dot products in the CPU fused GDN kernel (delta and attention output)
- Couple fused_gdn_ar and fused_gdn_ch flags in auto-detection: if one
  path lacks device support, disable both to prevent state layout mismatch
  between transposed (fused) and non-transposed (unfused) formats

Co-Authored-By: Claude Opus 4.6 <redacted>
* llama : revert fgdn argument changes

* graph : remove GDN state transposes

* vulkan : adapt

* cuda : remove obsolete smem code

---------

Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
Co-authored-by: Oliver Simons <redacted>
6 weeks ago common/parser: gracefully handle undetected tool parser, print error message. (#20286)
Piotr Wilkin (ilintar) [Fri, 13 Mar 2026 19:56:10 +0000 (20:56 +0100)]
common/parser: gracefully handle undetected tool parser, print error message. (#20286)

6 weeks ago llama : fix pooling assertion crash in chunked GDN detection path (#20468)
ZeroV0LT [Fri, 13 Mar 2026 18:53:42 +0000 (19:53 +0100)]
llama : fix pooling assertion crash in chunked GDN detection path (#20468)

* llama : fix pooling assertion crash in chunked GDN detection path

The chunked fused Gated Delta Net detection in sched_reserve() calls
graph_reserve(16*n_seqs, n_seqs, n_outputs, ...) where n_outputs = n_seqs.
This creates a dimension mismatch in build_pooling() for embedding models
with mean/rank pooling: build_inp_mean() creates a tensor with shape
[n_tokens=16*n_seqs, ...] while t_embd is reduced to [n_outputs=n_seqs, ...]
via out_ids, causing ggml_mul_mat to assert on ggml_can_mul_mat(a, b).

Fix: pass n_tokens as n_outputs in the chunked GDN graph reservation,
matching the pattern used by the pp/tg worst-case reservations.

Regression introduced by #20340 (d28961d).
Same class of bug as #12517, fixed by #12545.
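
A worked instance of the mismatch, using illustrative numbers:

```cpp
int n_seqs    = 2;
int n_tokens  = 16 * n_seqs;  // 32: rows of the mean-pooling input tensor
int n_outputs = n_seqs;       //  2: rows of t_embd after the out_ids reduction
// ggml_can_mul_mat() then fails on the 32 vs 2 dimension mismatch;
// passing n_tokens (32) as n_outputs makes both sides agree
```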

* server : add mean pooling tests to embedding test suite

Add test_embedding_pooling_mean and test_embedding_pooling_mean_multiple
to cover the --pooling mean codepath, which was previously untested.

These tests would have caught the regression introduced by #20340 where
build_pooling() crashes with a ggml_mul_mat assertion due to mismatched
dimensions in the chunked GDN detection path.

---------

Co-authored-by: Domenico Crupi <redacted>
6 weeks ago server: reset counter related to kill-switch on client error (#20513)
SoftwareRenderer [Fri, 13 Mar 2026 17:58:09 +0000 (13:58 -0400)]
server: reset counter related to kill-switch on client error (#20513)

* server: reset kill-switch on client error

This avoids triggering a server kill switch.

If the client sends a request that exceeds the configured context size, an appropriate HTTP 400 response is provided and no tokens are generated.

However since no tokens are generated, update_slots() increments n_empty_consecutive. If the client sends 3 such messages in a row, the server terminates.
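
A simplified sketch of the intended counter logic (illustrative names, not the actual server code):

```cpp
static bool should_trigger_kill_switch(int &n_empty_consecutive,
                                       bool client_error, int n_tokens) {
    if (client_error || n_tokens > 0) {
        n_empty_consecutive = 0;        // an HTTP 400 is expected behavior: reset
        return false;
    }
    return ++n_empty_consecutive >= 3;  // three empty updates in a row: stuck
}
```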

* moved counter reset as per recommendation

* cont : minor

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago ggml-cpu: add RVV vec dot kernels for quantization types (#18859)
rehan-10xengineer [Fri, 13 Mar 2026 15:36:04 +0000 (20:36 +0500)]
ggml-cpu: add RVV vec dot kernels for quantization types (#18859)

* ggml-cpu: add rvv quantize_row_q8_K kernel

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv vec_dot for iq4_nl, mxfp4, iq2_xxs

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv vec_dot for iq4_xs, refactor

* ggml-cpu: remove ifunc for rvv vec dot

* ggml-cpu: add vec_dot for iq2_xs, iq3_xxs

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: refactor quants.c

---------

Co-authored-by: taimur-10x <redacted>
Co-authored-by: Rehan Qasim <redacted>
Co-authored-by: Rehan Qasim <redacted>
6 weeks ago ggml : fix typo gmml (#20512)
Adrien Gallouët [Fri, 13 Mar 2026 13:36:13 +0000 (14:36 +0100)]
ggml : fix typo gmml (#20512)

Signed-off-by: Adrien Gallouët <redacted>
6 weeks ago mtmd : rename mtmd_get_audio_bitrate to mtmd_get_audio_sample_rate (#20105)
Daniel Bevenius [Fri, 13 Mar 2026 11:30:02 +0000 (12:30 +0100)]
mtmd : rename mtmd_get_audio_bitrate to mtmd_get_audio_sample_rate (#20105)

This commit renames the function `mtmd_get_audio_bitrate` to
`mtmd_get_audio_sample_rate` to better reflect its purpose.

The motivation for this is that the function currently returns the audio
sample rate, not the bitrate (sample_rate × bit_depth × channels), and
that is how it is used in the code as well.

This is a breaking change, but I believe mtmd is still in
experimental/development phase so it might be alright to simply rename.
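
To illustrate the distinction with hypothetical values (not the mtmd API):

```cpp
int sample_rate = 16000;                              // Hz: what the function actually returns
int bit_depth   = 16;                                 // bits per sample
int channels    = 1;
int bitrate     = sample_rate * bit_depth * channels; // 256000 bit/s: what a "bitrate" would be
```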

6 weeks ago general: CONTRIBUTING.md - guidelines for quantization schemes (#19762)
Piotr Wilkin (ilintar) [Fri, 13 Mar 2026 11:21:33 +0000 (12:21 +0100)]
general: CONTRIBUTING.md - guidelines for quantization schemes (#19762)

* Guidelines for quantization schemes

* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
* Change required precision from Q8 to FP16/BF16

* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
* Update CONTRIBUTING.md

Co-authored-by: Johannes Gäßler <redacted>
* Update CONTRIBUTING.md [no ci]

* Update CONTRIBUTING.md [no ci]

---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago metal : fix l2 norm scale (#20493)
Georgi Gerganov [Fri, 13 Mar 2026 09:43:20 +0000 (11:43 +0200)]
metal : fix l2 norm scale (#20493)

6 weeks ago convert : fix/suppress pyright errors (#20442)
Daniel Bevenius [Fri, 13 Mar 2026 05:00:52 +0000 (06:00 +0100)]
convert : fix/suppress pyright errors (#20442)

* convert : fix/suppress pyright errors

This commit fixes the errors that are generated by pyright for
convert_hf_to_gguf.py.

The motivation for this is that running this locally generates errors
that CI does not, and it can be difficult to spot new errors. One use
case is when working on new models which cannot be run in CI due to
privacy. Having the ability to run pyright locally would be helpful
in these cases.

In the linked issue there is a mention of switching to `ty`, which I
don't know anything about, but in the meantime I would appreciate it if
we could suppress these errors for now, and later perhaps revert this
commit.

With this change there are no errors, but there are 4 informational
messages if the `mistral_common` package is installed. The
`--level error` flag can be used to suppress them.

Resolves: https://github.com/ggml-org/llama.cpp/issues/20417

6 weeks ago llama : disable graph reuse with pipeline parallelism (#20463)
Georgi Gerganov [Thu, 12 Mar 2026 19:04:13 +0000 (21:04 +0200)]
llama : disable graph reuse with pipeline parallelism (#20463)

6 weeks ago vendor : update cpp-httplib to 0.37.1 (#20390)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Thu, 12 Mar 2026 12:57:06 +0000 (09:57 -0300)]
vendor : update cpp-httplib to 0.37.1 (#20390)

6 weeks ago tests : use `reasoning` instead of `reasoning_budget` in server tests (#20432)
Piotr Wilkin (ilintar) [Thu, 12 Mar 2026 12:41:01 +0000 (13:41 +0100)]
tests : use `reasoning` instead of `reasoning_budget` in server tests (#20432)

6 weeks ago test-backend-ops: allow loading tests from file and parsing model operators into file (#19896)
Ruben Ortlam [Thu, 12 Mar 2026 12:26:00 +0000 (13:26 +0100)]
test-backend-ops: allow loading tests from file and parsing model operators into file (#19896)

* tests: allow loading test-backend-ops tests from json

* add error threshold based on op

* add error when file cannot be read

* add graph operator json extraction tool

* add nb parameter for non-contiguous input tensors

* fix view check

* only use view if non-contiguous/permuted, use C++ random instead of rand()

* replace internal API calls with public llama_graph_reserve call

* reduce test description length

* fix nb[0] not getting set for view

* add name to tests

* fix inplace error

* use text file instead of json

* move llama_graph_reserve function to new llama-ext header, move export-graph-ops to tests/

* fix missing declaration

* use pragma once

* fix indent

* fix Windows build

6 weeks ago common : update completion executables list [no ci] (#19934)
Daniel Bevenius [Thu, 12 Mar 2026 11:12:01 +0000 (12:12 +0100)]
common : update completion executables list [no ci] (#19934)

This commit updates the bash completion executables list, adding missing
executables and removing some that no longer exist.

6 weeks ago grammar: Fix grammar root symbol check (#19761)
Asbjørn Olling [Thu, 12 Mar 2026 11:04:56 +0000 (12:04 +0100)]
grammar: Fix grammar root symbol check (#19761)

* grammar: fix bad check for root symbol, correct error logging

* add tests to demonstrate root symbol check failure

6 weeks ago vulkan: add GATED_DELTA_NET op support (#20334)
ProgenyAlpha [Thu, 12 Mar 2026 10:32:04 +0000 (06:32 -0400)]
vulkan: add GATED_DELTA_NET op support (#20334)

* vulkan: add GATED_DELTA_NET op support

Implements the fused gated delta net recurrence as a Vulkan compute
shader with full support for scalar gate, KDA vector gate, GQA
broadcast, multi-token sequences, and permuted (non-contiguous) q/k
inputs. Specialization constants select head size (32/64/128) and
KDA mode at pipeline creation time.

Passes all 13 test-backend-ops cases on AMD Radeon 890M (RADV GFX1150).
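
For context, a minimal sketch of how specialization constants bake such parameters into a pipeline (generic Vulkan API usage with assumed constant IDs, not the actual ggml-vulkan code):

```cpp
#include <vulkan/vulkan.h>

const uint32_t spec_data[2] = { 128, 1 };  // assumed: head size, KDA mode
const VkSpecializationMapEntry entries[2] = {
    { /*constantID*/ 0, /*offset*/ 0,                /*size*/ sizeof(uint32_t) },
    { /*constantID*/ 1, /*offset*/ sizeof(uint32_t), /*size*/ sizeof(uint32_t) },
};
const VkSpecializationInfo spec_info = {
    /*mapEntryCount*/ 2,
    /*pMapEntries*/   entries,
    /*dataSize*/      sizeof(spec_data),
    /*pData*/         spec_data,
};
// passed via VkPipelineShaderStageCreateInfo::pSpecializationInfo at
// pipeline creation time, so the shader compiler sees the values as constants
```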

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: optimize GATED_DELTA_NET shader (Phase 1)

- vec4 dot products on all inner loops (dp4 hardware intrinsic)
- Cache exp(g) in shared memory for KDA path, eliminating ~32K
  redundant global reads and ~16K redundant exp() calls per token
- vec4 fused decay + rank-1 update (3 vec4 ops vs 12 scalar ops)
- Add perf benchmark cases for GATED_DELTA_NET to test-backend-ops

KDA TG: +5.4% throughput. Non-KDA: no regressions.
13/13 test-backend-ops passing on AMD Radeon 890M (RADV GFX1150).

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: address review feedback for GATED_DELTA_NET

Pipeline array refactor [3][2], A_TYPE/D_TYPE/FLOAT_TYPE shader macros,
scale in push constants, supports_op fix, dispatch restructuring.

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: use FLOAT_TYPE for buffer/shared declarations, align formatting

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: add explicit FLOAT_TYPE casts for buffer loads

Wrap data_q, data_k, and data_g buffer reads with FLOAT_TYPE() casts
to ensure correct behavior across all Vulkan configurations.

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: fix Q/K broadcast for interleaved head layout

Adapt to the interleaved broadcast convention from #20340:
head_id / rq1 → head_id % neq1

Co-Authored-By: Claude Opus 4.6 <redacted>
---------

Co-authored-by: Progeny Alpha <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
6 weeks ago convert : better mtp check and fix return [no ci] (#20419)
Sigbjørn Skjæret [Thu, 12 Mar 2026 09:04:20 +0000 (10:04 +0100)]
convert : better mtp check and fix return [no ci] (#20419)

6 weeks ago vulkan: fix SSM_CONV PP scaling with large ubatch sizes (#20379)
ProgenyAlpha [Thu, 12 Mar 2026 09:03:18 +0000 (05:03 -0400)]
vulkan: fix SSM_CONV PP scaling with large ubatch sizes (#20379)

* vulkan: optimize SSM_CONV workgroup dispatch for large ubatch

Tile tokens into 2D workgroups (32x16) to reduce workgroup launch
overhead at large ubatch sizes. Add vec4 fast path for nc=4 (common
d_conv size). Fixes PP performance degradation with ubatch > 512.

Ref: ggml-org/llama.cpp#18725

Co-Authored-By: Claude Opus 4.6 <redacted>
* vulkan: remove unused shared memory declaration in SSM_CONV

Co-Authored-By: Claude Opus 4.6 <redacted>
---------

Co-authored-by: Progeny Alpha <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
6 weeks ago New conversations now auto-select the first loaded model (#20403)
Pascal [Thu, 12 Mar 2026 08:07:05 +0000 (09:07 +0100)]
New conversations now auto-select the first loaded model (#20403)

* webui: auto-select first loaded model for new conversations in router mode

* chore: update webui build output

6 weeks ago ggml-virtgpu: Fix some build commands (#20341)
Masashi Yoshimura [Thu, 12 Mar 2026 07:47:45 +0000 (16:47 +0900)]
ggml-virtgpu: Fix some build commands (#20341)

6 weeks ago metal : avoid divisions in bin kernel (#20426)
Georgi Gerganov [Thu, 12 Mar 2026 07:42:40 +0000 (09:42 +0200)]
metal : avoid divisions in bin kernel (#20426)

* metal : avoid modulus in bin kernel when not broadcasting

* metal : fix capture_started flag

6 weeks ago ci: Setup self-hosted CI for Intel Linux Vulkan backend (#20154)
Masato Nakasaka [Thu, 12 Mar 2026 05:43:22 +0000 (22:43 -0700)]
ci: Setup self-hosted CI for Intel Linux Vulkan backend (#20154)

6 weeks ago vulkan: fix l2_norm epsilon handling (#20350)
Jeff Bolz [Thu, 12 Mar 2026 05:39:41 +0000 (00:39 -0500)]
vulkan: fix l2_norm epsilon handling (#20350)

6 weeks ago vulkan: fix OOB check in flash_attn_mask_opt (#20296)
Jeff Bolz [Thu, 12 Mar 2026 05:35:49 +0000 (00:35 -0500)]
vulkan: fix OOB check in flash_attn_mask_opt (#20296)

6 weeks ago vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap (#20059)
Masato Nakasaka [Thu, 12 Mar 2026 05:30:16 +0000 (22:30 -0700)]
vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap (#20059)

* Changed to reuse command buffers to fix crashing on Intel GPU

* Removed unused parameter

* Fixed compile error and minor mistake

* Fix logging

* Changing to use usage flag per command buffer

* fixed style

* added buffer reset

* Removed cmd_buffer_idx for reuse consistency

* Fixed style

6 weeks ago opencl: use larger workgroup size for get_rows (#20316)
lhez [Thu, 12 Mar 2026 05:03:27 +0000 (22:03 -0700)]
opencl: use larger workgroup size for get_rows (#20316)

6 weeks ago opencl: add cumsum op (#18981)
shaofeiqi [Thu, 12 Mar 2026 05:03:07 +0000 (22:03 -0700)]
opencl: add cumsum op (#18981)

* OpenCL: add CUMSUM op support

* remove unused argument

* opencl: refactor cumsum

* opencl: refactor

* opencl: refactor tmp buffer

* opencl: adjust max number of subgroups

* opencl: fix whitespace

* opencl: fix global size when cumsum the tmp buffer

---------

Co-authored-by: Li He <redacted>
6 weeks ago hip: compile debug builds with -O2 on hip to avoid a compiler bug (#20392)
uvos [Thu, 12 Mar 2026 02:37:10 +0000 (03:37 +0100)]
hip: compile debug builds with -O2 on hip to avoid a compiler bug (#20392)

6 weeks ago common/parser: add GigaChatV3/3.1 models support (#19931)
Mishusha [Thu, 12 Mar 2026 00:22:25 +0000 (03:22 +0300)]
common/parser: add GigaChatV3/3.1 models support (#19931)

Co-authored-by: Mishusha <redacted>
6 weeks ago model : add support for Phi4ForCausalLMV (#20168)
DAN™ [Wed, 11 Mar 2026 23:25:54 +0000 (19:25 -0400)]
model : add support for Phi4ForCausalLMV (#20168)

* Add support for Phi4ForCausalLMV.

* Fix Phi-4 vision parity (correcting SigLIP2 patch-kernel export layout) and matching HF NaFlex resize behavior in mtmd.

* Rename constants + fix tokenizer label

* Clean-ups.

* Fix GGUF export.

* Set tokenizer.ggml.pre explicitly.

* Default vocab name rather than forcing it.

* Clean-ups.

* Fix indent.

* Fix subscriptable error.

* remove overcomplicated code path

* Clean-ups.

---------

Co-authored-by: Xuan Son Nguyen <redacted>
6 weeks ago graph : add optional scale parameter to build_lora_mm [no ci] (#20427)
Richard Davison [Wed, 11 Mar 2026 23:22:49 +0000 (00:22 +0100)]
graph : add optional scale parameter to build_lora_mm [no ci] (#20427)

6 weeks ago common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (#20416)
ddh0 [Wed, 11 Mar 2026 23:13:28 +0000 (18:13 -0500)]
common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (#20416)

6 weeks ago ggml-webgpu: Add support for `GGML_OP_REPEAT` (#20230)
Masashi Yoshimura [Wed, 11 Mar 2026 21:40:36 +0000 (06:40 +0900)]
ggml-webgpu: Add support for `GGML_OP_REPEAT` (#20230)

* Add GGML_OP_REPEAT to webgpu backend.

* Add i16 support for GGML_OP_REPEAT.

6 weeks ago llama : enable chunked fused GDN path (#20340)
Georgi Gerganov [Wed, 11 Mar 2026 20:46:40 +0000 (22:46 +0200)]
llama : enable chunked fused GDN path (#20340)

* llama : enable chunked fused GDN path

* models : avoid Q and K repeats when using fused GDA

* cont : fix comment

Co-authored-by: Aman Gupta <redacted>
* cont : fix the fix

Co-authored-by: Aman Gupta <redacted>
* cont : fix

* metal : add GDN kernel (#20361)

* metal : add Metal backend for GGML_OP_GATED_DELTA_NET

Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.

Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU.

Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
  tg128: 170 -> 213 t/s (+25%)

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : validate contiguity of all input tensors in supports_op

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : add algorithm equivalence comment for GDA decay path

Co-Authored-By: Claude Opus 4.6 <redacted>
* cont : unslop + optimize

* cont : clean-up

---------

Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
* CUDA: AR gated delta net improvements (#20391)

* Add FastDiv to gated_delta_net_cuda

* Shard columns across warps

This reduces register pressure (avoids spill for S_v = 128) and gives
the warp-scheduler more CTAs to schedule (thus hiding data-access
latencies).

* Remove unneeded include in gated_delta_net.cu

* Improve comments

* Apply code-formatting

* Make sharding HIP-compatible

1. Use ggml_cuda_get_physical_warp_size() to determine warp size flexibly
2. Add test with partial warp to test sum reduction on CUDA

* Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t

* Rename variables

* Enable GDN also for prefill, move TODO for chunked_GDN

* Actually remove the TODO from 206890897546bd16602c3b79394fd5ea09ef199f

* Get warp size at runtime

warp_size is not known at compile time in hip host code.

* Don't expose ggml_cuda_get_physical_warp_size on host

---------

Co-authored-by: uvos <redacted>
* llama : refactor llm_build_delta_net_base API

---------

Co-authored-by: Aman Gupta <redacted>
Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
Co-authored-by: Oliver Simons <redacted>
Co-authored-by: uvos <redacted>
6 weeks ago llama : whitespace cleanup (#20422)
Sigbjørn Skjæret [Wed, 11 Mar 2026 20:18:29 +0000 (21:18 +0100)]
llama : whitespace cleanup (#20422)

6 weeks ago ggml : add NVFP4 quantization type support (#19769)
Richard Davison [Wed, 11 Mar 2026 20:02:54 +0000 (21:02 +0100)]
ggml : add NVFP4 quantization type support (#19769)

* WIP: add NVFP4 quantization support

* tests

* improve NVFP4 dot product implementation performance and fix bad super call

* typo

* Use nvfp4 kvalues

* vulkan : fix NVFP4 shader compilation by including kvalues_mxfp4 lookup table

* vulcal and perf fixes

* wip

* Fix metal

* fix vulkan

* Rename threshold & fix wrong scale

* Fix MOE

* Shelf backend implementations (CUDA, Metal, Vulkan, arch-specific SIMD)

Remove NVFP4 support from GPU backends and architecture-specific
optimized dot products. These should be added in separate PRs so
backend specialists can review them independently.

Reverted files:
- ggml-cuda: common.cuh, convert.cu, mmq.cu/cuh, mmvq.cu, vecdotq.cuh,
  quantize.cu/cuh, mma.cuh, ggml-cuda.cu, fattn-tile.cuh
- ggml-metal: ggml-metal.metal, ggml-metal-device.cpp, ggml-metal-impl.h,
  ggml-metal-ops.cpp
- ggml-vulkan: ggml-vulkan.cpp, all vulkan-shaders/*
- ggml-cpu arch: arm/quants.c, x86/quants.c, powerpc/quants.c, s390/quants.c

Core NVFP4 support (type definition, CPU fallback dot product,
quantization, dequantization, conversion) is retained.

* Fix arch-fallback.h: add NVFP4 generic fallback for all platforms

After shelving backend-specific SIMD implementations, the generic
CPU dot product needs to be aliased on ARM, x86, PowerPC, and s390
platforms that previously relied on arch-specific versions.

* quantize: add NVFP4 as a quantization type option

* Fix ggml_fp32_to_ue4m3: handle subnormal values

Previously, values with ue4m3_exp <= 0 were clamped to 0, causing
all small scales to underflow. This made NVFP4 quantization via
llama-quantize produce garbage (PPL = 5.8M) since typical transformer
weights have amax/6.0 in the range 0.001-0.01, which falls in the
UE4M3 subnormal range.

Now subnormals are properly encoded as man * 2^-9 (exp=0, man=1..7),
matching the decode path in ggml_ue4m3_to_fp32.

Result: NVFP4 requantization now produces PPL = 15.25 (vs F16 = 14.33),
comparable to Q4_1 (PPL = 15.81) at slightly lower BPW (4.70 vs 5.15).
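
A sketch of the matching decode, assuming UE4M3 means an unsigned 4-bit exponent / 3-bit mantissa format with bias 7 (consistent with the man * 2^-9 subnormal rule above; not the exact ggml code):

```cpp
#include <math.h>
#include <stdint.h>

static float ue4m3_to_fp32_sketch(uint8_t v) {
    const int exp = (v >> 3) & 0x0F;
    const int man = v & 0x07;
    if (exp == 0) {
        return ldexpf((float) man, -9);         // subnormal: man * 2^-9
    }
    return ldexpf(1.0f + man / 8.0f, exp - 7);  // normal: (1 + man/8) * 2^(exp - 7)
}
```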

* Restore ARM NEON NVFP4 dot product implementation

Restores the optimized ggml_vec_dot_nvfp4_q8_0 for ARM NEON using
vqtbl1q_s8 lookup and ggml_vdotq_s32 dot products.

tg128 performance: 4.37 t/s (generic) -> 13.66 t/s (NEON) = 3.1x speedup

* Optimize ARM NEON NVFP4 dot product: LUT + vpaddq + vfmaq

- Add ue4m3_scale_lut[128] to ggml-common.h replacing branch-heavy
  ggml_ue4m3_to_fp32() in the hot loop
- Use vpaddq_s32 for pairwise int32 reduction instead of vaddvq_s32
- Accumulate with vfmaq_f32 into float32x4_t vector accumulators

tg128: 8.1 -> 31.0 t/s (3.8x speedup, 77% of Q4_1 speed)

* ARM NEON NVFP4: rearrange q8 to match nibble layout

Alternative approach: rearrange q8 data to match the NVFP4 lo/hi
nibble layout instead of rearranging the looked-up NVFP4 values.
Eliminates vcombine_s8(vget_low, vget_low) shuffles.

Performance is equivalent (~18.5 t/s) - the bottleneck is the 2x
block overhead from QK=16 vs QK=32, not the shuffle instructions.

* CPU only backend 64 super-block layout

* cleanup

* Remove unused LUT

* int

* exclude NVFP4 from unsupported ops in metal build

* remove quantization for now

* store scales as native UE4M3, preserve original model bits when possible

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* correct comment

* format

* reduce duplication and cleanup

* Address comments

* move detection to prepare_tensors

* Use math instead of const

* Move

* fix comment

* Shelf quantize tests

* Rebase and move check

* cleanup

* lint

* Update gguf-py/gguf/scripts/gguf_convert_endian.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Use fallback quant config

* Simplify

Co-authored-by: Sigbjørn Skjæret <redacted>
* organize

* Refactor

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* fix return type

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago benches : add nemotron super (#20420)
Georgi Gerganov [Wed, 11 Mar 2026 19:39:40 +0000 (21:39 +0200)]
benches : add nemotron super (#20420)

6 weeks ago llama : add support for Nemotron 3 Super (#20411)
Daniel Bevenius [Wed, 11 Mar 2026 18:27:53 +0000 (19:27 +0100)]
llama : add support for Nemotron 3 Super (#20411)

* llama : add support for Nemotron 3 Super

This commit adds support for the Nemotron 3 Super model (120B.A12B)
enabling this model to be converted to GGUF format and run in llama.cpp.

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Matt Clayton <redacted>
6 weeks ago metal : fix capture_compute counter logic (#20410)
Georgi Gerganov [Wed, 11 Mar 2026 16:38:22 +0000 (18:38 +0200)]
metal : fix capture_compute counter logic (#20410)

6 weeks ago compare-llama-bench: check remotes as well (#20406)
Aman Gupta [Wed, 11 Mar 2026 16:14:42 +0000 (00:14 +0800)]
compare-llama-bench: check remotes as well (#20406)

6 weeks ago metal : fix q5_k mul_mv register spill (#20399)
Georgi Gerganov [Wed, 11 Mar 2026 14:25:27 +0000 (16:25 +0200)]
metal : fix q5_k mul_mv register spill (#20399)

6 weeks ago metal : add env var to trigger graph capture (#20398)
Georgi Gerganov [Wed, 11 Mar 2026 14:25:10 +0000 (16:25 +0200)]
metal : add env var to trigger graph capture (#20398)

6 weeks ago [SYCL] Update SYCL.md for binary package for Windows (#20401)
Neo Zhang [Wed, 11 Mar 2026 14:21:22 +0000 (22:21 +0800)]
[SYCL] Update SYCL.md for binary package for Windows (#20401)

* add download binary package

* update prefix

6 weeks ago ci: disable coopmat on ubuntu-24-cmake-vulkan job (#20294)
Ruben Ortlam [Wed, 11 Mar 2026 13:12:29 +0000 (14:12 +0100)]
ci: disable coopmat on ubuntu-24-cmake-vulkan job (#20294)

6 weeks ago common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)
Aldehir Rojas [Wed, 11 Mar 2026 09:26:51 +0000 (04:26 -0500)]
common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)

6 weeks ago common/parser: handle reasoning budget (#20297)
Piotr Wilkin (ilintar) [Wed, 11 Mar 2026 09:26:12 +0000 (10:26 +0100)]
common/parser: handle reasoning budget (#20297)

* v1

* Finished!

* Handle CLI

* Reasoning sampler

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Less explosive terminology :)

* Add utf-8 case and tests

* common : migrate reasoning budget sampler to common

* cont : clean up

* cont : expose state and allow passing as initial state

* cont : remove unused imports

* cont : update state machine doc string

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Alde Rojas <redacted>
6 weeks ago ggml-cuda: gdn use shared mem for HIP (#20366)
uvos [Wed, 11 Mar 2026 05:06:19 +0000 (06:06 +0100)]
ggml-cuda: gdn use shared mem for HIP (#20366)

Suggested-by: Aman Gupta <redacted>
6 weeks ago cuda/hip: fix loop unrolling in ssm-conv (#20369)
uvos [Wed, 11 Mar 2026 05:04:32 +0000 (06:04 +0100)]
cuda/hip: fix loop unrolling in ssm-conv (#20369)

6 weeks ago Fix agentic mcp image single model (#20339)
Pascal [Wed, 11 Mar 2026 04:31:33 +0000 (05:31 +0100)]
Fix agentic mcp image single model (#20339)

* webui: fix MCP image attachments dropped during the agentic loop in single-model mode

* chore: update webui build output

6 weeks ago vendor : update cpp-httplib to 0.37.0 (#20207)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:03:53 +0000 (00:03 -0300)]
vendor : update cpp-httplib to 0.37.0 (#20207)

6 weeks ago vendor : update miniaudio to 0.11.25 (#20209)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:01:56 +0000 (00:01 -0300)]
vendor : update miniaudio to 0.11.25 (#20209)

6 weeks ago fix op rope, add rope_back (#20293)
Neo Zhang [Wed, 11 Mar 2026 01:53:34 +0000 (09:53 +0800)]
fix op rope, add rope_back (#20293)

6 weeks ago fix for failed UT case: ACC, L2_NORM, UPSCALE, fused_glu, unary (#20283)
Neo Zhang [Wed, 11 Mar 2026 01:53:05 +0000 (09:53 +0800)]
fix for failed UT case: ACC, L2_NORM, UPSCALE, fused_glu, unary (#20283)

6 weeks ago model : qwen3vl reranker text support (#20332)
Vinicios Lugli [Tue, 10 Mar 2026 22:40:14 +0000 (19:40 -0300)]
model : qwen3vl reranker text support (#20332)

* model : fix qwen3vl reranker support

* Remove CLS_OUT

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago llama-quant : correct `n_attention_wv` usage (#20357)
ddh0 [Tue, 10 Mar 2026 19:43:29 +0000 (14:43 -0500)]
llama-quant : correct `n_attention_wv` usage (#20357)

* llama-quant : correct `n_attention_wv` usage

In #19770, I introduced a regression in the way the
`quantize_state_impl` counter values were initialized. I was
incrementing and using `n_attention_wv` in the same loop, when it should
have been fixed by the time we're deciding tensor types in
`llama_tensor_get_type_impl` (for `use_more_bits`).

I never observed a difference in any of [my
tests](https://github.com/ggml-org/llama.cpp/pull/19770#issuecomment-4000424712)
- it was only after @bartowski kindly pointed this out that I realized
it was incorrect. (Thanks!)
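
Schematically, the bug pattern looks like this (illustrative, not the actual quantize code):

```cpp
#include <vector>

static void select_types(const std::vector<bool> &is_attn_wv) {
    int n_attention_wv = 0;
    for (bool wv : is_attn_wv) {
        if (wv) n_attention_wv++;
        // BUG: a use_more_bits-style decision made here sees a partial count
    }
    // FIX (what the patch does): finish counting in a preliminary pass,
    // then run the type-selection loop with the final n_attention_wv
    (void) n_attention_wv;
}
```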

* simplify

6 weeks ago ggml : bump RPC version (#20330)
Georgi Gerganov [Tue, 10 Mar 2026 19:36:57 +0000 (21:36 +0200)]
ggml : bump RPC version (#20330)

6 weeks ago ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (#20173)
Reese Levine [Tue, 10 Mar 2026 16:14:27 +0000 (09:14 -0700)]
ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (#20173)

* K quant speedup (#20)

* Basic JIT compilation for mul_mat, get_rows, and scale (#17)

* scale jit working

* preliminary working jit for getrows and mulmat, needs refining

* simplified mul_mat preprocessing switch statement

* get_rows fixes, mul_mat refinement

* formatted + last edits

* removed some extraneous prints

* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish

* small fix

* some changes, working

* get_rows and mul_mat jit fixed and working

* Update formatting

* formatting

* Add header

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Start work on all-encompassing shader library

* refactor argmax, set_rows

* Refactor all but flashattention, mat mul

* no gibberish, all k quants added, merged

* vec memory fix

* q6_k matching metal on my machine, tests passing

* Set tile size for q6_k separately

* Separate out fast shaders

---------

Co-authored-by: neha-ha <redacted>
* Move towards writeBuffer for params

* Move away from multiple buffers for set_rows errors, remove host buffer for parameter buffers, minor cleanups

* Remove extra file

* Formatting

---------

Co-authored-by: neha-ha <redacted>
6 weeks ago Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)
Piotr Wilkin (ilintar) [Tue, 10 Mar 2026 14:21:51 +0000 (15:21 +0100)]
Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)

6 weeks ago examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)
Ray Xu [Tue, 10 Mar 2026 13:38:18 +0000 (21:38 +0800)]
examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `prefixItems` is not in the schema, then since `{}` is falsy, the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago docs: update CPU backend ops to mark POOL_1D as supported (#20304)
a3894281 [Tue, 10 Mar 2026 13:31:24 +0000 (15:31 +0200)]
docs: update CPU backend ops to mark POOL_1D as supported (#20304)

6 weeks ago models : fix assert in mamba2 (cont) (#20335)
Georgi Gerganov [Tue, 10 Mar 2026 13:00:08 +0000 (15:00 +0200)]
models : fix assert in mamba2 (cont) (#20335)

* models : fix assert in mamba2 (cont)

* cont : add n_group mod

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago server : make 2 checkpoints near the end of the prompt (#20288)
Georgi Gerganov [Tue, 10 Mar 2026 12:28:23 +0000 (14:28 +0200)]
server : make 2 checkpoints near the end of the prompt (#20288)

* server : make 2 checkpoints near the end of the prompt

* cont : adjust checkpoints

6 weeks ago common : fix incorrect uses of stoul (#20313)
Sigbjørn Skjæret [Tue, 10 Mar 2026 10:40:26 +0000 (11:40 +0100)]
common : fix incorrect uses of stoul (#20313)

6 weeks ago kleidiai : support for concurrent sme and neon kernel execution (#20070)
Charles Xu [Tue, 10 Mar 2026 07:25:25 +0000 (08:25 +0100)]
kleidiai : support for concurrent sme and neon kernel execution (#20070)

6 weeks ago ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)
Taimur Ahmad [Tue, 10 Mar 2026 06:49:52 +0000 (11:49 +0500)]
ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)

* ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv repacking for iq4_nl

* ggml-cpu: add generic impl for iq4_nl gemm/gemv

* ggml-cpu: add rvv repacking for q8_0

* ggml-cpu: refactor; add rvv repacking for q4_0, q4_K

* ggml-cpu: refactor; add rvv repacking for q2_K

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: refactor rvv repack

---------

Co-authored-by: Rehan Qasim <redacted>
6 weeks ago metal: handle command buffer failures gracefully in synchronize (#20306)
Julian Pscheid [Tue, 10 Mar 2026 06:32:24 +0000 (23:32 -0700)]
metal: handle command buffer failures gracefully in synchronize (#20306)

Replace GGML_ABORT("fatal error") in ggml_metal_synchronize() with
error flag + return. This aligns synchronize error handling with
graph_compute, which already returns GGML_STATUS_FAILED for the same
condition.

When a command buffer fails (e.g., iOS GPU access revocation during
backgrounding, macOS eGPU disconnect, OOM), the backend enters an
error state instead of killing the host process. Subsequent
graph_compute calls return GGML_STATUS_FAILED immediately. Recovery
requires recreating the backend.

Failed extra command buffers are properly released on the error path
to avoid Metal object leaks.
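
A simplified sketch of the pattern (not the actual ggml-metal code):

```cpp
enum status { STATUS_SUCCESS, STATUS_FAILED };

struct metal_ctx {
    bool has_error = false;   // latched on the first command-buffer failure
};

static status synchronize(metal_ctx &ctx, bool cmd_buf_failed) {
    if (cmd_buf_failed) {
        ctx.has_error = true; // flag + return instead of GGML_ABORT("fatal error")
        return STATUS_FAILED;
    }
    return STATUS_SUCCESS;
}

static status graph_compute(metal_ctx &ctx) {
    if (ctx.has_error) {
        return STATUS_FAILED; // fail fast; the backend must be recreated
    }
    // ... encode and commit command buffers ...
    return STATUS_SUCCESS;
}
```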

6 weeks ago llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770)
ddh0 [Tue, 10 Mar 2026 06:16:05 +0000 (01:16 -0500)]
llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770)

* quantize : imatrix-fail early + code cleanup

* fix manual override printing

it's in the preliminary loop now, so needs to be on its own line

* revert header changes per ggerganov

* remove old #includes

* clarify naming

rename `tensor_quantization` to `tensor_typo_option` to describe its
functionality

* fix per barto

6 weeks ago common: consolidate PEG string parsers (#20263)
Aldehir Rojas [Mon, 9 Mar 2026 23:29:21 +0000 (18:29 -0500)]
common: consolidate PEG string parsers (#20263)

* common : consolidate PEG string parsers
* cont : fix json_string_content()

6 weeks ago model: fix step3.5 n_rot (#20318)
Xuan-Son Nguyen [Mon, 9 Mar 2026 22:42:24 +0000 (23:42 +0100)]
model: fix step3.5 n_rot (#20318)

6 weeks ago llama: dynamic head_dim and n_rot for SWA (#20301)
Xuan-Son Nguyen [Mon, 9 Mar 2026 21:22:39 +0000 (22:22 +0100)]
llama: dynamic head_dim and n_rot for SWA (#20301)

* llama: dynamic head_dim and n_rot for SWA

* also add gguf_writer wrappers

* fix build

* build_rope_shift arg reorder

6 weeks ago server: Parse port numbers from MCP server URLs in CORS proxy (#20208)
Evan Huus [Mon, 9 Mar 2026 16:47:54 +0000 (12:47 -0400)]
server: Parse port numbers from MCP server URLs in CORS proxy (#20208)

* Parse port numbers from MCP server URLs

* Pass scheme to http proxy for determining whether to use SSL

* Fix download on non-standard port and re-add port to logging

* add test

---------

Co-authored-by: Xuan Son Nguyen <redacted>
6 weeks ago metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (#20250)
Paul Flynn [Mon, 9 Mar 2026 14:48:12 +0000 (10:48 -0400)]
metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (#20250)

Enable mul_mv_ext small-batch kernels (BS 2-8) for BF16, Q2_K,
and Q3_K quantization types. These types previously fell through
to the slower single-row mul_mv path.

BF16 uses the float4 dequantize path (like F16). Q2_K and Q3_K
use the float4x4 K-quant path (like Q4_K/Q5_K/Q6_K).

Co-authored-by: Claude Opus 4.6 <redacted>
6 weeks ago server : fix checkpoints n_tokens calculation (#20287)
Georgi Gerganov [Mon, 9 Mar 2026 14:47:06 +0000 (16:47 +0200)]
server : fix checkpoints n_tokens calculation (#20287)

6 weeks ago metal : add upscale (#20284)
Georgi Gerganov [Mon, 9 Mar 2026 14:45:11 +0000 (16:45 +0200)]
metal : add upscale (#20284)

6 weeks ago server : warn swa-full is not supported for non-SWA models (#20291)
Georgi Gerganov [Mon, 9 Mar 2026 14:44:25 +0000 (16:44 +0200)]
server : warn swa-full is not supported for non-SWA models (#20291)

6 weeks ago server : fix off-by-1 in server_tokens::size_up_to_pos() (#20279)
Georgi Gerganov [Mon, 9 Mar 2026 14:43:38 +0000 (16:43 +0200)]
server : fix off-by-1 in server_tokens::size_up_to_pos() (#20279)

* server : fix off-by-1 in server_tokens::size_up_to_pos()

* cont : fix typo [no ci]

6 weeks ago common: map developer role to system (#20215)
Piotr Wilkin (ilintar) [Mon, 9 Mar 2026 13:25:11 +0000 (14:25 +0100)]
common: map developer role to system (#20215)

* Map developer role to system
* Simplify

6 weeks ago models : fix assert in mamba2 graph (#20270)
Georgi Gerganov [Mon, 9 Mar 2026 11:15:15 +0000 (13:15 +0200)]
models : fix assert in mamba2 graph (#20270)

6 weeks ago server : add kill switch when server is stuck (#20277)
Georgi Gerganov [Mon, 9 Mar 2026 08:33:12 +0000 (10:33 +0200)]
server : add kill switch when server is stuck (#20277)

6 weeks ago ggml-cuda: disable gdn for musa (#20278)
Aman Gupta [Mon, 9 Mar 2026 08:15:36 +0000 (16:15 +0800)]
ggml-cuda: disable gdn for musa (#20278)

6 weeks ago llama-quant : left-align tensor names in output (#20117)
ddh0 [Mon, 9 Mar 2026 07:28:41 +0000 (02:28 -0500)]
llama-quant : left-align tensor names in output (#20117)

6 weeks ago contributing: limit open PRs for new contributors to 1 (#20036)
Aman Gupta [Mon, 9 Mar 2026 07:05:34 +0000 (15:05 +0800)]
contributing: limit open PRs for new contributors to 1 (#20036)

6 weeks ago ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (#20219)
Bertay Eren [Mon, 9 Mar 2026 06:24:16 +0000 (09:24 +0300)]
ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (#20219)

6 weeks ago vulkan: skip zero size tensors in backend copies (#20233)
Ruben Ortlam [Mon, 9 Mar 2026 06:23:45 +0000 (07:23 +0100)]
vulkan: skip zero size tensors in backend copies (#20233)

6 weeks ago cuda : display total and free VRAM capacity during device initialization (#20185)
Michael Huang [Mon, 9 Mar 2026 04:45:43 +0000 (21:45 -0700)]
cuda : display total and free VRAM capacity during device initialization (#20185)