git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
6 weeks ago hip: compile debug builds with -O2 on hip to avoid a compiler bug (#20392)
uvos [Thu, 12 Mar 2026 02:37:10 +0000 (03:37 +0100)]
hip: compile debug builds with -O2 on hip to avoid a compiler bug (#20392)

6 weeks ago common/parser: add GigaChatV3/3.1 models support (#19931)
Mishusha [Thu, 12 Mar 2026 00:22:25 +0000 (03:22 +0300)]
common/parser: add GigaChatV3/3.1 models support (#19931)

Co-authored-by: Mishusha <redacted>
6 weeks ago model : add support for Phi4ForCausalLMV (#20168)
DAN™ [Wed, 11 Mar 2026 23:25:54 +0000 (19:25 -0400)]
model : add support for Phi4ForCausalLMV (#20168)

* Add support for Phi4ForCausalLMV.

* Fix Phi-4 vision parity (correcting SigLIP2 patch-kernel export layout) and matching HF NaFlex resize behavior in mtmd.

* Rename constants + fix tokenizer label

* Clean-ups.

* Fix GGUF export.

* Set tokenizer.ggml.pre explicitly.

* Default vocab name rather than forcing it.

* Clean-ups.

* Fix indent.

* Fix subscriptable error.

* remove overcomplicated code path

* Clean-ups.

---------

Co-authored-by: Xuan Son Nguyen <redacted>
6 weeks ago graph : add optional scale parameter to build_lora_mm [no ci] (#20427)
Richard Davison [Wed, 11 Mar 2026 23:22:49 +0000 (00:22 +0100)]
graph : add optional scale parameter to build_lora_mm [no ci] (#20427)

6 weeks ago common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (#20416)
ddh0 [Wed, 11 Mar 2026 23:13:28 +0000 (18:13 -0500)]
common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (#20416)

6 weeks ago ggml-webgpu: Add support for `GGML_OP_REPEAT` (#20230)
Masashi Yoshimura [Wed, 11 Mar 2026 21:40:36 +0000 (06:40 +0900)]
ggml-webgpu: Add support for `GGML_OP_REPEAT` (#20230)

* Add GGML_OP_REPEAT to webgpu backend.

* Add i16 support for GGML_OP_REPEAT.

6 weeks ago llama : enable chunked fused GDN path (#20340)
Georgi Gerganov [Wed, 11 Mar 2026 20:46:40 +0000 (22:46 +0200)]
llama : enable chunked fused GDN path (#20340)

* llama : enable chunked fused GDN path

* models : avoid Q and K repeats when using fused GDA

* cont : fix comment

Co-authored-by: Aman Gupta <redacted>
* cont : fix the fix

Co-authored-by: Aman Gupta <redacted>
* cont : fix

* metal : add GDN kernel (#20361)

* metal : add Metal backend for GGML_OP_GATED_DELTA_NET

Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.

Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU (see the capability-check sketch at the end of this entry).

Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
  tg128: 170 -> 213 t/s (+25%)

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : validate contiguity of all input tensors in supports_op

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : add algorithm equivalence comment for GDA decay path

Co-Authored-By: Claude Opus 4.6 <redacted>
* cont : unslop + optimize

* cont : clean-up

---------

Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
* CUDA: AR gated delta net improvements (#20391)

* Add FastDiv to gated_delta_net_cuda (see the fast-division sketch at the end of this entry)

* Shard columns across warps

This reduces register pressure (avoids spill for S_v = 128) and gives
the warp-scheduler more CTAs to schedule (thus hiding data-access
latencies).

* Remove unneeded include in gated_delta_net.cu

* Improve comments

* Apply code formatting

* Make sharding HIP-compatible

1. Use ggml_cuda_get_physical_warp_size() to determine warp size flexibly
2. Add test with partial warp to test sum reduction on CUDA

* Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t

* Rename variables

* Enable GDN also for prefill, move TODO for chunked_GDN

* Actually remove the TODO from 206890897546bd16602c3b79394fd5ea09ef199f

* Get warp size at runtime

warp_size is not known at compile time in hip host code.

* Don't expose ggml_cuda_get_physical_warp_size on host

---------

Co-authored-by: uvos <redacted>
* llama : refactor llm_build_delta_net_base API

---------

Co-authored-by: Aman Gupta <redacted>
Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
Co-authored-by: Oliver Simons <redacted>
Co-authored-by: uvos <redacted>
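
For reference, the fallback rule described in the Metal GDN commit above can be pictured as a supports_op-style check. A minimal C++ sketch, assuming placeholder names (illustrative only, not the actual ggml-metal code):

    #include <cstdint>

    // Head sizes 64 and 128 have fused kernel variants; head size 32 and
    // non-contiguous inputs report "unsupported" so ggml falls back to CPU.
    static bool gdn_supported(int64_t head_size, const bool * src_contig, int n_srcs) {
        if (head_size != 64 && head_size != 128) {
            return false;
        }
        // Per the follow-up commit, every input tensor must be contiguous.
        for (int i = 0; i < n_srcs; ++i) {
            if (!src_contig[i]) {
                return false;
            }
        }
        return true;
    }

    int main() {
        const bool contig[]     = {true, true};
        const bool non_contig[] = {true, false};
        return (gdn_supported(64, contig, 2) &&
               !gdn_supported(32, contig, 2) &&
               !gdn_supported(128, non_contig, 2)) ? 0 : 1;
    }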
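
For the FastDiv change, a host-side C++ sketch of one standard construction for turning division by a launch-constant divisor into a single high multiply (Lemire's 64-bit magic-number method; the kernel may use a different scheme):

    #include <cassert>
    #include <cstdint>

    // M = ceil(2^64 / d); for any 32-bit n, n / d equals the top 64 bits of
    // the 128-bit product n * M. Requires d >= 2 (d == 1 needs no division)
    // and the GCC/Clang __int128 extension on the host.
    struct fastdiv_u32 {
        uint64_t magic;
        explicit fastdiv_u32(uint32_t d) : magic(~uint64_t(0) / d + 1) {}
        uint32_t div(uint32_t n) const {
            // On the device this high multiply would be __umul64hi(magic, n).
            return uint32_t(((unsigned __int128) magic * n) >> 64);
        }
    };

    int main() {
        fastdiv_u32 fd(24); // e.g. a fixed row stride
        for (uint32_t n : {0u, 1u, 23u, 24u, 25u, 4000000000u}) {
            assert(fd.div(n) == n / 24);
        }
        return 0;
    }
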
6 weeks ago llama : whitespace cleanup (#20422)
Sigbjørn Skjæret [Wed, 11 Mar 2026 20:18:29 +0000 (21:18 +0100)]
llama : whitespace cleanup (#20422)

6 weeks ago ggml : add NVFP4 quantization type support (#19769)
Richard Davison [Wed, 11 Mar 2026 20:02:54 +0000 (21:02 +0100)]
ggml : add NVFP4 quantization type support (#19769)

* WIP: add NVFP4 quantization support

* tests

* improve NVFP4 dot product implementation performance and fix bad super call

* typo

* Use nvfp4 kvalues

* vulkan : fix NVFP4 shader compilation by including kvalues_mxfp4 lookup table

* vulkan and perf fixes

* wip

* Fix metal

* fix vulkan

* Rename threshold & fix wrong scale

* Fix MOE

* Shelf backend implementations (CUDA, Metal, Vulkan, arch-specific SIMD)

Remove NVFP4 support from GPU backends and architecture-specific
optimized dot products. These should be added in separate PRs so
backend specialists can review them independently.

Reverted files:
- ggml-cuda: common.cuh, convert.cu, mmq.cu/cuh, mmvq.cu, vecdotq.cuh,
  quantize.cu/cuh, mma.cuh, ggml-cuda.cu, fattn-tile.cuh
- ggml-metal: ggml-metal.metal, ggml-metal-device.cpp, ggml-metal-impl.h,
  ggml-metal-ops.cpp
- ggml-vulkan: ggml-vulkan.cpp, all vulkan-shaders/*
- ggml-cpu arch: arm/quants.c, x86/quants.c, powerpc/quants.c, s390/quants.c

Core NVFP4 support (type definition, CPU fallback dot product,
quantization, dequantization, conversion) is retained.

* Fix arch-fallback.h: add NVFP4 generic fallback for all platforms

After shelving backend-specific SIMD implementations, the generic
CPU dot product needs to be aliased on ARM, x86, PowerPC, and s390
platforms that previously relied on arch-specific versions.

* quantize: add NVFP4 as a quantization type option

* Fix ggml_fp32_to_ue4m3: handle subnormal values

Previously, values with ue4m3_exp <= 0 were clamped to 0, causing
all small scales to underflow. This made NVFP4 quantization via
llama-quantize produce garbage (PPL = 5.8M) since typical transformer
weights have amax/6.0 in the range 0.001-0.01, which falls in the
UE4M3 subnormal range.

Now subnormals are properly encoded as man * 2^-9 (exp=0, man=1..7),
matching the decode path in ggml_ue4m3_to_fp32 (see the sketch at the end of this entry).

Result: NVFP4 requantization now produces PPL = 15.25 (vs F16 = 14.33),
comparable to Q4_1 (PPL = 15.81) at slightly lower BPW (4.70 vs 5.15).

* Restore ARM NEON NVFP4 dot product implementation

Restores the optimized ggml_vec_dot_nvfp4_q8_0 for ARM NEON using
vqtbl1q_s8 lookup and ggml_vdotq_s32 dot products.

tg128 performance: 4.37 t/s (generic) -> 13.66 t/s (NEON) = 3.1x speedup

* Optimize ARM NEON NVFP4 dot product: LUT + vpaddq + vfmaq

- Add ue4m3_scale_lut[128] to ggml-common.h replacing branch-heavy
  ggml_ue4m3_to_fp32() in the hot loop
- Use vpaddq_s32 for pairwise int32 reduction instead of vaddvq_s32
- Accumulate with vfmaq_f32 into float32x4_t vector accumulators

tg128: 8.1 -> 31.0 t/s (3.8x speedup, 77% of Q4_1 speed)

* ARM NEON NVFP4: rearrange q8 to match nibble layout

Alternative approach: rearrange q8 data to match the NVFP4 lo/hi
nibble layout instead of rearranging the looked-up NVFP4 values.
Eliminates vcombine_s8(vget_low, vget_low) shuffles.

Performance is equivalent (~18.5 t/s) - the bottleneck is the 2x
block overhead from QK=16 vs QK=32, not the shuffle instructions.

* CPU only backend 64 super-block layout

* cleanup

* Remove unused LUT

* int

* exclude NVFP4 from unsupported ops in metal build

* remove quantization for now

* store scales as native UE4M3, preserve original model bits when possible

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* correct comment

* format

* reduce duplication and cleanup

* Address comments

* move detection to prepare_tensors

* Use math instead of const

* Move

* fix comment

* Shelf quantize tests

* Rebase and move check

* cleanup

* lint

* Update gguf-py/gguf/scripts/gguf_convert_endian.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Use fallback quant config

* Simplify

Co-authored-by: Sigbjørn Skjæret <redacted>
* organize

* Refactor

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* fix return type

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
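
As a worked illustration of the subnormal fix above, a decode sketch assuming the usual E4M3 bit layout (4 exponent bits, 3 mantissa bits) with bias 7, which is what the man * 2^-9 rule implies; the real ggml_ue4m3_to_fp32 may differ in details:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Decode an unsigned E4M3 scale byte.
    static float ue4m3_to_fp32_sketch(uint8_t x) {
        const int exp = (x >> 3) & 0x0F;
        const int man =  x       & 0x07;
        if (exp == 0) {
            // Subnormal: no implicit leading 1 -> man * 2^-9. This is the
            // range that was previously flushed to zero, breaking the small
            // scales (amax/6.0 of ~0.001-0.01) typical transformer weights use.
            return (float) man * 0x1p-9f;
        }
        // Normal: implicit leading 1, exponent biased by 7.
        return (1.0f + (float) man / 8.0f) * std::ldexp(1.0f, exp - 7);
    }

    int main(void) {
        printf("%g\n", ue4m3_to_fp32_sketch(0x01)); // smallest subnormal, 2^-9
        printf("%g\n", ue4m3_to_fp32_sketch(0x08)); // smallest normal, 2^-6
        return 0;
    }
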
6 weeks ago benches : add nemotron super (#20420)
Georgi Gerganov [Wed, 11 Mar 2026 19:39:40 +0000 (21:39 +0200)]
benches : add nemotron super (#20420)

6 weeks ago llama : add support for Nemotron 3 Super (#20411)
Daniel Bevenius [Wed, 11 Mar 2026 18:27:53 +0000 (19:27 +0100)]
llama : add support for Nemotron 3 Super (#20411)

* llama : add support for Nemotron 3 Super

This commit adds support for the Nemotron 3 Super model (120B.A12B)
enabling this model to be converted to GGUF format and run in llama.cpp.

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Matt Clayton <redacted>
6 weeks ago metal : fix capture_compute counter logic (#20410)
Georgi Gerganov [Wed, 11 Mar 2026 16:38:22 +0000 (18:38 +0200)]
metal : fix capture_compute counter logic (#20410)

6 weeks ago compare-llama-bench: check remotes as well (#20406)
Aman Gupta [Wed, 11 Mar 2026 16:14:42 +0000 (00:14 +0800)]
compare-llama-bench: check remotes as well (#20406)

6 weeks ago metal : fix q5_k mul_mv register spill (#20399)
Georgi Gerganov [Wed, 11 Mar 2026 14:25:27 +0000 (16:25 +0200)]
metal : fix q5_k mul_mv register spill (#20399)

6 weeks ago metal : add env var to trigger graph capture (#20398)
Georgi Gerganov [Wed, 11 Mar 2026 14:25:10 +0000 (16:25 +0200)]
metal : add env var to trigger graph capture (#20398)

6 weeks ago [SYCL] Update SYCL.md for binary package for Windows (#20401)
Neo Zhang [Wed, 11 Mar 2026 14:21:22 +0000 (22:21 +0800)]
[SYCL] Update SYCL.md for binary package for Windows (#20401)

* add download binary package

* update prefix

6 weeks ago ci: disable coopmat on ubuntu-24-cmake-vulkan job (#20294)
Ruben Ortlam [Wed, 11 Mar 2026 13:12:29 +0000 (14:12 +0100)]
ci: disable coopmat on ubuntu-24-cmake-vulkan job (#20294)

6 weeks ago common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)
Aldehir Rojas [Wed, 11 Mar 2026 09:26:51 +0000 (04:26 -0500)]
common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)

6 weeks ago common/parser: handle reasoning budget (#20297)
Piotr Wilkin (ilintar) [Wed, 11 Mar 2026 09:26:12 +0000 (10:26 +0100)]
common/parser: handle reasoning budget (#20297)

* v1

* Finished!

* Handle cli

* Reasoning sampler

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Less explosive terminology :)

* Add utf-8 case and tests

* common : migrate reasoning budget sampler to common

* cont : clean up

* cont : expose state and allow passing as initial state

* cont : remove unused imports

* cont : update state machine doc string

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Alde Rojas <redacted>
6 weeks ago ggml-cuda: gdn use shared mem for HIP (#20366)
uvos [Wed, 11 Mar 2026 05:06:19 +0000 (06:06 +0100)]
ggml-cuda: gdn use shared mem for HIP (#20366)

Suggested-by: Aman Gupta <redacted>
6 weeks ago cuda/hip: fix loop unrolling in ssm-conv (#20369)
uvos [Wed, 11 Mar 2026 05:04:32 +0000 (06:04 +0100)]
cuda/hip: fix loop unrolling in ssm-conv (#20369)

6 weeks ago Fix agentic mcp image single model (#20339)
Pascal [Wed, 11 Mar 2026 04:31:33 +0000 (05:31 +0100)]
Fix agentic mcp image single model (#20339)

* webui: fix MCP image attachments dropped during the agentic loop in single-model mode

* chore: update webui build output

6 weeks ago vendor : update cpp-httplib to 0.37.0 (#20207)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:03:53 +0000 (00:03 -0300)]
vendor : update cpp-httplib to 0.37.0 (#20207)

6 weeks ago vendor : update miniaudio to 0.11.25 (#20209)
Alessandro de Oliveira Faria (A.K.A.CABELO) [Wed, 11 Mar 2026 03:01:56 +0000 (00:01 -0300)]
vendor : update miniaudio to 0.11.25 (#20209)

6 weeks ago fix op rope, add rope_back (#20293)
Neo Zhang [Wed, 11 Mar 2026 01:53:34 +0000 (09:53 +0800)]
fix op rope, add rope_back (#20293)

6 weeks ago fix for failed UT case: ACC, L2_NORM, UPSCALE, fused_glu, unary (#20283)
Neo Zhang [Wed, 11 Mar 2026 01:53:05 +0000 (09:53 +0800)]
fix for failed UT case: ACC, L2_NORM, UPSCALE, fused_glu, unary (#20283)

6 weeks ago model : qwen3vl reranker text support (#20332)
Vinicios Lugli [Tue, 10 Mar 2026 22:40:14 +0000 (19:40 -0300)]
model : qwen3vl reranker text support (#20332)

* model : fix qwen3vl reranker support

* Remove CLS_OUT

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago llama-quant : correct `n_attention_wv` usage (#20357)
ddh0 [Tue, 10 Mar 2026 19:43:29 +0000 (14:43 -0500)]
llama-quant : correct `n_attention_wv` usage (#20357)

* llama-quant : correct `n_attention_wv` usage

In #19770, I introduced a regression in the way the
`quantize_state_impl` counter values were initialized. I was
incrementing and using `n_attention_wv` in the same loop, when it should
have been fixed by the time we're deciding tensor types in
`llama_tensor_get_type_impl` (for `use_more_bits`).

I never observed a difference in any of [my
tests](https://github.com/ggml-org/llama.cpp/pull/19770#issuecomment-4000424712)
- it was only after @bartowski kindly pointed this out that I realized
it was incorrect. (Thanks!)

* simplify

6 weeks ago ggml : bump RPC version (#20330)
Georgi Gerganov [Tue, 10 Mar 2026 19:36:57 +0000 (21:36 +0200)]
ggml : bump RPC version (#20330)

6 weeks ago ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (#20173)
Reese Levine [Tue, 10 Mar 2026 16:14:27 +0000 (09:14 -0700)]
ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (#20173)

* K quant speedup (#20)

* Basic JIT compilation for mul_mat, get_rows, and scale (#17)

* scale jit working

* preliminary working jit for getrows and mulmat, needs refining

* simplified mul_mat preprocessing switch statement

* get_rows fixes, mul_mat refinement

* formatted + last edits

* removed some extraneous prints

* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish

* small fix

* some changes, working

* get_rows and mul_mat jit fixed and working

* Update formatting

* formatting

* Add header

---------

Co-authored-by: Neha Abbas <redacted>
Co-authored-by: Reese Levine <redacted>
* Start work on all-encompassing shader library

* refactor argmax, set_rows

* Refactor all but flashattention, mat mul

* no gibberish, all k quants added, merged

* vec memory fix

* q6_k matching metal on my machine, tests passing

* Set tile size for q6_k separately

* Separate out fast shaders

---------

Co-authored-by: neha-ha <redacted>
* Move towards writeBuffer for params

* Move away from multiple buffers for set_rows errors, remove host buffer for parameter buffers, minor cleanups

* Remove extra file

* Formatting

---------

Co-authored-by: neha-ha <redacted>
6 weeks ago Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)
Piotr Wilkin (ilintar) [Tue, 10 Mar 2026 14:21:51 +0000 (15:21 +0100)]
Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)

6 weeks ago examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)
Ray Xu [Tue, 10 Mar 2026 13:38:18 +0000 (21:38 +0800)]
examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `'prefixItems' not in schema`, then because `{}` is falsy the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`.

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago docs: update CPU backend ops to mark POOL_1D as supported (#20304)
a3894281 [Tue, 10 Mar 2026 13:31:24 +0000 (15:31 +0200)]
docs: update CPU backend ops to mark POOL_1D as supported (#20304)

6 weeks ago models : fix assert in mamba2 (cont) (#20335)
Georgi Gerganov [Tue, 10 Mar 2026 13:00:08 +0000 (15:00 +0200)]
models : fix assert in mamba2 (cont) (#20335)

* models : fix assert in mamba2 (cont)

* cont : add n_group mod

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago server : make 2 checkpoints near the end of the prompt (#20288)
Georgi Gerganov [Tue, 10 Mar 2026 12:28:23 +0000 (14:28 +0200)]
server : make 2 checkpoints near the end of the prompt (#20288)

* server : make 2 checkpoints near the end of the prompt

* cont : adjust checkpoints

6 weeks ago common : fix incorrect uses of stoul (#20313)
Sigbjørn Skjæret [Tue, 10 Mar 2026 10:40:26 +0000 (11:40 +0100)]
common : fix incorrect uses of stoul (#20313)

6 weeks ago kleidiai : support for concurrent sme and neon kernel execution (#20070)
Charles Xu [Tue, 10 Mar 2026 07:25:25 +0000 (08:25 +0100)]
kleidiai : support for concurrent sme and neon kernel execution (#20070)

6 weeks ago ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)
Taimur Ahmad [Tue, 10 Mar 2026 06:49:52 +0000 (11:49 +0500)]
ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)

* ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: add rvv repacking for iq4_nl

* ggml-cpu: add generic impl for iq4_nl gemm/gemv

* ggml-cpu: add rvv repacking for q8_0

* ggml-cpu: refactor; add rvv repacking for q4_0, q4_K

* ggml-cpu: refactor; add rvv repacking for q2_K

Co-authored-by: Rehan Qasim <redacted>
* ggml-cpu: refactor rvv repack

---------

Co-authored-by: Rehan Qasim <redacted>
6 weeks ago metal: handle command buffer failures gracefully in synchronize (#20306)
Julian Pscheid [Tue, 10 Mar 2026 06:32:24 +0000 (23:32 -0700)]
metal: handle command buffer failures gracefully in synchronize (#20306)

Replace GGML_ABORT("fatal error") in ggml_metal_synchronize() with
error flag + return. This aligns synchronize error handling with
graph_compute, which already returns GGML_STATUS_FAILED for the same
condition.

When a command buffer fails (e.g., iOS GPU access revocation during
backgrounding, macOS eGPU disconnect, OOM), the backend enters an
error state instead of killing the host process. Subsequent
graph_compute calls return GGML_STATUS_FAILED immediately. Recovery
requires recreating the backend.

Failed extra command buffers are properly released on the error path
to avoid Metal object leaks.
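
In outline, the error-latch pattern described here looks like the following C++ sketch (names are illustrative stand-ins, not the actual ggml-metal code):

    #include <atomic>

    enum status { STATUS_SUCCESS, STATUS_FAILED }; // stand-ins for GGML_STATUS_*

    struct backend_ctx {
        std::atomic<bool> in_error_state{false};

        // Stub for the real Metal-side check of command buffer status.
        bool command_buffer_failed() { return false; }
        // Stub: release extra command buffers so no Metal objects leak.
        void release_extra_command_buffers() {}

        status synchronize() {
            if (command_buffer_failed()) {
                release_extra_command_buffers();
                in_error_state.store(true); // latch instead of aborting the process
                return STATUS_FAILED;
            }
            return STATUS_SUCCESS;
        }

        status graph_compute() {
            // Fail fast once latched; recovery requires recreating the backend.
            if (in_error_state.load()) {
                return STATUS_FAILED;
            }
            // ... encode and commit command buffers ...
            return STATUS_SUCCESS;
        }
    };

    int main() {
        backend_ctx ctx;
        return ctx.graph_compute() == STATUS_SUCCESS ? 0 : 1;
    }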

6 weeks ago llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770)
ddh0 [Tue, 10 Mar 2026 06:16:05 +0000 (01:16 -0500)]
llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770)

* quantize : fail early on missing imatrix + code cleanup

* fix manual override printing

it's in the preliminary loop now, so needs to be on its own line

* revert header changes per ggerganov

* remove old #includes

* clarify naming

rename `tensor_quantization` to `tensor_typo_option` to describe its
functionality

* fix per barto

6 weeks ago common: consolidate PEG string parsers (#20263)
Aldehir Rojas [Mon, 9 Mar 2026 23:29:21 +0000 (18:29 -0500)]
common: consolidate PEG string parsers (#20263)

* common : consolidate PEG string parsers
* cont : fix json_string_content()

6 weeks ago model: fix step3.5 n_rot (#20318)
Xuan-Son Nguyen [Mon, 9 Mar 2026 22:42:24 +0000 (23:42 +0100)]
model: fix step3.5 n_rot (#20318)

6 weeks ago llama: dynamic head_dim and n_rot for SWA (#20301)
Xuan-Son Nguyen [Mon, 9 Mar 2026 21:22:39 +0000 (22:22 +0100)]
llama: dynamic head_dim and n_rot for SWA (#20301)

* llama: dynamic head_dim and n_rot for SWA

* also add gguf_writer wrappers

* fix build

* build_rope_shift arg reorder

6 weeks ago server: Parse port numbers from MCP server URLs in CORS proxy (#20208)
Evan Huus [Mon, 9 Mar 2026 16:47:54 +0000 (12:47 -0400)]
server: Parse port numbers from MCP server URLs in CORS proxy (#20208)

* Parse port numbers from MCP server URLs

* Pass scheme to http proxy for determining whether to use SSL

* Fix download on non-standard port and re-add port to logging

* add test

---------

Co-authored-by: Xuan Son Nguyen <redacted>
6 weeks ago metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (#20250)
Paul Flynn [Mon, 9 Mar 2026 14:48:12 +0000 (10:48 -0400)]
metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (#20250)

Enable mul_mv_ext small-batch kernels (BS 2-8) for BF16, Q2_K,
and Q3_K quantization types. These types previously fell through
to the slower single-row mul_mv path.

BF16 uses the float4 dequantize path (like F16). Q2_K and Q3_K
use the float4x4 K-quant path (like Q4_K/Q5_K/Q6_K).

Co-authored-by: Claude Opus 4.6 <redacted>
6 weeks ago server : fix checkpoints n_tokens calculation (#20287)
Georgi Gerganov [Mon, 9 Mar 2026 14:47:06 +0000 (16:47 +0200)]
server : fix checkpoints n_tokens calculation (#20287)

6 weeks ago metal : add upscale (#20284)
Georgi Gerganov [Mon, 9 Mar 2026 14:45:11 +0000 (16:45 +0200)]
metal : add upscale (#20284)

6 weeks ago server : warn swa-full is not supported for non-SWA models (#20291)
Georgi Gerganov [Mon, 9 Mar 2026 14:44:25 +0000 (16:44 +0200)]
server : warn swa-full is not supported for non-SWA models (#20291)

6 weeks ago server : fix off-by-1 in server_tokens::size_up_to_pos() (#20279)
Georgi Gerganov [Mon, 9 Mar 2026 14:43:38 +0000 (16:43 +0200)]
server : fix off-by-1 in server_tokens::size_up_to_pos() (#20279)

* server : fix off-by-1 in server_tokens::size_up_to_pos()

* cont : fix typo [no ci]

6 weeks ago common: map developer role to system (#20215)
Piotr Wilkin (ilintar) [Mon, 9 Mar 2026 13:25:11 +0000 (14:25 +0100)]
common: map developer role to system (#20215)

* Map developer role to system
* Simplify

6 weeks ago models : fix assert in mamba2 graph (#20270)
Georgi Gerganov [Mon, 9 Mar 2026 11:15:15 +0000 (13:15 +0200)]
models : fix assert in mamba2 graph (#20270)

6 weeks ago server : add kill switch when server is stuck (#20277)
Georgi Gerganov [Mon, 9 Mar 2026 08:33:12 +0000 (10:33 +0200)]
server : add kill switch when server is stuck (#20277)

6 weeks ago ggml-cuda: disable gdn for musa (#20278)
Aman Gupta [Mon, 9 Mar 2026 08:15:36 +0000 (16:15 +0800)]
ggml-cuda: disable gdn for musa (#20278)

6 weeks ago llama-quant : left-align tensor names in output (#20117)
ddh0 [Mon, 9 Mar 2026 07:28:41 +0000 (02:28 -0500)]
llama-quant : left-align tensor names in output (#20117)

6 weeks ago contributing: limit open PRs for new contributors to 1 (#20036)
Aman Gupta [Mon, 9 Mar 2026 07:05:34 +0000 (15:05 +0800)]
contributing: limit open PRs for new contributors to 1 (#20036)

6 weeks ago ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (#20219)
Bertay Eren [Mon, 9 Mar 2026 06:24:16 +0000 (09:24 +0300)]
ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (#20219)

6 weeks ago vulkan: skip zero size tensors in backend copies (#20233)
Ruben Ortlam [Mon, 9 Mar 2026 06:23:45 +0000 (07:23 +0100)]
vulkan: skip zero size tensors in backend copies (#20233)

6 weeks ago cuda : display total and free VRAM capacity during device initialization (#20185)
Michael Huang [Mon, 9 Mar 2026 04:45:43 +0000 (21:45 -0700)]
cuda : display total and free VRAM capacity during device initialization (#20185)

6 weeks ago llama-bench: introduce `-hf` and `-hff` flags & use `--mmap 1` by default (#20211)
Aaron Teo [Mon, 9 Mar 2026 01:05:44 +0000 (09:05 +0800)]
llama-bench: introduce `-hf` and `-hff` flags & use `--mmap 1` by default (#20211)

6 weeks ago PEG parser for LFM2 (#20251)
Piotr Wilkin (ilintar) [Mon, 9 Mar 2026 00:11:22 +0000 (01:11 +0100)]
PEG parser for LFM2 (#20251)

* PEG parser for LFM2

* Simplify using python_value()

6 weeks ago server : do not create checkpoints right after mtmd chunks (#20232)
Georgi Gerganov [Sun, 8 Mar 2026 20:16:46 +0000 (22:16 +0200)]
server : do not create checkpoints right after mtmd chunks (#20232)

7 weeks ago graph : remove redundant scale_w parameter (#20235)
Sigbjørn Skjæret [Sun, 8 Mar 2026 17:58:28 +0000 (18:58 +0100)]
graph : remove redundant scale_w parameter (#20235)

7 weeks ago common : gracefully handle incomplete output (#20191)
Aldehir Rojas [Sun, 8 Mar 2026 16:17:02 +0000 (11:17 -0500)]
common : gracefully handle incomplete output (#20191)

* common : handle incomplete UTF-8 at end of input in PEG parser (see the sketch after this list)

* cont : if reached end prematurely, emit needs_more_input to propagate partial output

* cont: refactor peg parse context to add lenient flag

* cont : remove partial flag, keep lenient flag
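
A self-contained sketch of the truncated-UTF-8 detection this implies (not the actual common/ parser code): return how many trailing bytes belong to an incomplete multi-byte sequence, so the caller can hold them back and request more input instead of failing.

    #include <cstddef>
    #include <string_view>

    // Returns the number of trailing bytes of s that form the start of an
    // incomplete UTF-8 sequence (0 if the buffer ends on a character boundary).
    static size_t incomplete_utf8_suffix(std::string_view s) {
        const size_t n = s.size();
        // A sequence is at most 4 bytes; scan back over trailing continuation
        // bytes (10xxxxxx) until we hit a lead byte or ASCII.
        for (size_t back = 1; back <= 4 && back <= n; ++back) {
            const unsigned char c = s[n - back];
            if ((c & 0xC0) == 0x80) {
                continue; // continuation byte, keep scanning back
            }
            size_t need = 0;
            if      ((c & 0xE0) == 0xC0) need = 2; // 110xxxxx: 2-byte lead
            else if ((c & 0xF0) == 0xE0) need = 3; // 1110xxxx: 3-byte lead
            else if ((c & 0xF8) == 0xF0) need = 4; // 11110xxx: 4-byte lead
            // ASCII or stray byte: nothing incomplete at the end.
            return need > back ? back : 0;
        }
        return 0;
    }

    int main() {
        // "é" is 0xC3 0xA9; feeding only the lead byte should hold 1 byte back.
        return incomplete_utf8_suffix("caf\xC3") == 1 ? 0 : 1;
    }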

7 weeks ago Fix compile bug (#20203)
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 16:15:49 +0000 (17:15 +0100)]
Fix compile bug (#20203)

* Fix compile bug

* Update common/chat-auto-parser-helpers.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
7 weeks ago Fix structured outputs (#20223)
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 16:14:43 +0000 (17:14 +0100)]
Fix structured outputs (#20223)

* Fix structured outputs

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Aldehir Rojas <redacted>
---------

Co-authored-by: Aldehir Rojas <redacted>
7 weeks ago ggml-vulkan: Add ELU op support (#20183)
GiantPrince [Sun, 8 Mar 2026 11:38:17 +0000 (07:38 -0400)]
ggml-vulkan: Add ELU op support (#20183)

* ggml-Vulkan: add ELU support

* ggml-Vulkan: remove extra spaces and variables

* ggml-Vulkan: fix format issue

* ggml-Vulkan: fix format issue

* fix whitespace issue

* Update Vulkan.csv and ops.md

7 weeks ago vulkan: Fix data races in coopmat1 mul_mat(_id) (#20084)
Jeff Bolz [Sun, 8 Mar 2026 11:33:48 +0000 (06:33 -0500)]
vulkan: Fix data races in coopmat1 mul_mat(_id) (#20084)

* vulkan: Fix data races in coopmat1 mul_mat(_id)

Add barriers between coopmat store and regular loads. We sort of got away with
this because it was the same subgroup accessing the values, but it's still a
race and may not work.

* switch to subgroup control barriers

7 weeks ago llama: end-to-end tests (#19802)
Johannes Gäßler [Sun, 8 Mar 2026 11:30:21 +0000 (12:30 +0100)]
llama: end-to-end tests (#19802)

* tests: add end-to-end tests per model architecture

* fixup for rebase

* fix use-after-free in llama-model-loader.cpp

* fix CI

* fix WebGPU

* fix CI

* disable CI for macOS-latest-cmake-arm64

* use expert_weights_scale only if != 0.0f

* comments

7 weeks ago readme : update infra list (#20212)
Christopher Maher [Sun, 8 Mar 2026 10:42:28 +0000 (03:42 -0700)]
readme : update infra list (#20212)

7 weeks ago Revert to OAI-compatible args (#20213)
Piotr Wilkin (ilintar) [Sun, 8 Mar 2026 10:33:03 +0000 (11:33 +0100)]
Revert to OAI-compatible args (#20213)

* Revert to OAI-compatible args

* Apply workaround::func_args_not_string

7 weeks ago server : correct index on finish in OAI completion streams (#20226)
decahedron1 [Sun, 8 Mar 2026 09:08:57 +0000 (04:08 -0500)]
server : correct index on finish in OAI completion streams (#20226)

7 weeks ago [SYCL] support Flash Attention for fp32/fp16/Q4/Q5/Q8 (#20190)
Neo Zhang [Sun, 8 Mar 2026 04:00:07 +0000 (12:00 +0800)]
[SYCL] support Flash Attention for fp32/fp16/Q4/Q5/Q8 (#20190)

* support flash-attention for fp32/fp16/Q4/Q5/Q8

* rm warning

* update for JIT

7 weeks ago ggml: add GATED_DELTA_NET op (#19504)
Aman Gupta [Sat, 7 Mar 2026 07:41:10 +0000 (15:41 +0800)]
ggml: add GATED_DELTA_NET op (#19504)

* ggml: add GATED_DELTA_NET op

* remove the transpose

* add KDA

* add qwen35 dense

* llama : check for fused gated delta net backend support

---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago opencl: add l2_norm (#20160)
lhez [Sat, 7 Mar 2026 02:03:05 +0000 (18:03 -0800)]
opencl: add l2_norm (#20160)

7 weeks ago Autoparser: True streaming (#20177)
Piotr Wilkin (ilintar) [Sat, 7 Mar 2026 00:55:33 +0000 (01:55 +0100)]
Autoparser: True streaming (#20177)

* Relax atomicity constraint for nicer, more pleasant True Streaming parsing

* Whitespace

* Remove redundant atomics

7 weeks ago Autoparser: add optional argument reshuffle capability (#20171)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 21:34:15 +0000 (22:34 +0100)]
Autoparser: add optional argument reshuffle capability (#20171)

* Allow reshuffled arguments in tagged argument parser format tool calls.

* Remove shuffle just keep the optional parsers in any order

* Remove unnecessary import

7 weeks ago quants : Add memsets and other fixes for IQ quants (#19861)
Bartowski [Fri, 6 Mar 2026 21:06:56 +0000 (16:06 -0500)]
quants : Add memsets and other fixes for IQ quants (#19861)

* Add memsets and other fixes for IQ quants

* Make memset unconditional, change Laux back to L

* Move another memset

7 weeks ago Add @pwilkin to CODEOWNERS for autoparser code (#20174)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 20:25:41 +0000 (21:25 +0100)]
Add @pwilkin to CODEOWNERS for autoparser code (#20174)

7 weeks ago Autoparser - complete refactoring of parser architecture (#18675)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 20:01:00 +0000 (21:01 +0100)]
Autoparser - complete refactoring of parser architecture (#18675)

* Autoparser - full single commit squish

* Final pre-merge changes: minor fixes, Kimi 2.5 model parser

7 weeks ago hexagon: add f32 ssm_conv op (#20122)
Todor Boinovski [Fri, 6 Mar 2026 17:59:26 +0000 (09:59 -0800)]
hexagon: add f32 ssm_conv op (#20122)

* hexagon: add ssm_conv op

* hexagon: hvx kernel is functional

* hexagon: improvements to ssm-conv hvx kernel

* hexagon: added dma to ssm-conv hvx kernel

* hexagon: ssm-conv dynamically compute gather scratchpad

* hex-ssm-conv: add local context and fix various issues (spad indexing, etc)

---------

Co-authored-by: Max Krasnyansky <redacted>
7 weeks ago server : preserve anthropic thinking blocks in conversion (#20120)
Tom Vaucourt [Fri, 6 Mar 2026 16:41:12 +0000 (17:41 +0100)]
server : preserve anthropic thinking blocks in conversion (#20120)

* server : preserve anthropic thinking blocks in conversion (#20090)

* server : add tests for anthropic thinking block conversion

---------

Co-authored-by: root <redacted>
7 weeks ago cpu: skip redundant ROPE cache updates (#20149)
Max Krasnyansky [Fri, 6 Mar 2026 16:32:40 +0000 (08:32 -0800)]
cpu: skip redundant ROPE cache updates (#20149)

7 weeks ago ggml-cuda: add mem check for fusion (#19916)
Aman Gupta [Fri, 6 Mar 2026 16:05:43 +0000 (00:05 +0800)]
ggml-cuda: add mem check for fusion (#19916)

* ggml-cuda: add mem check for fusion

* Replace NaNs with -FLT_MAX

* fix typo

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago ggml: update comments for backends which have no memory to report (#20157)
Aaron Teo [Fri, 6 Mar 2026 15:24:38 +0000 (23:24 +0800)]
ggml: update comments for backends which have no memory to report (#20157)

Signed-off-by: Aaron Teo <redacted>
7 weeks ago ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)
shalinib-ibm [Fri, 6 Mar 2026 15:22:39 +0000 (20:52 +0530)]
ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)

This patch addresses an Internal Compiler Error (Segmentation fault)
observed with gcc 15 by replacing the intrinsic + cast with a cast on
the data first, followed by the intrinsic call. This bypasses the
buggy compiler path while maintaining identical instruction selection.

Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that both the original
code and this fix generate the identical Power10 prefixed load instruction:
    `plxv 40, 2(14)`

This ensures zero performance regression while unblocking builds on
newer toolchains.

Reproduced on:
- Alpine Linux + GCC 15.2.0-r2
- RHEL 9  + GCC 15.1.1 (gcc-toolset-15)

Signed-off-by: Shalini Salomi Bodapati <redacted>
7 weeks ago CUDA: use shared mem for ssm_conv (#20128)
Aman Gupta [Fri, 6 Mar 2026 15:09:59 +0000 (23:09 +0800)]
CUDA: use shared mem for ssm_conv (#20128)

* CUDA: use shared mem for ssm_conv

* fuse silu + ssm_conv

* fuse unary + mul

* enable for fp16

* formatting

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks ago context: ignore zero scale LoRAs when checking sameness (#20166)
Tim Neumann [Fri, 6 Mar 2026 13:05:52 +0000 (14:05 +0100)]
context: ignore zero scale LoRAs when checking sameness (#20166)

7 weeks ago Checkpoint every n tokens: squash (#20087)
Piotr Wilkin (ilintar) [Fri, 6 Mar 2026 10:39:26 +0000 (11:39 +0100)]
Checkpoint every n tokens: squash (#20087)

7 weeks ago webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (#18655)
Aleksander Grygier [Fri, 6 Mar 2026 09:00:39 +0000 (10:00 +0100)]
webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (#18655)

7 weeks ago ggml-cpu: fix data race for debug asserts (#20148)
Johannes Gäßler [Fri, 6 Mar 2026 08:12:49 +0000 (09:12 +0100)]
ggml-cpu: fix data race for debug asserts (#20148)

7 weeks ago kv-cache : fix M-RoPE checkpoints (#20132)
Georgi Gerganov [Fri, 6 Mar 2026 06:46:51 +0000 (08:46 +0200)]
kv-cache : fix M-RoPE checkpoints (#20132)

7 weeks ago cli : Don't clear system prompt when using '/clear' (#20067)
Roj234 [Fri, 6 Mar 2026 05:41:11 +0000 (13:41 +0800)]
cli : Don't clear system prompt when using '/clear' (#20067)

* Enhance /clear command to include system prompt

Add system prompt to messages when clearing chat history.

* Use lambda

7 weeks ago opencl: add neg, exp and diag (#20127)
lhez [Fri, 6 Mar 2026 05:16:39 +0000 (21:16 -0800)]
opencl: add neg, exp and diag (#20127)

* opencl: add `neg`

* opencl: add `exp`

* opencl: add `diag`

7 weeks ago hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)
YardenTal44 [Fri, 6 Mar 2026 02:29:13 +0000 (04:29 +0200)]
hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)

* hexagon: add fp16 support for binary ops: add,sub,mul,div

* hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)

* hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad

* snapdragon: fix readme link

---------

Co-authored-by: Max Krasnyansky <redacted>
7 weeks ago models : kda chunk size = 16 (#19827)
ymcki [Thu, 5 Mar 2026 15:01:23 +0000 (23:01 +0800)]
models : kda chunk size = 16 (#19827)

* models : add llm_build_delta_net_base

* cont : keep qwen35 and qwen35moe graphs intact

* cont : add comments [no ci]

* add kimi linear to delta-net-base

* removed unnecessary ggml_cont from g_exp_t

* removed ggml_cont from g_diff_exp_t. moved ggml_cont for o to kimi-linear.cpp

* removed unnecessary diag mask

* cont : simplify

* cont : avoid graph splits

* scale q after mul instead of beginning

* scale q after mul instead of beginning

* identical ppl

* cont : fix scale and decay mask

* minor : remove TODO

* block implementation for kda

* remove space at the end of line 101

* concat+pad

* pad+binary row concat

* chunk size 16 for kda

* removed minor differences to master

---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago CUDA: Improve performance via fewer synchronizations between tokens (#17795)
Andreas Kieslinger [Thu, 5 Mar 2026 11:53:21 +0000 (12:53 +0100)]
CUDA: Improve performance via fewer synchronizations between tokens (#17795)

* Adds CPU-to-CUDA copy capability to
ggml_backend_cuda_cpy_tensor_async()

* Adds function to relax sync requirements between input copies on
supported backends (CUDA for now); see the sketch at the end of this entry

* Exchanges synchronous copy with async copy function.

* Adds macro guards to allow compilation in non-CUDA builds

* Reworked backend detection in ggml-backend.cpp to avoid linking
conflicts

* Relax requirement of checks in async CUDA copies from backend and buffer type to just buffer type, to avoid linking issues

* Minor cleanup

* Makes opt-in to relax use of explicit syncs more general. Backends like
vulkan which require a synchronization between HtoD copies and graph
execution could also adopt this change now.

* Reintroduces stricter check for CPU->CUDA backend async copy via
GGML_DEVICE_TYPE_CPU.

* Corrects initialization of ggml_backend_sync_mode in
ggml_backend_sched_split initialization

* Simplifies synchronizations to adhere to `saaasg` pattern.

* Apply suggestion from @ggerganov (src->buffer to buf_src)

Co-authored-by: Georgi Gerganov <redacted>
* Apply suggestion from @ggerganov (src->buffer to buf_src) v2

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
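
The general pattern being batched here, as a CUDA runtime sketch (illustrative only, not the ggml-backend scheduler code):

    #include <cuda_runtime.h>

    // Stage several host-to-device input copies asynchronously on one stream
    // and synchronize once before graph execution, instead of one blocking
    // copy (with an implicit sync) per input tensor.
    static void upload_inputs(void ** dst, void ** src, size_t * size, int n,
                              cudaStream_t stream) {
        for (int i = 0; i < n; ++i) {
            // Host memory should be pinned (cudaMallocHost/cudaHostRegister)
            // for the copy to be truly asynchronous.
            cudaMemcpyAsync(dst[i], src[i], size[i],
                            cudaMemcpyHostToDevice, stream);
        }
        // One sync for the whole batch; if the subsequent kernels run on the
        // same stream, even this call can be dropped, since a stream
        // executes its work in order.
        cudaStreamSynchronize(stream);
    }
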
7 weeks ago model : update Qwen3.5 model type detection (#20126)
Eric Zhang [Thu, 5 Mar 2026 11:47:14 +0000 (19:47 +0800)]
model : update Qwen3.5 model type detection (#20126)

* model : fix Qwen3.5 model type detection

* Update src/llama-model.cpp

whoops, my bad

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
7 weeks ago cli : add command and file auto-completion (#19985)
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:47:28 +0000 (10:47 +0100)]
cli : add command and file auto-completion (#19985)

7 weeks ago convert : register Qwen 3.5 ForCausalLM for text only (#20119)
Sigbjørn Skjæret [Thu, 5 Mar 2026 09:30:02 +0000 (10:30 +0100)]
convert : register Qwen 3.5 ForCausalLM for text only (#20119)

7 weeks ago webui: Improvements for Models Selector UI (#20066)
Aleksander Grygier [Thu, 5 Mar 2026 07:52:22 +0000 (08:52 +0100)]
webui: Improvements for Models Selector UI (#20066)