git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Xuan-Son Nguyen [Mon, 11 Aug 2025 13:31:35 +0000 (15:31 +0200)]
chat : hotfix gpt-oss jinja raising an exception (#15243)
* chat : hotfix gpt-oss jinja raising an exception
* fix
Xuan-Son Nguyen [Mon, 11 Aug 2025 12:48:41 +0000 (14:48 +0200)]
server : allow specifying reasoning_format in HTTP request (#15238)
Zagaj [Mon, 11 Aug 2025 12:27:54 +0000 (14:27 +0200)]
readme : update infra list (#15234)
Georgi Gerganov [Mon, 11 Aug 2025 10:58:24 +0000 (13:58 +0300)]
kv-cache : fix seq_rm with seq_id == -1 (#15226)
* kv-cache : fix seq_rm with seq_id == -1
ggml-ci
* cont : iterate over streams
ggml-ci
Daniel Bevenius [Mon, 11 Aug 2025 09:21:19 +0000 (11:21 +0200)]
kv-cache : log (debug) all streams in find_slot (#15176)
This commit updates `llama_kv_cache_unified::find_slot` to log
information for all streams when debug is enabled.
The motivation for this change is that if a non-unified kv-cache is used,
only one stream is logged, because the code currently uses
`seq_to_stream[1]`.
Sigbjørn Skjæret [Mon, 11 Aug 2025 09:15:44 +0000 (11:15 +0200)]
convert : fix merge conflicts (#15229)
Daniel Bevenius [Mon, 11 Aug 2025 08:21:24 +0000 (10:21 +0200)]
perplexity : update comments/error msg to use decode [no ci] (#15227)
This commit updates comments and error messages to use "decode" instead
of "eval" in perplexity.cpp.
The motivation for this is that `llama_eval` was renamed to
`llama_decode` a while ago, but the comments and error messages
still referred to "eval". This change ensures consistency and clarity.
Julien Denize [Mon, 11 Aug 2025 08:07:49 +0000 (10:07 +0200)]
convert : improve Mistral models integration (#14737)
* Improve Mistral models integration with llama.cpp
* Revert changes and fix gguf
* Revert change
* refactor convert_mistral_to_gguf.py in convert_hf_to_gguf.py
* Revert collateral
* Rename model name
* refactor
* revert
* remove duplicate
* Remove duplication code
* Fixes
* Fix flake issues
* Apply comments
* Apply comments
* Apply comments
* Fix remote
* add default chat template
* Revert
* nit
Charles Xu [Mon, 11 Aug 2025 07:59:26 +0000 (09:59 +0200)]
kleidiai: fix unsigned overflow bug (#15150)
* kleidiai: fix unsigned overflow bug
* address review comments
David Zhao [Sat, 9 Aug 2025 18:29:43 +0000 (13:29 -0500)]
cuda: refactored ssm_scan and use CUB (#13291)
* cuda: refactored ssm_scan to use CUB
* fixed compilation error when not using CUB
* assign L to constant and use size_t instead of int
* deduplicated functions
* change min blocks per mp to 1
* Use cub load and store warp transpose
* suppress clang warning
Aman Gupta [Sat, 9 Aug 2025 12:00:24 +0000 (20:00 +0800)]
CUDA: add attention sinks for tile and wmma (#15178)
* CUDA: add attention sinks for tile and wmma
* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
compilade [Fri, 8 Aug 2025 21:48:26 +0000 (17:48 -0400)]
gguf-py : add Numpy MXFP4 de/quantization support (#15111)
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
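As context for the MXFP4 support added above, the numpy sketch below dequantizes one block following the OCP microscaling layout: 32 E2M1 (4-bit) values sharing a single E8M0 power-of-two scale. This is an illustrative assumption about the format, not code from the PR; the byte-level packing used by gguf-py is deliberately not reproduced, and the names are made up for the example.

```python
import numpy as np

# All values representable by E2M1 (sign bit, 2 exponent bits, 1 mantissa bit).
E2M1_VALUES = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0], dtype=np.float32)

def dequant_mxfp4_block(nibbles: np.ndarray, e8m0_scale: int) -> np.ndarray:
    """nibbles: 32 integers in [0, 15]; e8m0_scale: shared exponent byte."""
    scale = np.float32(2.0) ** (int(e8m0_scale) - 127)   # E8M0 bias is 127
    return E2M1_VALUES[nibbles] * scale

block = dequant_mxfp4_block(np.array([1, 9, 7, 15] * 8), e8m0_scale=127)
print(block[:4])   # [ 0.5 -0.5  6.  -6. ]
```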
Johannes Gäßler [Fri, 8 Aug 2025 21:04:36 +0000 (23:04 +0200)]
server-bench: external OAI servers, sqlite (#15179)
* server-bench: external OAI servers, sqlite
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* raise_for_status
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
AN Long [Fri, 8 Aug 2025 12:37:22 +0000 (21:37 +0900)]
ggml : fix field name when new ggml_backend (#14944)
Olivier Chafik [Fri, 8 Aug 2025 09:45:18 +0000 (10:45 +0100)]
vendor: sync minja (#15161)
* vendor: sync minja
* Update minja.hpp
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Johannes Gäßler [Fri, 8 Aug 2025 06:19:58 +0000 (08:19 +0200)]
CUDA: attention sinks for mma FlashAttention (#15157)
lhez [Fri, 8 Aug 2025 04:47:03 +0000 (13:47 +0900)]
opencl: support sink in `soft_max` (attn sinks) (#15152)
Xuan-Son Nguyen [Thu, 7 Aug 2025 21:26:03 +0000 (23:26 +0200)]
convert : support non-mxfp4 HF model (#15153)
* convert : support non-mxfp4 HF model
* rm redundant check
* disable debug check
Jeff Bolz [Thu, 7 Aug 2025 20:44:20 +0000 (15:44 -0500)]
vulkan: support fattn sinks (#15126)
Jeff Bolz [Thu, 7 Aug 2025 20:07:11 +0000 (15:07 -0500)]
vulkan: Add env var to disable host visible vidmem (#15109)
RunningLeon [Thu, 7 Aug 2025 16:20:40 +0000 (00:20 +0800)]
llama : Support intern-s1 (#14875)
* support internvl
* support interns1
* resolve comments
* put interns1 in tensor mapping
* resolve comment
* move tokenizer changes to sub class
uvos [Thu, 7 Aug 2025 14:44:14 +0000 (16:44 +0200)]
HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103)
Christian Kastner [Thu, 7 Aug 2025 11:45:41 +0000 (13:45 +0200)]
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
Any available libraries are found and loaded dynamically at runtime.
Johannes Gäßler [Thu, 7 Aug 2025 08:53:21 +0000 (10:53 +0200)]
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131)
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
Johannes Gäßler [Thu, 7 Aug 2025 06:50:30 +0000 (08:50 +0200)]
scripts: fix crash when --tool is not set (#15133)
Daniel Bevenius [Thu, 7 Aug 2025 03:31:48 +0000 (05:31 +0200)]
requirements : fix PyTorch uint64 compatibility (#15134)
This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```
This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x provides only limited support for
unsigned types beyond uint8. The torch.uint64 dtype exists but is not
exposed in the standard torch namespace
(see pytorch/pytorch#58734).
PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to 0.19.0 for compatibility.
Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
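A minimal reproduction of the failure mode described above (assuming torch and safetensors are installed): the attribute is missing from the public namespace on PyTorch 2.2.x and present from 2.4.0 onward.

```python
import torch

try:
    # Succeeds on torch >= 2.4.0, where uint64 is part of the public API.
    print("torch.uint64 is available:", torch.uint64)
except AttributeError:
    print("torch.uint64 missing from the public API -> "
          "upgrade to torch >= 2.4.0 (and torchvision 0.19.0)")
```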
Reese Levine [Wed, 6 Aug 2025 22:14:40 +0000 (15:14 -0700)]
ggml: Add basic SET_ROWS support in WebGPU (#15137)
* Begin work on set_rows
* Work on set rows
* Add error buffers for reporting unsupported SET_ROWS indices
* Remove extra comments
rmatif [Wed, 6 Aug 2025 21:17:51 +0000 (23:17 +0200)]
fix profiling crash (#15072)
lhez [Wed, 6 Aug 2025 19:12:17 +0000 (04:12 +0900)]
opencl: add `swiglu_oai` and `add_id` (#15121)
* opencl: add `swiglu-oai`
* opencl: add `add_id`
* opencl: add missing `add_id.cl`
Sachin Desai [Wed, 6 Aug 2025 18:27:30 +0000 (11:27 -0700)]
chat : support Granite model reasoning and tool call (#14864)
Juk Armstrong [Wed, 6 Aug 2025 16:28:48 +0000 (17:28 +0100)]
Fixed name `-override-tensors` to `-override-tensor` (#15129)
Diego Devesa [Wed, 6 Aug 2025 12:37:35 +0000 (05:37 -0700)]
ggml : fix fallback to CPU for unsupported ops (#15118)
Sigbjørn Skjæret [Wed, 6 Aug 2025 11:26:49 +0000 (13:26 +0200)]
chat : fix yandex chat template (#15116)
stevenkuang [Wed, 6 Aug 2025 09:48:30 +0000 (17:48 +0800)]
chat : fix hunyuan auto-detection (#15114)
Signed-off-by: stevenkuang <redacted>
Chenguang Li [Wed, 6 Aug 2025 06:12:42 +0000 (14:12 +0800)]
CANN: add support for ACL Graph (#15065)
* feat(cann): add optional support for ACL Graph execution
This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:
-DUSE_CANN_GRAPH=ON
By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.
Key additions:
- CMake option to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
is unset or invalid
This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.
Signed-off-by: noemotiovon <redacted>
* Fix review comments
Signed-off-by: noemotiovon <redacted>
* rename USE_CANN_GRAPH to USE_ACL_GRAPH
Signed-off-by: noemotiovon <redacted>
* fix typo
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Reese Levine [Tue, 5 Aug 2025 23:26:38 +0000 (16:26 -0700)]
ggml: WebGPU disable SET_ROWS for now (#15078)
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
* Disable set_rows until it's implemented
* Fix potential issue around empty queue submission
* Try synchronous submission
* Try waiting on all futures explicitly
* Add debug
* Add more debug messages
* Work on getting ssh access for debugging
* Debug on failure
* Disable other tests
* Remove extra if
* Try more locking
* maybe passes?
* test
* Some cleanups
* Restore build file
* Remove extra testing branch ci
Georgi Gerganov [Tue, 5 Aug 2025 19:10:36 +0000 (22:10 +0300)]
llama : add gpt-oss (#15091)
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7)
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1)
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (#11)
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: slaren <redacted>
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <redacted>
change kvalues_mxfp4 table to match e2m1 (#6)
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (#13)
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: slaren <redacted>
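For readers unfamiliar with the attention-sink work referenced throughout the gpt-oss entry above, the numpy sketch below shows one common way a per-head sink can enter the softmax: a learned sink logit joins the normalization term but contributes no value. This is an illustrative assumption about the formulation, not code taken from the PR, and the function and argument names are hypothetical.

```python
import numpy as np

def softmax_with_sink(scores: np.ndarray, sink_logit: float) -> np.ndarray:
    # Subtract the running max (including the sink) for numerical stability.
    m = max(float(scores.max()), sink_logit)
    e = np.exp(scores - m)
    # The sink only enlarges the denominator, draining probability mass from
    # the real key/value positions; it has no value vector of its own.
    denom = e.sum() + np.exp(sink_logit - m)
    return e / denom   # rows no longer sum exactly to 1

print(softmax_with_sink(np.array([2.0, 1.0, 0.5]), sink_logit=4.0))
```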
Sigbjørn Skjæret [Tue, 5 Aug 2025 18:43:36 +0000 (20:43 +0200)]
chat : only remove double bos/eos if added (#15086)
* only remove double bos/eos if added
* fix tests
Georgi Gerganov [Tue, 5 Aug 2025 17:19:33 +0000 (20:19 +0300)]
readme : update hot topics (#15097)
Romain Biessy [Tue, 5 Aug 2025 16:39:55 +0000 (18:39 +0200)]
sycl: fix mul_mat selection (#15092)
Juk Armstrong [Tue, 5 Aug 2025 12:56:44 +0000 (13:56 +0100)]
Fix `glm4moe` bug (#15088)
Alex Wu [Tue, 5 Aug 2025 11:56:44 +0000 (19:56 +0800)]
webui: fix markdown table (#15081)
* webui: fix markdown table
* webui: fix table display with themes
compilade [Tue, 5 Aug 2025 09:27:45 +0000 (05:27 -0400)]
context : fix index overflow on huge outputs (#15080)
* context : fix overflow when re-ordering huge outputs
* context : fix logits size overflow for huge batches
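As a rough illustration of the overflow class fixed in the entry above: if logits offsets are computed in a signed 32-bit integer (an assumption here, as are the vocab size and output count below), large batches with many kept outputs can exceed INT32_MAX.

```python
INT32_MAX = 2**31 - 1

n_vocab   = 151_936    # illustrative vocab size
n_outputs = 20_000     # illustrative number of outputs kept in one batch

needed = n_outputs * n_vocab         # number of float logits to index
print(needed, needed > INT32_MAX)    # 3038720000 True -> would wrap in int32
```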
Diego Devesa [Mon, 4 Aug 2025 23:05:36 +0000 (16:05 -0700)]
llama : add --n-cpu-moe option (#15077)
* llama : add --n-cpu-moe option
Keeps the MoE weights of the first N layers in the CPU
compilade [Mon, 4 Aug 2025 21:26:52 +0000 (17:26 -0400)]
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
* imatrix : add warning when suffix is not .gguf for GGUF imatrix
* imatrix : only warn about suffix when output format is unspecified
Christian Kastner [Mon, 4 Aug 2025 19:29:14 +0000 (21:29 +0200)]
cmake: Add GGML_BACKEND_DIR option (#15074)
* cmake: Add GGML_BACKEND_DIR option
This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.
* Fix phrasing
Sigbjørn Skjæret [Mon, 4 Aug 2025 19:01:48 +0000 (21:01 +0200)]
gguf-py : add --chat-template-file to gguf_new_metadata (#15075)
Sam [Mon, 4 Aug 2025 18:29:25 +0000 (04:29 +1000)]
model: support GLM 4.5 family of models (#14939)
* model: Add GLM 4.5 (#14921)
Co-authored-by: Sigbjørn Skjæret <redacted>
* Merge in PR suggestions
Co-authored-by: Sigbjørn Skjæret <redacted>
* model: Add GLM 4.5 family of models (#14921)
1. Updated tensor_mapping.py with NextN tensor mappings
- Added proper tensor mappings for all NextN/MTP tensors in gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm
2. Added num_nextn_predict_layers configuration
- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method (a minimal sketch follows this entry)
- Updated conversion script to extract and write this parameter from HuggingFace config
3. Added FIM tokens for GLM4_MOE
- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
- <|code_prefix|> for FIM_PRE
- <|code_suffix|> for FIM_SUF
- <|code_middle|> for FIM_MID
4. Removed manual NextN tensor handling
- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system
* glm 4.5 update tensors names
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* model: glm 4.5 apply suggestions from code review
* Apply suggestions from code review
* patch broken chat template
* typings fix
* add TENSOR_SKIP flag
Co-authored-by: Diego Devesa <redacted>
* Update src/llama-model-loader.h
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Diego Devesa <redacted>
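A minimal sketch (not a complete conversion) of writing the num_nextn_predict_layers metadata with gguf-py, as referenced in item 2 of the entry above. The output filename, architecture string, and the value 1 are assumptions for illustration; add_num_nextn_predict_layers() is the method named in the notes.

```python
from gguf import GGUFWriter

writer = GGUFWriter("glm4-moe-example.gguf", arch="glm4moe")  # arch name assumed
writer.add_num_nextn_predict_layers(1)   # number of NextN/MTP layers recorded in GGUF
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.close()
```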
Sigbjørn Skjæret [Mon, 4 Aug 2025 16:11:02 +0000 (18:11 +0200)]
quantize : fix confusing error message if ftype is invalid (#15071)
Reese Levine [Mon, 4 Aug 2025 15:52:43 +0000 (08:52 -0700)]
ggml: WebGPU backend host improvements and style fixing (#14978)
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
Jeff Bolz [Mon, 4 Aug 2025 05:09:19 +0000 (00:09 -0500)]
vulkan: fix build when using glslang that does not support coopmat2 (#15062)
compilade [Sun, 3 Aug 2025 20:00:05 +0000 (16:00 -0400)]
imatrix : use GGUF by default (#14842)
* imatrix : use GGUF by default
* imatrix : use GGUF regardless of the output filename
The legacy format can only be produced with --output-format dat
compilade [Sun, 3 Aug 2025 19:49:13 +0000 (15:49 -0400)]
imatrix : fix 3d activation handling for hybrid and recurrent models (#14994)
* imatrix : use a single count for dense 3d tensors
* imatrix : fix 3d activations when model tensor is 2d
* imatrix : fix 3d tensor counts
compilade [Sun, 3 Aug 2025 19:43:07 +0000 (15:43 -0400)]
memory : handle kv_unified for hybrid models (#15050)
Csaba Kecskemeti [Sun, 3 Aug 2025 19:38:18 +0000 (12:38 -0700)]
vocab : JetBrains Mellum pre-tokenizer (#15045)
Gabriel Larson [Sun, 3 Aug 2025 14:56:25 +0000 (09:56 -0500)]
model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051)
* basic kimi-vl textmodel conversion
* check config["text_config"] for special tokens
Jeff Bolz [Sun, 3 Aug 2025 12:23:57 +0000 (07:23 -0500)]
vulkan: Use coopmat2 for conv2d (#14982)
lhez [Sat, 2 Aug 2025 17:51:18 +0000 (10:51 -0700)]
opencl: fix adreno compiler detection logic (#15029)
Johannes Gäßler [Sat, 2 Aug 2025 14:37:08 +0000 (16:37 +0200)]
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035)
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (#15025)
Daniel Bevenius [Sat, 2 Aug 2025 14:14:57 +0000 (16:14 +0200)]
kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040)
This commit removes the right alignment of the `n_stream` value in the
log message in the `llama_kv_cache_unified` constructor.
The motivation for this change is to enhance the readability of the log
message. Currently the output looks like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/ 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Notice that the `n_stream` value is right aligned, which makes it a
little harder to read.
With the change in this commit the output will look like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Georgi Gerganov [Sat, 2 Aug 2025 14:14:21 +0000 (17:14 +0300)]
llama : enable LLAMA_SET_ROWS=1 by default (#14959)
ggml-ci
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (#15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1
ggml-ci
* cont : fix cont types
ggml-ci
* cont : adopt variable names and comment from the other branch
Sigbjørn Skjæret [Sat, 2 Aug 2025 12:39:01 +0000 (14:39 +0200)]
ci : check that pre-tokenizer hashes are up-to-date (#15032)
* torch is not required for convert_hf_to_gguf_update
* add --check-missing parameter
* check that pre-tokenizer hashes are up-to-date
Douglas Hanley [Sat, 2 Aug 2025 10:51:02 +0000 (05:51 -0500)]
convert : fix Qwen3-Embedding pre-tokenizer hash (#15030)
Jhen-Jie Hong [Sat, 2 Aug 2025 10:04:48 +0000 (18:04 +0800)]
chat : fix multiple tool_calls on hermes-2-pro (#14962)
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (#14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
R0CKSTAR [Sat, 2 Aug 2025 09:20:40 +0000 (17:20 +0800)]
llama-bench: rename DB table name from test to llama_bench (#15003)
Signed-off-by: Xiaodong Ye <redacted>
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (#15015)
Douglas Hanley [Sat, 2 Aug 2025 08:44:50 +0000 (03:44 -0500)]
model : support Qwen3-Embedding (#15023)
Johannes Gäßler [Sat, 2 Aug 2025 08:12:41 +0000 (10:12 +0200)]
server: enable token array inputs for OAI API (#15001)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (#14933)
* vulkan: optimizations for direct convolution
- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.
* Three tile sizes for CONV_2D, and a heuristic to choose
* reallow collectives for pre-Turing
* make SHMEM_PAD a spec constant
* fixes for intel perf - no shmem padding, placeholder shader core count
* shader variants with/without unrolling
* 0cc4m's fixes for AMD perf
Co-authored-by: 0cc4m <redacted>
---------
Co-authored-by: 0cc4m <redacted>
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)
l-austenfeld [Fri, 1 Aug 2025 14:59:06 +0000 (16:59 +0200)]
vendor : update vendored copy of google/minja (#15011)
* vendor : update vendored copy of google/minja
Signed-off-by: Lennart Austenfeld <redacted>
* Re-remove trailing whitespace
Signed-off-by: Lennart Austenfeld <redacted>
* Remove another trailing whitespace
Signed-off-by: Lennart Austenfeld <redacted>
---------
Signed-off-by: Lennart Austenfeld <redacted>
stevenkuang [Fri, 1 Aug 2025 13:31:12 +0000 (21:31 +0800)]
model : add hunyuan dense (#14878)
* support hunyuan_v1_dense
Signed-off-by: stevenkuang <redacted>
* update hunyuan_moe to hunyuan_v1_moe
Signed-off-by: stevenkuang <redacted>
* fix rope alpha assert and bos token
Signed-off-by: stevenkuang <redacted>
* add blank line
Signed-off-by: stevenkuang <redacted>
* Revert "update hunyuan_moe to hunyuan_v1_moe"
This reverts commit aa973ca21913aba77f6e81a935270ef7be222e75.
* use hunyuan_dense instead of hunyuan_v1_dense
Signed-off-by: stevenkuang <redacted>
* fix hunyuan_moe chat template
Signed-off-by: stevenkuang <redacted>
* remove leftover code
Signed-off-by: stevenkuang <redacted>
* update hunyuan dense chat template
Signed-off-by: stevenkuang <redacted>
* fix hunyuan dense vocab and chat template
Signed-off-by: stevenkuang <redacted>
---------
Signed-off-by: stevenkuang <redacted>
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (#14984)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (#14373)
* Initial Q2_K Block Interleaving Implementation
* Addressed review comments and clean up of the code
* Post rebase fixes
* Initial CI/CD fixes
* Update declarations in arch-fallback.h
* Changes for GEMV Q2_K in arch-fallback.h
* Enable repacking only on AVX-512 machines
* Update comments in repack.cpp
* Address q2k comments
---------
Co-authored-by: Manogna-Sree <redacted>
Georgi Gerganov [Fri, 1 Aug 2025 03:38:12 +0000 (06:38 +0300)]
graph : fix equal_seq() check (#14986)
ggml-ci
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (#14591)
* docker: add cann build pipeline
* docker: add cann build pipeline
* docker: fix cann devops
* cann : fix multi card hccl
* Update ggml/src/ggml-cann/ggml-cann.cpp
Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
R0CKSTAR [Fri, 1 Aug 2025 00:47:27 +0000 (08:47 +0800)]
compare-commits.sh: support both llama-bench and test-backend-ops (#14392)
* compare-commits.sh: support both llama-bench and test-backend-ops
Signed-off-by: Xiaodong Ye <redacted>
* Speed up the build by specifying -j 12
Signed-off-by: Xiaodong Ye <redacted>
* Remove build_number from test-backend-ops db
Signed-off-by: Xiaodong Ye <redacted>
* Apply suggestion from @JohannesGaessler
Co-authored-by: Johannes Gäßler <redacted>
* Refine tool selection logic
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Ed Addario [Thu, 31 Jul 2025 19:32:18 +0000 (20:32 +0100)]
quantize : skip tensor override when in fallback mode (#14995)
Diego Devesa [Thu, 31 Jul 2025 18:15:41 +0000 (11:15 -0700)]
llama : add simple option to enable CPU for MoE weights (--cpu-moe) (#14992)
Aman Gupta [Thu, 31 Jul 2025 17:22:58 +0000 (01:22 +0800)]
Fix params bug in diffusion example (#14993)
Diego Devesa [Thu, 31 Jul 2025 16:11:34 +0000 (09:11 -0700)]
llama : allow other bufts when overriding to CPU, add --no-repack option (#14990)
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (#14899)
* vulkan: fix debug mode issues
* vulkan: remove broken check_results GGML_OP_SET_ROWS support
tc-mb [Thu, 31 Jul 2025 15:22:17 +0000 (23:22 +0800)]
mtmd : support MiniCPM-V 4.0 (#14983)
* support minicpm-v 4
* add md
* support MiniCPM-o 4.0
* add default location
* temp rm MiniCPM-o 4.0
* fix code
* fix "minicpmv_projector" default path
Csaba Kecskemeti [Thu, 31 Jul 2025 14:59:49 +0000 (07:59 -0700)]
MODEL_TENSOR.SSM_DT_NORM was defined twice (#14991)
* MODEL_TENSOR.SSM_DT_NORM was defined twice, and the second definition overwrote the Jamba model's layer name
* correct order
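To make the bug above concrete: in a Python dict literal, a repeated key silently overwrites the earlier entry, which is how the second SSM_DT_NORM definition clobbered the Jamba layer name. The keys and values below are simplified stand-ins, not the real gguf-py entries.

```python
mapping = {
    "SSM_DT_NORM": "jamba.dt_norm",   # first definition
    "SSM_DT_NORM": "other.dt_norm",   # duplicate key: this one silently wins
}
print(mapping)   # {'SSM_DT_NORM': 'other.dt_norm'}
```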
g2mt [Thu, 31 Jul 2025 12:25:23 +0000 (05:25 -0700)]
server : implement universal assisted decoding (#12635)
* llama-server : implement universal assisted decoding
* Erase prompt tail for kv-cache
* set vocab_dft_compatible in common_speculative
* rename ctx_main to ctx_tgt
* move vocab_dft_compatible to spec struct
* clear mem_dft, remove mem
* detokenize id_last for incompatible models
* update comment
* add --spec-replace flag
* accept special tokens when translating between draft/main models
* Escape spec-replace
* clamp draft result size to params.n_draft
* fix comment
* clean up code
* restore old example
* log common_speculative_are_compatible in speculative example
* fix
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
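The core idea in the universal assisted decoding entry above is that a draft model whose vocabulary differs from the target's can still speculate tokens if they are translated through text. The sketch below is a conceptual illustration under that assumption; the real server code works on llama.cpp vocab objects, and the callables here are hypothetical stand-ins.

```python
def translate_draft_tokens(draft_tokens, draft_detokenize, target_tokenize):
    """Round-trip draft-model tokens through text so the target model can
    verify them, even when the two vocabularies are not identical."""
    text = draft_detokenize(draft_tokens)
    return target_tokenize(text)

# Usage (hypothetical tokenizer objects):
# candidate_ids = translate_draft_tokens(draft_ids, draft.detokenize, target.tokenize)
```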
Dongliang Wei [Thu, 31 Jul 2025 12:12:20 +0000 (20:12 +0800)]
llama : merge build_moe_ffn_from_probs function into build_moe_ffn (#14968)
Lukas Straub [Thu, 31 Jul 2025 12:08:23 +0000 (14:08 +0200)]
server : add openai-style logit_bias support (#14946)
Signed-off-by: Lukas Straub <redacted>
Aman Gupta [Thu, 31 Jul 2025 11:49:09 +0000 (19:49 +0800)]
Add LLaDA 8b Diffusion model (#14771)
* Add support for Llada-8b: diffusion model
* Add README
* Fix README and convert_hf_to_gguf
* convert_hf_to_gguf.py: address review comments
* Make everything in a single example
* Remove model-specific sampling
* Remove unused argmax
* Remove braced initializers, improve README.md a bit
* Add diffusion specific gguf params in set_vocab, remove setting rope_theta and rms_norm_eps
* Remove adding the mask token
* Move add_add_bos_token to set_vocab
* use add_bool in gguf_writer.py
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (#14985)
* CANN: Improve loading efficiency after converting weights to NZ format.
* CANN: fix typo
compilade [Thu, 31 Jul 2025 05:02:46 +0000 (01:02 -0400)]
graph : reduce splits for recurrent and hybrid models (#14825)
* graph : avoid creating redundant s_copy views
* graph : comment the s_copy views
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (#14809)
Ed Addario [Wed, 30 Jul 2025 19:11:56 +0000 (20:11 +0100)]
quantize : fix using combined imatrix GGUFs (multiple datasets) (#14973)
Daniel Bevenius [Wed, 30 Jul 2025 16:07:11 +0000 (18:07 +0200)]
server : add support for `embd_normalize` parameter (#14964)
This commit adds support for the `embd_normalize` parameter in the
server code.
The motivation for this is that currently if the server is started with
a pooling type that is not `none`, then Euclidean/L2 normalization will
be the normalization method used for embeddings. However, this is not
always the desired behavior; users may want to use a different
normalization (or none), and this commit allows that.
Example usage:
```console
curl --request POST \
--url http://localhost:8080/embedding \
--header "Content-Type: application/json" \
--data '{"input": "Hello world today", "embd_normalize": -1}
```
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (#14949)
Georgi Gerganov [Wed, 30 Jul 2025 13:03:13 +0000 (16:03 +0300)]
sync : ggml
ggml-ci
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (ggml/1316)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (ggml/1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.