git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
compilade [Sun, 3 Aug 2025 19:43:07 +0000 (15:43 -0400)]
memory : handle kv_unified for hybrid models (#15050)
Csaba Kecskemeti [Sun, 3 Aug 2025 19:38:18 +0000 (12:38 -0700)]
vocab : JetBrains Mellum pre-tokenizer (#15045)
Gabriel Larson [Sun, 3 Aug 2025 14:56:25 +0000 (09:56 -0500)]
model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051)
* basic kimi-vl textmodel conversion
* check config["text_config"] for special tokens
Jeff Bolz [Sun, 3 Aug 2025 12:23:57 +0000 (07:23 -0500)]
vulkan: Use coopmat2 for conv2d (#14982)
lhez [Sat, 2 Aug 2025 17:51:18 +0000 (10:51 -0700)]
opencl: fix adreno compiler detection logic (#15029)
Johannes Gäßler [Sat, 2 Aug 2025 14:37:08 +0000 (16:37 +0200)]
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035)
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (#15025)
Daniel Bevenius [Sat, 2 Aug 2025 14:14:57 +0000 (16:14 +0200)]
kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040)
This commit removes the right alignment of the `n_stream` value in the
log message in the `llama_kv_cache_unified` constructor.
The motivation for this change is to enhance the readability of the log
message. Currently the output looks like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/ 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Notice that the `n_stream` value is right-aligned, which makes it a
little harder to read.
With the change in this commit, the output will look like:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Georgi Gerganov [Sat, 2 Aug 2025 14:14:21 +0000 (17:14 +0300)]
llama : enable LLAMA_SET_ROWS=1 by default (#14959)
ggml-ci
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (#15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1
ggml-ci
* cont : fix cont types
ggml-ci
* cont : adopt variable names and comment from the other branch
Sigbjørn Skjæret [Sat, 2 Aug 2025 12:39:01 +0000 (14:39 +0200)]
ci : check that pre-tokenizer hashes are up-to-date (#15032)
* torch is not required for convert_hf_to_gguf_update
* add --check-missing parameter
* check that pre-tokenizer hashes are up-to-date
Douglas Hanley [Sat, 2 Aug 2025 10:51:02 +0000 (05:51 -0500)]
convert : fix Qwen3-Embedding pre-tokenizer hash (#15030)
Jhen-Jie Hong [Sat, 2 Aug 2025 10:04:48 +0000 (18:04 +0800)]
chat : fix multiple tool_calls on hermes-2-pro (#14962)
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (#14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
R0CKSTAR [Sat, 2 Aug 2025 09:20:40 +0000 (17:20 +0800)]
llama-bench: rename DB table name from test to llama_bench (#15003)
Signed-off-by: Xiaodong Ye <redacted>
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (#15015)
Douglas Hanley [Sat, 2 Aug 2025 08:44:50 +0000 (03:44 -0500)]
model : support Qwen3-Embedding (#15023)
Johannes Gäßler [Sat, 2 Aug 2025 08:12:41 +0000 (10:12 +0200)]
server: enable token array inputs for OAI API (#15001)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (#14933)
* vulkan: optimizations for direct convolution
- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.
* Three tiles sizes for CONV_2D, and a heuristic to choose
* reallow collectives for pre-Turing
* make SHMEM_PAD a spec constant
* fixes for intel perf - no shmem padding, placeholder shader core count
* shader variants with/without unrolling
* 0cc4m's fixes for AMD perf
Co-authored-by: 0cc4m <redacted>
---------
Co-authored-by: 0cc4m <redacted>
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)
l-austenfeld [Fri, 1 Aug 2025 14:59:06 +0000 (16:59 +0200)]
vendor : update vendored copy of google/minja (#15011)
* vendor : update vendored copy of google/minja
Signed-off-by: Lennart Austenfeld <redacted>
* Re-remove trailing whitespace
Signed-off-by: Lennart Austenfeld <redacted>
* Remove another trailing whitespace
Signed-off-by: Lennart Austenfeld <redacted>
---------
Signed-off-by: Lennart Austenfeld <redacted>
stevenkuang [Fri, 1 Aug 2025 13:31:12 +0000 (21:31 +0800)]
model : add hunyuan dense (#14878)
* support hunyuan_v1_dense
Signed-off-by: stevenkuang <redacted>
* update hunyuan_moe to hunyuan_v1_moe
Signed-off-by: stevenkuang <redacted>
* fix rope alpha assert and bos token
Signed-off-by: stevenkuang <redacted>
* add blank line
Signed-off-by: stevenkuang <redacted>
* Revert "update hunyuan_moe to hunyuan_v1_moe"
This reverts commit aa973ca21913aba77f6e81a935270ef7be222e75.
* use hunyuan_dense instead of hunyuan_v1_dense
Signed-off-by: stevenkuang <redacted>
* fix hunyuan_moe chat template
Signed-off-by: stevenkuang <redacted>
* remove leftover code
Signed-off-by: stevenkuang <redacted>
* update hunyuan dense chat template
Signed-off-by: stevenkuang <redacted>
* fix hunyuan dense vocab and chat template
Signed-off-by: stevenkuang <redacted>
---------
Signed-off-by: stevenkuang <redacted>
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (#14984)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (#14373)
* Initial Q2_K Block Interleaving Implementation
* Addressed review comments and clean up of the code
* Post rebase fixes
* Initial CI/CD fixes
* Update declarations in arch-fallback.h
* Changes for GEMV Q2_K in arch-fallback.h
* Enable repacking only on AVX-512 machines
* Update comments in repack.cpp
* Address q2k comments
---------
Co-authored-by: Manogna-Sree <redacted>
Georgi Gerganov [Fri, 1 Aug 2025 03:38:12 +0000 (06:38 +0300)]
graph : fix equal_seq() check (#14986)
ggml-ci
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (#14591)
* docker: add cann build pipeline
* docker: add cann build pipeline
* docker: fix cann devops
* cann : fix multi card hccl
* Update ggml/src/ggml-cann/ggml-cann.cpp
Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
R0CKSTAR [Fri, 1 Aug 2025 00:47:27 +0000 (08:47 +0800)]
compare-commits.sh: support both llama-bench and test-backend-ops (#14392)
* compare-commits.sh: support both llama-bench and test-backend-ops
Signed-off-by: Xiaodong Ye <redacted>
* Speed up the build by specifying -j 12
Signed-off-by: Xiaodong Ye <redacted>
* Remove build_number from test-backend-ops db
Signed-off-by: Xiaodong Ye <redacted>
* Apply suggestion from @JohannesGaessler
Co-authored-by: Johannes Gäßler <redacted>
* Refine tool selection logic
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Ed Addario [Thu, 31 Jul 2025 19:32:18 +0000 (20:32 +0100)]
quantize : skip tensor override when in fallback mode (#14995)
Diego Devesa [Thu, 31 Jul 2025 18:15:41 +0000 (11:15 -0700)]
llama : add simple option to enable CPU for MoE weights (--cpu-moe) (#14992)
Aman Gupta [Thu, 31 Jul 2025 17:22:58 +0000 (01:22 +0800)]
Fix params bug in diffusion example (#14993)
Diego Devesa [Thu, 31 Jul 2025 16:11:34 +0000 (09:11 -0700)]
llama : allow other bufts when overriding to CPU, add --no-repack option (#14990)
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (#14899)
* vulkan: fix debug mode issues
* vulkan: remove broken check_results GGML_OP_SET_ROWS support
tc-mb [Thu, 31 Jul 2025 15:22:17 +0000 (23:22 +0800)]
mtmd : support MiniCPM-V 4.0 (#14983)
* support minicpm-v 4
* add md
* support MiniCPM-o 4.0
* add default location
* temp rm MiniCPM-o 4.0
* fix code
* fix "minicpmv_projector" default path
Csaba Kecskemeti [Thu, 31 Jul 2025 14:59:49 +0000 (07:59 -0700)]
MODEL_TENSOR.SSM_DT_NORM was defined twice (#14991)
* MODEL_TENSOR.SSM_DT_NORM was defined twice, and the second definition overwrote the Jamba model's layer name
* correct order
g2mt [Thu, 31 Jul 2025 12:25:23 +0000 (05:25 -0700)]
server : implement universal assisted decoding (#12635)
* llama-server : implement universal assisted decoding
* Erase prompt tail for kv-cache
* set vocab_dft_compatible in common_speculative
* rename ctx_main to ctx_tgt
* move vocab_dft_compatible to spec struct
* clear mem_dft, remove mem
* detokenize id_last for incompatible models
* update comment
* add --spec-replace flag
* accept special tokens when translating between draft/main models
* Escape spec-replace
* clamp draft result to size to params.n_draft
* fix comment
* clean up code
* restore old example
* log common_speculative_are_compatible in speculative example
* fix
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Dongliang Wei [Thu, 31 Jul 2025 12:12:20 +0000 (20:12 +0800)]
llama : merge build_moe_ffn_from_probs function into build_moe_ffn (#14968)
Lukas Straub [Thu, 31 Jul 2025 12:08:23 +0000 (14:08 +0200)]
server : add openai-style logit_bias support (#14946)
Signed-off-by: Lukas Straub <redacted>
Aman Gupta [Thu, 31 Jul 2025 11:49:09 +0000 (19:49 +0800)]
Add LLaDA 8b Diffusion model (#14771)
* Add support for Llada-8b: diffusion model
* Add README
* Fix README and convert_hf_to_gguf
* convert_hf_to_gguf.py: address review comments
* Make everything in a single example
* Remove model-specific sampling
* Remove unused argmax
* Remove braced initializers, improve README.md a bit
* Add diffusion specific gguf params in set_vocab, remove setting rope_theta and rms_norm_eps
* Remove adding the mask token
* Move add_add_bos_token to set_vocab
* use add_bool in gguf_writer.py
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (#14985)
* CANN: Improve loading efficiency after converting weights to NZ format.
* CANN: fix typo
compilade [Thu, 31 Jul 2025 05:02:46 +0000 (01:02 -0400)]
graph : reduce splits for recurrent and hybrid models (#14825)
* graph : avoid creating redundant s_copy views
* graph : comment the s_copy views
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (#14809)
Ed Addario [Wed, 30 Jul 2025 19:11:56 +0000 (20:11 +0100)]
quantize : fix using combined imatrix GGUFs (multiple datasets) (#14973)
Daniel Bevenius [Wed, 30 Jul 2025 16:07:11 +0000 (18:07 +0200)]
server : add support for `embd_normalize` parameter (#14964)
This commit adds support for the `embd_normalize` parameter in the
server code.
The motivation for this is that currently if the server is started with
a pooling type that is not `none`, then Euclidean/L2 normalization will
be the normalization method used for embeddings. However, this is not
always the desired behavior: users may want a different normalization
method (or none), and this commit allows that.
Example usage:
```console
curl --request POST \
--url http://localhost:8080/embedding \
--header "Content-Type: application/json" \
--data '{"input": "Hello world today", "embd_normalize": -1}'
```
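A rough sketch of what the parameter selects, assuming the usual llama.cpp convention that a negative value disables normalization and a positive value `p` selects the p-norm (with p=2 being the Euclidean/L2 default mentioned above). The function name and the omission of the max-absolute case are illustrative assumptions:

```python
def normalize_embedding(vec, embd_normalize=2):
    # Hedged sketch: negative -> no normalization; p > 0 -> p-norm.
    # (The sketch assumes embd_normalize is -1 or a positive integer.)
    if embd_normalize < 0:
        return list(vec)
    p = embd_normalize
    norm = sum(abs(x) ** p for x in vec) ** (1.0 / p)
    return [x / norm for x in vec] if norm else list(vec)
```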
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (#14949)
Georgi Gerganov [Wed, 30 Jul 2025 13:03:13 +0000 (16:03 +0300)]
sync : ggml
ggml-ci
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (ggml/1316)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (ggml/1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64-bit builds but a uint64_t on 32-bit builds.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
Johannes Gäßler [Wed, 30 Jul 2025 13:46:13 +0000 (15:46 +0200)]
CUDA: skip masked KV slices for all FA kernels (#14924)
Georgi Gerganov [Wed, 30 Jul 2025 12:12:02 +0000 (15:12 +0300)]
tests : update for LLAMA_SET_ROWS=1 (#14961)
* test-thread-safety : each context uses a single sequence
* embedding : handle --parallel argument
ggml-ci
* save-load : handle -np 1
ggml-ci
* thread-safety : avoid overriding threads, reduce test case arg
ggml-ci
Georgi Gerganov [Wed, 30 Jul 2025 10:52:11 +0000 (13:52 +0300)]
graph : fix stack-use-after-return (#14960)
ggml-ci
Douglas Hanley [Wed, 30 Jul 2025 05:25:05 +0000 (00:25 -0500)]
embeddings: fix extraction of CLS pooling results (#14927)
* embeddings: fix extraction of CLS pooling results
* merge RANK pooling into CLS case for inputs
Xinpeng Dou [Wed, 30 Jul 2025 00:39:24 +0000 (08:39 +0800)]
CANN: update ops docs (#14935)
* CANN:add ops docs
* CANN: update ops docs
uvos [Tue, 29 Jul 2025 18:23:04 +0000 (20:23 +0200)]
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (#14945)
uvos [Tue, 29 Jul 2025 15:44:30 +0000 (17:44 +0200)]
HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path (#14930)
This is useful for testing for regressions on GCN with CDNA hardware.
With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and a limited set of ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
uvos [Tue, 29 Jul 2025 15:43:43 +0000 (17:43 +0200)]
HIP: Ignore unsupported unroll transformation in fattn-vec (#14931)
LLVM with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning.
kallewoof [Tue, 29 Jul 2025 15:05:38 +0000 (00:05 +0900)]
common : avoid logging partial messages (which can contain broken UTF-8 sequences) (#14937)
* bug-fix: don't attempt to log partially parsed messages, to avoid crashes due to unfinished UTF-8 sequences
hipudding [Tue, 29 Jul 2025 14:36:43 +0000 (22:36 +0800)]
CANN: Add ggml_set_rows (#14943)
Sigbjørn Skjæret [Tue, 29 Jul 2025 12:22:03 +0000 (14:22 +0200)]
cuda : add softcap fusion (#14907)
Johannes Gäßler [Tue, 29 Jul 2025 08:40:50 +0000 (10:40 +0200)]
server-bench: make seed choice configurable (#14929)
* server-bench: make seed choice configurable
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* fix error formatting
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Aman Gupta [Tue, 29 Jul 2025 06:45:18 +0000 (14:45 +0800)]
CUDA: add roll (#14919)
* CUDA: add roll
* Make everything const, use __restrict__
lhez [Mon, 28 Jul 2025 16:50:17 +0000 (09:50 -0700)]
opencl : add ops docs (#14910)
Leonard Mosescu [Mon, 28 Jul 2025 16:04:27 +0000 (09:04 -0700)]
test-backend-ops : extend test case filtering (#14865)
* Extend test case filtering
1. Allow passing multiple comma-separated ops to test-backend-ops. This can be convenient when working on a set of ops that you want to test together (without having to run every single op). For example:
`test-backend-ops.exe test -o "ADD,RMS_NORM,ROPE,SILU,SOFT_MAX"`
2. Support the full test-case variation string in addition to basic op names. This makes it easy to select a single variation, either for testing or for benchmarking. It can be particularly useful for profiling a particular variation (e.g. a CUDA kernel), for example:
`test-backend-ops.exe perf -b CUDA0 -o "MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3],v=2)"`
These two can be combined. As with the current `-o`, this change doesn't try to detect/report an error if a filter doesn't name existing ops (e.g. misspelled).
* Updating the usage help text
* Update tests/test-backend-ops.cpp
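Splitting the comma-separated filter is slightly subtle because full variation strings contain commas themselves. A paren-aware split handles both forms; this is a sketch of the idea, not the actual test-backend-ops implementation:

```python
def split_filter(filter_str: str) -> list[str]:
    # Split on commas that are not inside parentheses, so that
    # "ADD,MUL_MAT(type_a=f16,type_b=f32)" yields two entries.
    parts, cur, depth = [], [], 0
    for ch in filter_str:
        if ch == "," and depth == 0:
            parts.append("".join(cur))
            cur = []
        else:
            depth += ch == "("
            depth -= ch == ")"
            cur.append(ch)
    parts.append("".join(cur))
    return [p for p in parts if p]
```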
Radoslav Gerganov [Mon, 28 Jul 2025 15:59:04 +0000 (18:59 +0300)]
llama-bench : use local GPUs along with RPC servers (#14917)
Currently if RPC servers are specified with '--rpc' and there is a local
GPU available (e.g. CUDA), the benchmark will be performed only on the
RPC device(s) but the backend result column will say "CUDA,RPC" which is
incorrect. This patch adds all local GPU devices and makes
llama-bench consistent with llama-cli.
xctan [Mon, 28 Jul 2025 15:40:24 +0000 (23:40 +0800)]
ggml-cpu : deduplicate scalar implementations (#14897)
* remove redundant code in riscv
* remove redundant code in arm
* remove redundant code in loongarch
* remove redundant code in ppc
* remove redundant code in s390
* remove redundant code in wasm
* remove redundant code in x86
* remove fallback headers
* fix x86 ggml_vec_dot_q8_0_q8_0
Akarshan Biswas [Mon, 28 Jul 2025 15:02:15 +0000 (20:32 +0530)]
SYCL: Add set_rows support for quantized types (#14883)
* SYCL: Add set_rows support for quantized types
This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.
The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.
This addresses part of the TODOs mentioned in the code.
* Use get_global_linear_id() instead
ggml-ci
* Fix formatting
ggml-ci
* Use const for ne11 and size_t variables in set_rows_sycl_q
ggml-ci
* Increase block size for q kernel to 256
ggml-ci
* Cleanup imports
* Add float.h to cpy.hpp
Xuan-Son Nguyen [Mon, 28 Jul 2025 13:01:48 +0000 (15:01 +0200)]
mtmd : add support for Voxtral (#14862)
* mtmd : add support for Voxtral
* clean up
* fix python requirements
* add [BEGIN_AUDIO] token
* also support Devstral conversion
* add docs and tests
* fix regression for ultravox
* minor coding style improvement
* correct project activation fn
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Johannes Gäßler [Mon, 28 Jul 2025 12:30:22 +0000 (14:30 +0200)]
CUDA: fix pointer incrementation in FA (#14916)
Dongliang Wei [Mon, 28 Jul 2025 11:47:00 +0000 (19:47 +0800)]
model : add support for SmallThinker series (#14898)
* support smallthinker
* support 20b softmax, 4b no sliding window
* new build_moe_ffn_from_probs, and can run 4b
* fix 4b rope bug
* fix python type check
* remove is_moe judge
* remove set_dense_start_swa_pattern function and modify set_swa_pattern function
* trim trailing whitespace
* remove get_vocab_base of SmallThinkerModel in convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* better whitespace
Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* use GGML_ASSERT for expert count validation
Co-authored-by: Sigbjørn Skjæret <redacted>
* Improve null pointer check for probs
Co-authored-by: Sigbjørn Skjæret <redacted>
* use template parameter for SWA attention logic
* better whitespace
Co-authored-by: Georgi Gerganov <redacted>
* move the creation of inp_out_ids before the layer loop
* remove redundant judge for probs
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Alberto Cabrera Pérez [Mon, 28 Jul 2025 10:05:53 +0000 (11:05 +0100)]
sycl: refactor quantization to q8_1 (#14815)
* sycl: quantization to q8_1 refactor
* Refactored src1 copy logic in op_mul_mat
Georgi Gerganov [Mon, 28 Jul 2025 08:01:03 +0000 (11:01 +0300)]
ops : update BLAS (#14914)
Georgi Gerganov [Mon, 28 Jul 2025 05:22:56 +0000 (08:22 +0300)]
ops : update Metal (#14912)
Georgi Gerganov [Mon, 28 Jul 2025 05:14:20 +0000 (08:14 +0300)]
sync : ggml
Kai Pastor [Thu, 24 Jul 2025 17:58:02 +0000 (19:58 +0200)]
cmake : Indent ggml-config.cmake (ggml/1310)
Ed Addario [Sun, 27 Jul 2025 21:31:11 +0000 (22:31 +0100)]
quantize : update README.md (#14905)
* Update README.md
* Fix trailing whitespace
* Update README.md
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Ruben Ortlam [Sun, 27 Jul 2025 13:33:08 +0000 (15:33 +0200)]
vulkan: add ops docs (#14900)
Akarshan Biswas [Sun, 27 Jul 2025 12:22:58 +0000 (17:52 +0530)]
SYCL: add ops doc (#14901)
Daniel Bevenius [Sun, 27 Jul 2025 10:10:51 +0000 (12:10 +0200)]
llama : clarify comment about pp and tg graphs [no ci] (#14895)
* llama : clarify comment about pp and tg graphs [no ci]
This commit clarifies the comment in `llama-context.cpp` regarding the
prefill prompt (pp), and token generation (tg) graphs.
The motivation for this is that I've struggled to remember these and had
to look them up more than once, so I thought it would be helpful to add
a comment that makes it clear what these stand for.
* squash! llama : clarify comment about pp and tg graphs [no ci]
Change "pp" to "prompt processing".
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (#14872)
* add f16 to conv_2d testing
* weaken conv2d test error threshold
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (#14860)
Gabriel Larson [Sun, 27 Jul 2025 08:18:37 +0000 (03:18 -0500)]
model : make rope_yarn_log_mul optional for deepseek2 (#14896)
* make rope_yarn_log_mul optional for deepseek2
* default rope_yarn_log_mul = 0.0f
Shunta Saito [Sun, 27 Jul 2025 07:38:44 +0000 (16:38 +0900)]
llama : fix kq_scale for the attention layers of PLaMo2 (#14892)
* Fix dimensions for expand
* Change dimensions to copy states to cache
* Fix the default value for plamo2 conversion
* Fix scale given to build_attn
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Aman Gupta [Sun, 27 Jul 2025 01:36:43 +0000 (09:36 +0800)]
Docs: add instructions for adding backends (#14889)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624)
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as they fail to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3 due to issues in the AMD-provided BLAS libraries.
This commit also improves the awareness of MMQ towards different warp sizes and, as a side effect, improves the performance of all quant formats besides q4_0 and q4_1, which regress slightly, on GCN GPUs.
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (#14884)
Implement REGLU, GEGLU, SWIGLU ops according to #14158
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (#14869)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Fri, 25 Jul 2025 17:09:03 +0000 (01:09 +0800)]
ggml-cpu : disable GGML_NNPA by default due to instability (#14880)
* docs: update s390x document for sentencepiece
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)
* docs: update huggingface links + reword
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)
* ggml-cpu: disable ggml-nnpa compile flag by default
fixes #14877
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)
* docs: update s390x build docs to reflect nnpa disable
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)
---------
Signed-off-by: Aaron Teo <redacted>
Gabe Goodhart [Fri, 25 Jul 2025 16:47:39 +0000 (10:47 -0600)]
metal: SSM_SCAN performance (#14743)
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
computation
When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.
Signed-off-by: Gabe Goodhart <redacted>
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <redacted>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallelize of d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the size of the parallelism here is
not worth the cost in complexity or overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: Georgi Gerganov <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
lhez [Fri, 25 Jul 2025 15:12:13 +0000 (08:12 -0700)]
opencl: add fused `rms_norm_mul` (#14841)
* opencl: add fused `rms_norm` + `mul`
* opencl: improve workgroup size for `rms_norm_mul`
wooksong [Fri, 25 Jul 2025 14:25:05 +0000 (23:25 +0900)]
docs : update HOWTO‑add‑model.md for ModelBase and new model classes (#14874)
This patch updates the example in docs/development/HOWTO-add-model.md to
reflect recent changes after `TextModel` and `MmprojModel` were introduced.
It replaces the outdated `Model` base class with `TextModel` or `MmprojModel`
and updates the registration example accordingly.
Signed-off-by: Wook Song <redacted>
Oliver Simons [Fri, 25 Jul 2025 11:29:57 +0000 (13:29 +0200)]
ggml : remove invalid portPos specifiers from dot files (#14838)
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally that it falls back to the default portPos specifier if
an invalid portPos is specified. As a consequence, we can remove the
associated code.
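The valid compass points quoted from the graphviz documentation are easy to check mechanically; a small sketch of such a validation (the set contents come from the quote above, the function name is illustrative):

```python
# Valid portPos compass points per the graphviz documentation quoted above.
VALID_COMPASS_POINTS = {"n", "ne", "e", "se", "s", "sw", "w", "nw", "c", "_"}

def is_valid_compass_point(p: str) -> bool:
    return p in VALID_COMPASS_POINTS
```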
Georgi Gerganov [Fri, 25 Jul 2025 11:28:06 +0000 (14:28 +0300)]
context : restore preemptive sched reset when LLAMA_SET_ROWS=0 (#14870)
ggml-ci
kiwi [Fri, 25 Jul 2025 11:08:04 +0000 (19:08 +0800)]
mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (#14503)
* [fix] Fix 32-bit narrowing issue in export-lora and mtmd clip
* Update export-lora.cpp
* Update clip.cpp
* Update export-lora.cpp
* format: use space to replace tab
Chris Rohlf [Fri, 25 Jul 2025 10:17:02 +0000 (06:17 -0400)]
rpc : check for null buffers in get/set/copy tensor endpoints (#14868)
Diego Devesa [Fri, 25 Jul 2025 08:07:26 +0000 (01:07 -0700)]
sched : fix multiple evaluations of the same graph with pipeline parallelism (#14855)
ggml-ci
R0CKSTAR [Thu, 24 Jul 2025 19:05:37 +0000 (03:05 +0800)]
musa: upgrade musa sdk to rc4.2.0 (#14498)
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <redacted>
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <redacted>
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <redacted>
* minor changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Georgi Gerganov [Thu, 24 Jul 2025 15:30:33 +0000 (18:30 +0300)]
sync : ggml
ggml-ci
Kai Pastor [Tue, 22 Jul 2025 18:13:21 +0000 (20:13 +0200)]
cmake : fix usage issues (ggml/1257)
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.
* CMake config: Wire OpenMP dependency
Daniel Bevenius [Mon, 21 Jul 2025 13:53:12 +0000 (15:53 +0200)]
ggml-cpu : remove stdlib include from repack.cpp (ggml/1276)
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
Georgi Gerganov [Thu, 24 Jul 2025 13:31:48 +0000 (16:31 +0300)]
context : perform output reorder lazily upon access after sync (#14853)
* context : perform output reorder lazily upon access after sync
ggml-ci
* cont : add TODO
Xuan-Son Nguyen [Thu, 24 Jul 2025 11:59:56 +0000 (13:59 +0200)]
chat : fix kimi-k2 chat template (#14852)