git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Ryan Mangeno [Thu, 10 Jul 2025 17:41:00 +0000 (13:41 -0400)]
Smoldocling support (#14597)
* support for smoldocling
* fixed merge conflicts
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Gabe Goodhart <redacted>
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Gabe Goodhart <redacted>
* merge conflicts
* pre tokenizer merge fix
* convert : fix smollm3 jinja template (#14586)
Signed-off-by: ryan-mangeno <redacted>
* support for smoldocling
Signed-off-by: ryan-mangeno <redacted>
* fixed merge conflicts
Signed-off-by: ryan-mangeno <redacted>
* Update src/llama-vocab.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.h
Co-authored-by: Sigbjørn Skjæret <redacted>
* safetensors tensor mapping
Signed-off-by: ryan-mangeno <redacted>
* added back accidental removal of clean spaces for hunyuan
* Update src/llama-vocab.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* updated hash and reordered model list
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-vocab.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update include/llama.h
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf_update.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-vocab.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* removed old tensor name
* removed tensor mappings -> handled by smolvlm
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Signed-off-by: ryan-mangeno <redacted>
Co-authored-by: Gabe Goodhart <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: compilade <redacted>
Aman Gupta [Thu, 10 Jul 2025 15:29:01 +0000 (23:29 +0800)]
Docs: script to auto-generate ggml operations docs (#14598)
* Docs: script to auto-generate ggml operations docs
* Review: formatting changes + change github action
* Use built-in types instead of typing
* docs : add BLAS and Metal ops
---------
Co-authored-by: Georgi Gerganov <redacted>
Eric Zhang [Thu, 10 Jul 2025 12:29:05 +0000 (20:29 +0800)]
cmake : do not search for curl libraries by ourselves (#14613)
* cmake : do not search for curl libraries by ourselves
* run : do not search for curl libraries by ourselves
Akarshan Biswas [Thu, 10 Jul 2025 08:29:38 +0000 (13:59 +0530)]
SYCL: Initial set_rows kernel implementation (#14562)
* SYCL: Initial set_rows kernel implementation
* Revert max_threads to 256
* Refactor set_rows and address review comments
* Deduplicate conversion function
* Remove guard before kernel launch and refactor
* Fix and add back SFINAE
Xuan-Son Nguyen [Thu, 10 Jul 2025 07:00:20 +0000 (09:00 +0200)]
llama : minor coding style fix for smollm3 (#14605)
Eric Zhang [Thu, 10 Jul 2025 05:19:37 +0000 (13:19 +0800)]
cmake : bump llguidance version to v1.0.1 (#14609)
Eric Zhang [Thu, 10 Jul 2025 05:19:13 +0000 (13:19 +0800)]
cmake : llguidance build parser library only (#14608)
compilade [Thu, 10 Jul 2025 03:54:38 +0000 (23:54 -0400)]
cuda : support Falcon-H1 state size for SSM_SCAN (#14602)
Xuan-Son Nguyen [Wed, 9 Jul 2025 21:09:28 +0000 (23:09 +0200)]
llama : remove llm_graph_input_one (#14603)
compilade [Wed, 9 Jul 2025 18:59:57 +0000 (14:59 -0400)]
llama : support Jamba hybrid Transformer-Mamba models (#7531)
* wip: llama : separate recurrent states from the KV cache
This will be necessary to support Jamba
(and other recurrent models mixed with Attention).
Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states.
* llama : use std::find for seq_nodes in llama_rs_cache
* llama : state checkpoints for recurrent models
* llama : correctly handle more edge cases for the rs cache
* llama : rename many llama_kv_cache_* functions
* llama : remove useless return value for some llama_cache_* functions
* llama : rethink recurrent state cell counts
* llama : begin work on support for variable GQA
This will also be useful for Jamba if we consider the Mamba layers
to have 0 KV heads.
* llama : gracefully fail when not finding hybrid slot
* llama : support Jamba
* llama : fix BERT inference without KV cache
* convert-hf : check for unprocessed Jamba experts
* convert-hf : support Mini-Jamba conversion
* llama : fix Jamba quantization sanity checks
* llama : sequence-length-aware batch splitting
* llama : use equal-sequence-length sub-batches for recurrent models
* ggml : simplify SSM-related operators
* llama : make recurrent state slot allocation contiguous
* llama : adapt internal uses of batches to llama_ubatch
* llama : fix batch split output count for embeddings
* llama : minimize swaps when reordering logits
This reduces overhead when running hellaswag
on thousands of sequences with very small (100k-parameter) Mamba models.
* llama : fix edge case finding batch seq_id of split recurrent cell
This otherwise was a problem when running the HellaSwag benchmark
with small batch sizes, making it crash.
* llama : avoid copies for simple batch splits
* ggml : make ggml_ssm_scan not modify its source tensors
* llama : fix shared recurrent tail cell count for small ubatch sizes
Otherwise it was impossible to run the 'parallel' example with '-ub 1'
with a Mamba or Jamba model.
* llama : fix .base() compilation error on Windows
* llama : allow doing the equivalent of SSM_CONV with SUM_ROWS and MUL
* ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors
The implementation already supported it,
and this makes Mamba's conv step slightly faster.
* mamba : fix non-contiguous usage of ggml_silu
* llama : session saving and reloading for hybrid models
* convert_hf : fix Jamba conversion
* llama : fix mixed signedness comparison
* llama : use unused n_embd_k_gqa in k_shift
This also slightly reduces the diff from the master branch
* llama : begin renaming llama_past back to llama_kv_cache
* llama : remove implicit recurrent state rollbacks
* llama : partially apply clang-format style
* convert : fix jamba conv1d shape squeezing
* graph : add back hybrid memory graph input
But this time it contains the sub-cache graph inputs.
This *should* make it easier to handle updating the inputs
when caching the graph (eventually).
* model : add Jamba to Mamba-specific hparams printing
* jamba : remove redundant nullptr initializations
* model : remove unnecessary prefix for tensor loading constants
Co-authored-by: Sigbjørn Skjæret <redacted>
* model : use ggml_swiglu_split for Mamba
Co-authored-by: Sigbjørn Skjæret <redacted>
* model : make falcon-h1 use shared mamba2 layer builder
* memory : avoid referring to KV in recurrent cache logs
* gguf-py : avoid adding duplicate tensor mappings for Jamba
Some of the tensor names are common with Llama4
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Xuan-Son Nguyen [Wed, 9 Jul 2025 16:16:12 +0000 (18:16 +0200)]
ggml : add ggml_scale_bias (#14417)
* ggml : add ggml_scale_bias (see the sketch after this list)
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code looks more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
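For reference, a minimal scalar sketch of the fused scale+bias semantics this entry adds (assuming y = x*s + b per element, as the ggml_vec_mad1_f32 bullet suggests; the function below is illustrative, not the actual ggml API):
```cpp
#include <cstddef>

// Illustrative scalar reference for a fused scale+bias pass:
// y[i] = x[i] * s + b for every element.
static void scale_bias_f32(float * y, const float * x, size_t n, float s, float b) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = x[i] * s + b;
    }
}
```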
Miaoqian Lin [Wed, 9 Jul 2025 12:33:53 +0000 (20:33 +0800)]
ggml : prevent integer overflow in gguf tensor size calculation (#14595)
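The general technique behind such a fix is to validate multiplications before they can wrap. A minimal sketch of the idea (not the actual gguf code; the helper name is hypothetical):
```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical overflow-checked multiply: only compute a * b when the
// product fits in size_t, otherwise report failure instead of wrapping.
static bool checked_mul(size_t a, size_t b, size_t * out) {
    if (b != 0 && a > SIZE_MAX / b) {
        return false; // would overflow
    }
    *out = a * b;
    return true;
}
```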
Dowon [Wed, 9 Jul 2025 08:22:31 +0000 (17:22 +0900)]
model : add skt/A.X-4.0 model vocabulary (#14589)
Sigbjørn Skjæret [Wed, 9 Jul 2025 08:19:50 +0000 (10:19 +0200)]
llama : remove unintended whitespace (#14592)
ibrahim khadraoui [Wed, 9 Jul 2025 08:03:49 +0000 (12:03 +0400)]
model : add support for Falcon-H1 family (#14534)
* v1
* push more fixes
* another fix
* fix
* more fixes
* minor fix
* more cleaning on python code
* python fixes
* changed precision for multipliers float 32->64
* fixes
* another fix
* fix
* pre-norm -> norm
* fix
* Revert "fix"
This reverts commit 243e4d1a50bd73467d99f6b289b9a1826f83b94b.
* fix
* small fix ffn_norm
* try
* mix instead of max
* fix vocab size
* conflict solve
* fixed multipliers
* falcon-h1 specific vocab resolved
* read arch from gguf.MODEL_ARCH
* mamba_d_ssm added to d_inner find_hparam
* remove unused functions from gguf_writer.py
* override modify_tensors instead of get_tensors
* fix conversion and d_inner
* added some cb functions for debugging purposes
* inp_out_ids moved outside of layers loop
* mup_vec create as float64
* fix rope_theta
* injected mup
* clean ups
* rm extra space
* rm unused MAMBA_CHUNK_SIZE
* rm unused key
* add bos False
* changed ROPE_TYPE
* cleaning debugging stuff
* cleaning debug quant
* fix comment
* some cleanups
* some cleanups
* Update src/llama-model-loader.cpp
* more cleanups
* moe cleanups
* d_ssm -> d_inner;
* cleaning unused hparams
* cleanup
* more cleanups
* more cleanups on python conversion;
* minor cleanups
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* remove todo
* added falcon-h1
* tensor not required
* clean
* remove unneeded attributes
* more cleanups and fixed conversion
* remove final_norm
* flake8 fixes
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* flake8 fixes
* Update src/llama-hparams.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-arch.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* added hashes
* Update src/llama-arch.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-vocab.cpp
Co-authored-by: Georgi Gerganov <redacted>
* update the update file
* Revert "update the update file"
This reverts commit 082ab4ad2a3927384d878666a5f8cae4eb15f577.
* fix: address suggestions
* fix: update convert_hf_to_gguf.py
* Update gguf-py/gguf/constants.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model-loader.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* d_inner fixed
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* reshaping ssm_norm for 34B
* removing generate_mup
* remove duplicates metadata keys
* rm comment
* final comment
* fix unused args
* fix constants
* fix bad merge
* Update src/llama-model.cpp
Co-authored-by: compilade <redacted>
* falcon-h1: remove unused ssm_in_b and bad merge
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* falcon-h1: fix last comment
* Update convert_hf_to_gguf.py
Co-authored-by: compilade <redacted>
* falcon-h1: revert add_add_bos(False)
* falcon-h1: fix tied weights
* falcon-h1: remove whitespace
* falcon-h1: fix wrong size param
* falcon-h1: fix whitespace issues
---------
Co-authored-by: younesbelkada <redacted>
Co-authored-by: Younes B <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: compilade <redacted>
Xuan-Son Nguyen [Wed, 9 Jul 2025 06:26:13 +0000 (08:26 +0200)]
convert : fix smollm3 jinja template (#14586)
Jeff Bolz [Tue, 8 Jul 2025 18:11:42 +0000 (13:11 -0500)]
vulkan: optimize flash attention split_k_reduce (#14554)
* vulkan: allow FA split_k with smaller KV values
* vulkan: spread split_k_reduce work across more threads
k_num can get rather large. Use the whole workgroup to reduce the M/L values.
Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
stevenkuang [Tue, 8 Jul 2025 16:29:29 +0000 (00:29 +0800)]
model : fix hunyuan moe chat template (#14584)
Signed-off-by: stevenkuang <redacted>
Xuan-Son Nguyen [Tue, 8 Jul 2025 16:07:01 +0000 (18:07 +0200)]
model : add SmolLM3 (#14581)
* Init - first pass.
* Model -> ModelBase.
* fix errors in conversion.
* Update the graph.
* up.
* up.
* wip
* cgraph ok
* rm redundant code
---------
Co-authored-by: Vaibhavs10 <redacted>
compilade [Tue, 8 Jul 2025 15:37:47 +0000 (11:37 -0400)]
memory : fix broken batch splits for recurrent cache (#14575)
Splits producing more than one ubatch per batch for recurrent models
were broken with #14512.
This fixes it by moving the completeness check after the ubatch split loop.
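A minimal sketch of the pattern described above, i.e. verifying that the whole batch was consumed only after the split loop has finished (illustrative only, not the llama.cpp implementation):
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Split n_tokens into ubatches of at most n_ubatch tokens and verify
// completeness after the loop rather than after the first ubatch.
static bool split_into_ubatches(uint32_t n_tokens, uint32_t n_ubatch,
                                std::vector<uint32_t> & ubatch_sizes) {
    uint32_t n_used = 0;
    while (n_used < n_tokens) {
        const uint32_t n = std::min(n_ubatch, n_tokens - n_used);
        if (n == 0) {
            break; // no progress possible
        }
        ubatch_sizes.push_back(n);
        n_used += n;
    }
    // completeness check moved after the loop: all tokens must be assigned
    return n_used == n_tokens;
}
```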
Jeff Bolz [Tue, 8 Jul 2025 13:21:21 +0000 (08:21 -0500)]
vulkan : fix rope with partial rotation and non-cont src (#14582)
Alawode Oluwandabira [Tue, 8 Jul 2025 08:47:33 +0000 (11:47 +0300)]
server: Add ability to mount server at prefix (#14544)
* Add server_prefix
* Correct server path env
* Rename cli flag to --api-prefix
* Change all to api_prefix
Xuan-Son Nguyen [Tue, 8 Jul 2025 08:24:06 +0000 (10:24 +0200)]
model : add hunyuan moe (#14425)
* model : add hunyuan moe
* tokenizer ok
* fix tensor name
* cgraph init
* chat template
* wip
* almost working
* skip embed, fix bos
* cleanup
* yarn scaling
* cleanup
* correct rope type
* failed token fix
* ntk alpha freq_base
* tokenization working
* cleanup and pr changes
* vocab_size sanity check
* ntk alpha generic
* Update convert_hf_to_gguf.py
* Apply suggestions from code review
* fix regression
* fix style
---------
Co-authored-by: kooshi <redacted>
Jeff Bolz [Tue, 8 Jul 2025 07:38:31 +0000 (02:38 -0500)]
vulkan: increase timeout for CI (#14574)
Georgi Gerganov [Tue, 8 Jul 2025 07:15:21 +0000 (10:15 +0300)]
cuda : fix rope with partial rotation and non-cont src (#14580)
* cuda : fix rope non-cont
ggml-ci
* cont : fix multi-rope + add test
ggml-ci
* sycl : try fix
ggml-ci
* cont : fix sycl + clean-up cuda
ggml-ci
Aman Gupta [Tue, 8 Jul 2025 02:11:18 +0000 (10:11 +0800)]
CUDA: add bilinear interpolation for upscale (#14563)
R0CKSTAR [Mon, 7 Jul 2025 23:58:30 +0000 (07:58 +0800)]
musa: fix build warnings (unused variable) (#14561)
Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Mon, 7 Jul 2025 21:35:35 +0000 (23:35 +0200)]
llama : fix incorrect minicpm3 v_states shape (#14571)
Sigbjørn Skjæret [Mon, 7 Jul 2025 19:35:08 +0000 (21:35 +0200)]
llama : remove ggml_cont where possible (#14568)
Aman Gupta [Mon, 7 Jul 2025 13:45:43 +0000 (21:45 +0800)]
CUDA: add bf16 and i32 to getrows (#14529)
Eve [Sun, 6 Jul 2025 10:29:36 +0000 (10:29 +0000)]
vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (#14485)
Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260
Co-authored-by: Rémy Oudompheng <redacted>
Jeff Bolz [Sun, 6 Jul 2025 08:08:16 +0000 (03:08 -0500)]
vulkan: fix rms_norm+mul fusion (#14545)
The fused operation was grabbing the epsilon value from the wrong place.
Add an env var to disable fusion.
Add some missing checks for supported shapes/types.
Handle fused rms_norm+mul in check_results.
Jeff Bolz [Sat, 5 Jul 2025 07:26:04 +0000 (02:26 -0500)]
vulkan: Handle updated FA dim2/3 definition (#14518)
* vulkan: Handle updated FA dim2/3 definition
Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit (see the packing sketch after this list).
* handle null mask for gqa
* allow gqa with dim3>1
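The bit-packing trick mentioned above can be illustrated like this (the bit layout below is an example, not the one the shader actually uses):
```cpp
#include <cstdint>

// Pack a boolean flag and a small integer into one 32-bit dword so that
// a push-constant block stays under its size limit, and unpack it again.
static uint32_t pack_mask_nhead(bool has_mask, uint32_t n_head_log2) {
    return (uint32_t(has_mask) << 16) | (n_head_log2 & 0xFFFFu);
}

static void unpack_mask_nhead(uint32_t packed, bool & has_mask, uint32_t & n_head_log2) {
    has_mask    = (packed >> 16) != 0;
    n_head_log2 =  packed & 0xFFFFu;
}
```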
Sigbjørn Skjæret [Sat, 5 Jul 2025 07:17:14 +0000 (09:17 +0200)]
server : fix assistant prefilling when content is an array (#14360)
Sigbjørn Skjæret [Sat, 5 Jul 2025 06:24:56 +0000 (08:24 +0200)]
opencl: add GELU_ERF (#14476)
Georgi Gerganov [Sat, 5 Jul 2025 04:18:09 +0000 (07:18 +0300)]
eval-callback : check for empty input (#14539)
R0CKSTAR [Sat, 5 Jul 2025 04:10:53 +0000 (12:10 +0800)]
test-backend-ops: add support for specifying output format (#14368)
* test-backend-ops: add support for specifying output format
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Add build_commit and build_number in test_result
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* refactor
Signed-off-by: Xiaodong Ye <redacted>
* Get build commit from ggml_commit()
Signed-off-by: Xiaodong Ye <redacted>
* Merge errors into test_operation_info && address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* remove visitor nonsense
* remove visitor comment
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: slaren <redacted>
Georgi Gerganov [Fri, 4 Jul 2025 16:19:09 +0000 (19:19 +0300)]
metal : disable fast math in all quantize kernels (#14528)
ggml-ci
Georgi Gerganov [Fri, 4 Jul 2025 06:08:59 +0000 (09:08 +0300)]
batch : add optional for sequential equal split (#14511)
ggml-ci
Georgi Gerganov [Fri, 4 Jul 2025 06:05:36 +0000 (09:05 +0300)]
graph : prepare for 4D mask (#14515)
ggml-ci
Georgi Gerganov [Fri, 4 Jul 2025 06:04:59 +0000 (09:04 +0300)]
batch : add n_used count (#14512)
ggml-ci
luyhcsu [Fri, 4 Jul 2025 03:50:07 +0000 (11:50 +0800)]
CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (#14002)
Co-authored-by: luyuhong <redacted>
Sigbjørn Skjæret [Thu, 3 Jul 2025 21:07:22 +0000 (23:07 +0200)]
ggml : implement GEGLU_ERF and GEGLU_QUICK ops (#14445)
lhez [Thu, 3 Jul 2025 18:22:24 +0000 (11:22 -0700)]
opencl : broadcast for soft_max (#14510)
Jeff Bolz [Thu, 3 Jul 2025 18:21:14 +0000 (13:21 -0500)]
vulkan: support mixed/deepseekR1 FA head sizes (#14509)
* vulkan: better parameterize FA by head sizes
* vulkan: support mixed/deepseekR1 FA head sizes
Johannes Gäßler [Thu, 3 Jul 2025 15:05:18 +0000 (17:05 +0200)]
ggml: backward pass for split swiglu (#14483)
Nicolò Scipione [Thu, 3 Jul 2025 09:00:03 +0000 (11:00 +0200)]
Fix conditional enabling following arch checks for ggml-sycl (#14504)
Signed-off-by: nscipione <redacted>
Xuan-Son Nguyen [Thu, 3 Jul 2025 08:03:06 +0000 (10:03 +0200)]
convert : correct gemma 3n conversion (#14450)
* convert : correct gemma 3n conversion
* rm redundant code
Georgi Gerganov [Thu, 3 Jul 2025 07:53:35 +0000 (10:53 +0300)]
kv-cache : use ggml_set_rows (#14285)
* kv-cache : use ggml_set_rows
ggml-ci
* graph : separate k and v indices
ggml-ci
* cont : remove redundant ifs
ggml-ci
* kv-cache : improve find_slot impl
* kv-cache : bounds-check when accessing slot_info indices
* kv-cache : add comments
ggml-ci
* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends
ggml-ci
Georgi Gerganov [Thu, 3 Jul 2025 07:46:57 +0000 (10:46 +0300)]
ggml : fix FA mask dim 2 and 3 (#14505)
* ggml : fix FA mask dim 2 and 3
ggml-ci
* backends : unsupport batched FA in CUDA and Vulkan
ggml-ci
* vulkan : disable FA for mask->ne[2] != 1
Georgi Gerganov [Thu, 3 Jul 2025 04:48:32 +0000 (07:48 +0300)]
ggml : remove kompute backend (#14501)
ggml-ci
Aman Gupta [Wed, 2 Jul 2025 23:45:11 +0000 (07:45 +0800)]
CUDA: add dynamic shared mem to softmax, refactor general usage (#14497)
Sigbjørn Skjæret [Wed, 2 Jul 2025 19:02:35 +0000 (21:02 +0200)]
gguf-py : add support for chat template jinja files (#14508)
* add support for chat template jinja files
* remove gemma3n hack
compilade [Wed, 2 Jul 2025 17:10:24 +0000 (13:10 -0400)]
llama : initial Mamba-2 support (#9126)
* llama : initial Mamba-2 support
* ggml : SIMD ggml_ssm_scan for Mamba-2
* ggml : improve ggml_mul speed when masking recurrent states
* llama : support running Mamba-Codestral-7B-v0.1
* llama : fix Mamba-2 conv state saving
* ggml : make the ggml_mul fast broadcast path more consistently formatted
* llama : remove unused variable
* llama : add missing break
* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present
The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.
* llama : avoid redundant state copy for Mamba 1 and 2
* metal : attempt to adapt SSM_SCAN for Mamba-2
* metal : fix SSM_SCAN pipeline scope
* metal : use log and exp instead of log1pf and expf in SSM_SCAN
* metal : remove unused arguments for SSM_SCAN
The max index is 31, so trimming the arguments is necessary.
* metal : add back n_seqs to SSM_SCAN args
Whoops, this is needed for the offset in the concatenated output.
* metal : fix SSM_SCAN state head offset
* metal : fix wrong number of tokens per sequence in SSM_SCAN
* ggml : remove unused fast broadcast path in GGML_MUL
This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.
* ggml : avoid multiply by D in GGML_OP_SSM_SCAN
This makes the weight buft detection in src/llama.cpp simpler.
* convert : transpose Mamba-2 A, D and reshape SSM_NORM
This breaks existing conversions of Mamba-2 models
to avoid some reshapes.
Not sure if it's a good idea,
but it makes the graph slightly cleaner.
* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks
* convert : fix flake8 lint
* metal : fix confusion between ; and ,
* metal : add missing args for nb references in ssm_scan_f32_group
* metal : single-user mamba2 inference works
* kv-cache : remove const_cast when setting inputs for s_copy
And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
* convert : avoid AutoConfig for Mamba and Mamba2 hparams
* kv-cache : allow context shift for recurrent models
* graph : fix recurrent state copies when avoiding copies
Works, but using lambda functions might not be that clean.
* ggml : fix mamba2 ssm scan when compiled with SVE
* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches
* cuda : implement ssm scan for Mamba2
There is still room for improvement, but it works!
* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2
* mamba : fix mismatched new and delete size for llm_build_mamba
Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON (see the sketch after this entry).
* cuda : graceful fallback for Mamba-1 models with weird embd size
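The new/delete size mismatch mentioned for llm_build_mamba is a general C++ pitfall: if a subclass adds data members and the object is deleted through a base pointer whose destructor is not virtual, the behavior is undefined and sized deallocation can receive the wrong size. A small standalone illustration (not llama.cpp code):
```cpp
#include <cstdio>

struct context_base {
    int state = 0;
    virtual ~context_base() = default; // virtual dtor makes delete-through-base well-defined
};

struct context_derived : context_base {
    float extra = 0.0f; // extra fields are only safe together with a virtual base dtor
};

int main() {
    context_base * ctx = new context_derived();
    delete ctx; // ok here; undefined behavior if ~context_base() were non-virtual
    std::printf("ok\n");
    return 0;
}
```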
Georgi Gerganov [Wed, 2 Jul 2025 16:35:47 +0000 (19:35 +0300)]
sync : ggml
ggml-ci
Daniel Bevenius [Wed, 2 Jul 2025 11:55:32 +0000 (13:55 +0200)]
ggml : add version function to get lib version (ggml/1286)
* ggml : add version function to get lib version
This commit adds a function `ggml_version()` to the ggml library that
returns the version of the library as a string.
The motivation for this is that it can be useful to be able to
programmatically check the version of the ggml library being used.
Usage:
```c
printf("GGML version: %s\n", ggml_version());
```
Output:
```console
GGML version: 0.0.2219
```
* ggml : add ggml_commit()
---------
Co-authored-by: Georgi Gerganov <redacted>
Rotem Dan [Wed, 2 Jul 2025 16:37:16 +0000 (19:37 +0300)]
Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (#14309)
Aman Gupta [Wed, 2 Jul 2025 12:34:24 +0000 (20:34 +0800)]
CUDA: add softmax broadcast (#14475)
* CUDA: add softmax broadcast
* Pass by const ref
* Review: Use blockDims for indexing, remove designated initializers
* Add TODO for noncontigous input/output
Johannes Gäßler [Wed, 2 Jul 2025 11:42:12 +0000 (13:42 +0200)]
CUDA: broadcasting for FlashAttention mask (#14500)
Jeff Bolz [Tue, 1 Jul 2025 08:32:56 +0000 (03:32 -0500)]
vulkan: support softmax/FA batch and broadcast (#14449)
Georgi Gerganov [Fri, 27 Jun 2025 18:50:57 +0000 (21:50 +0300)]
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (#14435)
ggml-ci
zhouwg [Wed, 2 Jul 2025 12:38:10 +0000 (20:38 +0800)]
opencl : fix possible buffer overflow in dump_tensor (#14490)
Georgi Gerganov [Wed, 2 Jul 2025 11:12:07 +0000 (14:12 +0300)]
simple-chat : fix context-exceeded condition (#14494)
* simple-chat : fix context-exceeded condition
ggml-ci
* cont : fix n_ctx_used computation
ggml-ci
Eric Zhang [Wed, 2 Jul 2025 11:00:04 +0000 (19:00 +0800)]
opencl : skip empty nodes on cgraph compute (#14491)
lhez [Wed, 2 Jul 2025 07:07:42 +0000 (00:07 -0700)]
opencl : update upscale to support align corners (#14488)
Sigbjørn Skjæret [Wed, 2 Jul 2025 07:02:51 +0000 (09:02 +0200)]
ci : add OpenCL to labeler workflow (#14496)
Eric Zhang [Wed, 2 Jul 2025 05:41:35 +0000 (13:41 +0800)]
github : add OpenCL backend to issue templates (#14492)
Björn Ganster [Wed, 2 Jul 2025 05:19:31 +0000 (07:19 +0200)]
ggml : Callback before abort (#14481)
* Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed.
* Return previous callback to allow callback chaining (see the sketch after this entry)
* style fixes
---------
Co-authored-by: Diego Devesa <redacted>
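The chaining pattern described in this entry can be sketched as follows; the type and setter names are illustrative assumptions, not the actual API added by the commit:
```cpp
#include <cstdio>

// Illustrative abort-callback chaining: installing a new handler returns
// the previous one, so a caller can keep it and invoke it afterwards.
typedef void (*abort_callback_t)(const char * msg);

static abort_callback_t g_abort_cb = nullptr;

static abort_callback_t set_abort_callback(abort_callback_t cb) {
    abort_callback_t prev = g_abort_cb;
    g_abort_cb = cb;
    return prev;
}

static abort_callback_t g_prev_cb = nullptr;

static void my_handler(const char * msg) {
    std::fprintf(stderr, "about to abort: %s\n", msg); // e.g. show a dialog, flush state
    if (g_prev_cb) {
        g_prev_cb(msg); // chain to the previously installed handler
    }
}

int main() {
    g_prev_cb = set_abort_callback(my_handler);
    return 0;
}
```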
Georgi Gerganov [Tue, 1 Jul 2025 15:04:08 +0000 (18:04 +0300)]
ci : disable fast-math for Metal GHA CI (#14478)
* ci : disable fast-math for Metal GHA CI
ggml-ci
* cont : remove -g flag
ggml-ci
Grzegorz Grasza [Tue, 1 Jul 2025 13:44:11 +0000 (15:44 +0200)]
Add Vulkan images to docker.md (#14472)
Right now it's not easy to find those.
Chenguang Li [Tue, 1 Jul 2025 08:47:30 +0000 (16:47 +0800)]
CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (#14411)
* [CANN] update to aclnnGroupedMatmulV2
Signed-off-by: noemotiovon <redacted>
* Support MUL_MAT_ID on 310p
Signed-off-by: noemotiovon <redacted>
* fix editorconfig
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Jeff Bolz [Tue, 1 Jul 2025 08:43:08 +0000 (03:43 -0500)]
vulkan: Split large mul_mat_id to fit in shared memory (#14451)
Sigbjørn Skjæret [Tue, 1 Jul 2025 08:14:21 +0000 (10:14 +0200)]
add GELU_ERF (#14455)
Georgi Gerganov [Tue, 1 Jul 2025 08:05:48 +0000 (11:05 +0300)]
ggml : remove trailing whitespace (#0)
Georgi Gerganov [Tue, 1 Jul 2025 07:27:52 +0000 (10:27 +0300)]
sync : ggml
ggml-ci
Acly [Tue, 1 Jul 2025 07:11:00 +0000 (09:11 +0200)]
ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
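For context, "align corners" changes how destination indices map back to source coordinates in bilinear resampling. The standard mapping is sketched below (illustrative helper, not the ggml implementation):
```cpp
// With align_corners the first and last samples of src and dst coincide;
// without it, pixel centers are mapped instead.
static float map_coord(int dst_i, int dst_size, int src_size, bool align_corners) {
    if (align_corners) {
        if (dst_size <= 1) {
            return 0.0f;
        }
        return float(dst_i) * float(src_size - 1) / float(dst_size - 1);
    }
    const float scale = float(src_size) / float(dst_size);
    return (float(dst_i) + 0.5f) * scale - 0.5f;
}
```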
Daniel Bevenius [Tue, 24 Jun 2025 04:10:16 +0000 (06:10 +0200)]
ggml-quants : rename best_mad to best_error (ggml/1283)
This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.
The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.
lhez [Tue, 1 Jul 2025 07:19:16 +0000 (00:19 -0700)]
opencl : add GEGLU, REGLU, SWIGLU (#14456)
Aman Gupta [Mon, 30 Jun 2025 15:57:04 +0000 (23:57 +0800)]
Add Conv2d for CPU (#14388)
* Conv2D: Add CPU version
* Half decent
* Tiled approach for F32
* remove file
* Fix tests
* Support F16 operations
* add assert about size
* Review: further formatting fixes, add assert and use CPU version of fp32->fp16
Georgi Gerganov [Mon, 30 Jun 2025 15:03:03 +0000 (18:03 +0300)]
memory : correctly handle failure in apply() (#14438)
ggml-ci
Georgi Gerganov [Mon, 30 Jun 2025 14:04:05 +0000 (17:04 +0300)]
metal : disable fast-math for some cpy kernels (#14460)
* metal : disable fast-math for some cpy kernels
ggml-ci
* cont : disable for q4_1
ggml-ci
* cont : disable for iq4_nl
ggml-ci
Romain Biessy [Mon, 30 Jun 2025 12:52:02 +0000 (14:52 +0200)]
ggml-cpu: sycl: Re-enable exp f16 (#14462)
Diego Devesa [Mon, 30 Jun 2025 10:43:15 +0000 (03:43 -0700)]
test-backend-ops : disable llama test (#14461)
xiaobing318 [Mon, 30 Jun 2025 09:48:24 +0000 (17:48 +0800)]
cmake : Remove redundant include path in CMakeLists.txt (#14452)
* Update docker.yml
Modified docker.yml to stop the workflow from running on a schedule; it can still be started manually when needed.
* Remove redundant include path in CMakeLists.txt
The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths.
* Enable scheduled Docker image builds
Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date.
Vedran Miletić [Mon, 30 Jun 2025 08:17:18 +0000 (10:17 +0200)]
scripts : make the shell scripts cross-platform (#14341)
matteo [Sun, 29 Jun 2025 18:02:53 +0000 (20:02 +0200)]
server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (#13196)
* initial commit for handling extra template kwargs
* enable_thinking and assistant prefill cannot be enabled at the same time
* can set chat_template_kwargs in command line
* added doc
* fixed formatting
* add support for extra context in generic template init
* coding standard: common/chat.cpp
Co-authored-by: Georgi Gerganov <redacted>
* coding standard: common/chat.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Apply suggestions from code review
coding standard: cosmetic changes
Co-authored-by: Georgi Gerganov <redacted>
* fix merge conflict
* chat.cpp: simplify calls to apply to ensure systematic propagation of extra_context (+ the odd existing additional_context)
* normalize environment variable name
* simplify code
* prefill cannot be used with thinking models
* compatibility with the new reasoning-budget parameter
* fix prefill for non thinking models
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Olivier Chafik <redacted>
Renat [Sun, 29 Jun 2025 17:29:57 +0000 (19:29 +0200)]
server : fix appearance of the chats list context menu for Safari (#14322)
Akarshan Biswas [Sun, 29 Jun 2025 15:37:58 +0000 (21:07 +0530)]
SYCL: disable faulty fp16 exp kernel (#14395)
* SYCL: disable faulty fp16 CPU exponent for now
* Revert "SYCL: disable faulty fp16 CPU exponent for now"
This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202.
* SYCL: disable faulty fp16 CPU exponent for now
* Fix logic of disabling exponent kernel
Sigbjørn Skjæret [Sun, 29 Jun 2025 12:38:10 +0000 (14:38 +0200)]
ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (#14443)
Sigbjørn Skjæret [Sun, 29 Jun 2025 09:04:10 +0000 (11:04 +0200)]
ggml : implement REGLU/GEGLU/SWIGLU ops (#14158)
* implement unary REGLU/GEGLU/SWIGLU cpu ops
* relax constraints
* duplicate shape of source
* fix ggml_vec_geglu_f16
* special case gated ops
* implement unary REGLU/GEGLU/SWIGLU cuda ops
* tighten constraints again
* refactor into GGML_GLU_OP
* metal : add glu kernels
ggml-ci
* add CUDA_GLU_BLOCK_SIZE [no ci]
* more constraints and use 64bit ints
ggml-ci
* 64bit multiplication [no ci]
* implement swapped variants (cpu/cuda)
* update comment [no ci]
ggml-ci
* Vulkan: Add GLU ops and shaders
* SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate
* ggml : implement GLU for split up/gate (#14181)
* implement GLU for split up/gate
* add tests for ggml_glu_split
* Vulkan: Implement glu_split logic and shader support
* add split to logging [no ci]
* SYCL: refactor element_size ops and add split up and gate support to gated kernels
* SYCL: switch GEGLU to use tanh approximation
---------
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
* GGML: increase OP count in assertion
* Refactor: Optimize SYCL element-wise operations with unary function inlining
This commit refactors the SYCL element-wise operations to improve performance by:
- Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead.
- Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic.
- Replacing direct kernel calls with calls to these inlined functions.
- Using `__dpct_inline__` to encourage compiler inlining.
- Minor code cleanup and consistency improvements.
The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices.
* vulkan: Increase workgroup size for GLU, for performance (#14345)
* vulkan: Increase workgroup size for GLU, for performance
* vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup
* merge fix
* metal : add support for split and swap
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
Co-authored-by: Jeff Bolz <redacted>
Jeff Bolz [Sun, 29 Jun 2025 07:43:36 +0000 (02:43 -0500)]
vulkan: Add fusion support for RMS_NORM+MUL (#14366)
* vulkan: Add fusion support for RMS_NORM+MUL
- Add a use_count to ggml_tensor, so we can detect if an output is used more than once (a rough sketch of the counting idea follows this entry).
- Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor.
- Add detection logic and basic fusion logic in ggml-vulkan.
- Add some testing support for fusion. Rather than computing one node at a time, allow
for computing the whole graph and just testing one node's results. Add rms_norm_mul tests
and enable a llama test.
* extract some common fusion logic
* fix -Winconsistent-missing-override
* move ggml_can_fuse to a common function
* build fix
* C and C++ versions of can_fuse
* move use count to the graph to avoid data races and double increments when used in multiple threads
* use hash table lookup to find node index
* change use_counts to be indexed by hash table slot
* minimize hash lookups
style fixes
* last node doesn't need single use.
fix type.
handle mul operands being swapped.
* remove redundant parameter
---------
Co-authored-by: slaren <redacted>
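A rough sketch of the use-counting idea referenced above (illustrative data structures, not the ggml/ggml-vulkan internals): walk the graph once, count how many nodes consume each tensor, and treat an intermediate with exactly one consumer as a fusion candidate.
```cpp
#include <unordered_map>
#include <vector>

struct node {
    std::vector<const node *> srcs; // inputs consumed by this node
};

// Count how many times each node is used as a source by other nodes.
static std::unordered_map<const node *, int> count_uses(const std::vector<node> & graph) {
    std::unordered_map<const node *, int> use_count;
    for (const node & n : graph) {
        for (const node * src : n.srcs) {
            use_count[src]++;
        }
    }
    return use_count;
}
```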
Aman Gupta [Sat, 28 Jun 2025 17:30:53 +0000 (01:30 +0800)]
CUDA: add bf16 and f32 support to cublas_mul_mat_batched (#14361)
* CUDA: add bf16 and f32 support to cublas_mul_mat_batched
* Review: add type traits and make function more generic
* Review: make check more explicit, add back comments, and fix formatting
* Review: fix formatting, remove useless type conversion, fix naming for bools
Jeff Bolz [Sat, 28 Jun 2025 15:36:40 +0000 (10:36 -0500)]
vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)
Jeff Bolz [Sat, 28 Jun 2025 15:17:09 +0000 (10:17 -0500)]
vulkan: lock accesses of pinned_memory vector (#14333)
Weizhao Ouyang [Sat, 28 Jun 2025 14:08:21 +0000 (22:08 +0800)]
model : add support for ERNIE 4.5 0.3B model (#14408)
Add Day-0 support for Baidu ERNIE 4.5 0.3B model.
Signed-off-by: Weizhao Ouyang <redacted>
Xinpeng Dou [Sat, 28 Jun 2025 09:35:41 +0000 (17:35 +0800)]
fix async_mode bug (#14432)
Sigbjørn Skjæret [Sat, 28 Jun 2025 07:57:07 +0000 (09:57 +0200)]
ci : fix windows build and release (#14431)
Jeff Bolz [Sat, 28 Jun 2025 03:35:30 +0000 (22:35 -0500)]
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427)
This setting needs to be passed through to vulkan-shaders-gen
Georgi Gerganov [Fri, 27 Jun 2025 18:42:02 +0000 (21:42 +0300)]
graph : make llm_graph_context destructor virtual (#14410)
ggml-ci
Georgi Gerganov [Fri, 27 Jun 2025 14:55:45 +0000 (17:55 +0300)]
recurrent : call balloc split_reset() in init_batch() (#14414)
ggml-ci