git.djapps.eu Git - pkg/ggml/sources/ggml/log
Charles Xu [Mon, 21 Jul 2025 13:49:52 +0000 (15:49 +0200)]
kleidiai: add support for get_rows (llama/14676)
* kleidiai: add support for get_rows
* apply fixes based on code review
* apply more fixes based on code review
Jeff Bolz [Mon, 21 Jul 2025 11:35:40 +0000 (06:35 -0500)]
vulkan/cuda: Fix im2col when KW!=KH (llama/14789)
The tid is decomposed into "ow + ky*OW + kx*OW*KH". Change "ksize" to match.
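As a rough illustration of the decomposition named in that message (a host-side sketch only, not the actual im2col shader/kernel; `tid`, `OW`, `KH` stand in for the kernel's variables):

```cpp
// tid = ow + ky*OW + kx*OW*KH  =>  recover (ow, ky, kx); with KW != KH the
// total element count is OW*KH*KW, which is what the "ksize" fix aligns.
struct im2col_idx { int ow, ky, kx; };

im2col_idx decompose_tid(int tid, int OW, int KH) {
    im2col_idx idx;
    idx.ow = tid % OW;          // output column
    idx.ky = (tid / OW) % KH;   // kernel row
    idx.kx = tid / (OW * KH);   // kernel column
    return idx;
}
```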
Ervin Áron Tasnádi [Sat, 19 Jul 2025 19:59:08 +0000 (21:59 +0200)]
ggml: adds CONV_2D op and direct GEMM Vulkan implementation (llama/14316)
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan
* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with GEMM (no need for im2col); an implicit-GEMM sketch follows this list
* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests
* Performance fixes: minimized branch divergence, uses collectives to
eliminate redundant calculation, macros removed.
* Kernel shared memory size check
* Updates test-backend-ops to support graphs for performance
measurement.
* Apple/Win32 compile errors fixed
* Subgroup size used to determine tile size -> fixes llvmpipe errors.
* Collectives disabled by default.
* Intel support is disabled as the performance is poor.
* Conv2d enabled for Intel with disabled collectives, disabled for Apple
* test-backend-ops modifications are reverted
* Trailing spaces and missing override fixed.
* Triggering pipeline relaunch.
* Code formatted with .clang-format.
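For readers unfamiliar with the approach, the sketch below shows the "implicit GEMM" view of 2D convolution that the commit describes: the weights act as an (OC x IC*KH*KW) matrix and the matching input patch is gathered on the fly instead of being materialized with im2col. This is a scalar CPU illustration with stride 1 and no padding, not the Vulkan shader; all names are placeholders.

```cpp
void conv2d_direct(const float* src,  // IC x IH x IW input
                   const float* ker,  // OC x IC x KH x KW weights
                   float*       dst,  // OC x OH x OW output (OH = IH-KH+1, OW = IW-KW+1)
                   int IC, int IH, int IW,
                   int OC, int KH, int KW,
                   int OH, int OW) {
    for (int oc = 0; oc < OC; ++oc) {
        for (int oy = 0; oy < OH; ++oy) {
            for (int ox = 0; ox < OW; ++ox) {
                float acc = 0.0f;
                // inner product over the implicit GEMM "K" dimension (IC*KH*KW)
                for (int ic = 0; ic < IC; ++ic) {
                    for (int ky = 0; ky < KH; ++ky) {
                        for (int kx = 0; kx < KW; ++kx) {
                            acc += ker[((oc*IC + ic)*KH + ky)*KW + kx] *
                                   src[(ic*IH + oy + ky)*IW + ox + kx];
                        }
                    }
                }
                dst[(oc*OH + oy)*OW + ox] = acc;
            }
        }
    }
}
```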
Peter0x44 [Sat, 19 Jul 2025 15:58:03 +0000 (16:58 +0100)]
vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (llama/14707)
0cc4m [Sat, 19 Jul 2025 15:47:53 +0000 (17:47 +0200)]
Vulkan: Fix fprintf format-security warning (llama/14770)
Kai Pastor [Wed, 23 Jul 2025 12:52:29 +0000 (14:52 +0200)]
CI: Test static build (#1307)
Kai Pastor [Tue, 22 Jul 2025 18:13:21 +0000 (20:13 +0200)]
cmake : fix usage issues (#1257)
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.
* CMake config: Wire OpenMP dependency
Daniel Bevenius [Mon, 21 Jul 2025 13:53:12 +0000 (15:53 +0200)]
ggml-cpu : remove stdlib include from repack.cpp (#1276)
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
Georgi Gerganov [Sat, 19 Jul 2025 08:47:23 +0000 (11:47 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Fri, 18 Jul 2025 17:37:26 +0000 (20:37 +0300)]
metal : fuse add, mul + add tests (llama/14596)
ggml-ci
Oliver Simons [Fri, 18 Jul 2025 11:35:32 +0000 (13:35 +0200)]
cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (llama/14741)
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses Matrix-Matrix addition as part of its input processing,
wrongly triggering CUDA_GRAPH disablement on NVGPUs even when a batch size
of 1 is used.
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.
* Revert unnecessary formatting changes
Aman Gupta [Fri, 18 Jul 2025 06:54:18 +0000 (14:54 +0800)]
CUDA: set_rows + cpy.cu refactor (llama/14712)
Neo Zhang Jianyu [Fri, 18 Jul 2025 02:23:14 +0000 (10:23 +0800)]
use max work group size for device to replace the magic number (llama/14732)
Reese Levine [Wed, 16 Jul 2025 15:18:51 +0000 (08:18 -0700)]
ggml: Add initial WebGPU backend (llama/14521)
* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults
* Initialize webgpu device
* Making progress on setting up the backend
* Finish more boilerplate/utility functions
* Organize file and work on alloc buffer
* Add webgpu_context to prepare for actually running some shaders
* Work on memset and add shader loading
* Work on memset polyfill
* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it
* Implement get_tensor and buffer_clear
* Finish rest of setup
* Start work on compute graph
* Basic mat mul working
* Work on emscripten build
* Basic WebGPU backend instructions
* Use EMSCRIPTEN flag
* Work on passing ci, implement 4d tensor multiplication
* Pass thread safety test
* Implement permuting for mul_mat and cpy
* minor cleanups
* Address feedback
* Remove division by type size in cpy op
* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends
* Fix name
* Fix macos dawn prefix path
Georgi Gerganov [Wed, 16 Jul 2025 13:35:42 +0000 (16:35 +0300)]
llama : add high-throughput mode (llama/14363)
* kv-cache : prepare K/V buffers for separation
ggml-ci
* batched-bench : fix oob write
ggml-ci
* llama : add "virtual sequences"
ggml-ci
* llama : use "stream" vs "virtual sequence"
ggml-ci
* graph : fix stream splitting when KV cache is not used
ggml-ci
* kv-cache : add multi-stream save/load support
ggml-ci
* llama : add "--attn-streams" flag
ggml-ci
* kv-cache : fix handling when find_slot fails
ggml-ci
* kv-cache : restore find_slot impl
ggml-ci
* kv-cache : add comments
* kv-cache : add bounds checks for sequence id
ggml-ci
* cont : add n_seq_max to batch allocr
ggml-ci
* kv-cache : perform stream copies lazily after llama_synchronize
ggml-ci
* kv-cache : avoid throwing exceptions across the C boundary
ggml-ci
* CUDA: 4D FlashAttention support (llama/14628)
* CUDA: 4D FlashAttention support
* CUDA: fix WMMA FA kernel
* llama : rename attn_streams -> kv_unified
ggml-ci
* common : rename kv_split -> kv_unified
ggml-ci
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Wed, 16 Jul 2025 11:43:32 +0000 (14:43 +0300)]
ggml : add asserts (llama/14720)
* ggml : add asserts
ggml-ci
* cont : fix constant type
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
Jeff Bolz [Tue, 15 Jul 2025 19:51:09 +0000 (14:51 -0500)]
vulkan: fix noncontig check for mat_mul_id splitting (llama/14683)
* vulkan: fix noncontig check for mat_mul_id splitting
Remove supports_op check for > 4096 (splitting fixes this)
* vulkan: fix batched matmul dequant for Q*_K
Jeff Bolz [Tue, 15 Jul 2025 19:32:11 +0000 (14:32 -0500)]
vulkan: add RTE variants for glu/add/sub/mul/div (llama/14653)
R0CKSTAR [Tue, 15 Jul 2025 07:28:53 +0000 (15:28 +0800)]
cuda: fix build warnings in set-rows.cu (unused variable) (llama/14687)
Signed-off-by: Xiaodong Ye <redacted>
Anton Mitkov [Mon, 14 Jul 2025 17:12:42 +0000 (18:12 +0100)]
sycl: Hotfix for non dnnl codepath (llama/14677)
shalinib-ibm [Mon, 14 Jul 2025 13:16:42 +0000 (18:46 +0530)]
ggml : refactor llamafile_sgemm PPC code (llama/14673)
Remove unnecessary templates from the class definition and packing functions
Reduce deeply nested conditionals and if-else switching in the mnpack function
Replace repetitive code with inline functions in the packing functions
2-7% improvement in Q8 models
15-50% improvement in Q4 models
Signed-off-by: Shalini Salomi Bodapati <redacted>
Akarshan Biswas [Mon, 14 Jul 2025 09:37:55 +0000 (15:07 +0530)]
SYCL: use 1D kernel for set_rows (llama/14618)
* SYCL: Use 1D kernel for set_rows
* Remove dangling comment
* Refactor and use ceil_div
Anton Mitkov [Mon, 14 Jul 2025 09:37:35 +0000 (10:37 +0100)]
sycl: Batched mulmat rework for oneDNN dispatch (llama/14617)
Sigbjørn Skjæret [Sun, 13 Jul 2025 13:01:24 +0000 (15:01 +0200)]
cuda : add set rows for bf16 (llama/14664)
Yavor Ivanov [Sun, 13 Jul 2025 09:33:16 +0000 (02:33 -0700)]
cuda : add ELU support (llama/14657)
Georgi Gerganov [Sun, 13 Jul 2025 07:36:33 +0000 (10:36 +0300)]
ggml : add build-time message to remind about ggml_set_rows (llama/14661)
ggml-ci
Yavor Ivanov [Sun, 13 Jul 2025 05:38:13 +0000 (22:38 -0700)]
metal : Add missing unary ops Metal support (llama/14660)
Tarek Dakhran [Sat, 12 Jul 2025 17:10:14 +0000 (19:10 +0200)]
tests : cover lfm2 cases in test_ssm_conv (llama/14651)
Aman Gupta [Sat, 12 Jul 2025 13:31:38 +0000 (21:31 +0800)]
CUDA: add set rows for f32 and f16 (llama/14551)
* CUDA: add set rows for f32 and f16
* Review: change kernel params, use strides from host
* Use 1-d kernel
* Review: use int64_t for blockDim.x, rename nb->s for clarity
Georgi Gerganov [Sat, 12 Jul 2025 16:24:58 +0000 (19:24 +0300)]
sync : whisper.cpp
Georgi Gerganov [Sat, 12 Jul 2025 13:12:49 +0000 (16:12 +0300)]
git : remove kompute submodule (#1300)
ggml-ci
Georgi Gerganov [Sat, 12 Jul 2025 11:39:52 +0000 (14:39 +0300)]
sync : resolve conflicts (#0)
ggml-ci
Georgi Gerganov [Sat, 12 Jul 2025 11:36:32 +0000 (14:36 +0300)]
sync : llama.cpp
ggml-ci
Jeff Bolz [Sat, 12 Jul 2025 10:12:26 +0000 (05:12 -0500)]
vulkan: support SET_ROWS (llama/14587)
* vulkan: support SET_ROWS
Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.
* vulkan: optimize set_rows
Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
Jeff Bolz [Sat, 12 Jul 2025 09:51:58 +0000 (04:51 -0500)]
vulkan: optimizations for deepseek prompt processing (llama/14555)
* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader
* vulkan: increase coopmat2 mul_mat_id tile size
* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path
* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)
Tarek Dakhran [Fri, 11 Jul 2025 18:27:01 +0000 (20:27 +0200)]
model : support LiquidAI LFM2 hybrid family (llama/14620)
**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert to GGUF, install transformers from source:
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```
Slobodan Josic [Fri, 11 Jul 2025 16:55:00 +0000 (18:55 +0200)]
HIP : Add HIP 7.0+ compatibility for hipBLAS compute types (llama/14634)
rmatif [Thu, 10 Jul 2025 21:58:12 +0000 (23:58 +0200)]
opencl: add tiled mul_mat_f16_f32 (llama/14535)
* add tiled mul_mat_f16_f32
* fix trailing whitespace
* add insightful comments
lhez [Thu, 10 Jul 2025 18:48:52 +0000 (11:48 -0700)]
opencl: add `set_rows` for `f16` and `f32` (llama/14547)
* opencl: add `set_rows` for `f16` and `f32`
* opencl: better choose workgroup size for `set_rows`
Aman Gupta [Thu, 10 Jul 2025 15:29:01 +0000 (23:29 +0800)]
Docs: script to auto-generate ggml operations docs (llama/14598)
* Docs: script to auto-generate ggml operations docs
* Review: formatting changes + change github action
* Use built-in types instead of typing
* docs : add BLAS and Metal ops
---------
Co-authored-by: Georgi Gerganov <redacted>
Akarshan Biswas [Thu, 10 Jul 2025 08:29:38 +0000 (13:59 +0530)]
SYCL: Initial set_rows kernel implementation (llama/14562)
* SYCL: Initial set_rows kernel implementation
* Revert max_threads to 256
* Refactor set_rows and address review comments
* Deduplicate conversion function
* Remove guard before kernel launch and refactor
* Fix and add back SFINAE
compilade [Thu, 10 Jul 2025 03:54:38 +0000 (23:54 -0400)]
cuda : support Falcon-H1 state size for SSM_SCAN (llama/14602)
Xuan-Son Nguyen [Wed, 9 Jul 2025 16:16:12 +0000 (18:16 +0200)]
ggml : add ggml_scale_bias (llama/14417)
* ggml : add ggml_scale_bias (the expression it computes is sketched after this list)
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code looks more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
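A scalar reference for the expression these commits add, assuming ggml_scale_bias applies a per-tensor scale s and bias b element-wise (the real ggml code uses the SIMD paths listed above; this loop is illustrative only):

```cpp
// y[i] = x[i]*s + b  -- scalar reference for a mad1-style vector helper
void vec_mad1_f32_ref(int n, float* y, const float* x, float s, float b) {
    for (int i = 0; i < n; ++i) {
        y[i] = x[i]*s + b;
    }
}
```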
Miaoqian Lin [Wed, 9 Jul 2025 12:33:53 +0000 (20:33 +0800)]
ggml : prevent integer overflow in gguf tensor size calculation (llama/14595)
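The general pattern behind such a fix is to validate each multiplication before performing it; the helper below is an illustration of that pattern, not the actual gguf code:

```cpp
#include <cstdint>
#include <limits>

// Returns false instead of silently wrapping when a*b would overflow.
static bool checked_mul_u64(uint64_t a, uint64_t b, uint64_t* out) {
    if (a != 0 && b > std::numeric_limits<uint64_t>::max() / a) {
        return false;
    }
    *out = a * b;
    return true;
}
// A tensor byte size would then be built up step by step, e.g.
// checked_mul_u64(ne0, ne1, &n) && checked_mul_u64(n, type_size, &nbytes).
```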
Jeff Bolz [Tue, 8 Jul 2025 18:11:42 +0000 (13:11 -0500)]
vulkan: optimize flash attention split_k_reduce (llama/14554)
* vulkan: allow FA split_k with smaller KV values
* vulkan: spread split_k_reduce work across more threads
k_num can get rather large. Use the whole workgroup to reduce the M/L values.
Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
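For context, a split_k reduce in flash attention merges the per-split running maxima M, sums L and partial outputs O with the usual log-sum-exp rescaling. The CPU sketch below shows that merge (names are placeholders, not the shader's; the shader additionally parallelizes the HSV loop with one thread per element, as described above):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

void split_k_reduce(const std::vector<float>& M,               // per-split max, size k_num
                    const std::vector<float>& L,               // per-split sum, size k_num
                    const std::vector<std::vector<float>>& O,  // k_num x HSV partial outputs
                    std::vector<float>& out) {                 // final output, size HSV
    const size_t k_num = M.size();
    float m = M[0];
    for (size_t k = 1; k < k_num; ++k) m = std::max(m, M[k]);
    float l = 0.0f;
    for (size_t k = 0; k < k_num; ++k) l += L[k] * std::exp(M[k] - m);
    for (size_t d = 0; d < out.size(); ++d) {      // one "thread" per HSV element
        float acc = 0.0f;
        for (size_t k = 0; k < k_num; ++k) acc += O[k][d] * std::exp(M[k] - m);
        out[d] = acc / l;
    }
}
```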
Jeff Bolz [Tue, 8 Jul 2025 13:21:21 +0000 (08:21 -0500)]
vulkan : fix rope with partial rotation and non-cont src (llama/14582)
Georgi Gerganov [Tue, 8 Jul 2025 07:15:21 +0000 (10:15 +0300)]
cuda : fix rope with partial rotation and non-cont src (llama/14580)
* cuda : fix rope non-cont
ggml-ci
* cont : fix multi-rope + add test
ggml-ci
* sycl : try fix
ggml-ci
* cont : fix sycl + clean-up cuda
ggml-ci
Aman Gupta [Tue, 8 Jul 2025 02:11:18 +0000 (10:11 +0800)]
CUDA: add bilinear interpolation for upscale (llama/14563)
R0CKSTAR [Mon, 7 Jul 2025 23:58:30 +0000 (07:58 +0800)]
musa: fix build warnings (unused variable) (llama/14561)
Signed-off-by: Xiaodong Ye <redacted>
Aman Gupta [Mon, 7 Jul 2025 13:45:43 +0000 (21:45 +0800)]
CUDA: add bf16 and i32 to getrows (llama/14529)
Eve [Sun, 6 Jul 2025 10:29:36 +0000 (10:29 +0000)]
vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (llama/14485)
Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260
Co-authored-by: Rémy Oudompheng <redacted>
Jeff Bolz [Sun, 6 Jul 2025 08:08:16 +0000 (03:08 -0500)]
vulkan: fix rms_norm+mul fusion (llama/14545)
The fused operation was grabbing the epsilon value from the wrong place.
Add an env var to disable fusion.
Add some missing checks for supported shapes/types.
Handle fused rms_norm+mul in check_results.
Jeff Bolz [Sat, 5 Jul 2025 07:26:04 +0000 (02:26 -0500)]
vulkan: Handle updated FA dim2/3 definition (llama/14518)
* vulkan: Handle updated FA dim2/3 definition
Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit (a generic packing sketch follows this list).
* handle null mask for gqa
* allow gqa with dim3>1
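A generic sketch of that kind of push-constant packing (the bit layout here is illustrative; the shader's actual layout may differ):

```cpp
#include <cstdint>

// Two small values share one 32-bit push-constant word.
inline uint32_t pack_mask_nhl2(bool has_mask, uint32_t n_head_log2) {
    return (uint32_t(has_mask) << 16) | (n_head_log2 & 0xFFFFu);
}

inline void unpack_mask_nhl2(uint32_t packed, bool& has_mask, uint32_t& n_head_log2) {
    has_mask    = (packed >> 16) != 0;
    n_head_log2 = packed & 0xFFFFu;
}
```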
Sigbjørn Skjæret [Sat, 5 Jul 2025 06:24:56 +0000 (08:24 +0200)]
opencl: add GELU_ERF (llama/14476)
R0CKSTAR [Sat, 5 Jul 2025 04:10:53 +0000 (12:10 +0800)]
test-backend-ops: add support for specifying output format (llama/14368)
* test-backend-ops: add support for specifying output format
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Add build_commit and build_number in test_result
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* refactor
Signed-off-by: Xiaodong Ye <redacted>
* Get build commit from ggml_commit()
Signed-off-by: Xiaodong Ye <redacted>
* Merge errors into test_operation_info && address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* remove visitor nonsense
* remove visitor comment
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: slaren <redacted>
Georgi Gerganov [Fri, 4 Jul 2025 16:19:09 +0000 (19:19 +0300)]
metal : disable fast math in all quantize kernels (llama/14528)
ggml-ci
luyhcsu [Fri, 4 Jul 2025 03:50:07 +0000 (11:50 +0800)]
CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (llama/14002)
Co-authored-by: luyuhong <redacted>
Sigbjørn Skjæret [Thu, 3 Jul 2025 21:07:22 +0000 (23:07 +0200)]
ggml : implement GEGLU_ERF and GEGLU_QUICK ops (llama/14445)
lhez [Thu, 3 Jul 2025 18:22:24 +0000 (11:22 -0700)]
opencl : broadcast for soft_max (llama/14510)
Jeff Bolz [Thu, 3 Jul 2025 18:21:14 +0000 (13:21 -0500)]
vulkan: support mixed/deepseekR1 FA head sizes (llama/14509)
* vulkan: better parameterize FA by head sizes
* vulkan: support mixed/deepseekR1 FA head sizes
Johannes Gäßler [Thu, 3 Jul 2025 15:05:18 +0000 (17:05 +0200)]
ggml: backward pass for split swiglu (llama/14483)
Nicolò Scipione [Thu, 3 Jul 2025 09:00:03 +0000 (11:00 +0200)]
Fix conditional enabling following arch checks for ggml-sycl (llama/14504)
Signed-off-by: nscipione <redacted>
Georgi Gerganov [Thu, 3 Jul 2025 07:53:35 +0000 (10:53 +0300)]
kv-cache : use ggml_set_rows (llama/14285)
* kv-cache : use ggml_set_rows
ggml-ci
* graph : separate k and v indices
ggml-ci
* cont : remove redundant ifs
ggml-ci
* kv-cache : improve find_slot impl
* kv-cache : bounds-check when accessing slot_info indices
* kv-cache : add comments
ggml-ci
* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends
ggml-ci
Georgi Gerganov [Thu, 3 Jul 2025 07:46:57 +0000 (10:46 +0300)]
ggml : fix FA mask dim 2 and 3 (llama/14505)
* ggml : fix FA mask dim 2 and 3
ggml-ci
* backends : unsupport batched FA in CUDA and Vulkan
ggml-ci
* vulkan : disable FA for mask->ne[2] != 1
Georgi Gerganov [Thu, 3 Jul 2025 04:48:32 +0000 (07:48 +0300)]
ggml : remove kompute backend (llama/14501)
ggml-ci
Aman Gupta [Wed, 2 Jul 2025 23:45:11 +0000 (07:45 +0800)]
CUDA: add dynamic shared mem to softmax, refactor general usage (llama/14497)
compilade [Wed, 2 Jul 2025 17:10:24 +0000 (13:10 -0400)]
llama : initial Mamba-2 support (llama/9126)
* llama : initial Mamba-2 support
* ggml : SIMD ggml_ssm_scan for Mamba-2
* ggml : improve ggml_mul speed when masking recurrent states
* llama : support running Mamba-Codestral-7B-v0.1
* llama : fix Mamba-2 conv state saving
* ggml : make the ggml_mul fast broadcast path more consistently formatted
* llama : remove unused variable
* llama : add missing break
* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present
The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.
* llama : avoid redundant state copy for Mamba 1 and 2
* metal : attempt to adapt SSM_SCAN for Mamba-2
* metal : fix SSM_SCAN pipeline scope
* metal : use log and exp instead of log1pf and expf in SSM_SCAN
* metal : remove unused arguments for SSM_SCAN
The max index is 31, so trimming the arguments is necessary.
* metal : add back n_seqs to SSM_SCAN args
Whoops, this is needed for the offset in the concatenated output.
* metal : fix SSM_SCAN state head offset
* metal : fix wrong number of tokens per sequence in SSM_SCAN
* ggml : remove unused fast broadcast path in GGML_MUL
This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.
* ggml : avoid multiply by D in GGML_OP_SSM_SCAN
This makes the weight buft detection in src/llama.cpp simpler.
* convert : transpose Mamba-2 A, D and reshape SSM_NORM
This breaks existing conversions of Mamba-2 models
to avoid some reshapes.
Not sure if it's a good idea,
but it makes the graph slightly cleaner.
* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks
* convert : fix flake8 lint
* metal : fix confusion between ; and ,
* metal : add missing args for nb references in ssm_scan_f32_group
* metal : single-user mamba2 inference works
* kv-cache : remove const_cast when setting inputs for s_copy
And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
* convert : avoid AutoConfig for Mamba and Mamba2 hparams
* kv-cache : allow context shift for recurrent models
* graph : fix recurrent state copies when avoiding copies
Works, but using lambda functions might not be that clean.
* ggml : fix mamba2 ssm scan when compiled with SVE
* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches
* cuda : implement ssm scan for Mamba2
There is still room for improvement, but it works!
* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2
* mamba : fix mismatched new and delete size for llm_build_mamba
Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON
* cuda : graceful fallback for Mamba-1 models with weird embd size
Aman Gupta [Wed, 2 Jul 2025 12:34:24 +0000 (20:34 +0800)]
CUDA: add softmax broadcast (llama/14475)
* CUDA: add softmax broadcast
* Pass by const ref
* Review: Use blockDims for indexing, remove designated initializers
* Add TODO for noncontigous input/output
Johannes Gäßler [Wed, 2 Jul 2025 11:42:12 +0000 (13:42 +0200)]
CUDA: broadcasting for FlashAttention mask (llama/14500)
Jeff Bolz [Tue, 1 Jul 2025 08:32:56 +0000 (03:32 -0500)]
vulkan: support softmax/FA batch and broadcast (llama/14449)
Georgi Gerganov [Sat, 12 Jul 2025 11:35:19 +0000 (14:35 +0300)]
sync : llama.cpp
Georgi Gerganov [Sat, 12 Jul 2025 11:33:49 +0000 (14:33 +0300)]
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (llama/14435)
zhouwg [Wed, 2 Jul 2025 12:38:10 +0000 (20:38 +0800)]
opencl : fix possible buffer overflow in dump_tensor (llama/14490)
Eric Zhang [Wed, 2 Jul 2025 11:00:04 +0000 (19:00 +0800)]
opencl : skip empty nodes on cgraph compute (llama/14491)
lhez [Wed, 2 Jul 2025 07:07:42 +0000 (00:07 -0700)]
opencl : update upscale to support align corners (llama/14488)
Björn Ganster [Wed, 2 Jul 2025 05:19:31 +0000 (07:19 +0200)]
ggml : Callback before abort (llama/14481)
* Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed.
* Return previous callback to allow callback chaining (a generic sketch of this pattern follows below)
* style fixes
---------
Co-authored-by: Diego Devesa <redacted>
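A generic sketch of the install-and-chain pattern this commit describes; the names below are hypothetical and not ggml's actual API:

```cpp
#include <cstdio>
#include <cstdlib>

typedef void (*abort_callback_t)(const char* msg);

static abort_callback_t g_abort_cb = nullptr;

// Install a new callback and return the previous one so callers can chain.
abort_callback_t set_abort_callback(abort_callback_t cb) {
    abort_callback_t prev = g_abort_cb;
    g_abort_cb = cb;
    return prev;
}

[[noreturn]] void fatal(const char* msg) {
    if (g_abort_cb) {
        g_abort_cb(msg);            // e.g. show a dialog, flush data to disk
    }
    std::fprintf(stderr, "%s\n", msg);
    std::abort();
}
```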
Georgi Gerganov [Tue, 1 Jul 2025 15:04:08 +0000 (18:04 +0300)]
ci : disable fast-math for Metal GHA CI (llama/14478)
* ci : disable fast-math for Metal GHA CI
ggml-ci
* cont : remove -g flag
ggml-ci
Chenguang Li [Tue, 1 Jul 2025 08:47:30 +0000 (16:47 +0800)]
CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (llama/14411)
* [CANN]update to aclnnGroupedMatmulV2
Signed-off-by: noemotiovon <redacted>
* Support MUL_MAT_ID on 310p
Signed-off-by: noemotiovon <redacted>
* fix editorconfig
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Jeff Bolz [Tue, 1 Jul 2025 08:43:08 +0000 (03:43 -0500)]
vulkan: Split large mul_mat_id to fit in shared memory (llama/14451)
Sigbjørn Skjæret [Tue, 1 Jul 2025 08:14:21 +0000 (10:14 +0200)]
add GELU_ERF (llama/14455)
Kai Pastor [Fri, 11 Jul 2025 14:47:57 +0000 (16:47 +0200)]
ci : simplify, switch to ninja (#1295)
* CI: Move GGML_N_THREADS to env
* CI: Move macos-13 into matrix
* CI: Build with ninja
* CI: Remove env
Kai Pastor [Thu, 10 Jul 2025 06:57:51 +0000 (08:57 +0200)]
examples : Test installed CMake config package (#1294)
* Add test-cmake example
* CI: Run test for installed cmake config
Acly [Thu, 3 Jul 2025 17:58:12 +0000 (19:58 +0200)]
vulkan : implement bilinear interpolation for ggml_upscale/ggml_interpolate (#1291)
* supports GGML_SCALE_MODE_BILINEAR and GGML_SCALE_FLAG_ALIGN_CORNERS
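For reference, the two source-coordinate conventions behind bilinear resizing are shown below; this is the standard formulation, given here for illustration rather than taken from the Vulkan shader:

```cpp
// Map a destination index to a (fractional) source coordinate.
float src_coord(int dst_i, int dst_size, int src_size, bool align_corners) {
    if (align_corners) {
        // grid endpoints of src and dst coincide
        return dst_size > 1 ? dst_i * float(src_size - 1) / float(dst_size - 1) : 0.0f;
    }
    // half-pixel-centers convention; typically clamped at the borders when sampling
    const float scale = float(src_size) / float(dst_size);
    return (dst_i + 0.5f) * scale - 0.5f;
}
```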
Acly [Thu, 3 Jul 2025 17:47:15 +0000 (19:47 +0200)]
vulkan : implement ggml_roll (#1290)
* vulkan : implement ggml_roll
* vulkan : refactor vk_op_unary_push_constants initialization
Daniel Bevenius [Wed, 2 Jul 2025 11:55:32 +0000 (13:55 +0200)]
ggml : add version function to get lib version (#1286)
* ggml : add version function to get lib version
This commit adds a function `ggml_version()` to the ggml library that
returns the version of the library as a string.
The motivation for this is that it can be useful to be able to
programmatically check the version of the ggml library being used.
Usage:
```c
printf("GGML version: %s\n", ggml_version());
```
Output:
```console
GGML version: 0.0.2219
```
* ggml : add ggml_commit()
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 2 Jul 2025 05:07:23 +0000 (08:07 +0300)]
sync : whisper.cpp
Georgi Gerganov [Tue, 1 Jul 2025 08:10:25 +0000 (11:10 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Tue, 1 Jul 2025 08:05:48 +0000 (11:05 +0300)]
ggml : remove trailing whitespace (llama/0)
lhez [Tue, 1 Jul 2025 07:19:16 +0000 (00:19 -0700)]
opencl : add GEGLU, REGLU, SWIGLU (llama/14456)
Aman Gupta [Mon, 30 Jun 2025 15:57:04 +0000 (23:57 +0800)]
Add Conv2d for CPU (llama/14388)
* Conv2D: Add CPU version
* Half decent
* Tiled approach for F32
* remove file
* Fix tests
* Support F16 operations
* add assert about size
* Review: further formatting fixes, add assert and use CPU version of fp32->fp16
Georgi Gerganov [Mon, 30 Jun 2025 14:04:05 +0000 (17:04 +0300)]
metal : disable fast-math for some cpy kernels (llama/14460)
* metal : disable fast-math for some cpy kernels
ggml-ci
* cont : disable for q4_1
ggml-ci
* cont : disable for iq4_nl
ggml-ci
Romain Biessy [Mon, 30 Jun 2025 12:52:02 +0000 (14:52 +0200)]
ggml-cpu: sycl: Re-enable exp f16 (llama/14462)
Diego Devesa [Mon, 30 Jun 2025 10:43:15 +0000 (03:43 -0700)]
test-backend-ops : disable llama test (llama/14461)
xiaobing318 [Mon, 30 Jun 2025 09:48:24 +0000 (17:48 +0800)]
cmake : Remove redundant include path in CMakeLists.txt (llama/14452)
* Update docker.yml
Modify docker.yml so that the workflow no longer runs on a schedule; it can still be started manually when needed.
* Remove redundant include path in CMakeLists.txt
The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths.
* Enable scheduled Docker image builds
Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date.
Vedran Miletić [Mon, 30 Jun 2025 08:17:18 +0000 (10:17 +0200)]
scripts : make the shell scripts cross-platform (llama/14341)
Akarshan Biswas [Sun, 29 Jun 2025 15:37:58 +0000 (21:07 +0530)]
SYCL: disable faulty fp16 exp kernel (llama/14395)
* SYCL: disable faulty fp16 CPU exponent for now
* Revert "SYCL: disable faulty fp16 CPU exponent for now"
This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202.
* SYCL: disable faulty fp16 CPU exponent for now
* Fix logic of disabling exponent kernel
Sigbjørn Skjæret [Sun, 29 Jun 2025 12:38:10 +0000 (14:38 +0200)]
ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (llama/14443)
Sigbjørn Skjæret [Sun, 29 Jun 2025 09:04:10 +0000 (11:04 +0200)]
ggml : implement REGLU/GEGLU/SWIGLU ops (llama/14158)
* implement unary REGLU/GEGLU/SWIGLU cpu ops (the expressions these gated ops compute are sketched after this list)
* relax constraints
* duplicate shape of source
* fix ggml_vec_geglu_f16
* special case gated ops
* implement unary REGLU/GEGLU/SWIGLU cuda ops
* tighten constraints again
* refactor into GGML_GLU_OP
* metal : add glu kernels
ggml-ci
* add CUDA_GLU_BLOCK_SIZE [no ci]
* more constraints and use 64bit ints
ggml-ci
* 64bit multiplication [no ci]
* implement swapped variants (cpu/cuda)
* update comment [no ci]
ggml-ci
* Vulkan: Add GLU ops and shaders
* SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate
* ggml : implement GLU for split up/gate (llama/14181)
* implement GLU for split up/gate
* add tests for ggml_glu_split
* Vulkan: Implement glu_split logic and shader support
* add split to logging [no ci]
* SYCL: refactor element_size ops and add split up and gate support to gated kernels
* SYCL: switch GEGLU to use tanh approximation
---------
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
* GGML: increase OP count in assertion
* Refactor: Optimize SYCL element-wise operations with unary function inlining
This commit refactors the SYCL element-wise operations to improve performance by:
- Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead.
- Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic.
- Replacing direct kernel calls with calls to these inlined functions.
- Using `__dpct_inline__` to encourage compiler inlining.
- Minor code cleanup and consistency improvements.
The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices.
* vulkan: Increase workgroup size for GLU, for performance (llama/14345)
* vulkan: Increase workgroup size for GLU, for performance
* vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup
* merge fix
* metal : add support for split and swap
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
Co-authored-by: Jeff Bolz <redacted>
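A scalar sketch of the gated ops named in this commit, using the standard GLU-variant definitions (split the input into halves a and b, activate a, multiply by b); reference loops only, ggml's kernels are vectorized:

```cpp
#include <cmath>

inline float relu_f(float x) { return x > 0.0f ? x : 0.0f; }
inline float silu_f(float x) { return x / (1.0f + std::exp(-x)); }
inline float gelu_f(float x) { return 0.5f * x * (1.0f + std::erf(x / std::sqrt(2.0f))); }

void reglu (int n, float* y, const float* a, const float* b) { for (int i = 0; i < n; ++i) y[i] = relu_f(a[i]) * b[i]; }
void geglu (int n, float* y, const float* a, const float* b) { for (int i = 0; i < n; ++i) y[i] = gelu_f(a[i]) * b[i]; }
void swiglu(int n, float* y, const float* a, const float* b) { for (int i = 0; i < n; ++i) y[i] = silu_f(a[i]) * b[i]; }
```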
Jeff Bolz [Sun, 29 Jun 2025 07:43:36 +0000 (02:43 -0500)]
vulkan: Add fusion support for RMS_NORM+MUL (llama/14366)
* vulkan: Add fusion support for RMS_NORM+MUL
- Add a use_count to ggml_tensor, so we can detect if an output is used more than once (a generic sketch of this check follows the list).
- Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor.
- Add detection logic and basic fusion logic in ggml-vulkan.
- Add some testing support for fusion. Rather than computing one node at a time, allow
for computing the whole graph and just testing one node's results. Add rms_norm_mul tests
and enable a llama test.
* extract some common fusion logic
* fix -Winconsistent-missing-override
* move ggml_can_fuse to a common function
* build fix
* C and C++ versions of can_fuse
* move use count to the graph to avoid data races and double increments when used in multiple threads
* use hash table lookup to find node index
* change use_counts to be indexed by hash table slot
* minimize hash lookups
style fixes
* last node doesn't need single use.
fix type.
handle mul operands being swapped.
* remove redundant parameter
---------
Co-authored-by: slaren <redacted>
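A generic sketch of the use-count check behind this fusion; the types and names are placeholders, not ggml's actual structures:

```cpp
#include <unordered_map>
#include <vector>

struct Node {
    int op;                  // e.g. OP_RMS_NORM, OP_MUL
    std::vector<Node*> src;  // inputs
};

// Count how many graph nodes consume each tensor.
std::unordered_map<const Node*, int> count_uses(const std::vector<Node*>& graph) {
    std::unordered_map<const Node*, int> uses;
    for (const Node* n : graph)
        for (const Node* s : n->src)
            ++uses[s];
    return uses;
}

// Producer a can be folded into consumer b only if b reads a and nothing else
// does, so dropping a's standalone output cannot change any other node.
bool can_fuse(const Node* a, const Node* b,
              const std::unordered_map<const Node*, int>& uses) {
    bool b_reads_a = false;
    for (const Node* s : b->src) b_reads_a |= (s == a);
    auto it = uses.find(a);
    return b_reads_a && it != uses.end() && it->second == 1;
}
```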
Aman Gupta [Sat, 28 Jun 2025 17:30:53 +0000 (01:30 +0800)]
CUDA: add bf16 and f32 support to cublas_mul_mat_batched (llama/14361)
* CUDA: add bf16 and f32 support to cublas_mul_mat_batched
* Review: add type traits and make function more generic
* Review: make check more explicit, add back comments, and fix formatting
* Review: fix formatting, remove useless type conversion, fix naming for bools