git.djapps.eu Git - pkg/ggml/sources/ggml/log
Georgi Gerganov [Tue, 1 Jul 2025 08:05:48 +0000 (11:05 +0300)]
ggml : remove trailing whitespace (llama/0)
lhez [Tue, 1 Jul 2025 07:19:16 +0000 (00:19 -0700)]
opencl : add GEGLU, REGLU, SWIGLU (llama/14456)
Aman Gupta [Mon, 30 Jun 2025 15:57:04 +0000 (23:57 +0800)]
Add Conv2d for CPU (llama/14388)
* Conv2D: Add CPU version
* Half decent
* Tiled approach for F32
* remove file
* Fix tests
* Support F16 operations
* add assert about size
* Review: further formatting fixes, add assert and use CPU version of fp32->fp16
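For orientation, the op added above is a sliding-window dot product. A minimal single-channel, valid-mode reference in C (a sketch only; ggml's CONV_2D additionally handles channels, strides, padding and dilation, and the CPU path above is tiled for F32):
```c
// Reference 2-D convolution (cross-correlation, as is conventional in ML)
// for one channel; not ggml's implementation.
static void conv2d_ref(const float * x, int xw, int xh,  // input, row-major
                       const float * k, int kw, int kh,  // kernel
                       float * y) {                      // output: (xw-kw+1) x (xh-kh+1)
    const int yw = xw - kw + 1;
    const int yh = xh - kh + 1;
    for (int oy = 0; oy < yh; oy++) {
        for (int ox = 0; ox < yw; ox++) {
            float acc = 0.0f;
            for (int ky = 0; ky < kh; ky++) {
                for (int kx = 0; kx < kw; kx++) {
                    acc += x[(oy + ky)*xw + (ox + kx)] * k[ky*kw + kx];
                }
            }
            y[oy*yw + ox] = acc;
        }
    }
}
```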
Georgi Gerganov [Mon, 30 Jun 2025 14:04:05 +0000 (17:04 +0300)]
metal : disable fast-math for some cpy kernels (llama/14460)
* metal : disable fast-math for some cpy kernels
ggml-ci
* cont : disable for q4_1
ggml-ci
* cont : disable for iq4_nl
ggml-ci
Romain Biessy [Mon, 30 Jun 2025 12:52:02 +0000 (14:52 +0200)]
ggml-cpu: sycl: Re-enable exp f16 (llama/14462)
Diego Devesa [Mon, 30 Jun 2025 10:43:15 +0000 (03:43 -0700)]
test-backend-ops : disable llama test (llama/14461)
xiaobing318 [Mon, 30 Jun 2025 09:48:24 +0000 (17:48 +0800)]
cmake : Remove redundant include path in CMakeLists.txt (llama/14452)
* Update docker.yml
Modify docker.yml so the workflow stops running on a schedule; if you want to run the workflow, it can be started manually.
* Remove redundant include path in CMakeLists.txt
The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths.
* Enable scheduled Docker image builds
Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date.
Vedran Miletić [Mon, 30 Jun 2025 08:17:18 +0000 (10:17 +0200)]
scripts : make the shell scripts cross-platform (llama/14341)
Akarshan Biswas [Sun, 29 Jun 2025 15:37:58 +0000 (21:07 +0530)]
SYCL: disable faulty fp16 exp kernel (llama/14395)
* SYCL: disable faulty fp16 CPU exponent for now
* Revert "SYCL: disable faulty fp16 CPU exponent for now"
This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202.
* SYCL: disable faulty fp16 CPU exponent for now
* Fix logic of disabling exponent kernel
Sigbjørn Skjæret [Sun, 29 Jun 2025 12:38:10 +0000 (14:38 +0200)]
ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (llama/14443)
Sigbjørn Skjæret [Sun, 29 Jun 2025 09:04:10 +0000 (11:04 +0200)]
ggml : implement REGLU/GEGLU/SWIGLU ops (llama/14158)
* implement unary REGLU/GEGLU/SWIGLU cpu ops
* relax constraints
* duplicate shape of source
* fix ggml_vec_geglu_f16
* special case gated ops
* implement unary REGLU/GEGLU/SWIGLU cuda ops
* tighten constraints again
* refactor into GGML_GLU_OP
* metal : add glu kernels
ggml-ci
* add CUDA_GLU_BLOCK_SIZE [no ci]
* more constraints and use 64bit ints
ggml-ci
* 64bit multiplication [no ci]
* implement swapped variants (cpu/cuda)
* update comment [no ci]
ggml-ci
* Vulkan: Add GLU ops and shaders
* SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate
* ggml : implement GLU for split up/gate (llama/14181)
* implement GLU for split up/gate
* add tests for ggml_glu_split
* Vulkan: Implement glu_split logic and shader support
* add split to logging [no ci]
* SYCL: refactor element_size ops and add split up and gate support to gated kernels
* SYCL: switch GEGLU to use tanh approximation
---------
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
* GGML: increase OP count in assertion
* Refactor: Optimize SYCL element-wise operations with unary function inlining
This commit refactors the SYCL element-wise operations to improve performance by:
- Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead.
- Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic.
- Replacing direct kernel calls with calls to these inlined functions.
- Using `__dpct_inline__` to encourage compiler inlining.
- Minor code cleanup and consistency improvements.
The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices.
* vulkan: Increase workgroup size for GLU, for performance (llama/14345)
* vulkan: Increase workgroup size for GLU, for performance
* vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup
* merge fix
* metal : add support for split and swap
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Akarshan <redacted>
Co-authored-by: Jeff Bolz <redacted>
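For readers unfamiliar with the ops above: each GLU variant splits its input into a gate half and an "up" half (or, in the split variants, takes them as two tensors), activates the gate, and multiplies elementwise. A scalar sketch, assuming the tanh approximation of GELU mentioned in the SYCL note:
```c
#include <math.h>

static inline float op_reglu (float g, float u) { return (g > 0.0f ? g : 0.0f) * u; }  // ReLU-gated
static inline float op_swiglu(float g, float u) { return g / (1.0f + expf(-g)) * u; }  // SiLU-gated
static inline float op_geglu (float g, float u) {                                      // GELU-gated (tanh approx.)
    const float c = 0.79788456f; // sqrt(2/pi)
    return 0.5f * g * (1.0f + tanhf(c * (g + 0.044715f * g*g*g))) * u;
}
```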
Jeff Bolz [Sun, 29 Jun 2025 07:43:36 +0000 (02:43 -0500)]
vulkan: Add fusion support for RMS_NORM+MUL (llama/14366)
* vulkan: Add fusion support for RMS_NORM+MUL
- Add a use_count to ggml_tensor, so we can detect if an output is used more than once.
- Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor.
- Add detection logic and basic fusion logic in ggml-vulkan.
- Add some testing support for fusion. Rather than computing one node at a time, allow
for computing the whole graph and just testing one node's results. Add rms_norm_mul tests
and enable a llama test.
* extract some common fusion logic
* fix -Winconsistent-missing-override
* move ggml_can_fuse to a common function
* build fix
* C and C++ versions of can_fuse
* move use count to the graph to avoid data races and double increments when used in multiple threads
* use hash table lookup to find node index
* change use_counts to be indexed by hash table slot
* minimize hash lookups
style fixes
* last node doesn't need single use.
fix type.
handle mul operands being swapped.
* remove redundant parameter
---------
Co-authored-by: slaren <redacted>
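The core of the detection logic described above can be sketched as follows (types and names hypothetical, not ggml's actual API): an RMS_NORM may be folded into the MUL that follows it only when that MUL is the sole consumer of the norm's output, and the norm may appear as either MUL operand.
```c
#include <stdbool.h>

enum op { OP_RMS_NORM, OP_MUL /* ... */ };

// Hypothetical node type; the real implementation tracks use counts in the
// graph (indexed by hash-table slot) to avoid data races, per the notes above.
struct node { enum op op; struct node * src[2]; int use_count; };

static bool can_fuse_rms_norm_mul(const struct node * norm, const struct node * mul) {
    if (norm->op != OP_RMS_NORM || mul->op != OP_MUL) return false;
    // the mul must read the norm's output (operands may be swapped) ...
    if (mul->src[0] != norm && mul->src[1] != norm) return false;
    // ... and nothing else may read it, or fusing would change other results
    return norm->use_count == 1;
}
```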
Aman Gupta [Sat, 28 Jun 2025 17:30:53 +0000 (01:30 +0800)]
CUDA: add bf16 and f32 support to cublas_mul_mat_batched (llama/14361)
* CUDA: add bf16 and f32 support to cublas_mul_mat_batched
* Review: add type traits and make function more generic
* Review: make check more explicit, add back comments, and fix formatting
* Review: fix formatting, remove useless type conversion, fix naming for bools
Jeff Bolz [Sat, 28 Jun 2025 15:36:40 +0000 (10:36 -0500)]
vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (llama/14378)
Jeff Bolz [Sat, 28 Jun 2025 15:17:09 +0000 (10:17 -0500)]
vulkan: lock accesses of pinned_memory vector (llama/14333)
Xinpeng Dou [Sat, 28 Jun 2025 09:35:41 +0000 (17:35 +0800)]
fix async_mode bug (llama/14432)
Jeff Bolz [Sat, 28 Jun 2025 03:35:30 +0000 (22:35 -0500)]
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (llama/14427)
This setting needs to be passed through to vulkan-shaders-gen
Radoslav Gerganov [Fri, 27 Jun 2025 13:41:40 +0000 (16:41 +0300)]
ggml : add ggml_set_rows (llama/14274)
* ggml : add ggml_set_rows
Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using
indices from 'c'.
ref: #8366
* use I64 for indices
* ggml : add repeat impl for i64
* ggml : add ggml_is_contiguous_rows
* ggml : ggml_set_rows support broadcast
* ggml : ggml_set_rows support quantized dst
ggml-ci
* ggml : support GGML_TYPE_F32 ".from_float" trait
* ggml : ggml_set_rows update comment + better index name
* tests : add ggml_set_rows
* metal : add ggml_set_rows implementation
ggml-ci
* ggml : simplify forward_dup_f32
* ggml : fix supports_op
* tests : add comment to set_rows
* ggml : leave the repeat_i64 for a separate PR
ggml-ci
* ggml : set_rows use std::min instead of MIN
* ggml : better error message for set_rows unsupported type
* metal : perform op->type check only once
* tests : more consistent implementation + more tests
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
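In reference form, the semantics described above (ignoring broadcast and quantized destinations; a sketch with a hypothetical helper name, not ggml's code):
```c
#include <stdint.h>
#include <string.h>

// ggml_set_rows(a, b, c): row r of 'b' is copied into row c[r] of 'a'.
static void set_rows_ref(float * a, const float * b, const int64_t * c,
                         int64_t nrows_b, int64_t ncols) {
    for (int64_t r = 0; r < nrows_b; r++) {
        memcpy(a + c[r]*ncols, b + r*ncols, ncols*sizeof(float));
    }
}
```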
bandoti [Thu, 26 Jun 2025 16:46:53 +0000 (13:46 -0300)]
cmake: regen vulkan shaders when shaders-gen sources change (llama/14398)
* Add shaders-gen sources as target deps
Georgi Gerganov [Thu, 26 Jun 2025 12:51:19 +0000 (15:51 +0300)]
metal : add special-case mat-vec mul for ne00 == 4 (llama/14385)
ggml-ci
Georgi Gerganov [Thu, 26 Jun 2025 12:50:15 +0000 (15:50 +0300)]
metal : batch rows copy in a single threadgroup (llama/14384)
* metal : batch rows copy in a single threadgroup
ggml-ci
* metal : handle some edge cases when threadgroup size is not a power of 2
ggml-ci
R0CKSTAR [Thu, 26 Jun 2025 04:11:59 +0000 (12:11 +0800)]
musa: enable fp16 mma (all) and cublas on qy2 (llama/13842)
* musa: enable fp16 mma (all) and cublas on qy2
Signed-off-by: Xiaodong Ye <redacted>
* Update src/ggml-cuda/ggml-cuda.cu
Co-authored-by: Johannes Gäßler <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Aaron Teo [Wed, 25 Jun 2025 21:49:04 +0000 (05:49 +0800)]
ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)
* ggml-cpu: add nnpa compile flag
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 4a9f60c201573128f73a65999b3e5cc497fae5c1)
* ggml-cpu: add fp16->fp32 nnpa first
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8d4a7987f9c1887f716be96250f2caeee0253929)
* ggml-cpu: add fp32->fp16
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 0ff0d6516247a41d2ade42b42cf0d676a4dd1627)
* ggml-cpu: better variable names
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 2f58bbcbb89c183340e252362b2a40651f573f1f)
* docs: update s390x docs
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 01b929491b50071a5d0572235dcf5a449da70aa7)
* ggml-cpu: add debugging prints to see if dlf16 is correct
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix print vs printf
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix float placeholder
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: ensure fp16 and fp32 load and stores are called
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fp16 load ensured to hit
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove sigint from fp16 store
For some reason, the function is not getting a hit when debugged with gdb. We will need to investigate further.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: nnpa switch to vec_xst test
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to vec_xst for 4 element loops also
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework noop
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove noop, general code cleanup
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarify variable naming
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add breakpoint for debugging
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: test fix for conversion failure
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: disable fp32->fp16 nnpa conversions for now
There are some conversion failures in NNPA that require the eyes of an IBM STSM. Will create a separate PR to introduce the fp32->fp16 change.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to elif macro
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: reattempt fp32->fp16
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: reattempt fp32->fp16
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix compiler types
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: change to typedef vector types
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add 4 element loops for fp32->fp16
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarified vector naming
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back fp32->fp16 store nnpa
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add nnpa macro check in ggml-impl
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add missing __func__
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: diagnose why __NNPA__ macro is not being defined
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: import vecintrin.h to fix compiler errors
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update macro tests
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move s390x typedef to own header file
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move s390x typedef to own header file"
This reverts commit 157f856c34589566151630e294563a420702db39.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to importing ggml-cpu-impl instead
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix macro declaration
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: test more macros
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add debug prints
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bruteforce macro definitions
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move macro definitions
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add ggml-impl.h to cmakelists
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to private macros
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move s390x typedef to own header file
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 157f856c34589566151630e294563a420702db39)
* ggml-cpu: move things around
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back compile macros
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to quotes for import
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add compiler error macro
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add s390x detection in ggml-src
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back compile definitions
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: undo cmakelists work
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move s390x typedef to own header file"
This reverts commit 18d79e1a30b39d9aaa0bd58400c5cf2c32135c9a.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove typedefs.h
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove typedef from cmakelists
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add ggml-impl.h future notes
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add todo comment for future reference
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarify naming of dlf16
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove unnecessary target compile definitions
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings
Signed-off-by: Aaron Teo <redacted>
* ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu
Signed-off-by: Aaron Teo <redacted>
* docs: update broken huggingface link for s390x
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix duplicate func names during compile
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: fix duplicate func names during compile"
This reverts commit fbb733451f27677063b914d4f6c9a9841d45b38d.
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu"
This reverts commit bd288e8fa52b5244f65cee21cb61062f1a9e0ca5.
Signed-off-by: Aaron Teo <redacted>
* ggml: refactor fp16<->fp32 simd to ggml-cpu
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix missing simd-mappings.h import in quants.c
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix missing simd-mappings.h within repack
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix amx mmq missing simd-mappings.h
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: attempt at fixing loongarch failing build
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move nnpa together with other fp16<->fp32 simd
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix wrong refactor of ggml-base
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555
Signed-off-by: Aaron Teo <redacted>
* ggml: remove dependency on ggml-cpu from ggml-base
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove mistaken fallback macro
Fallback logic was already implemented, but I was too sleepy to realise.
Signed-off-by: Aaron Teo <redacted>
* ggml: move ggml_table_f32_f16 to ggml-cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures"
This reverts commit 32a3533564bdb7902cefb9c89b1c9e956a81ce29.
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml: move ggml_table_f32_f16 to ggml-cpu"
This reverts commit 9e40d984ad27d7b60392fb2b7548885201864fe4.
Signed-off-by: Aaron Teo <redacted>
* ggml: move ggml_table_f32_f16 to ggml-cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 9e40d984ad27d7b60392fb2b7548885201864fe4)
* ggml: move ggml_table_f32_f16 to ggml-cpu.c
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: extern c ggml_table_f32_f16 + chore docs
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h
we rely on the variable declaration in ggml-cpu.c instead
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h"
This reverts commit f71b21d2f74f5e03ec0c2b4fefd3cbf395aecf16.
Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back ggml_table_f32_f16
Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: bring back ggml_table_f32_f16"
This reverts commit 2dce119178bed5ef5c8398c4230ddd14fef80e49.
Signed-off-by: Aaron Teo <redacted>
* fix ggml time initialization
* fix f32_f16 table init
* remove extra line
---------
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: slaren <redacted>
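The entry above routes fp16<->fp32 through per-arch SIMD mappings (NNPA on s390x) with a scalar fallback. For reference, a portable bit-level fp16 -> fp32 conversion of the kind such a fallback performs (a sketch, not ggml's exact code):
```c
#include <stdint.h>
#include <string.h>

static inline float fp16_to_fp32_ref(uint16_t h) {
    const uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1f;
    uint32_t mant =  h & 0x3ff;
    uint32_t bits;
    if (exp == 0x1f) {                          // inf / NaN
        bits = sign | 0x7f800000 | (mant << 13);
    } else if (exp != 0) {                      // normal: rebias exponent 15 -> 127
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    } else if (mant == 0) {                     // +/- zero
        bits = sign;
    } else {                                    // subnormal: renormalize
        int e = -1;
        do { mant <<= 1; e++; } while ((mant & 0x400) == 0);
        bits = sign | ((uint32_t)(112 - e) << 23) | ((mant & 0x3ff) << 13);
    }
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```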
Sigbjørn Skjæret [Wed, 25 Jun 2025 21:26:51 +0000 (23:26 +0200)]
ggml : do not output unprintable characters on GGUF load failure (llama/14381)
Anton Mitkov [Wed, 25 Jun 2025 16:09:55 +0000 (17:09 +0100)]
sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (llama/13973)
lhez [Tue, 24 Jun 2025 18:46:25 +0000 (11:46 -0700)]
opencl: ref count `ggml_backend_opencl_context` and refactor profiling (llama/14254)
* Move profiling info into `ggml_backend_opencl_context`
* Add `enqueue_ndrange_kernel` to launch kernel
uvos [Mon, 23 Jun 2025 23:12:56 +0000 (01:12 +0200)]
CUDA/HIP: optimize mmv paths taken for HIP devices (llama/14324)
Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Mon, 23 Jun 2025 11:11:31 +0000 (13:11 +0200)]
CUDA: mul_mat_v support for batch sizes > 1 (llama/14262)
* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation
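The 64-bit change above addresses a common pitfall: computing a flat element offset with 32-bit multiplication, which can wrap for large tensors. A minimal illustration (hypothetical helper):
```c
#include <stdint.h>

// Promote BEFORE multiplying so row*stride is computed in 64 bits.
static inline int64_t elem_offset(int row, int col, int stride) {
    return (int64_t) row * stride + col;
}
```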
uvos [Sun, 22 Jun 2025 14:51:23 +0000 (16:51 +0200)]
HIP: enable vec fattn on RDNA4 (llama/14323)
Aman Gupta [Sun, 22 Jun 2025 04:39:54 +0000 (12:39 +0800)]
CUDA: add mean operation (llama/14313)
* CUDA: add mean operation
* add back sum_rows_f32_cuda
* Review: early exit if col!=0
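As a plain-C reference for the new op: the mean reduces each row to its average, i.e. the existing row sum followed by a divide (a sketch, not the CUDA kernel):
```c
#include <stdint.h>

static void mean_rows_ref(const float * x, float * y, int64_t nrows, int64_t ncols) {
    for (int64_t r = 0; r < nrows; r++) {
        double acc = 0.0;
        for (int64_t c = 0; c < ncols; c++) {
            acc += x[r*ncols + c];
        }
        y[r] = (float)(acc / ncols);
    }
}
```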
Markus Tavenrath [Sat, 21 Jun 2025 06:17:12 +0000 (08:17 +0200)]
Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (llama/13792)
* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. As a first step, compute pipelines are labeled.
* remove #ifdef for debug utils and add queue marker.
Georgi Gerganov [Sat, 21 Jun 2025 05:04:18 +0000 (08:04 +0300)]
metal : fix thread-safety (llama/14300)
ggml-ci
Acly [Tue, 1 Jul 2025 07:11:00 +0000 (09:11 +0200)]
ggml-cpu : "align corners" for bilinear upscale/downscale (#1285)
* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
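The two coordinate mappings behind the "align corners" flag above, for one axis (a sketch; ggml's exact convention may differ in detail). With align-corners the first and last samples of source and destination coincide; without it, pixel centers are mapped:
```c
// Map destination index i to a (fractional) source coordinate.
static float src_coord(int i, int dst_n, int src_n, int align_corners) {
    if (align_corners) {
        return dst_n > 1 ? (float)i * (src_n - 1) / (dst_n - 1) : 0.0f;
    }
    const float scale = (float)src_n / (float)dst_n;
    return ((float)i + 0.5f) * scale - 0.5f;
}
```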
Acly [Wed, 25 Jun 2025 10:16:22 +0000 (12:16 +0200)]
build : fix build with clang-cl on Windows (#1284)
* build : fix building tests with clang-cl on Windows
- clang-cl.exe (clang with MSVC CLI) doesn't like the space in /STACK option
- cl.exe (MSVC) works either way
* build : fix MSVC compiler warnings in test-roll.cpp
Daniel Bevenius [Tue, 24 Jun 2025 04:10:16 +0000 (06:10 +0200)]
ggml-quants : rename best_mad to best_error (#1283)
This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.
The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.
Georgi Gerganov [Sat, 21 Jun 2025 06:21:28 +0000 (09:21 +0300)]
tests : cleanup old tests (#1282)
ggml-ci
Georgi Gerganov [Fri, 20 Jun 2025 18:04:04 +0000 (21:04 +0300)]
sync : llama.cpp
ggml-ci
Aman Gupta [Fri, 20 Jun 2025 14:48:24 +0000 (22:48 +0800)]
CUDA: add conv_2d_transpose (llama/14287)
* CUDA: add conv_2d_transpose
* remove direct include of cuda_fp16
* Review: add brackets for readability, remove ggml_set_param and add asserts
Nicolò Scipione [Fri, 20 Jun 2025 13:07:21 +0000 (15:07 +0200)]
sycl: add usage of enqueue_functions extension (llama/14244)
* Add header and namespace to use enqueue_functions extension
* Convert submit and parallel_for to use new extension in convert.cpp
* Convert submit and parallel_for to use extension in ggml-sycl.cpp
* Convert submit and parallel_for to use extension in gla.cpp
* Convert submit and parallel_for in mmq.cpp
* Convert submit and parallel_for in mmvq.cpp
* Convert submit and parallel_for in remaining files
* Convert all simple parallel_for to nd_launch from enqueue_functions
extension
* Wrapping extension in general function
Create a general function that enables the enqueue_functions extension if it is enabled in the compiler; otherwise, call the general SYCL function to launch kernels.
---------
Signed-off-by: nscipione <redacted>
Christian Kastner [Fri, 20 Jun 2025 12:17:32 +0000 (12:17 +0000)]
Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286)
* Add PowerPC feature detection and scoring
* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for PowerPC
* ggml-cpu: Delay some initializations until function is called
When using GGML_BACKEND_DL=ON, these initializations might use
instructions that are not supported by the current CPU.
---------
Co-authored-by: Diego Devesa <redacted>
Diego Devesa [Fri, 20 Jun 2025 11:57:36 +0000 (04:57 -0700)]
cuda : synchronize graph capture and cublas handle destruction (llama/14288)
Works around an issue that may cause CUDA graph capture to fail when a cuBLAS handle is destroyed in a different thread.
Georgi Gerganov [Fri, 20 Jun 2025 08:19:15 +0000 (11:19 +0300)]
ggml : fix repack work size for mul_mat_id (llama/14292)
ggml-ci
Charles Xu [Fri, 20 Jun 2025 07:51:01 +0000 (09:51 +0200)]
ggml: Update KleidiAI to v1.9.0 (llama/14277)
Aman Gupta [Fri, 20 Jun 2025 01:50:24 +0000 (09:50 +0800)]
CUDA: add conv_2d_dw (llama/14265)
* CUDA: add conv_2d_dw
* better naming
* simplify using template
* Review: fix operation ordering in ggml-cuda, use __forceinline__, use more const
Diego Devesa [Thu, 19 Jun 2025 19:24:14 +0000 (12:24 -0700)]
ggml-cpu : remove unnecessary arm feature detection (llama/14281)
Support for Arm runtime feature detection has now been added to GGML_CPU_ALL_VARIANTS. This removes the old and not very functional code.
fanyang [Thu, 19 Jun 2025 12:49:48 +0000 (20:49 +0800)]
build : suppress gcc15 compile warnings (llama/14261)
* Change _contains_any() substrs to std::string_view and fix the find comparison logic.
Anton Mitkov [Thu, 19 Jun 2025 10:40:21 +0000 (11:40 +0100)]
sycl: Cleanup codepaths in Get Rows in sycl backend (llama/14215)
Addresses unused reorder path
Aaron Teo [Thu, 19 Jun 2025 09:48:54 +0000 (17:48 +0800)]
llamafile : support s390x SIMD instruction set (llama/14273)
0cc4m [Thu, 19 Jun 2025 07:15:42 +0000 (09:15 +0200)]
Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (llama/14249)
Georgi Gerganov [Thu, 19 Jun 2025 05:05:21 +0000 (08:05 +0300)]
metal : add mean kernel (llama/14267)
* metal : add mean kernel
ggml-ci
* cont : dedup implementation
ggml-ci
Aaron Teo [Wed, 18 Jun 2025 17:10:08 +0000 (01:10 +0800)]
ggml-cpu: reduce asm calls for hsum (llama/14037)
Signed-off-by: Aaron Teo <redacted>
Aaron Teo [Wed, 18 Jun 2025 17:06:49 +0000 (01:06 +0800)]
ggml-cpu: fix uncaught underscore terminators (llama/14023)
Signed-off-by: Aaron Teo <redacted>
Charles Xu [Wed, 18 Jun 2025 11:40:07 +0000 (13:40 +0200)]
ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (llama/14258)
Acly [Wed, 18 Jun 2025 11:34:50 +0000 (13:34 +0200)]
Add `ggml_roll` (#1274)
* ggml : add ggml_roll
* use set/get_op_params & std::min
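Reference semantics of the new op along one dimension (a sketch with a hypothetical helper name): elements shift circularly by `shift`, wrapping at the edges.
```c
static void roll_ref(const float * src, float * dst, int n, int shift) {
    shift = ((shift % n) + n) % n;       // normalize into [0, n)
    for (int i = 0; i < n; i++) {
        dst[(i + shift) % n] = src[i];
    }
}
```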
Georgi Gerganov [Wed, 18 Jun 2025 09:41:12 +0000 (12:41 +0300)]
sync : whisper.cpp
Georgi Gerganov [Wed, 18 Jun 2025 07:00:11 +0000 (10:00 +0300)]
sync : llama.cpp
ggml-ci
bandoti [Tue, 17 Jun 2025 20:33:25 +0000 (17:33 -0300)]
cmake: remove shader-gen step-targets from ggml-vulkan (llama/14226)
* Remove step-targets from vulkan-shaders-gen
* Unset DESTDIR when building vulkan-shaders-gen
xctan [Tue, 17 Jun 2025 09:58:32 +0000 (17:58 +0800)]
ggml-cpu : remove the weak alias trick (llama/14221)
R0CKSTAR [Tue, 17 Jun 2025 09:48:08 +0000 (17:48 +0800)]
musa: fix build warning (unused variable) (llama/14231)
Signed-off-by: Xiaodong Ye <redacted>
Diego Devesa [Mon, 16 Jun 2025 15:11:43 +0000 (08:11 -0700)]
llama : add thread safety test (llama/14035)
* llama : add thread safety test
* llamafile : remove global state
* llama : better LLAMA_SPLIT_MODE_NONE logic
when main_gpu < 0 GPU devices are not used
---------
Co-authored-by: Georgi Gerganov <redacted>
bandoti [Mon, 16 Jun 2025 13:32:13 +0000 (10:32 -0300)]
cmake: clean up external project logic for vulkan-shaders-gen (llama/14179)
* Remove install step for vulkan-shaders-gen
* Add install step to normalize msvc with make
* Regenerate modified shaders at build-time
uvos [Mon, 16 Jun 2025 11:47:38 +0000 (13:47 +0200)]
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (llama/14202)
Charles Xu [Mon, 16 Jun 2025 09:47:57 +0000 (11:47 +0200)]
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (llama/14206)
Jeff Bolz [Mon, 16 Jun 2025 06:21:08 +0000 (00:21 -0600)]
vulkan: mutex around vkQueueSubmit (llama/14127)
This fixes the remaining crash in test-thread-safety on my system.
xctan [Mon, 16 Jun 2025 05:54:15 +0000 (13:54 +0800)]
ggml-cpu : rework weak alias on apple targets (llama/14146)
* ggml-cpu : rework weak alias on apple targets
* fix powerpc detection
* fix ppc detection
* fix powerpc detection on darwin
uvos [Sun, 15 Jun 2025 15:30:13 +0000 (17:30 +0200)]
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (llama/14196)
uvos [Sun, 15 Jun 2025 13:45:27 +0000 (15:45 +0200)]
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (llama/14183)
Anton Mitkov [Fri, 13 Jun 2025 07:51:39 +0000 (08:51 +0100)]
sycl: Adding additional cpy dbg print output (llama/14034)
Ewan Crawford [Fri, 13 Jun 2025 07:45:37 +0000 (08:45 +0100)]
SYCL: Bump oneMath commit (llama/14152)
Update oneMath commit to merged PR https://github.com/uxlfoundation/oneMath/pull/669
which adds SYCL-Graph support for recording CUDA BLAS commands.
With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.
```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2
UR CUDA ERROR:
Value: 700
Name: CUDA_ERROR_ILLEGAL_ADDRESS
Description: an illegal memory access was encountered
Function: operator()
Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154
Native API failed. Native API returns:
2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```
Anton Mitkov [Thu, 12 Jun 2025 13:15:11 +0000 (14:15 +0100)]
sycl: Remove not needed copy f16->f32 for dnnl mul mat (llama/14125)
Georgi Gerganov [Thu, 12 Jun 2025 07:14:24 +0000 (10:14 +0300)]
cmake : handle whitespaces in path during metal build (llama/14126)
* cmake : handle whitespaces in path during metal build
ggml-ci
* cont : proper fix
ggml-ci
---------
Co-authored-by: Daniel Bevenius <redacted>
Christian Kastner [Wed, 11 Jun 2025 19:07:44 +0000 (19:07 +0000)]
Implement GGML_CPU_ALL_VARIANTS for ARM (llama/14080)
* ggml-cpu: Factor out feature detection build from x86
* ggml-cpu: Add ARM feature detection and scoring
This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on GGML_USE_<FEAT> which need to be set
in cmake, instead of GGML_<FEAT> that users would set for x86.
This is because on ARM, users specify features with GGML_CPU_ARM_ARCH,
rather than with individual flags.
* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM
Like x86, however to pass around arch flags within cmake, we use
GGML_INTERNAL_<FEAT> as we don't have GGML_<FEAT>.
Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.
* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now
The other platforms will need their own specific variants.
This also fixes the bug that the variant-building branch was always
being executed as the else-branch of GGML_NATIVE=OFF. The branch is
moved to an elseif-branch, which restores the previous behavior.
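The detection-and-scoring idea above in miniature (names and bitmask encoding hypothetical): each prebuilt variant records the features it requires, and at load time the most specialized variant that the running CPU fully supports wins.
```c
// Returns -1 if the CPU lacks a required feature; otherwise a score that
// grows with the number of features the variant exploits.
static int variant_score(unsigned cpu_features, unsigned needed_features) {
    if ((cpu_features & needed_features) != needed_features) {
        return -1;
    }
    int score = 0;
    for (unsigned f = needed_features; f != 0; f >>= 1) {
        score += (int)(f & 1);
    }
    return score;
}
```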
Jeff Bolz [Wed, 11 Jun 2025 14:48:52 +0000 (09:48 -0500)]
vulkan: Better thread-safety for command pools/buffers (llama/14116)
This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
Jeff Bolz [Wed, 11 Jun 2025 05:19:25 +0000 (00:19 -0500)]
vulkan: Track descriptor pools/sets per-context (llama/14109)
Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single vector
of pools and vector of sets, and a single counter to track requests and a single
counter to track use.
lhez [Tue, 10 Jun 2025 23:55:58 +0000 (16:55 -0700)]
opencl: add `mul_mv_id_q4_0_f32_8x_flat` (llama/14003)
0cc4m [Tue, 10 Jun 2025 12:01:33 +0000 (14:01 +0200)]
Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (llama/14099)
Isaac McFadyen [Tue, 10 Jun 2025 06:41:01 +0000 (02:41 -0400)]
rpc : nicer error messages for RPC server crash (llama/14076)
Daniel Bevenius [Fri, 13 Jun 2025 13:06:42 +0000 (15:06 +0200)]
ggml : disable warnings for tests when using MSVC (#1273)
* ggml : disable warnings for tests when using MSVC
This commit disables warnings for tests on windows when using MSVC.
The motivation for this is that this brings the build output more
inline with what Linux/MacOS systems produce.
There is still one warning generated for the tests which is:
```console
Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line warning D9025: overriding '/DNDEBUG' with '/UNDEBUG' [C:\ggml\build\tests\test-arange.vcxproj]
test-arange.cpp
test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```
* ggml : fix typo in tests disable list
Daniel Bevenius [Fri, 13 Jun 2025 07:05:44 +0000 (09:05 +0200)]
ggml : remove unused ggml_context_container (#1272)
This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc ggml_contexts on the heap (whisper/2525)").
The motivation for this change is to improve code clarity/readability.
Daniel Bevenius [Thu, 12 Jun 2025 13:57:58 +0000 (15:57 +0200)]
scripts : remove common.{cpp,h} from whisper sync scripts (#1271)
This commit removes the common.{cpp,h} files from the whisper sync
scripts.
Refs: https://github.com/ggml-org/whisper.cpp/pull/3244#issuecomment-2966630744
Daniel Bevenius [Thu, 12 Jun 2025 10:27:09 +0000 (12:27 +0200)]
examples : include examples in msvc disable warn (#1270)
This commit adds the examples in the "list" of targets to ignore MSVC
warnings.
The motivation for this is that currently the examples generate a number
of warnings that are ignore/disabled for the core ggml project. This
makes for a cleaner output when building.
Daniel Bevenius [Wed, 11 Jun 2025 09:15:14 +0000 (11:15 +0200)]
mnist : use CMake to build mnist wasm example (#1269)
This commit updates the mnist examples to use CMake for building the
WebAssembly (WASM) version of the MNIST example instead of the current
emcc command.
The motivation for this change is that using CMake should make the example
easier to maintain: when changes occur in ggml, they should not cause this
example to break. The current emcc command is outdated, and it was not
clear how to update it, which is why this change was made.
Resolves: https://github.com/ggml-org/ggml/issues/1264
Georgi Gerganov [Tue, 10 Jun 2025 14:27:50 +0000 (17:27 +0300)]
sync : whisper.cpp
ggml-ci
Georgi Gerganov [Tue, 10 Jun 2025 08:34:10 +0000 (11:34 +0300)]
ggml : fix weak alias win32 (whisper/0)
ggml-ci
Georgi Gerganov [Tue, 10 Jun 2025 08:04:47 +0000 (11:04 +0300)]
files : remove old sources (part 2) (#1267)
ggml-ci
Georgi Gerganov [Tue, 10 Jun 2025 07:57:17 +0000 (10:57 +0300)]
files : remove old sources (#1266)
ggml-ci
Georgi Gerganov [Tue, 10 Jun 2025 06:22:40 +0000 (09:22 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Mon, 9 Jun 2025 20:05:02 +0000 (23:05 +0300)]
metal : use less stack memory in FA kernel (llama/14088)
* metal : use less stack memory in FA kernel
ggml-ci
* cont : fix BF16 variant
xctan [Mon, 9 Jun 2025 14:47:13 +0000 (22:47 +0800)]
ggml-cpu : split arch-specific implementations (llama/13892)
* move ggml-cpu-aarch64 to repack
* split quantize_row_q8_0/1
* split helper functions
* split ggml_vec_dot_q4_0_q8_0
* split ggml_vec_dot_q4_1_q8_1
* split ggml_vec_dot_q5_0_q8_0
* split ggml_vec_dot_q5_1_q8_1
* split ggml_vec_dot_q8_0_q8_0
* split ggml_vec_dot_tq1_0_q8_K
* split ggml_vec_dot_tq2_0_q8_K
* split ggml_vec_dot_q2_K_q8_K
* split ggml_vec_dot_q3_K_q8_K
* split ggml_vec_dot_q4_K_q8_K
* split ggml_vec_dot_q5_K_q8_K
* split ggml_vec_dot_q6_K_q8_K
* split ggml_vec_dot_iq2_xxs_q8_K
* split ggml_vec_dot_iq2_xs_q8_K
* split ggml_vec_dot_iq2_s_q8_K
* split ggml_vec_dot_iq3_xxs_q8_K
* split ggml_vec_dot_iq3_s_q8_K
* split ggml_vec_dot_iq1_s_q8_K
* split ggml_vec_dot_iq1_m_q8_K
* split ggml_vec_dot_iq4_nl_q8_0
* split ggml_vec_dot_iq4_xs_q8_K
* fix typos
* fix missing prototypes
* rename ggml-cpu-quants.c
* rename ggml-cpu-traits
* rename arm folder
* move cpu-feats-x86.cpp
* rename ggml-cpu-hbm
* update arm detection macro in quants.c
* move iq quant tables
* split ggml_quantize_mat_q8_0/K
* split ggml_gemv_*
* split ggml_gemm_*
* rename namespace aarch64 to repack
* use weak aliases to replace test macros
* rename GGML_CPU_AARCH64 to GGML_CPU_REPACK
* rename more aarch64 to repack
* clean up rebase leftover
* fix compilation errors
* remove trailing spaces
* try to fix clang compilation errors
* try to fix clang compilation errors again
* try to fix clang compilation errors, 3rd attempt
* try to fix clang compilation errors, 4th attempt
* try to fix clang compilation errors, 5th attempt
* try to fix clang compilation errors, 6th attempt
* try to fix clang compilation errors, 7th attempt
* try to fix clang compilation errors, 8th attempt
* try to fix clang compilation errors, 9th attempt
* more cleanup
* fix compilation errors
* fix apple targets
* fix a typo in arm version of ggml_vec_dot_q4_K_q8_K
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Diego Devesa [Mon, 9 Jun 2025 14:36:26 +0000 (07:36 -0700)]
cuda : fix device sync on buffer clear (llama/14033)
Xinpeng Dou [Mon, 9 Jun 2025 11:47:39 +0000 (19:47 +0800)]
CANN: Simplify the environment variable setting (#13104)
* Simplify the environment variable setting to specify the memory pool type.
* Adjust the GGML_CANN_ASYNC_MODE setting to accept yes, enable, 1, or on (case-insensitive) as valid options.
* update
* fix CI
* update
* delete whitespace
* fix according to review
* update CANN.md
* update CANN.md
Nicolò Scipione [Mon, 9 Jun 2025 09:47:07 +0000 (11:47 +0200)]
sycl: Add reorder to Q6_K mmvq implementation (llama/13885)
* Add Reorder to Q6_K mmvq implementation
* Address PR comments: clean up comments
* Remove unused parameter after refactoring q4_k
* Adding inline to function and removing unnecessary reference to int
---------
Signed-off-by: nscipione <redacted>
Diego Devesa [Sun, 8 Jun 2025 18:39:56 +0000 (11:39 -0700)]
cuda : fix buffer type check with integrated GPUs (llama/14069)
Akarshan Biswas [Sat, 7 Jun 2025 13:28:20 +0000 (18:58 +0530)]
SYCL: Implement few same quantized type copy kernels (llama/13739)
* SYCL: Implement few same quantized type copy kernels
* Use memcpy for copying contiguous tensors
ggml-ci
* feat(sycl): add contiguous tensor copy support and device checks
Adds a memcpy path for contiguous tensors of the same type to optimize data transfer. Updates device support checks to recognize contiguous tensor operations, improving compatibility and performance.
* refactor: replace specific block copy functions with template
The changes replace multiple redundant block copy functions (e.g., cpy_block_q8_0_q8_0, cpy_block_q5_0_q5_0) with a single templated function cpy_blck_q_q. This reduces code duplication by using a generic template that works for any block type, improving maintainability while preserving the same functionality. The template is instantiated with specific block types (e.g., block_q8_0) where needed.
* Exclude BF16 support for COPY tensors for now
ggml-ci
* perf: adjust SYCL copy kernel block sizes for efficiency
Use ceil_div to ensure full element coverage and update nd_range parameters to better align with SYCL block sizes, improving parallelism and device utilization in copy operations.
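The `ceil_div` mentioned above is the usual round-up division for sizing kernel launches, so the final partially-filled block is not dropped:
```c
#include <stddef.h>

static inline size_t ceil_div(size_t n, size_t b) {
    return (n + b - 1) / b;   // e.g. ceil_div(10, 4) == 3
}
```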
Masato Nakasaka [Thu, 5 Jun 2025 14:00:29 +0000 (23:00 +0900)]
vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (llama/14001)
* allowing B580 and U9-288V
* experimenting code to detect Xe2
* allowing coopmat only for Xe2 GPUs
* fixed comment wording
* fixed comment wording
* removed unnecessary driver check
Diego Devesa [Thu, 5 Jun 2025 09:57:42 +0000 (02:57 -0700)]
llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (llama/14013)
Jeff Bolz [Thu, 5 Jun 2025 05:17:58 +0000 (00:17 -0500)]
vulkan: automatically deduce size of push constants (llama/13936)
Ervin Áron Tasnádi [Wed, 4 Jun 2025 20:02:00 +0000 (22:02 +0200)]
ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (llama/13813)
* ggml-vulkan: adds op CONV_TRANSPOSE_1D
* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D
* Missing barrier added to shader.
Number of additional tests reduced to 108.
* Fixes typo in variable name.
* Removes extra whitespaces.
* Adds int64->int32 casts to prevent possible warnings.
* Problem size reduced in tests so they pass with llvmpipe.
* supports_op condition moved from unintended position
Diego Devesa [Wed, 4 Jun 2025 11:15:54 +0000 (04:15 -0700)]
releases : use dl backend for linux release, remove arm64 linux release (llama/13996)
Johannes Gäßler [Wed, 4 Jun 2025 06:57:05 +0000 (08:57 +0200)]
CUDA: fix FTZ in FA for Gemma 3 (llama/13991)