git.djapps.eu Git - pkg/ggml/sources/ggml/log
Georgi Gerganov [Sat, 2 Aug 2025 14:17:08 +0000 (17:17 +0300)]
sync : llama.cpp
ggml-ci
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (llama/15025)
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1
ggml-ci
* cont : fix cont types
ggml-ci
* cont : adopt variable names and comment from the other branch
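For context, a hedged sketch of the shape case this fix targets; the broadcast-ratio convention is ggml's, but the helper below is illustrative, not the actual CUDA/SYCL kernel code:
```cpp
#include <cstdint>

// In ggml, ne[2] and ne[3] are batch dimensions, and src0 batches may be
// broadcast into src1 batches. The failing case here is ne02 == 1 with
// ne03 > 1: the inner batch dim broadcasts while the outer one is real.
static void batched_gemm_shape_case(int64_t ne02, int64_t ne03,
                                    int64_t ne12, int64_t ne13) {
    const int64_t r2 = ne12 / ne02; // broadcast ratio in dim 2
    const int64_t r3 = ne13 / ne03; // broadcast ratio in dim 3
    // every one of the ne12*ne13 src1 batches must map back to the right
    // src0 slice, even when r2 > 1 and r3 == 1 at the same time
    (void) r2; (void) r3;
}
```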
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (llama/14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
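A hedged sketch of the split_k rule in the last bullet; the thresholds come from the commit message, while the function and names are illustrative rather than the actual selection code:
```cpp
// choose split_k == 3 when a plain launch would occupy more than 1/2 but
// at most 2/3 of the SMs, so the 3-way split can fill the device
static unsigned choose_split_k(unsigned workgroups, unsigned sm_count) {
    if (2 * workgroups > sm_count && 3 * workgroups <= 2 * sm_count) {
        return 3;
    }
    return 1; // otherwise defer to the existing large/medium tile heuristics
}
```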
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (llama/14933)
* vulkan: optimizations for direct convolution
- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply the fastdiv opt (sketched after this entry).
- Disable shuffles for NV.
* Three tiles sizes for CONV_2D, and a heuristic to choose
* reallow collectives for pre-Turing
* make SHMEM_PAD a spec constant
* fixes for intel perf - no shmem padding, placeholder shader core count
* shader variants with/without unrolling
* 0cc4m's fixes for AMD perf
Co-authored-by: 0cc4m <redacted>
---------
Co-authored-by: 0cc4m <redacted>
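The "fastdiv opt" bullet in the entry above refers to replacing integer division by a divisor known at setup time with a multiply and shift. A minimal sketch of that trick, in the spirit of ggml's fastdiv helpers rather than the actual shader code:
```cpp
#include <cassert>
#include <cstdint>

struct FastDiv {
    uint32_t mp; // magic multiplier
    uint32_t L;  // ceil(log2(d))
};

static FastDiv fastdiv_init(uint32_t d) {
    assert(d != 0);
    uint32_t L = 0;
    while (L < 32 && (uint32_t(1) << L) < d) {
        ++L;
    }
    const uint32_t mp = uint32_t(((uint64_t(1) << 32) * ((uint64_t(1) << L) - d)) / d + 1);
    return FastDiv{mp, L};
}

// n / d computed as a high-32 multiply, an add, and a shift
static uint32_t fastdiv(uint32_t n, FastDiv fd) {
    const uint32_t hi = uint32_t((uint64_t(n) * fd.mp) >> 32);
    return uint32_t((uint64_t(hi) + n) >> fd.L);
}
```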
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014)
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (llama/14984)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)
* Initial Q2_K Block Interleaving Implementation
* Addressed review comments and clean up of the code
* Post rebase fixes
* Initial CI/CD fixes
* Update declarations in arch-fallback.h
* Changes for GEMV Q2_K in arch-fallback.h
* Enable repacking only on AVX-512 machines
* Update comments in repack.cpp
* Address q2k comments
---------
Co-authored-by: Manogna-Sree <redacted>
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (llama/14591)
* docker: add cann build pipeline
* docker: add cann build pipeline
* docker: fix cann devops
* cann : fix multi card hccl
* Update src/ggml-cann/ggml-cann.cpp
Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (llama/14899)
* vulkan: fix debug mode issues
* vulkan: remove broken check_results GGML_OP_SET_ROWS support
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)
* CANN: Improve loading efficiency after converting weights to NZ format.
* CANN: fix typo
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (llama/14809)
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama/14949)
Johannes Gäßler [Wed, 30 Jul 2025 13:46:13 +0000 (15:46 +0200)]
CUDA: skip masked KV slices for all FA kernels (llama/14924)
uvos [Tue, 29 Jul 2025 18:23:04 +0000 (20:23 +0200)]
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (llama/14945)
uvos [Tue, 29 Jul 2025 15:44:30 +0000 (17:44 +0200)]
HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path (llama/14930)
This is useful for testing the GCN code path for regressions on CDNA hardware.
With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
uvos [Tue, 29 Jul 2025 15:43:43 +0000 (17:43 +0200)]
HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)
LLVM with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, we simply ignore this warning.
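A minimal sketch of how this kind of warning is typically silenced; the exact pragmas in the fix may differ:
```cpp
// clang reports failed loop transformations (such as an unroll that cannot
// be applied) via -Wpass-failed, which can be suppressed locally
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wpass-failed"
#endif
// ... #pragma unroll loop containing a data-dependent break ...
#ifdef __clang__
#pragma clang diagnostic pop
#endif
```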
hipudding [Tue, 29 Jul 2025 14:36:43 +0000 (22:36 +0800)]
CANN: Add ggml_set_rows (llama/14943)
Sigbjørn Skjæret [Tue, 29 Jul 2025 12:22:03 +0000 (14:22 +0200)]
cuda : add softcap fusion (llama/14907)
Aman Gupta [Tue, 29 Jul 2025 06:45:18 +0000 (14:45 +0800)]
CUDA: add roll (llama/14919)
* CUDA: add roll
* Make everything const, use __restrict__
Leonard Mosescu [Mon, 28 Jul 2025 16:04:27 +0000 (09:04 -0700)]
test-backend-ops : extend test case filtering (llama/14865)
* Extend test case filtering
1. Allow passing multiple comma-separated ops to test-backend-ops. This can be convenient when working on a set of ops that you want to test together (without having to run every single op). For example:
`test-backend-ops.exe test -o "ADD,RMS_NORM,ROPE,SILU,SOFT_MAX"`
2. Support full test-case variation string in addition to basic op names. This would make it easy to select a single variation, either for testing or for benchmarking. It can be particularly useful for profiling a particular variation (ex. a CUDA kernel), for example:
`test-backend-ops.exe perf -b CUDA0 -o "MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3],v=2)"`
These two can be combined. As with the current `-o`, this change doesn't try to detect or report an error if a filter doesn't name an existing op (e.g. misspelled).
* Updating the usage help text
* Update tests/test-backend-ops.cpp
xctan [Mon, 28 Jul 2025 15:40:24 +0000 (23:40 +0800)]
ggml-cpu : deduplicate scalar implementations (llama/14897)
* remove redundant code in riscv
* remove redundant code in arm
* remove redundant code in loongarch
* remove redundant code in ppc
* remove redundant code in s390
* remove redundant code in wasm
* remove redundant code in x86
* remove fallback headers
* fix x86 ggml_vec_dot_q8_0_q8_0
Akarshan Biswas [Mon, 28 Jul 2025 15:02:15 +0000 (20:32 +0530)]
SYCL: Add set_rows support for quantized types (llama/14883)
* SYCL: Add set_rows support for quantized types
This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.
The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.
This addresses part of the TODOs mentioned in the code.
* Use get_global_linear_id() instead
ggml-ci
* Fix formatting
ggml-ci
* Use const for ne11 and size_t variables in set_rows_sycl_q
ggml-ci
* Increase block size for q kernel to 256
ggml-ci
* Cleanup imports
* Add float.h to cpy.hpp
Johannes Gäßler [Mon, 28 Jul 2025 12:30:22 +0000 (14:30 +0200)]
CUDA: fix pointer incrementation in FA (llama/14916)
Alberto Cabrera Pérez [Mon, 28 Jul 2025 10:05:53 +0000 (11:05 +0100)]
sycl: refactor quantization to q8_1 (llama/14815)
* sycl: quantization to q8_1 refactor
* Refactored src1 copy logic in op_mul_mat
Kai Pastor [Sat, 2 Aug 2025 14:29:48 +0000 (16:29 +0200)]
ci : Move msvc to matrix (#1318)
Enable static builds and testing
AN Long [Sat, 2 Aug 2025 14:28:28 +0000 (23:28 +0900)]
simple : fix typo (#1319)
Georgi Gerganov [Wed, 30 Jul 2025 12:56:40 +0000 (15:56 +0300)]
sync : whisper.cpp
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (#1316)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (#1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64-bit platforms but a uint64_t on 32-bit platforms.
Cf. the VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
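For reference, a paraphrase of the handle definition in the Vulkan headers (vulkan_core.h); the platform condition is simplified here:
```cpp
#include <cstdint>

// On 64-bit platforms non-dispatchable handles are opaque pointer types;
// on 32-bit platforms they are plain 64-bit integers, so pointer-style
// casts of a VkPipeline are invalid there.
#if defined(__LP64__) || defined(_WIN64)
    #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T * object;
#else
    #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
#endif
```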
Georgi Gerganov [Mon, 28 Jul 2025 05:15:36 +0000 (08:15 +0300)]
sync : llama.cpp
ggml-ci
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (llama/14872)
* add f16 to conv_2d testing
* weaken conv2d test error threshold
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
Aman Gupta [Sun, 27 Jul 2025 01:36:43 +0000 (09:36 +0800)]
Docs: add instructions for adding backends (llama/14889)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as they fail to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3, due to issues in the AMD-provided BLAS libraries.
This commit also makes MMQ more aware of different warp sizes and, as a side effect, improves the performance of all quant formats on GCN GPUs except q4_0 and q4_1, which regress slightly.
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (llama/14884)
Implement REGLU, GEGLU, SWIGLU ops according to #14158
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (llama/14869)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Fri, 25 Jul 2025 17:09:03 +0000 (01:09 +0800)]
ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
* docs: update s390x document for sentencepiece
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)
* docs: update huggingface links + reword
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)
* ggml-cpu: disable ggml-nnpa compile flag by default
fixes #14877
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)
* docs: update s390x build docs to reflect nnpa disable
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)
---------
Signed-off-by: Aaron Teo <redacted>
Gabe Goodhart [Fri, 25 Jul 2025 16:47:39 +0000 (10:47 -0600)]
metal: SSM_SCAN performance (llama/14743)
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y computation (see the sketch after this entry)
When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.
Signed-off-by: Gabe Goodhart <redacted>
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <redacted>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallelize over d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the size of the parallelism here is
not worth the cost in complexity or overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: Georgi Gerganov <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
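A plain-C++ sketch of the two-stage summation pattern used above (simd_sum within each SIMD group, then a secondary simd_sum across group partials staged in shared memory); it mirrors the Metal kernel in structure only:
```cpp
#include <algorithm>
#include <numeric>
#include <vector>

static float reduce_two_stage(const std::vector<float> & lane_vals, size_t simd_width) {
    std::vector<float> partials; // one partial per SIMD group ("shared memory")
    for (size_t i = 0; i < lane_vals.size(); i += simd_width) {
        const size_t end = std::min(lane_vals.size(), i + simd_width);
        // first stage: the sum within one SIMD group (simd_sum)
        partials.push_back(std::accumulate(lane_vals.begin() + i, lane_vals.begin() + end, 0.0f));
    }
    // second stage: the "secondary simd_sum" across the group partials
    return std::accumulate(partials.begin(), partials.end(), 0.0f);
}
```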
lhez [Fri, 25 Jul 2025 15:12:13 +0000 (08:12 -0700)]
opencl: add fused `rms_norm_mul` (llama/14841)
* opencl: add fused `rms_norm` + `mul`
* opencl: improve workgroup size for `rms_norm_mul`
Oliver Simons [Fri, 25 Jul 2025 11:29:57 +0000 (13:29 +0200)]
ggml : remove invalid portPos specifiers from dot files (llama/14838)
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally for it to fall back to default portPos specifier if an
invalid portPos is specified. As a consequence, we can remove associated
code.
Chris Rohlf [Fri, 25 Jul 2025 10:17:02 +0000 (06:17 -0400)]
rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
Diego Devesa [Fri, 25 Jul 2025 08:07:26 +0000 (01:07 -0700)]
sched : fix multiple evaluations of the same graph with pipeline parallelism (llama/14855)
ggml-ci
R0CKSTAR [Thu, 24 Jul 2025 19:05:37 +0000 (03:05 +0800)]
musa: upgrade musa sdk to rc4.2.0 (llama/14498)
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <redacted>
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <redacted>
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <redacted>
* minor changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Georgi Gerganov [Fri, 25 Jul 2025 04:05:38 +0000 (07:05 +0300)]
contrib : recommend PRs to llama.cpp (#1312)
* contrib : recommend PRs to llama.cpp
* cont : wording
Kai Pastor [Thu, 24 Jul 2025 17:58:02 +0000 (19:58 +0200)]
cmake : Indent ggml-config.cmake (#1310)
Georgi Gerganov [Thu, 24 Jul 2025 17:28:43 +0000 (20:28 +0300)]
sync : llama.cpp
ggml-ci
Alberto Cabrera Pérez [Thu, 24 Jul 2025 10:09:57 +0000 (11:09 +0100)]
sycl: fixed semantics of block offset calculation (llama/14814)
Georgi Gerganov [Thu, 24 Jul 2025 07:24:05 +0000 (10:24 +0300)]
metal : fix fusion across different encoders (llama/14849)
* metal : fix fusion across different encoders
ggml-ci
* cont : add assertion
ggml-ci
Donghyeon Jeong [Thu, 24 Jul 2025 04:50:41 +0000 (13:50 +0900)]
sycl: fix undefined variable in work group size check (llama/14843)
Johannes Gäßler [Wed, 23 Jul 2025 19:43:25 +0000 (21:43 +0200)]
CUDA: fix overflow in FA, tune performance (llama/14840)
Johannes Gäßler [Wed, 23 Jul 2025 16:22:30 +0000 (18:22 +0200)]
CUDA: fix compilation with GGML_CUDA_F16 (llama/14837)
Johannes Gäßler [Wed, 23 Jul 2025 10:35:53 +0000 (12:35 +0200)]
CUDA: fix quantized KV cache + multiple sequences (llama/14822)
* CUDA: fix quantized KV cache + multiple sequences
* Update src/ggml-cuda/fattn-common.cuh
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 18 Jul 2025 10:36:27 +0000 (13:36 +0300)]
tests : add non-cont K,V FA tests
ggml-ci
lixing-star [Wed, 23 Jul 2025 06:39:51 +0000 (14:39 +0800)]
ggml: fix loongarch quantize_row_q8_1 error (llama/14827)
chen fan [Wed, 23 Jul 2025 03:58:00 +0000 (11:58 +0800)]
CANN: weight format to NZ for Ascend310P3 (llama/14407)
* weight format to nz for 310p
* remove quant weight format to nz
* clean code
* fix
* make the conditions for converting weights to NZ format consistent
* clean code
Aman Gupta [Wed, 23 Jul 2025 01:25:42 +0000 (09:25 +0800)]
CUDA: add fused rms norm (llama/14800)
Jeff Bolz [Tue, 22 Jul 2025 15:35:21 +0000 (10:35 -0500)]
vulkan: fix rms_norm_mul to handle broadcasting dim0 (llama/14817)
Sigbjørn Skjæret [Tue, 22 Jul 2025 10:33:10 +0000 (12:33 +0200)]
cuda : implement bf16 cpy ops and enable bf16 cont (llama/14763)
* implement bf16 cpy ops and enable bf16 cont
* deduplicate copy functions
* deduplicate checks
lhez [Tue, 22 Jul 2025 06:53:30 +0000 (23:53 -0700)]
opencl: remove unreachable `return` (llama/14806)
R0CKSTAR [Mon, 21 Jul 2025 23:45:26 +0000 (07:45 +0800)]
cuda: remove linking to cublasLt (llama/14790)
Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Mon, 21 Jul 2025 20:55:10 +0000 (22:55 +0200)]
opencl: fix `im2col` when `KW!=KH` (llama/14803)
rmatif [Mon, 21 Jul 2025 17:03:19 +0000 (19:03 +0200)]
opencl: add conv2d kernel (llama/14403)
* add conv2d kernel
* fix trailing whitespace
* whitespace fix
* handle f16 input and f16 kernel, more opt
* resolve conflicts
* use enqueue_ndrange_kernel
Romain Biessy [Mon, 21 Jul 2025 16:39:29 +0000 (18:39 +0200)]
sycl: Fix im2col (llama/14797)
Charles Xu [Mon, 21 Jul 2025 13:49:52 +0000 (15:49 +0200)]
kleidiai: add support for get_rows (llama/14676)
* kleidiai: add support for get_rows
* apply fixes based on code review
* apply more fixes based on code review
Jeff Bolz [Mon, 21 Jul 2025 11:35:40 +0000 (06:35 -0500)]
vulkan/cuda: Fix im2col when KW!=KH (llama/14789)
The tid is decomposed into "ow + ky*OW + kx*OW*KH". Change "ksize" to match.
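A worked example of that decomposition (illustrative helper, not the kernel itself): inverting tid = ow + ky*OW + kx*OW*KH gives
```cpp
#include <cstdint>

static void decompose_tid(int64_t tid, int64_t OW, int64_t KH,
                          int64_t & ow, int64_t & ky, int64_t & kx) {
    ow = tid % OW;
    ky = (tid / OW) % KH;
    kx = tid / (OW * KH); // the per-kx stride is OW*KH, which "ksize" must match
}
```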
Ervin Áron Tasnádi [Sat, 19 Jul 2025 19:59:08 +0000 (21:59 +0200)]
ggml: adds CONV_2D op and direct GEMM Vulkan implementation (llama/14316)
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan
* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with gemm (no need for im2col)
* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests
* Performance fixes: minimized branch divergence, uses collectives to
eliminate redundant calculation, macros removed.
* Kernel shared memory size check
* Updates test-backend-ops to support graphs for performance
measurement.
* Apple/Win32 compile errors fixed
* Subgroup size used to determine tile size -> fixes llvmpipe errors.
* Collectives disabled by default.
* Intel support is disabled as the performance is poor.
* Conv2d enabled for Intel with disabled collectives, disabled for Apple
* test-backend-ops modifications are reverted
* Trailing spaces and missing override fixed.
* Triggering pipeline relaunch.
* Code formatted with .clang-format.
Peter0x44 [Sat, 19 Jul 2025 15:58:03 +0000 (16:58 +0100)]
vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (llama/14707)
0cc4m [Sat, 19 Jul 2025 15:47:53 +0000 (17:47 +0200)]
Vulkan: Fix fprintf format-security warning (llama/14770)
Kai Pastor [Wed, 23 Jul 2025 12:52:29 +0000 (14:52 +0200)]
CI: Test static build (#1307)
Kai Pastor [Tue, 22 Jul 2025 18:13:21 +0000 (20:13 +0200)]
cmake : fix usage issues (#1257)
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.
* CMake config: Wire OpenMP dependency
Daniel Bevenius [Mon, 21 Jul 2025 13:53:12 +0000 (15:53 +0200)]
ggml-cpu : remove stdlib include from repack.cpp (#1276)
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
Georgi Gerganov [Sat, 19 Jul 2025 08:47:23 +0000 (11:47 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Fri, 18 Jul 2025 17:37:26 +0000 (20:37 +0300)]
metal : fuse add, mul + add tests (llama/14596)
ggml-ci
Oliver Simons [Fri, 18 Jul 2025 11:35:32 +0000 (13:35 +0200)]
cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (llama/14741)
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses matrix-matrix addition as part of its input processing,
wrongly triggering CUDA_GRAPH disablement on NVGPUs even when a batch size
of 1 is used.
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.
* Revert unnecessary formatting changes
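A hedged sketch of the name-based exclusion described above; the matched string comes from the commit message, while the surrounding check is paraphrased and not the actual ggml-cuda code:
```cpp
#include <cstring>

// batched matrix-matrix additions normally disable CUDA graph capture;
// Gemma3n's per-layer input projection is exempted by node name so all
// other graphs keep their existing behavior
static bool batched_add_disables_cuda_graph(const char * node_name) {
    return strstr(node_name, "project_per_layer_input") == nullptr;
}
```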
Aman Gupta [Fri, 18 Jul 2025 06:54:18 +0000 (14:54 +0800)]
CUDA: set_rows + cpy.cu refactor (llama/14712)
Neo Zhang Jianyu [Fri, 18 Jul 2025 02:23:14 +0000 (10:23 +0800)]
use the device's max work group size to replace the magic number (llama/14732)
Reese Levine [Wed, 16 Jul 2025 15:18:51 +0000 (08:18 -0700)]
ggml: Add initial WebGPU backend (llama/14521)
* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults
* Initialize webgpu device
* Making progress on setting up the backend
* Finish more boilerplate/utility functions
* Organize file and work on alloc buffer
* Add webgpu_context to prepare for actually running some shaders
* Work on memset and add shader loading
* Work on memset polyfill
* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it
* Implement get_tensor and buffer_clear
* Finish rest of setup
* Start work on compute graph
* Basic mat mul working
* Work on emscripten build
* Basic WebGPU backend instructions
* Use EMSCRIPTEN flag
* Work on passing ci, implement 4d tensor multiplication
* Pass thread safety test
* Implement permuting for mul_mat and cpy
* minor cleanups
* Address feedback
* Remove division by type size in cpy op
* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends
* Fix name
* Fix macos dawn prefix path
Georgi Gerganov [Wed, 16 Jul 2025 13:35:42 +0000 (16:35 +0300)]
llama : add high-throughput mode (llama/14363)
* kv-cache : prepare K/V buffers for separation
ggml-ci
* batched-bench : fix oob write
ggml-ci
* llama : add "virtual sequences"
ggml-ci
* llama : use "stream" vs "virtual sequence"
ggml-ci
* graph : fix stream splitting when KV cache is not used
ggml-ci
* kv-cache : add multi-stream save/load support
ggml-ci
* llama : add "--attn-streams" flag
ggml-ci
* kv-cache : fix handling when find_slot fails
ggml-ci
* kv-cache : restore find_slot impl
ggml-ci
* kv-cache : add comments
* kv-cache : add bounds checks for sequence id
ggml-ci
* cont : add n_seq_max to batch allocr
ggml-ci
* kv-cache : perform stream copies lazily after llama_synchronize
ggml-ci
* kv-cache : avoid throwing exceptions across the C boundary
ggml-ci
* CUDA: 4D FlashAttention support (llama/14628)
* CUDA: 4D FlashAttention support
* CUDA: fix WMMA FA kernel
* llama : rename attn_streams -> kv_unified
ggml-ci
* common : rename kv_split -> kv_unified
ggml-ci
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Wed, 16 Jul 2025 11:43:32 +0000 (14:43 +0300)]
ggml : add asserts (llama/14720)
* ggml : add asserts
ggml-ci
* cont : fix constant type
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
Jeff Bolz [Tue, 15 Jul 2025 19:51:09 +0000 (14:51 -0500)]
vulkan: fix noncontig check for mat_mul_id splitting (llama/14683)
* vulkan: fix noncontig check for mat_mul_id splitting
Remove supports_op check for > 4096 (splitting fixes this)
* vulkan: fix batched matmul dequant for Q*_K
Jeff Bolz [Tue, 15 Jul 2025 19:32:11 +0000 (14:32 -0500)]
vulkan: add RTE variants for glu/add/sub/mul/div (llama/14653)
R0CKSTAR [Tue, 15 Jul 2025 07:28:53 +0000 (15:28 +0800)]
cuda: fix build warnings in set-rows.cu (unused variable) (llama/14687)
Signed-off-by: Xiaodong Ye <redacted>
Anton Mitkov [Mon, 14 Jul 2025 17:12:42 +0000 (18:12 +0100)]
sycl: Hotfix for non dnnl codepath (llama/14677)
shalinib-ibm [Mon, 14 Jul 2025 13:16:42 +0000 (18:46 +0530)]
ggml : refactor llamafile_sgemm PPC code (llama/14673)
Remove unnecessary templates from class definition and packing functions
Reduce deeply nested conditionals and if-else switching in the mnpack function
Replace repetitive code with inline functions in packing functions
2~7% improvement in Q8 models
15~50% improvement in Q4 models
Signed-off-by: Shalini Salomi Bodapati <redacted>
Akarshan Biswas [Mon, 14 Jul 2025 09:37:55 +0000 (15:07 +0530)]
SYCL: use 1D kernel for set_rows (llama/14618)
* SYCL: Use 1D kernel for set_rows
* Remove dangling comment
* Refactor and use ceil_div
Anton Mitkov [Mon, 14 Jul 2025 09:37:35 +0000 (10:37 +0100)]
sycl: Batched mulmat rework for oneDNN dispatch (llama/14617)
Sigbjørn Skjæret [Sun, 13 Jul 2025 13:01:24 +0000 (15:01 +0200)]
cuda : add set rows for bf16 (llama/14664)
Yavor Ivanov [Sun, 13 Jul 2025 09:33:16 +0000 (02:33 -0700)]
cuda : add ELU support (llama/14657)
Georgi Gerganov [Sun, 13 Jul 2025 07:36:33 +0000 (10:36 +0300)]
ggml : add build-time message to remind about ggml_set_rows (llama/14661)
ggml-ci
Yavor Ivanov [Sun, 13 Jul 2025 05:38:13 +0000 (22:38 -0700)]
metal : Add missing unary ops Metal support (llama/14660)
Tarek Dakhran [Sat, 12 Jul 2025 17:10:14 +0000 (19:10 +0200)]
tests : cover lfm2 cases in test_ssm_conv (llama/14651)
Aman Gupta [Sat, 12 Jul 2025 13:31:38 +0000 (21:31 +0800)]
CUDA: add set rows for f32 and f16 (llama/14551)
* CUDA: add set rows for f32 and f16
* Review: change kernel params, use strides from host
* Use 1-d kernel
* Review: use int64_t for blockDim.x, rename nb->s for clarity
Georgi Gerganov [Sat, 12 Jul 2025 16:24:58 +0000 (19:24 +0300)]
sync : whisper.cpp
Georgi Gerganov [Sat, 12 Jul 2025 13:12:49 +0000 (16:12 +0300)]
git : remove kompute submodule (#1300)
ggml-ci
Georgi Gerganov [Sat, 12 Jul 2025 11:39:52 +0000 (14:39 +0300)]
sync : resolve conflicts (#0)
ggml-ci
Georgi Gerganov [Sat, 12 Jul 2025 11:36:32 +0000 (14:36 +0300)]
sync : llama.cpp
ggml-ci
Jeff Bolz [Sat, 12 Jul 2025 10:12:26 +0000 (05:12 -0500)]
vulkan: support SET_ROWS (llama/14587)
* vulkan: support SET_ROWS
Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.
* vulkan: optimize set_rows
Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
Jeff Bolz [Sat, 12 Jul 2025 09:51:58 +0000 (04:51 -0500)]
vulkan: optimizations for deepseek prompt processing (llama/14555)
* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader
* vulkan: increase coopmat2 mul_mat_id tile size
* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path
* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)