git.djapps.eu Git - pkg/ggml/sources/ggml/log
4 months agoMerge remote-tracking branch 'origin/debian/latest' into debian/latest-cuda
Mathieu Baudier [Sat, 15 Feb 2025 10:18:43 +0000 (11:18 +0100)]
Merge remote-tracking branch 'origin/debian/latest' into
debian/latest-cuda

4 months agoUpdate upstream web site
Mathieu Baudier [Sat, 15 Feb 2025 06:40:49 +0000 (07:40 +0100)]
Update upstream web site

4 months agoMerge tag 'upstream/0.0.1722' into debian/latest
Mathieu Baudier [Sat, 15 Feb 2025 06:33:12 +0000 (07:33 +0100)]
Merge tag 'upstream/0.0.1722' into debian/latest

4 months agosync : llama.cpp upstream/0.0.1722
Georgi Gerganov [Wed, 12 Feb 2025 19:46:43 +0000 (21:46 +0200)]
sync : llama.cpp

ggml-ci

4 months agoHIP: Switch to std::vector in rocblas version check (llama/11820)
uvos [Wed, 12 Feb 2025 16:25:03 +0000 (17:25 +0100)]
HIP: Switch to std::vector in rocblas version check (llama/11820)

4 months agocleanup: fix compile warnings associated with gnu_printf (llama/11811)
bandoti [Wed, 12 Feb 2025 14:06:53 +0000 (10:06 -0400)]
cleanup: fix compile warnings associated with gnu_printf (llama/11811)

4 months agoggml : fix multi-threaded clamp_f32 (llama/11824)
Richard [Wed, 12 Feb 2025 13:57:33 +0000 (13:57 +0000)]
ggml : fix multi-threaded clamp_f32 (llama/11824)

* Bug fix for clamp_f32

When using tensors larger than 1-D, the clamp operation did not work, because every thread with ith != 0 returned early.

* Bug fix for clamp_f32

* Bug fix for clamp_f32
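
The fix described above can be sketched as follows. This is an illustrative reconstruction, not the actual ggml kernel; clamp_f32_rows and demo_clamp are made-up names, and the interleaved row partitioning is the assumed scheme:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch of the fixed partitioning: instead of every thread
// with ith != 0 returning early (which left all rows but thread 0's share
// unclamped), each thread ith of nth handles rows ith, ith + nth, ...
void clamp_f32_rows(std::vector<float> &data, int64_t ncols, int64_t nrows,
                    float lo, float hi, int ith, int nth) {
    for (int64_t row = ith; row < nrows; row += nth) {
        float *x = data.data() + row * ncols;
        for (int64_t i = 0; i < ncols; ++i) {
            x[i] = std::clamp(x[i], lo, hi);
        }
    }
}

// run all nth "threads" sequentially, for demonstration purposes only
std::vector<float> demo_clamp(std::vector<float> data, int64_t ncols,
                              int64_t nrows, float lo, float hi, int nth) {
    for (int ith = 0; ith < nth; ++ith) {
        clamp_f32_rows(data, ncols, nrows, lo, hi, ith, nth);
    }
    return data;
}
```

With the old early return, only thread 0's rows would have been clamped; here every row is covered regardless of nth.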

4 months agoggml-cpu: Fix duplicate MATMUL_INT8 (llama/11817)
Weizhao Ouyang [Wed, 12 Feb 2025 12:22:58 +0000 (20:22 +0800)]
ggml-cpu: Fix duplicate MATMUL_INT8 (llama/11817)

Signed-off-by: Weizhao Ouyang <redacted>
4 months agoCUDA: fix CUDART_VERSION checks (llama/11821)
Johannes Gäßler [Wed, 12 Feb 2025 12:16:39 +0000 (13:16 +0100)]
CUDA: fix CUDART_VERSION checks (llama/11821)

4 months agoFix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (llama/11803)
Sheldon Robinson [Tue, 11 Feb 2025 15:55:45 +0000 (10:55 -0500)]
Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (llama/11803)

* Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx

* Fix #11802: PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string

4 months agoCUDA: use arch list for compatibility check (llama/11775)
Johannes Gäßler [Mon, 10 Feb 2025 23:17:22 +0000 (00:17 +0100)]
CUDA: use arch list for compatibility check (llama/11775)

* CUDA: use arch list for feature availability check

---------

Co-authored-by: Diego Devesa <redacted>
4 months agofix: typos in documentation files (llama/11791)
Maxim Evtush [Mon, 10 Feb 2025 22:21:31 +0000 (23:21 +0100)]
fix: typos in documentation files (llama/11791)

* Update ggml.c

* Update arg.cpp

* Update speculative.h

4 months agovulkan: Make Vulkan optional at runtime (#11493). (llama/11494)
Danny Milosavljevic [Mon, 10 Feb 2025 06:17:21 +0000 (07:17 +0100)]
vulkan: Make Vulkan optional at runtime (#11493). (llama/11494)

Co-authored-by: Jeff Bolz <redacted>
4 months agovulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation...
Wagner Bruna [Mon, 10 Feb 2025 06:08:22 +0000 (03:08 -0300)]
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (llama/11592)

4 months agovulkan: account for lookup tables when checking shared memory size (llama/11502)
Jeff Bolz [Sun, 9 Feb 2025 07:43:51 +0000 (01:43 -0600)]
vulkan: account for lookup tables when checking shared memory size (llama/11502)

4 months agoggml: Fix data race in ggml threadpool (llama/11736)
Karol Kontny [Sat, 8 Feb 2025 14:30:53 +0000 (15:30 +0100)]
ggml: Fix data race in ggml threadpool (llama/11736)

After the barrier in the last iteration is executed, the loop-termination condition is still evaluated. However, the main thread may already have destroyed the cgraph object and its nodes by then, and another thread would then access an object that is already gone. Trouble can also happen when n_nodes == 0 or abort is called, though it is not clear whether the former situation is possible.

The last synchronization should therefore be done after the loop, to ensure the cgraph/cplan won't be accessed after the main thread exits the function.
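
A minimal, self-contained sketch of the idea, with assumed names and a much-simplified graph rather than ggml's real threadpool: workers pass a final synchronization point after the work loop, and the main thread frees the shared graph only once every worker has passed it.

```cpp
#include <atomic>
#include <thread>
#include <vector>

struct shared_graph { std::vector<int> nodes; };  // stand-in for the cgraph

void worker(const shared_graph *g, std::atomic<int> *done,
            std::atomic<long> *sum) {
    for (int n : g->nodes) sum->fetch_add(n);
    done->fetch_add(1);  // final sync: last access to *g happened before this
}

long run_graph(int nthreads) {
    auto *g = new shared_graph{{1, 2, 3, 4}};
    std::atomic<int> done{0};
    std::atomic<long> sum{0};
    std::vector<std::thread> threads;
    for (int i = 0; i < nthreads; ++i) {
        threads.emplace_back(worker, g, &done, &sum);
    }
    while (done.load() != nthreads) { }  // wait for the final sync point
    delete g;  // safe: every worker is already past its last access to g
    for (auto &t : threads) t.join();
    return sum.load();
}
```

Without the final synchronization, `delete g` could race with a worker still evaluating its loop condition against the graph.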

4 months agoCUDA: fix min. version for movmatrix (llama/11751)
Johannes Gäßler [Sat, 8 Feb 2025 09:46:07 +0000 (10:46 +0100)]
CUDA: fix min. version for movmatrix (llama/11751)

4 months agovulkan: print shared memory size (llama/11719)
Jeff Bolz [Fri, 7 Feb 2025 10:26:03 +0000 (04:26 -0600)]
vulkan: print shared memory size (llama/11719)

4 months agoSYCL: remove XMX info from print devices (llama/11712)
Akarshan Biswas [Fri, 7 Feb 2025 09:27:53 +0000 (14:57 +0530)]
SYCL: remove XMX info from print devices (llama/11712)

4 months agoggml : optimize and build warning fix for LoongArch (llama/11709)
Jinyang He [Fri, 7 Feb 2025 07:38:31 +0000 (15:38 +0800)]
ggml : optimize and build warning fix for LoongArch (llama/11709)

* ggml : optimize convert f32<->f16 for loongarch_asx

* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16

* ggml : Fix warnings when run cpu CI locally on LoongArch

4 months agoSYCL: Adjust support condition for norm operators (llama/11674)
Akarshan Biswas [Thu, 6 Feb 2025 11:42:35 +0000 (17:12 +0530)]
SYCL: Adjust support condition for norm operators (llama/11674)

SYCL does not support non-contiguous tensors for norm operations.

4 months agoggml : fix LoongArch compile error with 128-bit SIMD (llama/11701)
junchao-zhao [Thu, 6 Feb 2025 09:20:00 +0000 (17:20 +0800)]
ggml : fix LoongArch compile error with 128-bit SIMD (llama/11701)

4 months agovulkan: optimize coopmat2 iq2/iq3 callbacks (llama/11521)
Jeff Bolz [Thu, 6 Feb 2025 06:15:30 +0000 (00:15 -0600)]
vulkan: optimize coopmat2 iq2/iq3 callbacks (llama/11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

4 months agovulkan: initial support for IQ4_XS quantization (llama/11501)
Rémy O [Thu, 6 Feb 2025 06:09:59 +0000 (07:09 +0100)]
vulkan: initial support for IQ4_XS quantization (llama/11501)

4 months agovulkan: use smaller combined allocations to avoid fragmentation (llama/11551)
Jeff Bolz [Thu, 6 Feb 2025 06:02:18 +0000 (00:02 -0600)]
vulkan: use smaller combined allocations to avoid fragmentation (llama/11551)

4 months agometal : avoid breaking build when metal API predates TARGET_OS_VISION (llama/11690)
Charles Duffy [Thu, 6 Feb 2025 01:52:31 +0000 (19:52 -0600)]
metal : avoid breaking build when metal API predates TARGET_OS_VISION (llama/11690)

Avoids breakage in nix flake build introduced by b0569130c5e9c671152c913d82803b7c2f014ff9

4 months agometal : adjust support conditions for norm operators (llama/11671)
Georgi Gerganov [Wed, 5 Feb 2025 08:57:42 +0000 (10:57 +0200)]
metal : adjust support conditions for norm operators (llama/11671)

cont #11659

ggml-ci

4 months agoCUDA: support for mat. mul. with ne03 != ne13 (llama/11656)
Johannes Gäßler [Wed, 5 Feb 2025 07:58:31 +0000 (08:58 +0100)]
CUDA: support for mat. mul. with ne03 != ne13 (llama/11656)

4 months agoCUDA: non-contiguous (RMS) norm support (llama/11659)
Johannes Gäßler [Tue, 4 Feb 2025 21:21:42 +0000 (22:21 +0100)]
CUDA: non-contiguous (RMS) norm support (llama/11659)

* CUDA: non-contiguous (RMS) norm support

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agoHIP: force max threads per block to be 1024 (llama/11621)
fxzjshm [Tue, 4 Feb 2025 18:18:38 +0000 (02:18 +0800)]
HIP: force max threads per block to be 1024 (llama/11621)

Some old or vendor-forked versions of LLVM still use 256. Explicitly set it to 1024 to align with upstream LLVM.

Signed-off-by: fxzjshm <redacted>
4 months agometal : use residency set for other platforms (llama/11648)
Jhen-Jie Hong [Tue, 4 Feb 2025 11:07:18 +0000 (19:07 +0800)]
metal : use residency set for other platforms (llama/11648)

4 months agocmake : fix CPU detection on loongarch64 in tests (#1106)
Shang Yuanchun [Sun, 9 Feb 2025 09:34:53 +0000 (17:34 +0800)]
cmake : fix CPU detection on loongarch64 in tests (#1106)

4 months agoreadme : remove transfer notice (#1107)
Georgi Gerganov [Sat, 8 Feb 2025 08:33:44 +0000 (10:33 +0200)]
readme : remove transfer notice (#1107)

* readme : remove transfer notice

ggml-ci

* readme : update url

ggml-ci

4 months agofix a bug in examples/simple/simple-backend (#1078)
Shawn yang [Fri, 7 Feb 2025 17:10:21 +0000 (01:10 +0800)]
fix a bug in examples/simple/simple-backend (#1078)

Co-authored-by: yangxiao <redacted>
4 months agoCUDA backend (requires contrib and non-free components)
Mathieu Baudier [Thu, 6 Feb 2025 14:44:00 +0000 (15:44 +0100)]
CUDA backend (requires contrib and non-free components)

4 months agorpc: fix known RCE in rpc-server (#1103)
Patrick Peng [Thu, 6 Feb 2025 14:29:13 +0000 (09:29 -0500)]
rpc: fix known RCE in rpc-server (#1103)

Add bounds checking in `rpc_server::copy_tensor` to prevent out-of-bounds writes: check that `(uint8_t *)dst->data + ggml_nbytes(src)` remains within the destination buffer's allocated region.
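
The kind of check described can be sketched like this (illustrative signature and name, not the actual rpc-server code), written so the size arithmetic cannot overflow:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Reject the copy when the write would end past the destination buffer's
// allocated region; the comparison is arranged to avoid size_t overflow.
bool copy_tensor_checked(uint8_t *dst_base, size_t dst_buf_size,
                         size_t dst_offset, const uint8_t *src,
                         size_t src_nbytes) {
    if (src_nbytes > dst_buf_size || dst_offset > dst_buf_size - src_nbytes) {
        return false;  // out-of-bounds write: refuse instead of corrupting memory
    }
    std::memcpy(dst_base + dst_offset, src, src_nbytes);
    return true;
}

// demo: is a copy of n bytes at the given offset into an 8-byte buffer accepted?
bool demo_copy(size_t offset, size_t n) {
    uint8_t dst[8] = {0};
    uint8_t src[8] = {0};
    return copy_tensor_checked(dst, sizeof dst, offset, src, n);
}
```

Note that a naive `dst_offset + src_nbytes > dst_buf_size` check can wrap around for attacker-controlled sizes, which is why the subtraction form is used.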

4 months agoMerge tag 'upstream/0.0.1689' into debian/latest
Mathieu Baudier [Wed, 5 Feb 2025 14:27:04 +0000 (15:27 +0100)]
Merge tag 'upstream/0.0.1689' into debian/latest

4 months agoUse upcoming versioning scheme
Mathieu Baudier [Wed, 5 Feb 2025 13:04:21 +0000 (14:04 +0100)]
Use upcoming versioning scheme

4 months agoreadme : add info about repository transfer
Georgi Gerganov [Tue, 4 Feb 2025 14:18:59 +0000 (16:18 +0200)]
readme : add info about repository transfer

4 months agoauthors : update upstream/0.0.1689
Georgi Gerganov [Tue, 4 Feb 2025 11:03:55 +0000 (13:03 +0200)]
authors : update

4 months agosync : whisper.cpp
Georgi Gerganov [Tue, 4 Feb 2025 10:59:01 +0000 (12:59 +0200)]
sync : whisper.cpp

4 months agocmake: Add ability to pass in GGML_BUILD_NUMBER (#1096)
Christian Kastner [Mon, 3 Feb 2025 23:17:15 +0000 (00:17 +0100)]
cmake: Add ability to pass in GGML_BUILD_NUMBER (#1096)

This makes git as a dependency optional, and is useful in the case where
ggml is built not from git, but from a tarball, or a distribution source
package.

This conditional also affects GGML_BUILD_COMMIT. Nothing seems to be
using it, though, so there doesn't seem to be much value in factoring it
out, or even requiring it.

4 months agosync : llama.cpp
Georgi Gerganov [Mon, 3 Feb 2025 12:26:19 +0000 (14:26 +0200)]
sync : llama.cpp

ggml-ci

4 months agoCUDA: fix Volta FlashAttention logic (llama/11615)
Johannes Gäßler [Mon, 3 Feb 2025 12:25:56 +0000 (13:25 +0100)]
CUDA: fix Volta FlashAttention logic (llama/11615)

4 months agosync : llama.cpp
Georgi Gerganov [Mon, 3 Feb 2025 08:52:41 +0000 (10:52 +0200)]
sync : llama.cpp

ggml-ci

4 months agoHIP: fix flash_attn_stream_k_fixup warning (llama/11604)
Johannes Gäßler [Sun, 2 Feb 2025 22:48:29 +0000 (23:48 +0100)]
HIP: fix flash_attn_stream_k_fixup warning (llama/11604)

4 months agoCUDA/HIP: add support for selectable warp size to mmv (llama/11519)
uvos [Sun, 2 Feb 2025 21:40:09 +0000 (22:40 +0100)]
CUDA/HIP: add support for selectable warp size to mmv (llama/11519)

CUDA/HIP: add support for selectable warp size to mmv

4 months ago HIP: add GGML_CUDA_CC_IS_* for AMD families, as increasing cc architectures for AMD GPUs are not supersets of each other (llama/11601)
uvos [Sun, 2 Feb 2025 21:08:05 +0000 (22:08 +0100)]
HIP: add GGML_CUDA_CC_IS_* for AMD families, as increasing cc architectures for AMD GPUs are not supersets of each other (llama/11601)

This fixes a bug where RDNA1 GPUs other than gfx1010 were not handled correctly.
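
The motivation can be illustrated with family predicates like these. The constants and names below are invented for the sketch (the real code defines its own GGML_CUDA_CC_* values); the point is that feature checks test family membership rather than a simple "cc >= X" or a single exact value such as gfx1010:

```cpp
// Made-up architecture IDs for the sketch, loosely following gfx numbering.
enum amd_cc : int {
    CC_RDNA1 = 1010,  // gfx101x
    CC_RDNA2 = 1030,  // gfx103x
    CC_RDNA3 = 1100,  // gfx110x
    CC_NEXT  = 1200,
};

// Membership tests: a half-open range per family, since a higher cc does
// not imply a superset of features on AMD.
constexpr bool cc_is_rdna1(int cc) { return cc >= CC_RDNA1 && cc < CC_RDNA2; }
constexpr bool cc_is_rdna2(int cc) { return cc >= CC_RDNA2 && cc < CC_RDNA3; }
constexpr bool cc_is_rdna3(int cc) { return cc >= CC_RDNA3 && cc < CC_NEXT; }
```

A check written as `cc == 1010` would miss gfx1012, which is exactly the class of bug the commit fixes.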

4 months agoCUDA: use mma PTX instructions for FlashAttention (llama/11583)
Johannes Gäßler [Sun, 2 Feb 2025 18:31:09 +0000 (19:31 +0100)]
CUDA: use mma PTX instructions for FlashAttention (llama/11583)

* CUDA: use mma PTX instructions for FlashAttention

* __shfl_sync workaround for movmatrix

* add __shfl_sync to HIP

Co-authored-by: Diego Devesa <redacted>
4 months ago`ci`: use sccache on windows instead of ccache (llama/11545)
Olivier Chafik [Fri, 31 Jan 2025 17:12:40 +0000 (17:12 +0000)]
`ci`: use sccache on windows instead of ccache (llama/11545)

* Use sccache on ci for windows

* Detect sccache in cmake

4 months agoHIP: require at least HIP 5.5
uvos [Wed, 29 Jan 2025 18:36:00 +0000 (19:36 +0100)]
HIP: require at least HIP 5.5

4 months agoHIP: Prepare reduction operators for wave 64
uvos [Wed, 29 Jan 2025 18:12:42 +0000 (19:12 +0100)]
HIP: Prepare reduction operators for wave 64

4 months agoCUDA/HIP: add warp_size to cuda_device_info
uvos [Wed, 29 Jan 2025 16:46:23 +0000 (17:46 +0100)]
CUDA/HIP: add warp_size to cuda_device_info

4 months agovulkan: implement initial support for IQ2 and IQ3 quantizations (llama/11360)
Rémy Oudompheng [Wed, 29 Jan 2025 17:29:39 +0000 (18:29 +0100)]
vulkan: implement initial support for IQ2 and IQ3 quantizations (llama/11360)

* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <redacted>
4 months agovulkan: Catch pipeline creation failure and print an error message (llama/11436)
Jeff Bolz [Wed, 29 Jan 2025 15:26:50 +0000 (09:26 -0600)]
vulkan: Catch pipeline creation failure and print an error message (llama/11436)

* vulkan: Catch pipeline creation failure and print an error message

Also, fix some warnings from my on-demand compile change.

* vulkan: fix pipeline creation logging

4 months agocmake : sync new file
Georgi Gerganov [Wed, 29 Jan 2025 09:37:00 +0000 (11:37 +0200)]
cmake : sync new file

ggml-ci

4 months agoscripts : sync cmake
Georgi Gerganov [Wed, 29 Jan 2025 09:35:43 +0000 (11:35 +0200)]
scripts : sync cmake

4 months agosync : llama.cpp
Georgi Gerganov [Wed, 29 Jan 2025 09:26:23 +0000 (11:26 +0200)]
sync : llama.cpp

ggml-ci

4 months ago HIP: Suppress transformation warning in softmax.cu
uvos [Tue, 28 Jan 2025 22:06:32 +0000 (23:06 +0100)]
HIP: Suppress transformation warning in softmax.cu

Loops with bounds not known at compile time cannot be unrolled. When ncols_template == 0, the bounds of the loop are not constexpr, so LLVM cannot unroll the loops here.
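
The pattern behind the warning can be sketched in host-side C++ (illustrative only; the real code is a HIP kernel): when the template parameter is nonzero the trip count is a compile-time constant and the loop can be unrolled, while the zero case falls back to the runtime bound, which no compiler can fully unroll.

```cpp
// ncols_template > 0: compile-time trip count, unrollable.
// ncols_template == 0: runtime trip count, not unrollable; this is the
// case the (suppressed) transformation warning complains about.
template <int ncols_template>
float row_sum(const float *x, int ncols_runtime) {
    const int ncols = ncols_template == 0 ? ncols_runtime : ncols_template;
    float sum = 0.0f;
    for (int i = 0; i < ncols; ++i) {  // a CUDA/HIP kernel would put
        sum += x[i];                   // "#pragma unroll" on this loop
    }
    return sum;
}

// same data through both instantiations, for demonstration
float demo_sum() {
    const float xs[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    return row_sum<4>(xs, 4) + row_sum<0>(xs, 4);
}
```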

4 months ago HIP: Only call rocblas_initialize on rocblas versions with the multiple instantiation bug (llama/11080)
Nikita Sarychev [Tue, 28 Jan 2025 15:42:20 +0000 (07:42 -0800)]
HIP: Only call rocblas_initialize on rocblas versions with the multiple instantiation bug (llama/11080)

This disables the workaround on rocblas fixed versions (>=4.0.0) to eliminate the runtime cost and unnecessary VRAM allocation of loading all tensile objects.

4 months agocmake : don't fail on `GGML_CPU=OFF` (llama/11457)
someone13574 [Tue, 28 Jan 2025 14:15:34 +0000 (09:15 -0500)]
cmake : don't fail on `GGML_CPU=OFF` (llama/11457)

4 months agoSYCL : SOFTMAX F16 mask support and other fixes (llama/11261)
Akarshan Biswas [Tue, 28 Jan 2025 09:56:58 +0000 (15:26 +0530)]
SYCL : SOFTMAX F16 mask support and other fixes (llama/11261)

Implemented ggml_sycl_op_soft_max() F16 src1 (mask) support, for which a pragma deprecation warning was added during #5021.
To do this, it had to be decoupled from ggml_sycl_op_flatten, which always considered src1 to be of fp32 type (many OP functions depend on it).

* SYCL: SOFTMAX F16 mask support and other fixes

* test-backend-ops: Add F16 mask test cases

4 months agoAMD: parse the architecture as supplied by gcnArchName (llama/11244)
Haus1 [Mon, 27 Jan 2025 13:58:17 +0000 (08:58 -0500)]
AMD: parse the architecture as supplied by gcnArchName (llama/11244)

The value provided by minor doesn't include the stepping for AMD; parse the value returned by gcnArchName instead to retrieve an accurate ID.
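
An illustrative parser for this idea (not the actual llama.cpp code): it extracts the numeric ID from a gcnArchName string such as "gfx1010" or "gfx90a:sramecc+:xnack-", assuming the conventional hex-like gfx naming (the "a" in gfx90a is a hex digit).

```cpp
#include <cctype>
#include <string>

// Return the architecture ID parsed from a gcnArchName string, or -1 when
// the string does not start with "gfx". Digits after the prefix are read
// as hexadecimal; parsing stops at the first non-hex character (e.g. ':').
int parse_gcn_arch(const std::string &name) {
    if (name.rfind("gfx", 0) != 0) return -1;
    int id = 0;
    for (size_t i = 3; i < name.size() &&
                       std::isxdigit((unsigned char)name[i]); ++i) {
        const char c = name[i];
        id = id * 16 + (std::isdigit((unsigned char)c)
                            ? c - '0'
                            : std::tolower((unsigned char)c) - 'a' + 10);
    }
    return id;
}
```

This recovers the stepping (gfx1010 vs. gfx1012, gfx90a) that the plain `minor` field loses.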

4 months agometal: Handle null returned from MTLCreateSystemDefaultDevice() (llama/11441)
Ihar Hrachyshka [Mon, 27 Jan 2025 07:41:59 +0000 (02:41 -0500)]
metal: Handle null returned from MTLCreateSystemDefaultDevice() (llama/11441)

This fixes a segmentation fault when running tests and no Metal
devices are available (for example, when not linked with the Core Graphics
framework).

4 months agometal : use residency sets (llama/11427)
Georgi Gerganov [Sun, 26 Jan 2025 18:06:16 +0000 (20:06 +0200)]
metal : use residency sets (llama/11427)

* metal : use residency sets

ggml-ci

* metal : restore commandBufferWithUnretainedReferences calls [no ci]

* metal : release descriptors

ggml-ci

* metal : check env GGML_METAL_NO_RESIDENCY

ggml-ci

* metal : fix build + clean-up

ggml-ci

4 months agocmake: add ggml find package (llama/11369)
bandoti [Sun, 26 Jan 2025 16:07:48 +0000 (12:07 -0400)]
cmake: add ggml find package (llama/11369)

* Add initial ggml cmake package

* Add build numbers to ggml find-package

* Expand variables with GGML_ prefix

* Guard against adding to cache variable twice

* Add git to msys2 workflow

* Handle ggml-cpu-* variants

* Link ggml/ggml-base libraries to their targets

* Replace main-cmake-pkg with simple-cmake-pkg

* Interface features require c_std_90

* Fix typo

* Removed unnecessary bracket from status message

* Update examples/simple-cmake-pkg/README.md

Co-authored-by: Georgi Gerganov <redacted>
* Update examples/simple-cmake-pkg/README.md

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agovulkan: compile shaders on-demand (llama/11406)
Jeff Bolz [Sat, 25 Jan 2025 21:29:57 +0000 (15:29 -0600)]
vulkan: compile shaders on-demand (llama/11406)

Reduce first-run startup time and memory consumption.

Should fix #11339.
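
The on-demand idea can be sketched as a compile-on-first-use cache (illustrative, not the actual Vulkan backend code): each pipeline is compiled the first time it is requested and cached, so startup does no shader compilation and unused pipelines cost nothing.

```cpp
#include <string>
#include <unordered_map>

struct pipeline { std::string name; };

struct pipeline_cache {
    std::unordered_map<std::string, pipeline> compiled;
    int compile_count = 0;  // stands in for the expensive shader compile

    pipeline &get(const std::string &name) {
        auto it = compiled.find(name);
        if (it == compiled.end()) {
            ++compile_count;  // compile on first use only
            it = compiled.emplace(name, pipeline{name}).first;
        }
        return it->second;
    }
};

// three requests across two distinct pipelines: only two compiles happen
int demo_compiles() {
    pipeline_cache cache;
    cache.get("mul_mat_f16");
    cache.get("mul_mat_f16");
    cache.get("softmax_f32");
    return cache.compile_count;
}
```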

4 months ago HIP: disable VMM on HIP, as it seems that it doesn't work in some configurations (llama/11420)
uvos [Sat, 25 Jan 2025 20:01:12 +0000 (21:01 +0100)]
HIP: disable VMM on HIP, as it seems that it doesn't work in some configurations (llama/11420)

4 months agohip : Add hipGraph and VMM support to ROCM (llama/11362)
uvos [Fri, 24 Jan 2025 23:02:23 +0000 (00:02 +0100)]
hip : Add hipGraph and VMM support to ROCM (llama/11362)

* Add hipGraph support

* Enable VMM on rocm

4 months agoCUDA: fix FP16 cuBLAS GEMM (llama/11396)
Johannes Gäßler [Fri, 24 Jan 2025 20:02:43 +0000 (21:02 +0100)]
CUDA: fix FP16 cuBLAS GEMM (llama/11396)

4 months agorocBLAS: Avoid fp32->fp16->fp32 conversion on cdna (llama/11356)
uvos [Fri, 24 Jan 2025 16:50:49 +0000 (17:50 +0100)]
rocBLAS: Avoid fp32->fp16->fp32 conversion on cdna (llama/11356)

4 months agoCPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)
Johannes Gäßler [Fri, 24 Jan 2025 11:38:31 +0000 (12:38 +0100)]
CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)

4 months agocmake : avoid -march=native when reproducible build is wanted (llama/11366)
Bernhard M. Wiedemann [Fri, 24 Jan 2025 11:21:35 +0000 (12:21 +0100)]
cmake : avoid -march=native when reproducible build is wanted (llama/11366)

See https://reproducible-builds.org/ for why this is good
and https://reproducible-builds.org/specs/source-date-epoch/
for the definition of this variable.

Without this patch, compiling on different machines produced different binaries, which made verification of results difficult.

Fixes: #11317
This patch was done while working on reproducible builds for openSUSE.

4 months agotests: fix some mul_mat test gaps (llama/11375)
Jeff Bolz [Thu, 23 Jan 2025 20:51:24 +0000 (14:51 -0600)]
tests: fix some mul_mat test gaps (llama/11375)

Now that we have batched mat-vec mul Vulkan shaders for up to n==8,
these tests weren't actually exercising the mat-mat mul path. Test
n==9 as well. Also, change to use all_types.
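
The gap can be illustrated with a hypothetical dispatch function (the n == 8 limit is taken from the note above; the enum and function are invented): only n >= 9 reaches the mat-mat path, so tests capped at n == 8 never exercised it.

```cpp
enum class mm_path { mat_vec, mat_mat };

// Assumed dispatch: batched mat-vec shaders cover n up to 8, everything
// larger falls through to the general mat-mat multiplication path.
mm_path select_mul_mat_path(int n) {
    constexpr int max_batched_mat_vec_n = 8;
    return n <= max_batched_mat_vec_n ? mm_path::mat_vec : mm_path::mat_mat;
}
```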

4 months agoVulkan-run-test: fix mmq_wg_denoms (llama/11343)
amd-dwang [Thu, 23 Jan 2025 07:14:28 +0000 (15:14 +0800)]
Vulkan-run-test: fix mmq_wg_denoms (llama/11343)

There appears to be a copy-and-paste error here.

*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.

4 months agovulkan: sort shaders for more deterministic binary (llama/11315)
Jeff Bolz [Thu, 23 Jan 2025 07:07:50 +0000 (01:07 -0600)]
vulkan: sort shaders for more deterministic binary (llama/11315)

Fixes #11306.

4 months agovulkan: fix diag_mask_inf (llama/11323)
Jeff Bolz [Thu, 23 Jan 2025 07:01:17 +0000 (01:01 -0600)]
vulkan: fix diag_mask_inf (llama/11323)

With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs.
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.
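
For reference, the operation's scalar semantics can be sketched as follows; the fix above concerns the shader's workgroup shape, not this formula, and the function names here are illustrative:

```cpp
#include <cmath>
#include <limits>
#include <vector>

// diag_mask_inf reference: entries past the diagonal shifted by n_past
// become -infinity, everything else is left untouched.
void diag_mask_inf(std::vector<float> &m, int rows, int cols, int n_past) {
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            if (c > r + n_past) {
                m[(size_t)r * cols + c] =
                    -std::numeric_limits<float>::infinity();
            }
        }
    }
}

// demo: mask a rows x cols zero matrix
std::vector<float> demo_mask(int rows, int cols, int n_past) {
    std::vector<float> m((size_t)rows * cols, 0.0f);
    diag_mask_inf(m, rows, cols, n_past);
    return m;
}
```

With reversed workgroup dimensions, a shader computing the same thing would index out of bounds whenever rows != cols, which matches the OOB stores observed.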

4 months agorpc : better caching of the base buffer pointer (llama/11331)
Radoslav Gerganov [Tue, 21 Jan 2025 13:06:41 +0000 (15:06 +0200)]
rpc : better caching of the base buffer pointer (llama/11331)

There is no need to use map, just store the base pointer in the buffer
context.
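
The simplification can be sketched as follows (types and names are illustrative, not the actual rpc backend): the remote base pointer is fetched once and cached in the buffer context itself, removing the need for a separate map from buffer to base pointer.

```cpp
#include <cstdint>

struct rpc_buffer_ctx {
    uint64_t remote_handle;
    void    *base = nullptr;  // cached after the first get_base call
};

int g_rpc_round_trips = 0;  // stands in for real RPC traffic

void *demo_fetch_remote_base(uint64_t /*handle*/) {
    ++g_rpc_round_trips;
    return reinterpret_cast<void *>(0x1000);
}

void *buffer_get_base(rpc_buffer_ctx &ctx, void *(*fetch)(uint64_t)) {
    if (ctx.base == nullptr) {
        ctx.base = fetch(ctx.remote_handle);  // one round-trip, first call only
    }
    return ctx.base;  // subsequent calls hit the cache
}

// two consecutive get_base calls should cost exactly one round-trip
int demo_round_trips() {
    g_rpc_round_trips = 0;
    rpc_buffer_ctx ctx{7};
    buffer_get_base(ctx, demo_fetch_remote_base);
    buffer_get_base(ctx, demo_fetch_remote_base);
    return g_rpc_round_trips;
}
```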

4 months agometal : fix out-of-bounds write (llama/11314)
Georgi Gerganov [Tue, 21 Jan 2025 06:48:13 +0000 (08:48 +0200)]
metal : fix out-of-bounds write (llama/11314)

ggml-ci

4 months agovulkan: fix coopmat2 validation failures (llama/11284)
Jeff Bolz [Mon, 20 Jan 2025 16:38:32 +0000 (10:38 -0600)]
vulkan: fix coopmat2 validation failures (llama/11284)

mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert and this
is not in the inner loop and is cheap enough. For mul mat, it's more
efficient to do this conversion in a separate pass and have the input(s)
be f16.

coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

4 months agoSYCL: Introducing memory host pool (llama/11251)
Nicolò Scipione [Sun, 19 Jan 2025 13:33:34 +0000 (14:33 +0100)]
SYCL: Introducing memory host pool (llama/11251)

* Implement host pool for matrix_info

Creating a new memory pool on the host to store memory location for
matrix_info needed to launch gemm_batch from oneMKL/oneMath.
Removing complex support in gemm_batch since it is not used in llama.cpp

* Remove unnecessary headers and cast

* Reorder member variable to avoid warning on initialization

* Formatting

* Remove unused variable

* Address PR review feedback - remove warning

---------

Signed-off-by: nscipione <redacted>
4 months agocmake : add sanitizer flags for llama.cpp (llama/11279)
Georgi Gerganov [Sat, 18 Jan 2025 14:18:15 +0000 (16:18 +0200)]
cmake : add sanitizer flags for llama.cpp (llama/11279)

* cmake : add sanitizer flags for llama.cpp

ggml-ci

* tests : fix compile warnings

ggml-ci

* cmake : move sanitizer flags to llama_add_compile_flags

ggml-ci

* cmake : move llama.cpp compile flags to top level lists

ggml-ci

* cmake : apply only sanitizer flags at top level

ggml-ci

* tests : fix gguf context use in same_tensor_data

* gguf-test: tensor data comparison

* dummy : trigger ggml-ci

* unicode : silence gcc warnings

ggml-ci

* ci : use sanitizer builds only in Debug mode

ggml-ci

* cmake : add status messages [no ci]

---------

Co-authored-by: Johannes Gäßler <redacted>
4 months agovulkan: fix coopmat2 flash attention for non-contiguous inputs (llama/11281)
Jeff Bolz [Sat, 18 Jan 2025 08:26:50 +0000 (02:26 -0600)]
vulkan: fix coopmat2 flash attention for non-contiguous inputs (llama/11281)

Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.

Add noncontiguous FA tests in test-backend-ops.

Fixes #11268.

4 months agorpc : early register backend devices (llama/11262)
Radoslav Gerganov [Fri, 17 Jan 2025 08:57:09 +0000 (10:57 +0200)]
rpc : early register backend devices (llama/11262)

Early register RPC devices and do not propagate RPC specifics in the
llama model structures.

ref: #10609

4 months agovulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (llama/11166)
Jeff Bolz [Thu, 16 Jan 2025 21:47:10 +0000 (15:47 -0600)]
vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (llama/11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination

4 months agovulkan: optimize coopmat2 q4_k/q5_k dequant functions. (llama/11206)
Jeff Bolz [Thu, 16 Jan 2025 21:23:49 +0000 (15:23 -0600)]
vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (llama/11206)

Do masking on whole dwords, fetch all scales at once.

4 months agovulkan: optimize coopmat2 q2_k dequant function (llama/11130)
Jeff Bolz [Thu, 16 Jan 2025 21:16:39 +0000 (15:16 -0600)]
vulkan: optimize coopmat2 q2_k dequant function (llama/11130)

4 months agoCUDA: backwards pass for misc. ops, add tests (llama/11257)
Johannes Gäßler [Thu, 16 Jan 2025 15:43:38 +0000 (16:43 +0100)]
CUDA: backwards pass for misc. ops, add tests (llama/11257)

* CUDA: backwards pass for misc. ops, add tests

* remove restrict from pointers

4 months agoggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (llama/11227)
fj-y-saito [Thu, 16 Jan 2025 09:11:49 +0000 (18:11 +0900)]
ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (llama/11227)

* Add SVE support for q4_K_q8_K

* Update ggml/src/ggml-cpu/ggml-cpu-quants.c

change to use K_SCALE_SIZE

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agovulkan: scale caching for k quants + misc fixes (llama/11081)
Eve [Wed, 15 Jan 2025 19:50:13 +0000 (19:50 +0000)]
vulkan: scale caching for k quants + misc fixes (llama/11081)

* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers

4 months agofix: ggml: fix vulkan-shaders-gen build (llama/10448)
Junil Kim [Wed, 15 Jan 2025 13:17:42 +0000 (22:17 +0900)]
fix: ggml: fix vulkan-shaders-gen build (llama/10448)

* fix: ggml: fix vulkan-shaders-gen build

The vulkan-shaders-gen target was not being built correctly
in case of cross-compilation.
Other outputs need to be built for the cross compile target,
but vulkan-shaders-gen needs to be built for the host.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

Use configure_file to generate host_toolchain.cmake from template

* fix: ggml: Fix compile error

Fix compile error not finding vulkan-shaders-gen

* fix: vulkan-shaders-gen build and path handling

Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation

* fix: improve host compiler detection in vulkan shader build

Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation

* refactor: Simplify CMake function for detecting host compiler

Simplified the CMake function to improve the process of detecting the host compiler.

* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt

Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)

* refactor: Rename host_toolchain.cmake.in

- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in

* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

4 months agoRoPE: fix back, CUDA support for back + noncont. (llama/11240)
Johannes Gäßler [Wed, 15 Jan 2025 11:51:37 +0000 (12:51 +0100)]
RoPE: fix back, CUDA support for back + noncont. (llama/11240)

* RoPE: fix back, CUDA support for back + noncont.

* fix comments reg. non-cont. RoPE support [no-ci]

4 months agoSYCL: Add gated linear attention kernel (llama/11175)
Akarshan Biswas [Wed, 15 Jan 2025 03:20:17 +0000 (08:50 +0530)]
SYCL: Add gated linear attention kernel (llama/11175)

* SYCL: Add Gated Linear attention kernel

* glahpp: add a space at the end of file

* gla: Put the barrier inside the main logic loop

5 months agocmake : fix build tests on arm (#1084)
Andrii Ryzhkov [Sat, 25 Jan 2025 13:13:00 +0000 (14:13 +0100)]
cmake : fix build tests on arm (#1084)

5 months agoggml : add option to not print stack on abort (#1081)
William Tambellini [Thu, 23 Jan 2025 19:59:08 +0000 (11:59 -0800)]
ggml : add option to not print stack on abort (#1081)

* Add option to not print stack on abort

Add option/envvar to disable stack printing on abort.
Also link some unittests with Threads to fix link errors on
ubuntu/g++11.

* Update src/ggml.c

---------

Co-authored-by: Diego Devesa <redacted>
5 months agoAdd a link in the Debian package to a portable CPU backend
Mathieu Baudier [Wed, 22 Jan 2025 09:47:41 +0000 (10:47 +0100)]
Add a link in the Debian package to a portable CPU backend

5 months agoSimplify debian packaging now that GGML backends are in libexec
Mathieu Baudier [Tue, 21 Jan 2025 12:20:49 +0000 (13:20 +0100)]
Simplify debian packaging now that GGML backends are in libexec

5 months agoInstall GGML backends in libexec
Mathieu Baudier [Tue, 21 Jan 2025 11:45:38 +0000 (12:45 +0100)]
Install GGML backends in libexec

5 months agoDebian development package only depends on base libraries
Mathieu Baudier [Tue, 21 Jan 2025 11:03:04 +0000 (12:03 +0100)]
Debian development package only depends on base libraries

5 months agoImprove debian build based on lintian
Mathieu Baudier [Tue, 21 Jan 2025 10:57:53 +0000 (11:57 +0100)]
Improve debian build based on lintian