git.djapps.eu Git - pkg/ggml/sources/ggml - commit log
Jeff Bolz [Sat, 28 Jun 2025 03:35:30 +0000 (22:35 -0500)]
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (llama/14427)

This setting needs to be passed through to vulkan-shaders-gen

Radoslav Gerganov [Fri, 27 Jun 2025 13:41:40 +0000 (16:41 +0300)]
ggml : add ggml_set_rows (llama/14274)

* ggml : add ggml_set_rows

Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using
indices from 'c'.

ref: #8366

* use I64 for indices

* ggml : add repeat impl for i64

* ggml : add ggml_is_contiguous_rows

* ggml : ggml_set_rows support broadcast

* ggml : ggml_set_rows support quantized dst

ggml-ci

* ggml : support GGML_TYPE_F32 ".from_float" trait

* ggml : ggml_set_rows update comment + better index name

* tests : add ggml_set_rows

* metal : add ggml_set_rows implementation

ggml-ci

* ggml : simplify forward_dup_f32

* ggml : fix supports_op

* tests : add comment to set_rows

* ggml : leave the repeat_i64 for a separate PR

ggml-ci

* ggml : set_rows use std::min instead of MIN

* ggml : better error message for set_rows unsupported type

* metal : perform op->type check only once

* tests : more consistent implementation + more tests

ggml-ci

---------

Co-authored-by: Georgi Gerganov <redacted>
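As a rough illustration of the semantics described above (not the actual ggml API, which operates on tensors and also supports broadcast and quantized destinations), ggml_set_rows(a, b, c) behaves like this pure-Python sketch:

```python
def set_rows(a, b, c):
    """Copy row i of b into row c[i] of a (sketch of ggml_set_rows semantics)."""
    for i, idx in enumerate(c):
        a[idx] = list(b[i])
    return a

a = [[0, 0], [0, 0], [0, 0]]
b = [[1, 2], [3, 4]]
c = [2, 0]          # row 0 of b goes to row 2 of a, row 1 of b to row 0
set_rows(a, b, c)   # a is now [[3, 4], [0, 0], [1, 2]]
```

The I64 index type mentioned in the changelog corresponds to the entries of `c` here.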
bandoti [Thu, 26 Jun 2025 16:46:53 +0000 (13:46 -0300)]
cmake: regen vulkan shaders when shaders-gen sources change (llama/14398)

* Add shaders-gen sources as target deps

Georgi Gerganov [Thu, 26 Jun 2025 12:51:19 +0000 (15:51 +0300)]
metal : add special-case mat-vec mul for ne00 == 4 (llama/14385)

ggml-ci

Georgi Gerganov [Thu, 26 Jun 2025 12:50:15 +0000 (15:50 +0300)]
metal : batch rows copy in a single threadgroup (llama/14384)

* metal : batch rows copy in a single threadgroup

ggml-ci

* metal : handle some edge cases when threadgroup size is not a power of 2

ggml-ci

R0CKSTAR [Thu, 26 Jun 2025 04:11:59 +0000 (12:11 +0800)]
musa: enable fp16 mma (all) and cublas on qy2 (llama/13842)

* musa: enable fp16 mma (all) and cublas on qy2

Signed-off-by: Xiaodong Ye <redacted>
* Update src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Aaron Teo [Wed, 25 Jun 2025 21:49:04 +0000 (05:49 +0800)]
ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)

* ggml-cpu: add nnpa compile flag

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 4a9f60c201573128f73a65999b3e5cc497fae5c1)

* ggml-cpu: add fp16->fp32 nnpa first

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8d4a7987f9c1887f716be96250f2caeee0253929)

* ggml-cpu: add fp32->fp16

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 0ff0d6516247a41d2ade42b42cf0d676a4dd1627)

* ggml-cpu: better variable names

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 2f58bbcbb89c183340e252362b2a40651f573f1f)

* docs: update s390x docs

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 01b929491b50071a5d0572235dcf5a449da70aa7)

* ggml-cpu: add debugging prints to see if dlf16 is correct

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix print vs printf

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix float placeholder

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: ensure fp16 and fp32 load and stores are called

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fp16 load ensured to hit

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove sigint from fp16 store

For some reason, the function is not getting a hit when debugged with
gdb; we will need to investigate further.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: nnpa switch to vec_xst test

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to vec_xst for 4 element loops also

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework noop

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove noop, general code cleanup

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarify variable naming

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add breakpoint for debugging

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: test fix for conversion failure

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: disable fp32->fp16 nnpa conversions for now

There are some conversion failures in NNPA that require the eyes of an
IBM STSM. Will create a separate PR to introduce the fp32->fp16 change.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to elif macro

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix compiler types

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: change to typedef vector types

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add 4 element loops for fp32->fp16

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarified vector naming

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back fp32->fp16 store nnpa

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add nnpa macro check in ggml-impl

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add missing __func__

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: diagnose why __NNPA__ macro is not being defined

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: import vecintrin.h to fix compiler errors

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: update macro tests

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 157f856c34589566151630e294563a420702db39.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to importing ggml-cpu-impl instead

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix macro declaration

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: test more macros

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add debug prints

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bruteforce macro definitions

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move macro definitions

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add ggml-impl.h to cmakelists

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to private macros

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 157f856c34589566151630e294563a420702db39)

* ggml-cpu: move things around

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back compile macros

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: switch to quotes for import

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add compiler error macro

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add s390x detection in ggml-src

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back compile definitions

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: undo cmakelists work

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 18d79e1a30b39d9aaa0bd58400c5cf2c32135c9a.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove typedefs.h

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove typedef from cmakelists

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add ggml-impl.h future notes

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add todo comment for future reference

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: clarify naming of dlf16

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove unnecessary target compile definitions

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings

Signed-off-by: Aaron Teo <redacted>
* ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <redacted>
* docs: update broken huggingface link for s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix duplicate func names during compile

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: fix duplicate func names during compile"

This reverts commit fbb733451f27677063b914d4f6c9a9841d45b38d.

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu"

This reverts commit bd288e8fa52b5244f65cee21cb61062f1a9e0ca5.

Signed-off-by: Aaron Teo <redacted>
* ggml: refactor fp16<->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix missing simd-mappings.h import in quants.c

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix missing simd-mappings.h within repack

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix amx mmq missing simd-mappings.h

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: attempt at fixing loongarch failing build

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move nnpa together with other fp16<->fp32 simd

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix wrong refactor of ggml-base

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555

Signed-off-by: Aaron Teo <redacted>
* ggml: remove dependency on ggml-cpu from ggml-base

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: remove mistaken fallback macro

fallback logic was already implemented but i was too sleepy to realise

Signed-off-by: Aaron Teo <redacted>
* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures"

This reverts commit 32a3533564bdb7902cefb9c89b1c9e956a81ce29.

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml: move ggml_table_f32_f16 to ggml-cpu"

This reverts commit 9e40d984ad27d7b60392fb2b7548885201864fe4.

Signed-off-by: Aaron Teo <redacted>
* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 9e40d984ad27d7b60392fb2b7548885201864fe4)

* ggml: move ggml_table_f32_f16 to ggml-cpu.c

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: extern c ggml_table_f32_f16 + chore docs

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h

we rely on the variable declaration in ggml-cpu.c instead

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h"

This reverts commit f71b21d2f74f5e03ec0c2b4fefd3cbf395aecf16.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: bring back ggml_table_f32_f16

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: bring back ggml_table_f32_f16"

This reverts commit 2dce119178bed5ef5c8398c4230ddd14fef80e49.

Signed-off-by: Aaron Teo <redacted>
* fix ggml time initialization

* fix f32_f16 table init

* remove extra line

---------

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: slaren <redacted>
Sigbjørn Skjæret [Wed, 25 Jun 2025 21:26:51 +0000 (23:26 +0200)]
ggml : do not output unprintable characters on GGUF load failure (llama/14381)

Anton Mitkov [Wed, 25 Jun 2025 16:09:55 +0000 (17:09 +0100)]
sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (llama/13973)

lhez [Tue, 24 Jun 2025 18:46:25 +0000 (11:46 -0700)]
opencl: ref count `ggml_backend_opencl_context` and refactor profiling (llama/14254)

* Move profiling info into `ggml_backend_opencl_context`
* Add `enqueue_ndrange_kernel` to launch kernel

uvos [Mon, 23 Jun 2025 23:12:56 +0000 (01:12 +0200)]
CUDA/HIP: optimize mmv paths taken for HIP devices (llama/14324)

Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Mon, 23 Jun 2025 11:11:31 +0000 (13:11 +0200)]
CUDA: mul_mat_v support for batch sizes > 1 (llama/14262)

* CUDA: mul_mat_v support for batch sizes > 1

* use 64 bit math for initial offset calculation
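The switch to 64-bit math above guards against integer overflow in the initial offset calculation. A hypothetical sketch of the failure mode: a row-times-stride product can exceed 32-bit range, in which case a 32-bit multiply silently wraps (the masking function below only simulates that wrap; Python integers themselves never overflow):

```python
def offset32(row, stride):
    # Simulate what a 32-bit unsigned multiply would produce.
    return (row * stride) & 0xFFFFFFFF

def offset64(row, stride):
    # Arbitrary-precision Python ints stand in for 64-bit math here.
    return row * stride

row, stride = 200_000, 40_000      # 8e9 elements, well beyond 32-bit range
print(offset64(row, stride))       # 8000000000
print(offset32(row, stride))       # wrapped, much smaller value
```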

uvos [Sun, 22 Jun 2025 14:51:23 +0000 (16:51 +0200)]
HIP: enable vec fattn on RDNA4 (llama/14323)

Aman Gupta [Sun, 22 Jun 2025 04:39:54 +0000 (12:39 +0800)]
CUDA: add mean operation (llama/14313)

* CUDA: add mean operation

* add back sum_rows_f32_cuda

* Review: early exit if col!=0

Markus Tavenrath [Sat, 21 Jun 2025 06:17:12 +0000 (08:17 +0200)]
Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (llama/13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. In step 1 compute pipelines are getting labeled.

* remove #ifdef for debug utils and add queue marker.

Georgi Gerganov [Sat, 21 Jun 2025 05:04:18 +0000 (08:04 +0300)]
metal : fix thread-safety (llama/14300)

ggml-ci

Acly [Tue, 1 Jul 2025 07:11:00 +0000 (09:11 +0200)]
ggml-cpu : "align corners" for bilinear upscale/downscale (#1285)

* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
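"Align corners" changes how destination pixels are mapped back to source coordinates during interpolation. A minimal sketch of the two mappings (illustrative only, not the actual ggml-cpu code):

```python
def src_coord(dst_i, src_size, dst_size, align_corners):
    """Map a destination index to a (possibly fractional) source coordinate."""
    if align_corners and dst_size > 1:
        # Endpoints map exactly onto endpoints.
        return dst_i * (src_size - 1) / (dst_size - 1)
    # Default mode: align half-pixel centers.
    return (dst_i + 0.5) * src_size / dst_size - 0.5

# Upscaling 4 -> 8: with align-corners the last output hits the last input.
print(src_coord(7, 4, 8, True))    # 3.0
print(src_coord(7, 4, 8, False))   # 3.25
```

The same formulas apply unchanged when dst_size < src_size, which is why downscaling falls out of the generalized op.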

Acly [Wed, 25 Jun 2025 10:16:22 +0000 (12:16 +0200)]
build : fix build with clang-cl on Windows (#1284)

* build : fix building tests with clang-cl on Windows

- clang-cl.exe (clang with MSVC CLI) doesn't like the space in /STACK option
- cl.exe (MSVC) works either way

* build : fix MSVC compiler warnings in test-roll.cpp

Daniel Bevenius [Tue, 24 Jun 2025 04:10:16 +0000 (06:10 +0200)]
ggml-quants : rename best_mad to best_error (#1283)

This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.

The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.
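To see why the more general name fits, note that the same accumulation loop can track either a MAD-style absolute error or a squared error; a hypothetical sketch (not the actual make_qkx2_quants code):

```python
def total_error(values, quantized, squared=False):
    """Sum reconstruction error: absolute deviations, or squared if requested."""
    err = 0.0
    for v, q in zip(values, quantized):
        d = abs(v - q)
        err += d * d if squared else d
    return err

vals = [1.0, 2.0, 3.0]
print(total_error(vals, [1.0, 2.0, 2.5]))                # 0.5
print(total_error(vals, [1.0, 2.0, 2.5], squared=True))  # 0.25
```

Calling the tracked minimum `best_error` stays accurate regardless of which error measure the search minimizes.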

Georgi Gerganov [Sat, 21 Jun 2025 06:21:28 +0000 (09:21 +0300)]
tests : cleanup old tests (#1282)

ggml-ci

Georgi Gerganov [Fri, 20 Jun 2025 18:04:04 +0000 (21:04 +0300)]
sync : llama.cpp

ggml-ci

Aman Gupta [Fri, 20 Jun 2025 14:48:24 +0000 (22:48 +0800)]
CUDA: add conv_2d_transpose (llama/14287)

* CUDA: add conv_2d_transpose

* remove direct include of cuda_fp16

* Review: add brackets for readability, remove ggml_set_param and add asserts

Nicolò Scipione [Fri, 20 Jun 2025 13:07:21 +0000 (15:07 +0200)]
sycl: add usage of enqueue_functions extension (llama/14244)

* Add header and namespace to use enqueue_functions extension

* Convert submit and parallel_for to use new extension in convert.cpp

* Convert submit and parallel_for to use extension in ggml-sycl.cpp

* Convert submit and parallel_for to use extension in gla.cpp

* Convert submit and parallel_for in mmq.cpp

* Convert submit and parallel_for in mmvq.cpp

* Convert submit and parallel_for in remaining files

* Convert all simple parallel_for to nd_launch from enqueue_functions
extension

* Wrapping extension in general function

Create a general function that enables the enqueue_functions extension if
it is enabled in the compiler; otherwise, call the general SYCL function
to launch kernels.

---------

Signed-off-by: nscipione <redacted>
Christian Kastner [Fri, 20 Jun 2025 12:17:32 +0000 (12:17 +0000)]
Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286)

* Add PowerPC feature detection and scoring

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for PowerPC

* ggml-cpu: Delay some initializations until function is called

When using GGML_BACKEND_DL=ON, these initializations might use
instructions that are not supported by the current CPU.

---------

Co-authored-by: Diego Devesa <redacted>
Diego Devesa [Fri, 20 Jun 2025 11:57:36 +0000 (04:57 -0700)]
cuda : synchronize graph capture and cublas handle destruction (llama/14288)

Works around an issue that may cause CUDA graph capture to fail when a cuBLAS handle is destroyed in a different thread

Georgi Gerganov [Fri, 20 Jun 2025 08:19:15 +0000 (11:19 +0300)]
ggml : fix repack work size for mul_mat_id (llama/14292)

ggml-ci

Charles Xu [Fri, 20 Jun 2025 07:51:01 +0000 (09:51 +0200)]
ggml: Update KleidiAI to v1.9.0 (llama/14277)

Aman Gupta [Fri, 20 Jun 2025 01:50:24 +0000 (09:50 +0800)]
CUDA: add conv_2d_dw (llama/14265)

* CUDA: add conv_2d_dw

* better naming

* simplify using template

* Review: fix operation ordering in ggml-cuda, use __forceinline__, use more const

Diego Devesa [Thu, 19 Jun 2025 19:24:14 +0000 (12:24 -0700)]
ggml-cpu : remove unnecessary arm feature detection (llama/14281)

Support for Arm runtime feature detection has now been added to GGML_CPU_ALL_VARIANTS. This removes the old and not very functional code.

fanyang [Thu, 19 Jun 2025 12:49:48 +0000 (20:49 +0800)]
build : suppress gcc15 compile warnings (llama/14261)

* Change _contains_any() substrs to std::string_view and fix the find comparison logic.
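The _contains_any() helper mentioned above reduces to an any-substring check; a sketch of the intended behavior (the real helper is C++ and takes std::string_view substrings):

```python
def contains_any(s, substrs):
    """Return True if any of the given substrings occurs in s."""
    return any(sub in s for sub in substrs)

print(contains_any("gcc version 15.1.0", ["gcc", "clang"]))  # True
print(contains_any("msvc 19.40", ["gcc", "clang"]))          # False
```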

Anton Mitkov [Thu, 19 Jun 2025 10:40:21 +0000 (11:40 +0100)]
sycl: Cleanup codepaths in Get Rows in sycl backend (llama/14215)

Addresses unused reorder path

Aaron Teo [Thu, 19 Jun 2025 09:48:54 +0000 (17:48 +0800)]
llamafile : support s390x SIMD instruction set (llama/14273)

0cc4m [Thu, 19 Jun 2025 07:15:42 +0000 (09:15 +0200)]
Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (llama/14249)

Georgi Gerganov [Thu, 19 Jun 2025 05:05:21 +0000 (08:05 +0300)]
metal : add mean kernel (llama/14267)

* metal : add mean kernel

ggml-ci

* cont : dedup implementation

ggml-ci

Aaron Teo [Wed, 18 Jun 2025 17:10:08 +0000 (01:10 +0800)]
ggml-cpu: reduce asm calls for hsum (llama/14037)

Signed-off-by: Aaron Teo <redacted>
Aaron Teo [Wed, 18 Jun 2025 17:06:49 +0000 (01:06 +0800)]
ggml-cpu: fix uncaught underscore terminators (llama/14023)

Signed-off-by: Aaron Teo <redacted>
Charles Xu [Wed, 18 Jun 2025 11:40:07 +0000 (13:40 +0200)]
ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (llama/14258)

Acly [Wed, 18 Jun 2025 11:34:50 +0000 (13:34 +0200)]
Add `ggml_roll` (#1274)

* ggml : add ggml_roll

* use set/get_op_params & std::min
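Roughly, ggml_roll rotates elements along a dimension, wrapping around at the edges. A 1-D sketch of the semantics (the real op works on tensors and stores the shift via set/get_op_params):

```python
def roll(xs, shift):
    """Rotate a list right by `shift` positions, wrapping around."""
    n = len(xs)
    if n == 0:
        return list(xs)
    s = shift % n
    return xs[-s:] + xs[:-s] if s else list(xs)

print(roll([1, 2, 3, 4, 5], 2))  # [4, 5, 1, 2, 3]
```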

Georgi Gerganov [Wed, 18 Jun 2025 09:41:12 +0000 (12:41 +0300)]
sync : whisper.cpp

Georgi Gerganov [Wed, 18 Jun 2025 07:00:11 +0000 (10:00 +0300)]
sync : llama.cpp

ggml-ci

bandoti [Tue, 17 Jun 2025 20:33:25 +0000 (17:33 -0300)]
cmake: remove shader-gen step-targets from ggml-vulkan (llama/14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

xctan [Tue, 17 Jun 2025 09:58:32 +0000 (17:58 +0800)]
ggml-cpu : remove the weak alias trick (llama/14221)

R0CKSTAR [Tue, 17 Jun 2025 09:48:08 +0000 (17:48 +0800)]
musa: fix build warning (unused variable) (llama/14231)

Signed-off-by: Xiaodong Ye <redacted>
Diego Devesa [Mon, 16 Jun 2025 15:11:43 +0000 (08:11 -0700)]
llama : add thread safety test (llama/14035)

* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

when main_gpu < 0 GPU devices are not used

---------

Co-authored-by: Georgi Gerganov <redacted>
bandoti [Mon, 16 Jun 2025 13:32:13 +0000 (10:32 -0300)]
cmake: clean up external project logic for vulkan-shaders-gen (llama/14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time

uvos [Mon, 16 Jun 2025 11:47:38 +0000 (13:47 +0200)]
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (llama/14202)

Charles Xu [Mon, 16 Jun 2025 09:47:57 +0000 (11:47 +0200)]
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (llama/14206)

Jeff Bolz [Mon, 16 Jun 2025 06:21:08 +0000 (00:21 -0600)]
vulkan: mutex around vkQueueSubmit (llama/14127)

This fixes the remaining crash in test-thread-safety on my system.

xctan [Mon, 16 Jun 2025 05:54:15 +0000 (13:54 +0800)]
ggml-cpu : rework weak alias on apple targets (llama/14146)

* ggml-cpu : rework weak alias on apple targets

* fix powerpc detection

* fix ppc detection

* fix powerpc detection on darwin

uvos [Sun, 15 Jun 2025 15:30:13 +0000 (17:30 +0200)]
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (llama/14196)

uvos [Sun, 15 Jun 2025 13:45:27 +0000 (15:45 +0200)]
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (llama/14183)

Anton Mitkov [Fri, 13 Jun 2025 07:51:39 +0000 (08:51 +0100)]
sycl: Adding additional cpy dbg print output (llama/14034)

Ewan Crawford [Fri, 13 Jun 2025 07:45:37 +0000 (08:45 +0100)]
SYCL: Bump oneMath commit (llama/14152)

Update oneMath commit to merged PR https://github.com/uxlfoundation/oneMath/pull/669
which adds SYCL-Graph support for recording CUDA BLAS commands.

With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.

```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2

UR CUDA ERROR:
        Value:           700
        Name:            CUDA_ERROR_ILLEGAL_ADDRESS
        Description:     an illegal memory access was encountered
        Function:        operator()
        Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154

Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
  in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```

Anton Mitkov [Thu, 12 Jun 2025 13:15:11 +0000 (14:15 +0100)]
sycl: Remove not needed copy f16->f32 for dnnl mul mat (llama/14125)

Georgi Gerganov [Thu, 12 Jun 2025 07:14:24 +0000 (10:14 +0300)]
cmake : handle whitespaces in path during metal build (llama/14126)

* cmake : handle whitespaces in path during metal build

ggml-ci

* cont : proper fix

ggml-ci

---------

Co-authored-by: Daniel Bevenius <redacted>
Christian Kastner [Wed, 11 Jun 2025 19:07:44 +0000 (19:07 +0000)]
Implement GGML_CPU_ALL_VARIANTS for ARM (llama/14080)

* ggml-cpu: Factor out feature detection build from x86

* ggml-cpu: Add ARM feature detection and scoring

This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on GGML_USE_<FEAT> which need to be set
in cmake, instead of GGML_<FEAT> that users would set for x86.

This is because on ARM, users specify features with GGML_CPU_ARM_ARCH,
rather than with individual flags.

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM

Like x86; however, to pass arch flags around within cmake, we use
GGML_INTERNAL_<FEAT>, as we don't have GGML_<FEAT>.

Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.

* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now

The other platforms will need their own specific variants.

This also fixes a bug where the variant-building branch was always
executed as the else-branch of GGML_NATIVE=OFF. The branch is
moved to an elseif-branch, which restores the previous behavior.
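The detect-and-score idea behind GGML_CPU_ALL_VARIANTS can be sketched as follows (names and scoring are hypothetical; the real implementation scores features in C++ when backends are loaded via GGML_BACKEND_DL):

```python
def pick_variant(variants, cpu_features):
    """Pick the usable backend variant that requires the most CPU features."""
    best, best_score = None, -1
    for name, required in variants.items():
        if not required <= cpu_features:
            continue  # this variant needs instructions the CPU lacks
        if len(required) > best_score:
            best, best_score = name, len(required)
    return best

variants = {
    "armv8.0_1": {"neon"},
    "armv8.2_1": {"neon", "dotprod"},
    "armv8.2_2": {"neon", "dotprod", "fp16"},
}
print(pick_variant(variants, {"neon", "dotprod"}))  # armv8.2_1
```

This is why multiple backends per arch version can coexist: the scoring function simply skips any variant the current CPU cannot run.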

Jeff Bolz [Wed, 11 Jun 2025 14:48:52 +0000 (09:48 -0500)]
vulkan: Better thread-safety for command pools/buffers (llama/14116)

This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.

Jeff Bolz [Wed, 11 Jun 2025 05:19:25 +0000 (00:19 -0500)]
vulkan: Track descriptor pools/sets per-context (llama/14109)

Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single
vector of pools, a single vector of sets, and one counter each to track
requests and use.

lhez [Tue, 10 Jun 2025 23:55:58 +0000 (16:55 -0700)]
opencl: add `mul_mv_id_q4_0_f32_8x_flat` (llama/14003)

0cc4m [Tue, 10 Jun 2025 12:01:33 +0000 (14:01 +0200)]
Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (llama/14099)

Isaac McFadyen [Tue, 10 Jun 2025 06:41:01 +0000 (02:41 -0400)]
rpc : nicer error messages for RPC server crash (llama/14076)

Daniel Bevenius [Fri, 13 Jun 2025 13:06:42 +0000 (15:06 +0200)]
ggml : disable warnings for tests when using MSVC (#1273)

* ggml : disable warnings for tests when using MSVC

This commit disables warnings for tests on windows when using MSVC.

The motivation for this is that it brings the build output more
in line with what Linux/MacOS systems produce.

There is still one warning generated for the tests which is:
```console
  Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line  warning D9025: overriding '/DNDEBUG' with '/UNDEBUG'
[C:\ggml\build\tests\test-arange.vcxproj]
  test-arange.cpp
  test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```

* ggml : fix typo in tests disable list

Daniel Bevenius [Fri, 13 Jun 2025 07:05:44 +0000 (09:05 +0200)]
ggml : remove unused ggml_context_container (#1272)

This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
Commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc
ggml_contexts on the heap (whisper/2525)").

The motivation for this change is to improve code clarity/readability.

Daniel Bevenius [Thu, 12 Jun 2025 13:57:58 +0000 (15:57 +0200)]
scripts : remove common.{cpp,h} from whisper sync scripts (#1271)

This commit removes the common.{cpp,h} files from the whisper sync
scripts.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3244#issuecomment-2966630744

Daniel Bevenius [Thu, 12 Jun 2025 10:27:09 +0000 (12:27 +0200)]
examples : include examples in msvc disable warn (#1270)

This commit adds the examples in the "list" of targets to ignore MSVC
warnings.

The motivation for this is that currently the examples generate a number
of warnings that are ignored/disabled for the core ggml project.
Suppressing them makes for cleaner output when building.

Daniel Bevenius [Wed, 11 Jun 2025 09:15:14 +0000 (11:15 +0200)]
mnist : use CMake to build mnist wasm example (#1269)

This commit updates the mnist example to use CMake for building the
WebAssembly (WASM) version of the MNIST example instead of the current
emcc command.

The motivation for this change is that using CMake should make the
example easier to maintain: changes in ggml should no longer cause this
example to break. The current emcc command is outdated and it was not
clear how to update it, which is why this change was made.

Resolves: https://github.com/ggml-org/ggml/issues/1264

3 months agosync : whisper.cpp
Georgi Gerganov [Tue, 10 Jun 2025 14:27:50 +0000 (17:27 +0300)]
sync : whisper.cpp

ggml-ci

3 months agoggml : fix weak alias win32 (whisper/0)
Georgi Gerganov [Tue, 10 Jun 2025 08:34:10 +0000 (11:34 +0300)]
ggml : fix weak alias win32 (whisper/0)

ggml-ci

3 months agofiles : remove old sources (part 2) (#1267)
Georgi Gerganov [Tue, 10 Jun 2025 08:04:47 +0000 (11:04 +0300)]
files : remove old sources (part 2) (#1267)

ggml-ci

3 months agofiles : remove old sources (#1266)
Georgi Gerganov [Tue, 10 Jun 2025 07:57:17 +0000 (10:57 +0300)]
files : remove old sources (#1266)

ggml-ci

3 months agosync : llama.cpp
Georgi Gerganov [Tue, 10 Jun 2025 06:22:40 +0000 (09:22 +0300)]
sync : llama.cpp

ggml-ci

3 months agometal : use less stack memory in FA kernel (llama/14088)
Georgi Gerganov [Mon, 9 Jun 2025 20:05:02 +0000 (23:05 +0300)]
metal : use less stack memory in FA kernel (llama/14088)

* metal : use less stack memory in FA kernel

ggml-ci

* cont : fix BF16 variant

3 months agoggml-cpu : split arch-specific implementations (llama/13892)
xctan [Mon, 9 Jun 2025 14:47:13 +0000 (22:47 +0800)]
ggml-cpu : split arch-specific implementations (llama/13892)

* move ggml-cpu-aarch64 to repack

* split quantize_row_q8_0/1

* split helper functions

* split ggml_vec_dot_q4_0_q8_0

* split ggml_vec_dot_q4_1_q8_1

* split ggml_vec_dot_q5_0_q8_0

* split ggml_vec_dot_q5_1_q8_1

* split ggml_vec_dot_q8_0_q8_0

* split ggml_vec_dot_tq1_0_q8_K

* split ggml_vec_dot_tq2_0_q8_K

* split ggml_vec_dot_q2_K_q8_K

* split ggml_vec_dot_q3_K_q8_K

* split ggml_vec_dot_q4_K_q8_K

* split ggml_vec_dot_q5_K_q8_K

* split ggml_vec_dot_q6_K_q8_K

* split ggml_vec_dot_iq2_xxs_q8_K

* split ggml_vec_dot_iq2_xs_q8_K

* split ggml_vec_dot_iq2_s_q8_K

* split ggml_vec_dot_iq3_xxs_q8_K

* split ggml_vec_dot_iq3_s_q8_K

* split ggml_vec_dot_iq1_s_q8_K

* split ggml_vec_dot_iq1_m_q8_K

* split ggml_vec_dot_iq4_nl_q8_0

* split ggml_vec_dot_iq4_xs_q8_K

* fix typos

* fix missing prototypes

* rename ggml-cpu-quants.c

* rename ggml-cpu-traits

* rename arm folder

* move cpu-feats-x86.cpp

* rename ggml-cpu-hbm

* update arm detection macro in quants.c

* move iq quant tables

* split ggml_quantize_mat_q8_0/K

* split ggml_gemv_*

* split ggml_gemm_*

* rename namespace aarch64 to repack

* use weak aliases to replace test macros

* rename GGML_CPU_AARCH64 to GGML_CPU_REPACK

* rename more aarch64 to repack

* clean up rebase leftover

* fix compilation errors

* remove trailing spaces

* try to fix clang compilation errors

* try to fix clang compilation errors again

* try to fix clang compilation errors, 3rd attempt

* try to fix clang compilation errors, 4th attempt

* try to fix clang compilation errors, 5th attempt

* try to fix clang compilation errors, 6th attempt

* try to fix clang compilation errors, 7th attempt

* try to fix clang compilation errors, 8th attempt

* try to fix clang compilation errors, 9th attempt

* more cleanup

* fix compilation errors

* fix apple targets

* fix a typo in arm version of ggml_vec_dot_q4_K_q8_K

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
3 months agocuda : fix device sync on buffer clear (llama/14033)
Diego Devesa [Mon, 9 Jun 2025 14:36:26 +0000 (07:36 -0700)]
cuda : fix device sync on buffer clear (llama/14033)

3 months agoCANN: Simplify the environment variable setting (#13104)
Xinpeng Dou [Mon, 9 Jun 2025 11:47:39 +0000 (19:47 +0800)]
CANN: Simplify the environment variable setting (#13104)

* Simplify the environment variable setting to specify the memory pool type.

* Adjust the GGML_CANN_ASYNC_MODE setting to accept yes, enable, 1, or on (case-insensitive) as valid options.

* update

* fix CI

* update

* delete whitespace

* fix according to review

* update CANN.md

* update CANN.md

3 months agosycl: Add reorder to Q6_K mmvq implementation (llama/13885)
Nicolò Scipione [Mon, 9 Jun 2025 09:47:07 +0000 (11:47 +0200)]
sycl: Add reorder to Q6_K mmvq implementation (llama/13885)

* Add Reorder to Q6_K mmvq implementation

* Address PR comments: clean up comments

* Remove unused parameter after refactoring q4_k

* Adding inline to function and removing unnecessary reference to int

---------

Signed-off-by: nscipione <redacted>
3 months agocuda : fix buffer type check with integrated GPUs (llama/14069)
Diego Devesa [Sun, 8 Jun 2025 18:39:56 +0000 (11:39 -0700)]
cuda : fix buffer type check with integrated GPUs (llama/14069)

3 months agoSYCL: Implement few same quantized type copy kernels (llama/13739)
Akarshan Biswas [Sat, 7 Jun 2025 13:28:20 +0000 (18:58 +0530)]
SYCL: Implement few same quantized type copy kernels (llama/13739)

* SYCL: Implement few same quantized type copy kernels

* Use memcpy for copying contiguous tensors

ggml-ci

* feat(sycl): add contiguous tensor copy support and device checks

Adds a memcpy path for contiguous tensors of the same type to optimize data transfer. Updates device support checks to recognize contiguous tensor operations, improving compatibility and performance.

* refactor: replace specific block copy functions with template

The changes replace multiple redundant block copy functions (e.g., cpy_block_q8_0_q8_0, cpy_block_q5_0_q5_0) with a single templated function cpy_blck_q_q. This reduces code duplication by using a generic template that works for any block type, improving maintainability while preserving the same functionality. The template is instantiated with specific block types (e.g., block_q8_0) where needed.

* Exclude BF16 support for COPY tensors for now
ggml-ci

* perf: adjust SYCL copy kernel block sizes for efficiency

Use ceil_div to ensure full element coverage and update nd_range parameters to better align with SYCL block sizes, improving parallelism and device utilization in copy operations.
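The ceil_div mentioned above is the usual round-up integer division used to size kernel launches; a minimal sketch of the idea in Python (illustrative only, not the actual SYCL code):

```python
def ceil_div(n: int, block: int) -> int:
    # Round up so that n elements are fully covered by blocks of size
    # `block`; the last block may be only partially filled.
    return (n + block - 1) // block

# Launching ceil_div(n, block) work-groups guarantees every element is
# covered, e.g. 10 elements with a block size of 4 need 3 groups.
```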

3 months agovulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (llama/14001)
Masato Nakasaka [Thu, 5 Jun 2025 14:00:29 +0000 (23:00 +0900)]
vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (llama/14001)

* allowing B580 and U9-288V

* experimenting code to detect Xe2

* allowing coopmat only for Xe2 GPUs

* fixed comment wording

* fixed comment wording

* removed unnecessary driver check

3 months agollama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama...
Diego Devesa [Thu, 5 Jun 2025 09:57:42 +0000 (02:57 -0700)]
llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (llama/14013)

3 months agovulkan: automatically deduce size of push constants (llama/13936)
Jeff Bolz [Thu, 5 Jun 2025 05:17:58 +0000 (00:17 -0500)]
vulkan: automatically deduce size of push constants (llama/13936)

3 months agoggml-vulkan: adds support for op CONV_TRANSPOSE_1D (llama/13813)
Ervin Áron Tasnádi [Wed, 4 Jun 2025 20:02:00 +0000 (22:02 +0200)]
ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (llama/13813)

* ggml-vulkan: adds op CONV_TRANSPOSE_1D

* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D

* Missing barrier added to shader.
Number of additional tests reduced to 108.

* Fixes typo in variable name.

* Removes extra whitespaces.

* Adds int64->int32 casts to prevent possible warnings.

* Problem size reduced in tests to pass tests with llvmpipe.

* supports_op condition moved from unintended position

3 months agoreleases : use dl backend for linux release, remove arm64 linux release (llama/13996)
Diego Devesa [Wed, 4 Jun 2025 11:15:54 +0000 (04:15 -0700)]
releases : use dl backend for linux release, remove arm64 linux release (llama/13996)

3 months agoCUDA: fix FTZ in FA for Gemma 3 (llama/13991)
Johannes Gäßler [Wed, 4 Jun 2025 06:57:05 +0000 (08:57 +0200)]
CUDA: fix FTZ in FA for Gemma 3 (llama/13991)

3 months agovulkan: fix warnings in perf logger querypool code (llama/13937)
Jeff Bolz [Tue, 3 Jun 2025 18:30:22 +0000 (13:30 -0500)]
vulkan: fix warnings in perf logger querypool code (llama/13937)

3 months agoopencl: add `backend_synchronize` (llama/13939)
lhez [Mon, 2 Jun 2025 23:54:58 +0000 (16:54 -0700)]
opencl: add `backend_synchronize` (llama/13939)

* This is not needed in normal use, where the result is read
  using `tensor_get`, but it allows the perf mode of `test-backend-ops`
  to properly measure performance.

3 months agoOpenCL: Add concat, tsembd, upscale, tanh, pad and repeat (llama/13840)
rmatif [Mon, 2 Jun 2025 23:53:36 +0000 (23:53 +0000)]
OpenCL: Add concat, tsembd, upscale, tanh, pad and repeat (llama/13840)

* add concat, pad, repeat, tsembd, tanh, upscale

* small fixes

3 months agometal : use F32 accumulators in FA kernels (llama/13975)
Georgi Gerganov [Mon, 2 Jun 2025 18:33:40 +0000 (21:33 +0300)]
metal : use F32 accumulators in FA kernels (llama/13975)

ggml-ci

3 months agocmake : Handle mixed-case 'Power' strings in POWER CPU detection (llama/13966)
shalinib-ibm [Mon, 2 Jun 2025 12:18:36 +0000 (17:48 +0530)]
cmake : Handle mixed-case 'Power' strings in POWER CPU detection (llama/13966)

Some systems report the CPU implementation as "Power11" instead of "POWER11".
The existing CMake logic uses a case-sensitive regular expression to extract
the CPU generation, which fails when the casing doesn't exactly match "POWER".

This patch provides a fix by first converting the string to uppercase before applying the regex.

Signed-off-by: root <redacted>
Co-authored-by: root <redacted>
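The uppercase-before-regex approach described above can be sketched in Python (illustrative only; the actual fix lives in the CMake detection logic):

```python
import re

def power_generation(cpu_impl: str):
    # Normalize case first so "Power11", "POWER11" and "power11" all
    # match; a case-sensitive regex on the raw string would miss the
    # mixed-case variants some systems report.
    m = re.match(r"POWER(\d+)", cpu_impl.upper())
    return int(m.group(1)) if m else None
```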
3 months agosycl: quantize and reorder the input to q8_1 when reorder is enabled (llama/13826)
Atharva Dubey [Mon, 2 Jun 2025 09:12:20 +0000 (10:12 +0100)]
sycl: quantize and reorder the input to q8_1 when reorder is enabled (llama/13826)

* [WIP]: fuse q8 quantization and reorder

* wip2: fuse q8 quantization and reorder

* working q8 reorder commit

* restored common.hpp

* remove debug prints

* remove unnecessary headers and remove trailing whitespace

* Update src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Alberto Cabrera Pérez <redacted>
---------

Co-authored-by: Alberto Cabrera Pérez <redacted>
3 months agogguf: fix failure on version == 0 (llama/13956)
Johannes Gäßler [Sun, 1 Jun 2025 16:08:05 +0000 (18:08 +0200)]
gguf: fix failure on version == 0 (llama/13956)

3 months agoggml: check if non-native endian model is being loaded (llama/13943)
Aaron Teo [Sun, 1 Jun 2025 14:53:57 +0000 (22:53 +0800)]
ggml: check if non-native endian model is being loaded (llama/13943)

* gguf: prevent non-native endian models from being loaded

Signed-off-by: Aaron Teo <redacted>
* gguf: update error message

Signed-off-by: Aaron Teo <redacted>
* gguf: make the non-native endian check more verbose

Signed-off-by: Aaron Teo <redacted>
* ggml: move ggml_assert location

Signed-off-by: Aaron Teo <redacted>
* ggml: reword the endianness check error message

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
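One way such an endianness check can work (an illustrative Python sketch, not the actual ggml code): a GGUF file begins with the magic `GGUF` followed by a 32-bit version, and on a byte-swapped file the version decodes to an implausibly large value. The 0xFFFF threshold below is a hypothetical choice for the sketch:

```python
import struct

def gguf_version_looks_native(header: bytes) -> bool:
    # GGUF headers begin with the magic b"GGUF" followed by a 32-bit
    # version (little-endian on typical hosts). If the file was written
    # on a machine with the opposite endianness, the version decodes to
    # a huge number, which is one way to detect a non-native file.
    if len(header) < 8:
        return False
    magic, version = struct.unpack("<4sI", header[:8])
    return magic == b"GGUF" and 0 < version < 0xFFFF
```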
3 months agosam : add default case for unsupported prompt types (#1263)
Daniel Bevenius [Fri, 6 Jun 2025 16:58:10 +0000 (18:58 +0200)]
sam : add default case for unsupported prompt types (#1263)

This commit adds a default case in the `embed_prompt_sparse` lambda
function.

The motivation for this change is that currently the following warning
is generated when compiling:
```console
/ggml/examples/sam/sam.cpp: In lambda function:
/ggml/examples/sam/sam.cpp:1499:5: warning:
control reaches end of non-void function [-Wreturn-type]
 1499 |     }();
      |     ^
```

4 months agoAdd in-build ggml::ggml ALIAS library (#1260)
Kai Pastor [Tue, 3 Jun 2025 10:33:28 +0000 (12:33 +0200)]
Add in-build ggml::ggml ALIAS library (#1260)

Enable uniform linking with subproject and with find_package.

4 months agoyolo : add support for dl backends (#1251)
Radoslav Gerganov [Mon, 2 Jun 2025 05:50:57 +0000 (08:50 +0300)]
yolo : add support for dl backends (#1251)

4 months agosync : whisper.cpp
Georgi Gerganov [Sun, 1 Jun 2025 12:15:11 +0000 (15:15 +0300)]
sync : whisper.cpp

4 months agosync : llama.cpp
Georgi Gerganov [Sun, 1 Jun 2025 10:44:37 +0000 (13:44 +0300)]
sync : llama.cpp

ggml-ci

4 months agothreading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid...
Max Krasnyansky [Sat, 31 May 2025 22:39:19 +0000 (15:39 -0700)]
threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling (llama/12995)

* threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling

We talked about adding LOW priority for GGML threads in the original threadpool PR.
It might be useful for some cases to avoid contention.

Latest Windows ARM64 releases started parking (offlining) the CPU cores
more aggressively, which results in suboptimal performance with n_threads > 4.
To deal with that we now disable Power Throttling for our threads for the NORMAL
and higher priorities.

Co-authored-by: Diego Devesa <redacted>
* threading: disable SetThreadInfo() calls for older Windows versions

* Update tools/llama-bench/llama-bench.cpp

Co-authored-by: Diego Devesa <redacted>
---------

Co-authored-by: Diego Devesa <redacted>
4 months agoCUDA: add a prop in ggml_cuda_device_info to distinguish iGPU or dGPU in cuda ...
Shawn yang [Sat, 31 May 2025 06:48:04 +0000 (14:48 +0800)]
CUDA: add a prop in ggml_cuda_device_info to distinguish iGPU or dGPU in cuda (#13856) (llama/13895)

* 1. Add "integrated" to ggml_cuda_device_info to distinguish whether a device is an integrated GPU or a discrete GPU
2. Adjust the function "ggml_backend_cuda_device_supports_buft" for this new feature

* Update src/ggml-cuda/ggml-cuda.cu

Adjusted code indentation

Co-authored-by: Johannes Gäßler <redacted>
* Update src/ggml-cuda/ggml-cuda.cu

Fixed incorrect setting of variable types

Co-authored-by: Johannes Gäßler <redacted>
* Update src/ggml-cuda/ggml-cuda.cu

Adjusted the judgment logic

Co-authored-by: Johannes Gäßler <redacted>
* add a host_buft assert for the integrated CUDA device case in evaluate_and_capture_cuda_graph()

* Update src/ggml-cuda/ggml-cuda.cu

Add a defensive security assert

Co-authored-by: Johannes Gäßler <redacted>
* Update src/ggml-cuda/ggml-cuda.cu

Adjusted the support judgment logic.

Co-authored-by: Johannes Gäßler <redacted>
* revert the suggested changes since they are not applicable on Jetson devices

* Update src/ggml-cuda/ggml-cuda.cu

Add parentheses to enforce operator precedence

Co-authored-by: Diego Devesa <redacted>
* Update src/ggml-cuda/ggml-cuda.cu

Fix CI bug: add a space

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: yangxiao <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: yangxiao <redacted>
Co-authored-by: Diego Devesa <redacted>
4 months agoCUDA: fix typo in FlashAttention code (llama/13926)
Johannes Gäßler [Fri, 30 May 2025 19:22:03 +0000 (21:22 +0200)]
CUDA: fix typo in FlashAttention code (llama/13926)