git.djapps.eu Git - pkg/ggml/sources/ggml/log
4 months ago  cmake: add ggml find package (llama/11369)
bandoti [Sun, 26 Jan 2025 16:07:48 +0000 (12:07 -0400)]
cmake: add ggml find package (llama/11369)

* Add initial ggml cmake package

* Add build numbers to ggml find-package

* Expand variables with GGML_ prefix

* Guard against adding to cache variable twice

* Add git to msys2 workflow

* Handle ggml-cpu-* variants

* Link ggml/ggml-base libraries to their targets

* Replace main-cmake-pkg with simple-cmake-pkg

* Interface features require c_std_90

* Fix typo

* Removed unnecessary bracket from status message

* Update examples/simple-cmake-pkg/README.md

Co-authored-by: Georgi Gerganov <redacted>
* Update examples/simple-cmake-pkg/README.md

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months ago  vulkan: compile shaders on-demand (llama/11406)
Jeff Bolz [Sat, 25 Jan 2025 21:29:57 +0000 (15:29 -0600)]
vulkan: compile shaders on-demand (llama/11406)

Reduce first-run startup time and memory consumption.

Should fix #11339.

4 months ago  Hip: disable VMM on hip as it seems that it doesn't work in some configurations (llama/11420)
uvos [Sat, 25 Jan 2025 20:01:12 +0000 (21:01 +0100)]
Hip: disable VMM on hip as it seems that it doesn't work in some configurations (llama/11420)

4 months ago  hip : Add hipGraph and VMM support to ROCM (llama/11362)
uvos [Fri, 24 Jan 2025 23:02:23 +0000 (00:02 +0100)]
hip : Add hipGraph and VMM support to ROCM (llama/11362)

* Add hipGraph support

* Enable VMM on rocm

4 months ago  CUDA: fix FP16 cuBLAS GEMM (llama/11396)
Johannes Gäßler [Fri, 24 Jan 2025 20:02:43 +0000 (21:02 +0100)]
CUDA: fix FP16 cuBLAS GEMM (llama/11396)

4 months ago  rocBLAS: Avoid fp32->fp16->fp32 conversion on cdna (llama/11356)
uvos [Fri, 24 Jan 2025 16:50:49 +0000 (17:50 +0100)]
rocBLAS: Avoid fp32->fp16->fp32 conversion on cdna (llama/11356)

4 months ago  CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)
Johannes Gäßler [Fri, 24 Jan 2025 11:38:31 +0000 (12:38 +0100)]
CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)

4 months ago  cmake : avoid -march=native when reproducible build is wanted (llama/11366)
Bernhard M. Wiedemann [Fri, 24 Jan 2025 11:21:35 +0000 (12:21 +0100)]
cmake : avoid -march=native when reproducible build is wanted (llama/11366)

See https://reproducible-builds.org/ for why this is good
and https://reproducible-builds.org/specs/source-date-epoch/
for the definition of the SOURCE_DATE_EPOCH variable.

Without this patch, compiling on different machines produced different binaries, which made verification of results difficult.

Fixes: #11317
This patch was done while working on reproducible builds for openSUSE.

4 months ago  tests: fix some mul_mat test gaps (llama/11375)
Jeff Bolz [Thu, 23 Jan 2025 20:51:24 +0000 (14:51 -0600)]
tests: fix some mul_mat test gaps (llama/11375)

Now that we have batched mat-vec mul Vulkan shaders for up to n==8,
these tests weren't actually exercising the mat-mat mul path. Test
n==9 as well. Also, change to use all_types.

4 months ago  Vulkan-run-test: fix mmq_wg_denoms (llama/11343)
amd-dwang [Thu, 23 Jan 2025 07:14:28 +0000 (15:14 +0800)]
Vulkan-run-test: fix mmq_wg_denoms (llama/11343)

This appears to be a copy-and-paste error.

*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.

4 months ago  vulkan: sort shaders for more deterministic binary (llama/11315)
Jeff Bolz [Thu, 23 Jan 2025 07:07:50 +0000 (01:07 -0600)]
vulkan: sort shaders for more deterministic binary (llama/11315)

Fixes #11306.

4 months ago  vulkan: fix diag_mask_inf (llama/11323)
Jeff Bolz [Thu, 23 Jan 2025 07:01:17 +0000 (01:01 -0600)]
vulkan: fix diag_mask_inf (llama/11323)

With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs.
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.

4 months ago  rpc : better caching of the base buffer pointer (llama/11331)
Radoslav Gerganov [Tue, 21 Jan 2025 13:06:41 +0000 (15:06 +0200)]
rpc : better caching of the base buffer pointer (llama/11331)

There is no need to use a map; just store the base pointer in the buffer
context.
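
A sketch of the design choice (field and struct names here are hypothetical; the real code lives in ggml-rpc.cpp): the base pointer becomes a cached member of the per-buffer context instead of an entry in a separate map.

```c
#include <stdint.h>

// Hypothetical sketch: keep the remote base pointer in the buffer context
// itself, instead of maintaining a separate map keyed by the buffer.
struct rpc_buffer_context_sketch {
    int      sock_fd;     // connection to the RPC server
    uint64_t remote_ptr;  // handle of the buffer on the server side
    void *   base_ptr;    // cached base pointer, filled on first query
};
```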

4 months ago  metal : fix out-of-bounds write (llama/11314)
Georgi Gerganov [Tue, 21 Jan 2025 06:48:13 +0000 (08:48 +0200)]
metal : fix out-of-bounds write (llama/11314)

ggml-ci

4 months ago  vulkan: fix coopmat2 validation failures (llama/11284)
Jeff Bolz [Mon, 20 Jan 2025 16:38:32 +0000 (10:38 -0600)]
vulkan: fix coopmat2 validation failures (llama/11284)

mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert; this
is not in the inner loop and is cheap enough. For mul mat, it's more
efficient to do this conversion in a separate pass and have the input(s)
be f16.

coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

4 months ago  SYCL: Introducing memory host pool (llama/11251)
Nicolò Scipione [Sun, 19 Jan 2025 13:33:34 +0000 (14:33 +0100)]
SYCL: Introducing memory host pool (llama/11251)

* Implement host pool for matrix_info

Create a new memory pool on the host to store the memory locations of
matrix_info needed to launch gemm_batch from oneMKL/oneMath.
Remove complex support in gemm_batch since it is not used in llama.cpp.

* Remove unnecessary headers and cast

* Reorder member variable to avoid warning on initialization

* Formatting

* Remove unused variable

* Address PR review feedback - remove warning

---------

Signed-off-by: nscipione <redacted>
4 months ago  cmake : add sanitizer flags for llama.cpp (llama/11279)
Georgi Gerganov [Sat, 18 Jan 2025 14:18:15 +0000 (16:18 +0200)]
cmake : add sanitizer flags for llama.cpp (llama/11279)

* cmake : add sanitizer flags for llama.cpp

ggml-ci

* tests : fix compile warnings

ggml-ci

* cmake : move sanitizer flags to llama_add_compile_flags

ggml-ci

* cmake : move llama.cpp compile flags to top level lists

ggml-ci

* cmake : apply only sanitizer flags at top level

ggml-ci

* tests : fix gguf context use in same_tensor_data

* gguf-test: tensor data comparison

* dummy : trigger ggml-ci

* unicode : silence gcc warnings

ggml-ci

* ci : use sanitizer builds only in Debug mode

ggml-ci

* cmake : add status messages [no ci]

---------

Co-authored-by: Johannes Gäßler <redacted>
4 months ago  vulkan: fix coopmat2 flash attention for non-contiguous inputs (llama/11281)
Jeff Bolz [Sat, 18 Jan 2025 08:26:50 +0000 (02:26 -0600)]
vulkan: fix coopmat2 flash attention for non-contiguous inputs (llama/11281)

Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.

Add noncontiguous FA tests in test-backend-ops.

Fixes #11268.

4 months ago  rpc : early register backend devices (llama/11262)
Radoslav Gerganov [Fri, 17 Jan 2025 08:57:09 +0000 (10:57 +0200)]
rpc : early register backend devices (llama/11262)

Register RPC devices early and do not propagate RPC specifics into the
llama model structures.

ref: #10609

4 months ago  vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (llama/11166)
Jeff Bolz [Thu, 16 Jan 2025 21:47:10 +0000 (15:47 -0600)]
vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (llama/11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination

4 months ago  vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (llama/11206)
Jeff Bolz [Thu, 16 Jan 2025 21:23:49 +0000 (15:23 -0600)]
vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (llama/11206)

Do masking on whole dwords, fetch all scales at once.

4 months ago  vulkan: optimize coopmat2 q2_k dequant function (llama/11130)
Jeff Bolz [Thu, 16 Jan 2025 21:16:39 +0000 (15:16 -0600)]
vulkan: optimize coopmat2 q2_k dequant function (llama/11130)

4 months ago  CUDA: backwards pass for misc. ops, add tests (llama/11257)
Johannes Gäßler [Thu, 16 Jan 2025 15:43:38 +0000 (16:43 +0100)]
CUDA: backwards pass for misc. ops, add tests (llama/11257)

* CUDA: backwards pass for misc. ops, add tests

* remove restrict from pointers

4 months ago  ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (llama/11227)
fj-y-saito [Thu, 16 Jan 2025 09:11:49 +0000 (18:11 +0900)]
ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (llama/11227)

* Add SVE support for q4_K_q8_K

* Update ggml/src/ggml-cpu/ggml-cpu-quants.c

change to use K_SCALE_SIZE

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months ago  vulkan: scale caching for k quants + misc fixes (llama/11081)
Eve [Wed, 15 Jan 2025 19:50:13 +0000 (19:50 +0000)]
vulkan: scale caching for k quants + misc fixes (llama/11081)

* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers

4 months ago  fix: ggml: fix vulkan-shaders-gen build (llama/10448)
Junil Kim [Wed, 15 Jan 2025 13:17:42 +0000 (22:17 +0900)]
fix: ggml: fix vulkan-shaders-gen build (llama/10448)

* fix: ggml: fix vulkan-shaders-gen build

The vulkan-shaders-gen target was not being built correctly
in the case of cross-compilation.
Other outputs need to be built for the cross-compile target,
but vulkan-shaders-gen needs to be built for the host.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

Use configure_file to generate host_toolchain.cmake from template

* fix: ggml: Fix compile error

Fix compile error not finding vulkan-shaders-gen

* fix: vulkan-shaders-gen build and path handling

Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation

* fix: improve host compiler detection in vulkan shader build

Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation

* refactor: Simplify CMake function for detecting host compiler

Simplified the CMake function to improve the process of detecting the host compiler.

* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt

Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)

* refactor: Rename host_toolchain.cmake.in

- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in

* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

4 months ago  RoPE: fix back, CUDA support for back + noncont. (llama/11240)
Johannes Gäßler [Wed, 15 Jan 2025 11:51:37 +0000 (12:51 +0100)]
RoPE: fix back, CUDA support for back + noncont. (llama/11240)

* RoPE: fix back, CUDA support for back + noncont.

* fix comments reg. non-cont. RoPE support [no-ci]

4 months ago  SYCL: Add gated linear attention kernel (llama/11175)
Akarshan Biswas [Wed, 15 Jan 2025 03:20:17 +0000 (08:50 +0530)]
SYCL: Add gated linear attention kernel (llama/11175)

* SYCL: Add Gated Linear attention kernel

* glahpp: add a space at the end of file

* gla: Put the barrier inside the main logic loop

5 months ago  cmake : fix build tests on arm (#1084)
Andrii Ryzhkov [Sat, 25 Jan 2025 13:13:00 +0000 (14:13 +0100)]
cmake : fix build tests on arm (#1084)

5 months ago  ggml : add option to not print stack on abort (#1081)
William Tambellini [Thu, 23 Jan 2025 19:59:08 +0000 (11:59 -0800)]
ggml : add option to not print stack on abort (#1081)

* Add option to not print stack on abort

Add option/envvar to disable stack printing on abort.
Also link some unittests with Threads to fix link errors on
ubuntu/g++11.

* Update src/ggml.c

---------

Co-authored-by: Diego Devesa <redacted>
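
A minimal sketch of the opt-out pattern this adds (the environment variable name GGML_NO_BACKTRACE is an assumption here, not confirmed by this log; verify in src/ggml.c):

```c
#include <stdio.h>
#include <stdlib.h>

// Sketch: check an opt-out environment variable before printing a stack
// trace on abort (GGML_NO_BACKTRACE is assumed, not confirmed by this log).
static void abort_with_optional_backtrace(const char * msg) {
    fprintf(stderr, "fatal: %s\n", msg);
    if (getenv("GGML_NO_BACKTRACE") == NULL) {
        // ... collect and print the stack trace here ...
    }
    abort();
}

int main(void) {
    abort_with_optional_backtrace("example failure");
}
```
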
5 months ago  ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (#1065)
issixx [Fri, 17 Jan 2025 12:29:08 +0000 (21:29 +0900)]
ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (#1065)

Some threads kept looping and failed to terminate properly after an abort during CPU execution.

Co-authored-by: issi <redacted>
5 months ago  sync : whisper.cpp (tag: upstream/0.0.1642)
Georgi Gerganov [Tue, 14 Jan 2025 08:39:25 +0000 (10:39 +0200)]
sync : whisper.cpp

5 months ago  scripts : sync gguf (cont)
Georgi Gerganov [Tue, 14 Jan 2025 07:41:40 +0000 (09:41 +0200)]
scripts : sync gguf (cont)

5 months ago  GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030)
Johannes Gäßler [Tue, 14 Jan 2025 07:31:07 +0000 (09:31 +0200)]
GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030)

ggml-ci

5 months ago  scripts : sync gguf
Georgi Gerganov [Tue, 14 Jan 2025 07:29:27 +0000 (09:29 +0200)]
scripts : sync gguf

5 months ago  ggml : add opencl backend (skip) (llama/10693)
lhez [Tue, 14 Jan 2025 07:24:03 +0000 (09:24 +0200)]
ggml : add opencl backend (skip) (llama/10693)

---------

Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>
5 months ago  scripts : sync opencl
Georgi Gerganov [Tue, 14 Jan 2025 07:20:27 +0000 (09:20 +0200)]
scripts : sync opencl

5 months ago  sync : llama.cpp
Georgi Gerganov [Tue, 14 Jan 2025 07:17:22 +0000 (09:17 +0200)]
sync : llama.cpp

ggml-ci

5 months ago  cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (llama/11042)
Andreas Kieslinger [Mon, 13 Jan 2025 15:45:53 +0000 (16:45 +0100)]
cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (llama/11042)

* Refactor: Moves cuda graph executable update step to separate function.

* Refactor: Moves cuda graph update check to separate function.

* Refactor: Moves cuda graph maintenance (update or adjusting copy parameters) to separate function for improved readability.

* Fix: Adds missing reference to maintain_cuda_graph() definition.

* Refactor: Improves structure and abstractions by moving CUDA graph evaluation and capture to its own function.

* Refactor: Moves node graph checks and copy ops into individual function for improved readability.

* Refactor: Removes code permanently excluded from compilation to increase readability.

* Style: Adds missing newline

* Style: Consolidates several neighboring '#ifdef USE_CUDA_GRAPH' into a single one

* Refactor: Makes 'cuda_graph_update_required' a local variable

* remove double lines between functions

---------

Co-authored-by: slaren <redacted>
5 months ago  ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (llama/11211)
Radoslav Gerganov [Mon, 13 Jan 2025 11:31:41 +0000 (13:31 +0200)]
ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (llama/11211)

Build fails when using HIP and GGML_BACKEND_DL:
```
/usr/bin/ld: ../ggml/src/libggml.so: undefined reference to `ggml_backend_cuda_reg'
collect2: error: ld returned 1 exit status
```
This patch fixes this.
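
Conceptually (a hedged C sketch; the actual fix is at the CMake level, and the loose extern declaration below is for illustration only): libggml must not reference the CUDA registration symbol unless the backend is linked statically.

```c
#include <stdio.h>

// Sketch: reference ggml_backend_cuda_reg only when CUDA is compiled in
// statically; with GGML_BACKEND_DL the backend is a runtime-loaded module
// and the symbol is not present at link time (hence the error above).
// The declaration is deliberately loose for the sketch.
#if defined(GGML_USE_CUDA) && !defined(GGML_BACKEND_DL)
extern void * ggml_backend_cuda_reg(void);
#endif

int main(void) {
#if defined(GGML_USE_CUDA) && !defined(GGML_BACKEND_DL)
    printf("CUDA backend registered statically: %p\n", ggml_backend_cuda_reg());
#else
    printf("CUDA backend, if built, will be loaded dynamically\n");
#endif
    return 0;
}
```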

5 months ago  Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (llama/11161)
0cc4m [Fri, 10 Jan 2025 05:39:33 +0000 (06:39 +0100)]
Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (llama/11161)

* Vulkan: Remove float16 use in shaders

* Fix validation error about subgroup_size_control extension

5 months ago  llama: add support for QRWKV6 model architecture (llama/11001)
Molly Sophia [Fri, 10 Jan 2025 01:58:08 +0000 (09:58 +0800)]
llama: add support for QRWKV6 model architecture (llama/11001)

* WIP: Add support for RWKV6Qwen2

Signed-off-by: Molly Sophia <redacted>
* RWKV: Some graph simplification

Signed-off-by: Molly Sophia <redacted>
* Add support for RWKV6Qwen2 with cpu and cuda GLA

Signed-off-by: Molly Sophia <redacted>
* RWKV6[QWEN2]: Concat lerp weights together to reduce cpu overhead

Signed-off-by: Molly Sophia <redacted>
* Fix some typos

Signed-off-by: Molly Sophia <redacted>
* code format changes

Signed-off-by: Molly Sophia <redacted>
* Fix wkv test & add gla test

Signed-off-by: Molly Sophia <redacted>
* Fix cuda warning

Signed-off-by: Molly Sophia <redacted>
* Update README.md

Signed-off-by: Molly Sophia <redacted>
* Update ggml/src/ggml-cuda/gla.cu

Co-authored-by: Georgi Gerganov <redacted>
* Fix fused lerp weights loading with RWKV6

Signed-off-by: Molly Sophia <redacted>
* better sanity check skipping for QRWKV6 in llama-quant

thanks @compilade

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: compilade <redacted>
---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: compilade <redacted>
5 months ago  SYCL: Refactor ggml_sycl_compute_forward (llama/11121)
Akarshan Biswas [Fri, 10 Jan 2025 00:13:03 +0000 (05:43 +0530)]
SYCL: Refactor ggml_sycl_compute_forward (llama/11121)

* SYCL: refactor ggml_sycl_compute_forward

* SYCL: add back GGML_USED(dst) to ggml_sycl_cpy

* SYCL: add function name to noop debug

* SYCL: Some device info print refactoring and add details of XMX availability

5 months ago  fix: add missing msg in static_assert (llama/11143)
hydai [Wed, 8 Jan 2025 20:03:28 +0000 (04:03 +0800)]
fix: add missing msg in static_assert (llama/11143)

Signed-off-by: hydai <redacted>
5 months ago  llamafile : ppc64le MMA INT8 implementation (llama/10912)
amritahs-ibm [Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)]
llamafile : ppc64le MMA INT8 implementation (llama/10912)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised int8 datatype.

This change results in a 10% - 70% improvement
in total speed (i.e., all tokens/total time), across
various batch sizes.

The patch was tested with Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
5 months ago  Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (llama/11117)
Mathieu Baudier [Wed, 8 Jan 2025 08:18:13 +0000 (09:18 +0100)]
Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (llama/11117)

* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.

* Perform Vulkan extensions checks in a more sensible order

* Remove unnecessary #ifdef directive

5 months ago  fix: Vulkan shader gen binary path when Cross-compiling (llama/11096)
ag2s20150909 [Wed, 8 Jan 2025 08:17:29 +0000 (16:17 +0800)]
fix: Vulkan shader gen binary path when Cross-compiling (llama/11096)

* fix: Vulkan shader gen binary path when cross compiling

5 months ago  GGUF: C++ refactor, backend support, misc fixes (llama/11030)
Johannes Gäßler [Tue, 7 Jan 2025 17:01:58 +0000 (18:01 +0100)]
GGUF: C++ refactor, backend support, misc fixes (llama/11030)

* GGUF: C++ refactor, backend support, misc fixes

remove ggml_tensor.backend

update CODEOWNERS [no ci]

remove gguf_get_data from API

revise GGUF API data types
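
For orientation, a minimal sketch of reading metadata through the GGUF C API after this refactor (the header name and exact return types are assumptions; check ggml's include directory):

```c
#include <stdio.h>
#include "gguf.h"   // assumption: the refactor moves GGUF declarations here

// Sketch: open a GGUF file without allocating tensor data and print basic
// counts; note that gguf_get_data is no longer part of the API.
int main(int argc, char ** argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    struct gguf_init_params params = {
        /*.no_alloc =*/ true,  // metadata only, do not allocate tensor data
        /*.ctx      =*/ NULL,
    };

    struct gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (!ctx) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    printf("kv pairs: %lld\n", (long long) gguf_get_n_kv(ctx));
    printf("tensors : %lld\n", (long long) gguf_get_n_tensors(ctx));

    gguf_free(ctx);
    return 0;
}
```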

5 months ago  ggml-backend : only offload from host buffers (fix) (llama/11124)
Diego Devesa [Tue, 7 Jan 2025 15:11:57 +0000 (16:11 +0100)]
ggml-backend : only offload from host buffers (fix) (llama/11124)

5 months ago  ggml-backend : only offload from host buffers (llama/11120)
Diego Devesa [Tue, 7 Jan 2025 11:38:05 +0000 (12:38 +0100)]
ggml-backend : only offload from host buffers (llama/11120)

5 months ago  rpc : code cleanup (llama/11107)
Radoslav Gerganov [Tue, 7 Jan 2025 06:37:02 +0000 (08:37 +0200)]
rpc : code cleanup (llama/11107)

Remove duplicated macros, use GGML_LOG_ERROR for errors

5 months ago  SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6 (llama/11087)
Akarshan Biswas [Tue, 7 Jan 2025 06:26:07 +0000 (11:56 +0530)]
SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6 (llama/11087)

* SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6

* Revert "SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6"

This reverts commit f62dc45f318e48d375e7734b34cbddee81deed52.

* Reland: Use get_multi_ptr instead of deprecated get_pointer in wkv6

5 months ago  CUDA: add BF16 support (llama/11093)
Johannes Gäßler [Mon, 6 Jan 2025 01:33:52 +0000 (02:33 +0100)]
CUDA: add BF16 support (llama/11093)

* CUDA: add BF16 support

5 months ago  Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (llama/11074)
0cc4m [Sat, 4 Jan 2025 20:09:59 +0000 (21:09 +0100)]
Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (llama/11074)

* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver

* Add (TM) to AMD name check

5 months ago  Support for models with non-512-aligned tensors over RPC. (llama/11047)
matt23654 [Sat, 4 Jan 2025 16:10:30 +0000 (16:10 +0000)]
Support for models with non-512-aligned tensors over RPC. (llama/11047)

* Added init tensor calling code

* Added get_alloc_size forwarding

* Cleaned up and improved type/error handling.

* fix: remove trailing whitespaces.

* Cleanup and use GGML error logging functions.

* Handle potentially dangerous edge cases.

* Apply suggestions from code review

Co-authored-by: Diego Devesa <redacted>
---------

Co-authored-by: Diego Devesa <redacted>
5 months ago  fix: Vulkan shader gen binary path (llama/11037)
Gilad S. [Sat, 4 Jan 2025 08:17:31 +0000 (10:17 +0200)]
fix: Vulkan shader gen binary path (llama/11037)

5 months ago  ggml : allow loading backend with env variable (#1059)
Radoslav Gerganov [Sun, 5 Jan 2025 07:50:37 +0000 (09:50 +0200)]
ggml : allow loading backend with env variable (#1059)

ref: #1058

5 months ago  sync : whisper.cpp
Georgi Gerganov [Sat, 4 Jan 2025 08:51:09 +0000 (10:51 +0200)]
sync : whisper.cpp

5 months ago  sync : llama.cpp
Georgi Gerganov [Fri, 3 Jan 2025 11:33:25 +0000 (13:33 +0200)]
sync : llama.cpp

ggml-ci

5 months ago  metal : avoid uint (llama/11019)
Georgi Gerganov [Fri, 3 Jan 2025 09:26:14 +0000 (11:26 +0200)]
metal : avoid uint (llama/11019)

5 months ago  ggml : fixes for AVXVNNI instruction set with MSVC and Clang (llama/11027)
Srihari-mcw [Tue, 31 Dec 2024 14:23:33 +0000 (19:53 +0530)]
ggml : fixes for AVXVNNI instruction set with MSVC and Clang (llama/11027)

* Fixes for clang AVX VNNI

* enable AVX VNNI and alder lake build for MSVC

* Apply suggestions from code review

---------

Co-authored-by: slaren <redacted>
5 months ago  vulkan: optimize mul_mat for small values of N (llama/10991)
Jeff Bolz [Mon, 30 Dec 2024 17:27:11 +0000 (11:27 -0600)]
vulkan: optimize mul_mat for small values of N (llama/10991)

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.

5 months ago  vulkan: im2col and matmul optimizations for stable diffusion (llama/10942)
Jeff Bolz [Sun, 29 Dec 2024 09:16:34 +0000 (03:16 -0600)]
vulkan: im2col and matmul optimizations for stable diffusion (llama/10942)

* tests: Add im2col perf tests

* vulkan: optimize im2col, more elements per thread

* vulkan: increase small tile size for NV_coopmat2

* vulkan: change im2col to 512 elements per workgroup

5 months ago  vulkan: Use push constant offset to handle misaligned descriptors (llama/10987)
Jeff Bolz [Sun, 29 Dec 2024 08:35:11 +0000 (02:35 -0600)]
vulkan: Use push constant offset to handle misaligned descriptors (llama/10987)

5 months ago  vulkan: multi-row k quants (llama/10846)
Eve [Thu, 26 Dec 2024 15:54:44 +0000 (10:54 -0500)]
vulkan: multi-row k quants (llama/10846)

* multi row k quant shaders!

* better row selection

* more row choices

* readjust row selection

* rm_kq=2 by default

5 months ago  examples, ggml : fix GCC compiler warnings (llama/10983)
Peter [Thu, 26 Dec 2024 13:59:11 +0000 (00:59 +1100)]
examples, ggml : fix GCC compiler warnings (llama/10983)

Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers]  (emitted for all struct field except first)

5 months ago  ggml : more performance with llamafile tinyblas on x86_64 (llama/10714)
Djip007 [Tue, 24 Dec 2024 17:54:49 +0000 (18:54 +0100)]
ggml : more performance with llamafile tinyblas on x86_64 (llama/10714)

* more performance with llamafile tinyblas on x86_64.

- add bf16 support
- change dispatch strategy (thanks:
https://github.com/ikawrakow/ik_llama.cpp/pull/71 )
- reduce memory bandwidth

simple tinyblas dispatch, and more cache-friendly

* tinyblas dynamic dispatching

* sgemm: add M blocks.

* - git 2.47 uses short ids of length 9.
- show-progress is not part of GNU Wget2

* remove unstable test

5 months ago  ggml : use wstring for backend search paths (llama/10960)
Diego Devesa [Tue, 24 Dec 2024 03:05:27 +0000 (04:05 +0100)]
ggml : use wstring for backend search paths (llama/10960)

ggml-ci

5 months ago  ggml : fix arm enabled features check (llama/10961)
Diego Devesa [Tue, 24 Dec 2024 03:05:17 +0000 (04:05 +0100)]
ggml : fix arm enabled features check (llama/10961)

5 months ago  ggml : fix const usage in SSE path (llama/10962)
Diego Devesa [Mon, 23 Dec 2024 19:25:52 +0000 (20:25 +0100)]
ggml : fix const usage in SSE path (llama/10962)

5 months ago  ggml : fix run-time on FreeBSD in get_executable_path() (llama/10948)
yuri@FreeBSD [Mon, 23 Dec 2024 00:20:11 +0000 (16:20 -0800)]
ggml : fix run-time on FreeBSD in get_executable_path() (llama/10948)

5 months ago  vulkan: build fixes for 32b (llama/10927)
Jeff Bolz [Sun, 22 Dec 2024 09:44:01 +0000 (03:44 -0600)]
vulkan: build fixes for 32b (llama/10927)

* vulkan: build fixes for 32b

Should fix #10923

* vulkan: initialize some buffer/offset variables

5 months ago  vulkan: optimize coopmat2 dequant functions (llama/10855)
Jeff Bolz [Sat, 21 Dec 2024 07:04:45 +0000 (01:04 -0600)]
vulkan: optimize coopmat2 dequant functions (llama/10855)

Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.
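
The same idea in scalar C terms (an illustrative sketch, not the actual shader code): do one wide load, decode a pair of packed elements, and select the needed component late.

```c
#include <stdint.h>

// Illustrative sketch of "load wide, select late": one 16-bit load covers
// four 4-bit quants; decode the byte holding the requested pair, then pick
// one component. Neighboring lookups can share the same load.
static inline float dequant_q4_select(uint16_t w, float scale, int idx /* 0..3 */) {
    int   byte = (w >> (8 * (idx / 2))) & 0xFF;        // byte with the pair
    float v0   = (float) ((byte & 0x0F) - 8) * scale;  // decode both elements
    float v1   = (float) ((byte >>   4) - 8) * scale;
    return (idx % 2 == 0) ? v0 : v1;                   // select the one needed
}
```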

5 months ago  ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (llama/10874)
Adrien Gallouët [Fri, 20 Dec 2024 23:33:37 +0000 (00:33 +0100)]
ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (llama/10874)

* ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0()

Signed-off-by: Adrien Gallouët <redacted>
* ggml-cpu: format code

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
5 months ago  SYCL: Migrate away from deprecated ggml_tensor->backend (llama/10840)
Akarshan Biswas [Fri, 20 Dec 2024 15:31:28 +0000 (21:01 +0530)]
SYCL: Migrate away from deprecated ggml_tensor->backend (llama/10840)

* Migrate to tensor->buffer for checking backend buffer type: 1

* SYCL: common.cpp try to migrate away from tensor->backend

* SYCL: fix assertions and add proper comments

* SYCL: remove extra space

* SYCL: Add back static to ggml_backend_buffer_is_sycl_split function

* SYCL: Add pragma directive to suppress warning spam

* SYCL: Integrate debug logs with GGML_LOG and other fixes

* Revert "SYCL: Integrate debug logs with GGML_LOG and other fixes"

This reverts commit 2607b7de0f0d2f4f1f690226f86fa861aa39cb97.
Let's keep the current SYCL specific logging mechanism for now

* SYCL: Use GGML_SYCL_DEBUG after reverting

* SYCL: reg_get_proc_address func, update to the current func signature

* SYCL: Refactor SYCL buffer checks in ggml_sycl_cpy_tensor_2d

5 months ago  ggml : add test for SVE and disable when it fails (llama/10906)
Diego Devesa [Fri, 20 Dec 2024 12:31:28 +0000 (13:31 +0100)]
ggml : add test for SVE and disable when it fails (llama/10906)

5 months ago  ggml: fix arm build with gcc (llama/10895)
Adrien Gallouët [Thu, 19 Dec 2024 13:20:41 +0000 (14:20 +0100)]
ggml: fix arm build with gcc (llama/10895)

Signed-off-by: Adrien Gallouët <redacted>
5 months ago  ggml : fix arm build (llama/10890)
Diego Devesa [Wed, 18 Dec 2024 22:21:42 +0000 (23:21 +0100)]
ggml : fix arm build (llama/10890)

* ggml: GGML_NATIVE uses -mcpu=native on ARM

Signed-off-by: Adrien Gallouët <redacted>
* ggml: Show detected features with GGML_NATIVE

Signed-off-by: Adrien Gallouët <redacted>
* remove msvc support, add GGML_CPU_ARM_ARCH option

* disable llamafile in android example

* march -> mcpu, skip adding feature macros

ggml-ci

---------

Signed-off-by: Adrien Gallouët <redacted>
Co-authored-by: Adrien Gallouët <redacted>
5 months ago  tts : add OuteTTS support (llama/10784)
Georgi Gerganov [Wed, 18 Dec 2024 17:27:21 +0000 (19:27 +0200)]
tts : add OuteTTS support (llama/10784)

* server : add "tokens" output

ggml-ci

* server : output embeddings for all tokens when pooling = none

ggml-ci

* server : be explicit about the pooling type in the tests

ggml-ci

* server : do not normalize embeddings when there is no pooling

ggml-ci

* llama : add OuteTTS support (wip)

* wip

* extract features

* first conv

* group norm

* resnet conv

* resnet

* attn

* pos net

* layer norm

* convnext

* head

* hann window

* fix n_embd + remove llama.cpp hacks

* compute hann window

* fft

* spectrum processing

* clean-up

* tts : receive input text and generate codes

* clip : fix new conv name

* tts : minor fix

* tts : add header + minor fixes

ggml-ci

* tts : add mathematical constant

ggml-ci

* tts : fix sampling + cut initial noise

* tts : fixes

* tts : update default samplers

ggml-ci

* tts : text pre-processing

* tts : outetts-voc -> wavtokenizer-dec

* tts : remove hardcoded constants

ggml-ci

* tts : fix tensor shapes

* llama : refactor wavtokenizer tensors

ggml-ci

* cont

ggml-ci

* cont [no ci]

* llama : update WavTokenizer to non-causal attn

* llama : handle no-vocab detokenization

* tts : add Python example for OuteTTS (wip)

* tts : extend python example to generate spectrogram

ggml-ci

* server : fix rebase artifacts

* tts : enable "return_tokens" in Python example

ggml-ci

* tts : minor fixes

* common : support HF download for vocoder

5 months ago  tests: add tests for GGUF (llama/10830)
Johannes Gäßler [Tue, 17 Dec 2024 18:09:35 +0000 (19:09 +0100)]
tests: add tests for GGUF (llama/10830)

6 months ago  ggml : do not install metal source when embed library (#1054)
Georgi Gerganov [Mon, 23 Dec 2024 19:25:52 +0000 (21:25 +0200)]
ggml : do not install metal source when embed library (#1054)

ggml-ci

6 months ago  ggml : improve inputs log sched_print_assignments (#1053)
Daniel Bevenius [Thu, 19 Dec 2024 02:50:12 +0000 (03:50 +0100)]
ggml : improve inputs log sched_print_assignments (#1053)

This commit attempts to improve the log message for the inputs of the
splits in the sched_print_assignments function.

The motivation for this change is that currently, even if there are no
inputs, a colon is displayed at the end of the line. This can be a little
confusing when reading the output, as the line below could be interpreted
as inputs when it in fact lists nodes. With this change, the colon is only
printed if there actually are inputs.
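
Illustrative only (hypothetical names, not the actual ggml-backend code), the pattern amounts to gating the separator on the input count:

```c
#include <stdio.h>

// Hypothetical sketch of the logging change: emit the colon and the input
// list only when the split actually has inputs.
static void print_split(int split_id, int n_inputs, const char ** input_names) {
    fprintf(stderr, "split #%d", split_id);
    if (n_inputs > 0) {
        fprintf(stderr, ": ");
        for (int i = 0; i < n_inputs; i++) {
            fprintf(stderr, "[%s] ", input_names[i]);
        }
    }
    fprintf(stderr, "\n");
}
```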

6 months ago  sync : whisper.cpp
Georgi Gerganov [Wed, 18 Dec 2024 11:03:17 +0000 (13:03 +0200)]
sync : whisper.cpp

6 months ago  files : remove old sources (#1052)
Georgi Gerganov [Wed, 18 Dec 2024 06:38:17 +0000 (08:38 +0200)]
files : remove old sources (#1052)

ggml-ci

6 months ago  sync : llama.cpp
Georgi Gerganov [Tue, 17 Dec 2024 16:36:44 +0000 (18:36 +0200)]
sync : llama.cpp

ggml-ci

6 months ago  ggml : update ggml_backend_cpu_device_supports_op (llama/10867)
Georgi Gerganov [Tue, 17 Dec 2024 16:35:42 +0000 (18:35 +0200)]
ggml : update ggml_backend_cpu_device_supports_op (llama/10867)

* ggml : fix cpy op for IQ-quants to use reference impl

ggml-ci

* ggml : disable tests involving i-matrix quantization

* ggml : update ggml_backend_cpu_device_supports_op

ggml-ci

6 months ago  vulkan: bugfixes for small subgroup size systems + llvmpipe test (llama/10809)
Eve [Tue, 17 Dec 2024 05:52:55 +0000 (05:52 +0000)]
vulkan: bugfixes for small subgroup size systems + llvmpipe test (llama/10809)

* ensure mul mat shaders work on systems with subgroup size less than 32

more fixes

add test

* only s_warptile_mmq needs to be run with 32 threads or more

6 months ago  rwkv6: add wkv6 support for Vulkan backend (llama/10829)
Zhiyuan Li [Mon, 16 Dec 2024 21:00:46 +0000 (05:00 +0800)]
rwkv6: add wkv6 support for Vulkan backend (llama/10829)

* rwkv_wkv6 vulkan shader

* RWKV_WKV6 Vulkan op tests passed

Signed-off-by: Molly Sophia <redacted>
* Apply code format changes

Signed-off-by: Molly Sophia <redacted>
* add [[unroll]] and remove unnecessary conditions

* add uma support

* fix errors in EditorConfig Checker

---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Molly Sophia <redacted>
6 months ago  llama : add Qwen2VL support + multimodal RoPE (llama/10361)
HimariO [Sat, 14 Dec 2024 12:43:46 +0000 (20:43 +0800)]
llama : add Qwen2VL support + multimodal RoPE (llama/10361)

* Barebone Qwen2VL LLM convertor

* Add Qwen2VL cli entrypoint

* [WIP] add qwen2vl arch

* Verify m-rope output

* Add vl-rope/2d-rope support for qwen2vl ViT

* update qwen2vl cli tool

* update 5D tensor op workaround

* [WIP] qwen2vl vision model

* make batch and clip utils compatible with qwen2vl

* [WIP] create inference workflow, gguf convert script bug fix

* correcting vision-rope behavior, add the missing last layer back to ViT

* add arg parser to qwen2vl_surgery

* replace variable size array with vector

* cuda-gdb cmake preset

* add fp32 mrope, vision rope kernel

* add fp16 support for qwen2vl and m-rope

* add `GGML_ROPE_TYPE_MROPE`, `GGML_ROPE_TYPE_VISION`

* fix rope op mode switching, out dated func args

* update `llama_hparams`

* update to keep up stream changes

* resolve linter, test errors

* add makefile entry, update special image padding token

* add mrope unit test, fix few compiler warnings

* rename `mrope` related function, params

* minor updates on debug util, bug fixes

* add `m-rope` testcase to `test-backend-ops`

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* fix trailing whitespace

* store `llama_hparams.rope_sections` with fixed size array

* update position id tensor size check in GGML_OP_ROPE

* minor updates

* update `ggml_backend_*_supports_op` of unsupported backends

* remove old `rope_section` compare operator

---------

Co-authored-by: Georgi Gerganov <redacted>
6 months ago  Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693)
lhez [Fri, 13 Dec 2024 20:23:52 +0000 (12:23 -0800)]
Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693)

* [cl][adreno] Add Adreno GPU support

Add new OpenCL backend to support Adreno GPUs

---------

Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>
* [cl][ci] Add workflow for CL

* [cl][adreno] Fix memory leak for non SMALL_ALLOC path

* opencl: integrate backend dyn.load interface and fix compiler and format warnings

* opencl: remove small-alloc support and fix build errors for non-opencl platforms

* opencl: fixed merge conflict (MUSA added twice in cmake)

* opencl-ci: use RUNNER_TEMP instead of github.workspace

* opencl: fix embed tool invocation with python3

* opencl: CI workflow fixes

* opencl: Clean up small-alloc in CMake files

* opencl: cleanup ggml-opencl2 header file

* opencl: use ulong for offsets and strides in ADD kernel

* opencl: use cl_ulong for all offsets

* opencl: use cl_ulong for sizes and strides

* opencl: use `GGML_LOG_xxx` instead of `fprintf(stderr, ...)`

* opencl: rename backend `opencl2` -> `opencl`

* opencl: rename kernel files `ggml-opencl2` -> `ggml-opencl`

* opencl: make OpenCL required, remove redundant lib and inc directories

* `ggml-base`, `..` and `.` are added by `ggml_add_backend_library`

* opencl: rename backend - funcs, structs, etc `opencl2` -> `opencl`

* opencl: remove copyright marker since main license already covers

* opencl: replace some more OPENCL2 leftovers

* opencl: remove limits on `tensor_extra`

* opencl: use pools for `tensor_extra`

* opencl: fix compiler warnings with GCC and Clang

Still getting the warning about clCreateCmdQueue being obsolete.
Will fix that separately.

* opencl: fail gracefully if opencl devices are not available

Also for unsupported GPUs.

* opencl: fix MSVC builds (string length error)

* opencl: check for various requirements, allow deprecated API

* opencl: update log message for unsupported GPUs

---------

Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>
6 months ago  Fix crash caused by ggml_backend_load_all when launching on Android Activity (llama/10812)
谢乃闻 [Fri, 13 Dec 2024 12:56:07 +0000 (12:56 +0000)]
Fix crash caused by ggml_backend_load_all when launching on Android Activity (llama/10812)

* Fix crash caused by ggml_backend_load_all when launching on AndroidActivity.

Details:
Calling ggml_backend_load_all during initialization in the AndroidActivity project leads to a crash with the error:
terminating with uncaught exception of type std::__ndk1::__fs::filesystem::filesystem_error: filesystem error: in directory_iterator::directory_iterator(...): Permission denied [./].
This issue occurs because AndroidActivity restricts file access due to sandboxing.

Reproduction:
In the example folder, the LlamaAndroid project can reproduce the crash by calling ggml_backend_load_all first in Java_android_llama_cpp_LLamaAndroid_backend_1init.

* Update ggml/src/ggml-backend-reg.cpp

---------

Co-authored-by: Diego Devesa <redacted>
6 months ago  vulkan: small mul_mat_vec optimizations (llama/10665)
Eve [Fri, 13 Dec 2024 08:42:04 +0000 (08:42 +0000)]
vulkan: small mul_mat_vec optimizations (llama/10665)

* double the number of rows per workgroup

* Update ggml-vulkan.cpp

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* only increase the number of rows for amd and subgroup size 64

* fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32, untested

* use subgroup min and max to check for gcn (requires https://github.com/ggerganov/llama.cpp/pull/10721)

* manual merge ggml-vulkan.cpp

* set min and max subgroup size in any case

* Also double the number of rows for Intel GPUs

6 months ago  SYCL: Reduce most of the compiler warnings (llama/10748)
Akarshan Biswas [Fri, 13 Dec 2024 06:42:15 +0000 (12:12 +0530)]
SYCL: Reduce most of the compiler warnings (llama/10748)

* Try to reduce some unused and typecast warnings

* Reduce compiler warnings step 2

* add a newline at the end of the file

* Initialize nreduce as size_t

* [SYCL] Remove pragma directives from mmq.cpp

* SYCL: mmq add condition to prevent blocks_per_tile_x_row variable from becoming 0

* SYCL softmax: Initialize nreduce as size_t

* ggml-sycl.cpp: fix some trailing whitespaces

* SYCL: remove the unused variables instead of commenting it out

* SYCL pool2d kernel: set NAN for invalid pooling op

* SYCL gemm.hpp: remove pragma directives

* SYCL gemm.hpp: use const cast to properly support dnnl::memory

* SYCL: wkv6 remove a comment

* SYCL: clean comments step 2

* SYCL: clean comments and variables step 3

* SYCL: Use GGML_UNUSED for unused variables

* SYCL: remove extra empty lines and a comment

* Remove TODO

* cleanup spaces

* add a stdout for unsupported op

* use sycl printf over fprintf

* remove prints for CI

* SYCL ggml-sycl: pool2D use sycl::nan and remove if-else block

---------

Co-authored-by: Abhilash Majumder <redacted>
6 months ago  ggml : Fix compilation issues on ARM platform when building without fp16 (llama/10811)
Karol Kontny [Fri, 13 Dec 2024 00:04:19 +0000 (01:04 +0100)]
ggml : Fix compilation issues on ARM platform when building without fp16 (llama/10811)

6 months ago  CUDA: faster non-contiguous concat (llama/10760)
a3sh [Thu, 12 Dec 2024 18:09:50 +0000 (02:09 +0800)]
CUDA: faster non-contiguous concat (llama/10760)

* faster non-contiguous concat

* Use a lambda to avoid code duplication

Co-authored-by: Diego Devesa <redacted>
* Update ggml/src/ggml-cuda/concat.cu

* add constexpr and static assert

---------

Co-authored-by: Diego Devesa <redacted>
6 months ago  remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)
Diego Devesa [Thu, 12 Dec 2024 18:02:49 +0000 (19:02 +0100)]
remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)

other windows build fixes

6 months ago  Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (llama/10798)
0cc4m [Thu, 12 Dec 2024 17:36:00 +0000 (18:36 +0100)]
Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (llama/10798)

6 months ago  Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (llama/10721)
0cc4m [Thu, 12 Dec 2024 17:35:37 +0000 (18:35 +0100)]
Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (llama/10721)

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* Fix subgroup size control extension support check

Add accf32 and accf16 checks for coopmats

* Also disable coopmats on amdvlk

6 months ago  ggml: load all backends from a user-provided search path (llama/10699)
Gilad S [Wed, 11 Dec 2024 00:47:21 +0000 (02:47 +0200)]
ggml: load all backends from a user-provided search path (llama/10699)

* feat: load all backends from a user-provided search path

* fix: Windows search path

* refactor: rename `ggml_backend_load_all_in_search_path` to `ggml_backend_load_all_from_path`

* refactor: rename `search_path` to `dir_path`

* fix: change `NULL` to `nullptr`

Co-authored-by: Diego Devesa <redacted>
* fix: change `NULL` to `nullptr`

---------

Co-authored-by: Diego Devesa <redacted>
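
A minimal usage sketch (the signature `void ggml_backend_load_all_from_path(const char * dir_path)` follows from the renames above; the registry calls are from the ggml-backend API, and the path is an example only):

```c
#include <stdio.h>
#include "ggml-backend.h"

// Sketch: load every ggml backend found in a caller-chosen directory instead
// of scanning default locations, then enumerate what was registered.
int main(void) {
    // On sandboxed platforms (e.g. Android) this should be a directory the
    // process is allowed to read, such as the app's native library dir.
    ggml_backend_load_all_from_path("/path/to/backends");

    for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
        ggml_backend_reg_t reg = ggml_backend_reg_get(i);
        printf("loaded backend: %s\n", ggml_backend_reg_name(reg));
    }
    return 0;
}
```
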
6 months ago  vulkan: request round-to-even for fp16 in im2col/rope_head (llama/10767)
Jeff Bolz [Tue, 10 Dec 2024 20:23:17 +0000 (14:23 -0600)]
vulkan: request round-to-even for fp16 in im2col/rope_head (llama/10767)

Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls
feature allows rounding mode to be requested if the implementation supports it.