whisper.cpp commit log (pkg/ggml/sources/whisper.cpp)

vulkan: small mul_mat_vec optimizations (llama/10665)
Eve [Fri, 13 Dec 2024 08:42:04 +0000 (08:42 +0000)]
vulkan: small mul_mat_vec optimizations (llama/10665)

* double the number of rows per workgroup

* Update ggml-vulkan.cpp

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* only increase the number of rows for amd and subgroup size 64

* fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32, untested

* use subgroup min and max to check for gcn (requires https://github.com/ggerganov/llama.cpp/pull/10721)

* manual merge ggml-vulkan.cpp

* set min and max subgroup size in any case

* Also double the number of rows for Intel GPUs

SYCL: Reduce most of the compiler warnings (llama/10748)
Akarshan Biswas [Fri, 13 Dec 2024 06:42:15 +0000 (12:12 +0530)]
SYCL: Reduce most of the compiler warnings (llama/10748)

* Try to reduce some unused and typecast warnings

* Reduce compiler warnings step 2

* add a newline at the end of the file

* Initialize nreduce as size_t

* [SYCL] Remove pragma directives from mmq.cpp

* SYCL: mmq add condition to prevent blocks_per_tile_x_row variable from becoming 0

* SYCL softmax: Initialize nreduce as size_t

* ggml-sycl.cpp: fix some trailing whitespaces

* SYCL: remove the unused variables instead of commenting them out

* SYCL pool2d kernel: set NAN for invalid pooling op

* SYCL gemm.hpp: remove pragma directives

* SYCL gemm.hpp: use const cast to properly support dnnl::memory

* SYCL: wkv6 remove a comment

* SYCL: clean comments step 2

* SYCL: clean comments and variables step 3

* SYCL: Use GGML_UNUSED for unused variables

* SYCL: remove extra empty lines and a comment

* Remove TODO

* cleanup spaces

* add a stdout message for unsupported ops

* use sycl printf over fprintf

* remove prints for CI

* SYCL ggml-sycl: pool2D use sycl::nan and remove if-else block

---------

Co-authored-by: Abhilash Majumder <redacted>
ggml : Fix compilation issues on ARM platform when building without fp16 (llama/10811)
Karol Kontny [Fri, 13 Dec 2024 00:04:19 +0000 (01:04 +0100)]
ggml : Fix compilation issues on ARM platform when building without fp16 (llama/10811)

CUDA: faster non-contiguous concat (llama/10760)
a3sh [Thu, 12 Dec 2024 18:09:50 +0000 (02:09 +0800)]
CUDA: faster non-contiguous concat (llama/10760)

* faster non-contiguous concat

* Use a lambda to avoid code duplication

Co-authored-by: Diego Devesa <redacted>
* Update ggml/src/ggml-cuda/concat.cu

* add constexpr and static assert

---------

Co-authored-by: Diego Devesa <redacted>
remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)
Diego Devesa [Thu, 12 Dec 2024 18:02:49 +0000 (19:02 +0100)]
remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)

other windows build fixes

Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (llama/10798)
0cc4m [Thu, 12 Dec 2024 17:36:00 +0000 (18:36 +0100)]
Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (llama/10798)

Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (llama/10721)
0cc4m [Thu, 12 Dec 2024 17:35:37 +0000 (18:35 +0100)]
Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (llama/10721)

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* Fix subgroup size control extension support check

Add accf32 and accf16 checks for coopmats

* Also disable coopmats on amdvlk
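
A minimal sketch of the host-side mechanism used above, assuming a device with VK_EXT_subgroup_size_control enabled (illustrative only, not the ggml-vulkan code itself): the driver reports its supported subgroup size range, and the pipeline pins the size and requires full subgroups.

```cpp
#include <vulkan/vulkan.h>

// Query the supported subgroup size range (e.g. 32..64 on RDNA, 64 on GCN).
uint32_t max_subgroup_size(VkPhysicalDevice phys_dev) {
    VkPhysicalDeviceSubgroupSizeControlPropertiesEXT subgroup_props = {};
    subgroup_props.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SUBGROUP_SIZE_CONTROL_PROPERTIES_EXT;

    VkPhysicalDeviceProperties2 props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &subgroup_props;
    vkGetPhysicalDeviceProperties2(phys_dev, &props2);
    return subgroup_props.maxSubgroupSize;
}

// Pin the subgroup size and require full subgroups for a compute stage, so
// coopmat shaders never run with partially filled subgroups. `required`
// must outlive pipeline creation.
void pin_subgroup_size(VkPipelineShaderStageCreateInfo & stage,
                       VkPipelineShaderStageRequiredSubgroupSizeCreateInfoEXT & required,
                       uint32_t size) {
    required.sType =
        VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_REQUIRED_SUBGROUP_SIZE_CREATE_INFO_EXT;
    required.pNext = nullptr;
    required.requiredSubgroupSize = size;
    stage.pNext  = &required;
    stage.flags |= VK_PIPELINE_SHADER_STAGE_CREATE_REQUIRE_FULL_SUBGROUPS_BIT_EXT;
}
```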

ggml: load all backends from a user-provided search path (llama/10699)
Gilad S [Wed, 11 Dec 2024 00:47:21 +0000 (02:47 +0200)]
ggml: load all backends from a user-provided search path (llama/10699)

* feat: load all backends from a user-provided search path

* fix: Windows search path

* refactor: rename `ggml_backend_load_all_in_search_path` to `ggml_backend_load_all_from_path`

* refactor: rename `search_path` to `dir_path`

* fix: change `NULL` to `nullptr`

Co-authored-by: Diego Devesa <redacted>
* fix: change `NULL` to `nullptr`

---------

Co-authored-by: Diego Devesa <redacted>
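
A minimal usage sketch of the renamed API, assuming ggml was built with dynamic backend loading (the directory path is illustrative):

```cpp
#include "ggml-backend.h"
#include <cstdio>

int main() {
    // Scan a user-provided directory for backend shared libraries
    // (e.g. ggml-cuda, ggml-vulkan) and register everything found.
    ggml_backend_load_all_from_path("/opt/ggml/backends");

    // List the backends that were registered.
    for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
        printf("backend: %s\n", ggml_backend_reg_name(ggml_backend_reg_get(i)));
    }
    return 0;
}
```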
vulkan: request round-to-even for fp16 in im2col/rope_head (llama/10767)
Jeff Bolz [Tue, 10 Dec 2024 20:23:17 +0000 (14:23 -0600)]
vulkan: request round-to-even for fp16 in im2col/rope_head (llama/10767)

Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls
feature allows rounding mode to be requested if the implementation supports it.
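
A sketch of the corresponding device check, assuming Vulkan 1.2 float-controls properties (illustrative host code; the actual rounding-mode request happens in the shaders):

```cpp
#include <vulkan/vulkan.h>

bool supports_fp16_rte(VkPhysicalDevice phys_dev) {
    VkPhysicalDeviceFloatControlsProperties float_controls = {};
    float_controls.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FLOAT_CONTROLS_PROPERTIES;

    VkPhysicalDeviceProperties2 props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &float_controls;
    vkGetPhysicalDeviceProperties2(phys_dev, &props2);

    // If true, the shader may declare the RoundingModeRTE execution mode
    // for 16-bit results instead of relying on the driver's default.
    return float_controls.shaderRoundingModeRTEFloat16 == VK_TRUE;
}
```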

vulkan: dynamic subgroup size for the remaining k quants (llama/10745)
Eve [Tue, 10 Dec 2024 19:33:23 +0000 (19:33 +0000)]
vulkan: dynamic subgroup size for the remaining k quants (llama/10745)

* q5_k

q4_k

q3_k

q2_k

q6_k multi row example

* revert as multi-row isn't faster for k quants

CUDA: rename macros to avoid conflicts with WinAPI (llama/10736)
Andreas Kieslinger [Tue, 10 Dec 2024 17:23:24 +0000 (18:23 +0100)]
CUDA: rename macros to avoid conflicts with WinAPI (llama/10736)

* Renames NVIDIA GPU-architecture flags to avoid name clashes with WinAPI (e.g. is CC_PASCAL the GPU architecture or the WinAPI `PASCAL` calling-convention macro?)

* Reverts erroneous rename in SYCL-code.

* Renames GGML_CUDA_MIN_CC_DP4A to GGML_CUDA_CC_DP4A.

* Renames the rest of the compute capability macros for consistency.

vulkan: disable spirv-opt for coopmat shaders (llama/10763)
Jeff Bolz [Tue, 10 Dec 2024 17:22:20 +0000 (11:22 -0600)]
vulkan: disable spirv-opt for coopmat shaders (llama/10763)

There are some bugs in the 1.3.296 SDK, so disable this. It isn't strictly
necessary anyway.

Add missing dependency on vulkan-shaders-gen, so shaders get recompiled when it
changes.

Fix coopmat support reporting when glslc doesn't support NV_coopmat2.

ggml : remove return from ggml_gallocr_allocate_node (ggml/1048)
Daniel Bevenius [Sat, 14 Dec 2024 02:23:08 +0000 (03:23 +0100)]
ggml : remove return from ggml_gallocr_allocate_node (ggml/1048)

This commit removes the return statement from ggml_gallocr_allocate_node
function.

The motivation behind this change is to make the code more readable and
consistent.

ggml : add check for grad_accs (ggml/1046)
Daniel Bevenius [Fri, 13 Dec 2024 07:19:38 +0000 (08:19 +0100)]
ggml : add check for grad_accs (ggml/1046)

* ggml : add check for grad_accs

This commit adds a check for grad_accs in ggml_graph_get_grad and
ggml_graph_get_grad_acc functions. This is necessary to avoid segfaults
when grad_accs is not initialized.

The motivation for this change is that I find it nice to be able to
print out a computation graph using ggml_graph_print but this function
segfaults when grad_accs is not initialized:
```console
(gdb) p g1
$2 = (ggml_cgraph *) 0x7ffff66004b0
(gdb) p *g1
$3 = {size = 2048, n_nodes = 1, n_leafs = 2, nodes = 0x7ffff6600500,
grads = 0x0, grad_accs = 0x0, leafs = 0x7ffff6604500,
visited_hash_set = {size = 4099, used = 0x7ffff6610518,
keys = 0x7ffff6608500}, order = GGML_CGRAPH_EVAL_ORDER_LEFT_TO_RIGHT}
(gdb) p ggml_graph_print(g1)
=== GRAPH ===
n_nodes = 1

Program received signal SIGSEGV, Segmentation fault.
0x0000555555579775 in ggml_graph_get_grad
(cgraph=0x7ffff66004b0,node=0x7ffff6600340)
    at /ggml/ggml/src/ggml.c:5990
5990  return igrad != GGML_HASHSET_FULL &&
          ggml_bitset_get(cgraph->visited_hash_set.used, igrad) ?
          cgraph->grads[igrad] : NULL;
```

* squash! ggml : add check for grad_accs

Fix the check in ggml_graph_get_grad. The check was incorrectly using
cgraph->grad_accs instead of cgraph->grads.
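
For reference, a sketch of the guarded lookup in the spirit of the ggml.c code quoted above (not the verbatim patch; it adds the NULL check on cgraph->grads that this commit describes):

```cpp
struct ggml_tensor * ggml_graph_get_grad(const struct ggml_cgraph * cgraph,
                                         const struct ggml_tensor * node) {
    const size_t igrad = ggml_hash_find(&cgraph->visited_hash_set, node);
    // Guard: graphs built without gradients have cgraph->grads == NULL,
    // so indexing it (the line at ggml.c:5990 above) would segfault.
    return igrad != GGML_HASHSET_FULL &&
           ggml_bitset_get(cgraph->visited_hash_set.used, igrad) &&
           cgraph->grads != NULL ? cgraph->grads[igrad] : NULL;
}
```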

common : remove old types
Georgi Gerganov [Tue, 10 Dec 2024 15:19:09 +0000 (17:19 +0200)]
common : remove old types

ggml-ci

CUDA: fix shared memory access condition for mmv (llama/10740)
Johannes Gäßler [Mon, 9 Dec 2024 19:07:12 +0000 (20:07 +0100)]
CUDA: fix shared memory access condition for mmv (llama/10740)

vulkan: fix compile warnings (llama/10731)
Jeff Bolz [Mon, 9 Dec 2024 07:24:01 +0000 (01:24 -0600)]
vulkan: fix compile warnings (llama/10731)

Vulkan: fix NaN in tanh.comp with AMD proprietary driver on Windows (llama/10723)
stduhpf [Sun, 8 Dec 2024 18:19:19 +0000 (19:19 +0100)]
Vulkan: fix NaN in tanh.comp with AMD proprietary driver on Windows (llama/10723)

* Vulkan: fix NaN in tanh.comp

* Faster NaN-free tanh
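
The usual way to make tanh NaN-free is to keep the exponent argument non-positive; a sketch of the idea in C++ (the actual fix lives in tanh.comp, so treat this as the math only):

```cpp
#include <cmath>

float safe_tanhf(float x) {
    // exp(-2*|x|) is always in (0, 1], so it can't overflow to infinity,
    // unlike exp(2*x) in the naive (e^2x - 1)/(e^2x + 1) formulation,
    // which produces inf/inf = NaN for large |x| on some drivers.
    const float t = std::exp(-2.0f * std::fabs(x));
    const float y = (1.0f - t) / (1.0f + t);
    return x < 0.0f ? -y : y;
}
```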

vulkan: compile a test shader in cmake to check for coopmat2 support (llama/10713)
Jeff Bolz [Sun, 8 Dec 2024 08:05:55 +0000 (02:05 -0600)]
vulkan: compile a test shader in cmake to check for coopmat2 support (llama/10713)

ggml : disable iq4_nl interleave size 8 (llama/10709)
Georgi Gerganov [Sat, 7 Dec 2024 16:38:15 +0000 (18:38 +0200)]
ggml : disable iq4_nl interleave size 8 (llama/10709)

ggml-ci

ggml : refactor online repacking (llama/10446)
Djip007 [Sat, 7 Dec 2024 12:37:50 +0000 (13:37 +0100)]
ggml : refactor online repacking (llama/10446)

* rename ggml-cpu-aarch64.c to .cpp

* reformat extra cpu backend.

- clean Q4_0_N_M and IQ4_0_N_M
  - remove from "file" tensor type
  - allow only with dynamic repack

- extract cpu extra bufts and convert to C++
  - hbm
  - "aarch64"

- more generic use of extra buffer
  - generalise extra_supports_op
  - new API for "cpu-accel":
     - amx
     - aarch64

* clang-format

* Clean Q4_0_N_M ref

Enable restrict on C++

* add op GGML_OP_MUL_MAT_ID for Q4_0_N_M with runtime repack

* added/corrected control on tensor size for Q4 repacking.

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <redacted>
* add debug logs on repacks.

---------

Co-authored-by: Georgi Gerganov <redacted>
Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (llama/10597)
0cc4m [Sat, 7 Dec 2024 09:24:15 +0000 (10:24 +0100)]
Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (llama/10597)

* Vulkan: Implement VK_KHR_cooperative_matrix support in the matrix matrix multiplication shader

* Improve performance with better q4_k and q5_k dequant and store unrolling

* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection

* Rework mulmat shader selection and compilation logic, avoid compiling shaders that won't get used by device

* Vulkan: Implement accumulator switch for specific mul mat mat shaders

* Vulkan: Unroll more loops for more mul mat mat performance

* Vulkan: Add VK_AMD_shader_core_properties2 support to read Compute Unit count for split_k logic

* Disable coopmat support on AMD proprietary driver

* Remove redundant checks

* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support

* Fix rebase typo

* Fix coopmat2 MUL_MAT_ID pipeline selection

metal : Extend how Llama.cpp locates metal resources (llama/10676)
Robert Ormandi [Sat, 7 Dec 2024 07:55:01 +0000 (01:55 -0600)]
metal : Extend how Llama.cpp locates metal resources (llama/10676)

* metal : Extend how Llama.cpp locates metal resources (llama/10675)

  * It also searches for the resource file in the directory where the
    current binary is located.
  * Resolves symbolic links.

Rationale:

When we plug this dependency into a Bazel build and run it in the
context of Bazel (e.g. testing):

  * the execution directory is often very different from where the files
    are located, and there is no direct control over this (Bazel
    sandboxing),
  * the Bazel sandbox often uses symbolic links to make files available.

With this patch, we can add the resource file to the target and can
build and run tests in the context of Bazel.
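
A sketch of that lookup under the stated assumptions (macOS, C++17; `metal_resource_near_binary` and the resource name are hypothetical, not the actual ggml-metal function):

```cpp
#include <filesystem>
#include <mach-o/dyld.h>

namespace fs = std::filesystem;

fs::path metal_resource_near_binary(const char * name) {
    char buf[4096];
    uint32_t size = sizeof(buf);
    if (_NSGetExecutablePath(buf, &size) != 0) {
        return {};
    }
    // canonical() resolves the symbolic links a Bazel sandbox typically
    // creates, so we end up in the directory of the real binary.
    const fs::path exe = fs::canonical(buf);
    const fs::path candidate = exe.parent_path() / name;
    return fs::exists(candidate) ? candidate : fs::path{};
}

// e.g. metal_resource_near_binary("default.metallib")
```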

* Update ggml/src/ggml-metal/ggml-metal.m

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.m

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (llama/10206)
Jeff Bolz [Thu, 5 Dec 2024 19:15:05 +0000 (13:15 -0600)]
vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (llama/10206)

ruby : Add no_speech_thold (#2641)
KITAITI Makoto [Wed, 18 Dec 2024 09:00:50 +0000 (18:00 +0900)]
ruby : Add no_speech_thold (#2641)

* Remove Whisper::Model.[]

* Fix Whisper::Model::URI#request

* Make Whisper::Context#initialize accept pre-converted model name

* Use downloading pre-converted model feature for testing

* Update README

* Remove unnecessary task

* Move whisper/model.rb -> whisper/model/uri.rb

* Update document comment of Whisper::Context#initialize

* Don't show download progress when not tty

* Pass String to raise

* Use cache model file if download fails

* Add test for auto download

* Specify required Ruby version

* Fix a typo

* Remove unnecessary flags

* Initialize Whisper::Params#diarize explicitly

* Remove redundant code from README for simplicity

* Add Whisper::Params#no_speech_thold attribute

* Add test for Whisper::Params#no_speech_thold

stream : improve consistency in README (#2642)
crummyh [Wed, 18 Dec 2024 06:43:48 +0000 (00:43 -0600)]
stream : improve consistency in README (#2642)

whisper : support no_speech_thold (#2625)
Karthick [Tue, 17 Dec 2024 17:15:47 +0000 (22:45 +0530)]
whisper : support no_speech_thold (#2625)

* Implement no_speech_thold

no_speech_thold functionality is on par with OpenAI's Whisper

* Addressed review comments

whisper : add single-timestamp logic (#2629)
Karthick [Tue, 17 Dec 2024 17:07:08 +0000 (22:37 +0530)]
whisper : add single-timestamp logic (#2629)

* Fix hallucinations during silence

When the predicted tokens end with a single timestamp, the entire 30-second window should be considered done, to avoid hallucinations for the remaining part of the segment (a sketch of the check follows below).
This behaviour is on par with OpenAI's Whisper. Refer to the logic related to `single_timestamp_ending` in https://github.com/openai/whisper/blob/main/whisper/transcribe.py

* Accept review comments related to formatting.

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
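
The sketch referenced above, with `is_timestamp` as a hypothetical predicate (whisper.cpp models timestamp tokens as a contiguous id range):

```cpp
#include <vector>

// Mirrors OpenAI's `single_timestamp_ending`: the token sequence ends with
// exactly one timestamp token (last is a timestamp, second-to-last is not).
bool single_timestamp_ending(const std::vector<int> & tokens,
                             bool (*is_timestamp)(int)) {
    const size_t n = tokens.size();
    return n >= 2 && is_timestamp(tokens[n - 1]) && !is_timestamp(tokens[n - 2]);
}

// When this returns true, the decoder should treat the whole 30-second
// window as consumed and seek past it, instead of continuing from the
// last timestamp (which invites hallucinations in the trailing silence).
```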
readme : fix typo (#2637)
crummyh [Tue, 17 Dec 2024 17:05:35 +0000 (11:05 -0600)]
readme : fix typo (#2637)

cmake : fix "amd64" processor string (#2638)
Georgi Gerganov [Tue, 17 Dec 2024 16:34:32 +0000 (18:34 +0200)]
cmake : fix "amd64" processor string (#2638)

vulkan : fix soft_max.comp division by zero (#2633)
gn64 [Mon, 16 Dec 2024 10:34:38 +0000 (19:34 +0900)]
vulkan : fix soft_max.comp division by zero (#2633)

This change prevents a division by zero error when p.KY is 0.
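
An illustration of the guard in C++ (the real fix is in the GLSL shader; names are hypothetical): index arithmetic involving KY must be skipped when there is no mask, i.e. when KY == 0.

```cpp
// Only divide/modulo by KY when a mask is present (KY > 0);
// otherwise return a safe dummy index, since the mask is not read.
unsigned mask_row(unsigned rowx, unsigned KY) {
    return KY > 0 ? rowx % KY : 0;
}
```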

common : add cstdio header
Georgi Gerganov [Mon, 16 Dec 2024 06:57:04 +0000 (08:57 +0200)]
common : add cstdio header

stream : update build instructions
Georgi Gerganov [Sun, 15 Dec 2024 19:55:36 +0000 (21:55 +0200)]
stream : update build instructions

android : fix build and ci (#2624)
Thamster [Sat, 14 Dec 2024 15:25:53 +0000 (10:25 -0500)]
android : fix build and ci (#2624)

* Adding missing CMakeLists.txt include for ggml-cpu needed by whisper.android

* attempt to re-enable CI for JNI android

---------

Co-authored-by: Your Name <redacted>
models : fix typo in download-ggml-model.sh (#2623)
Michael Rienstra [Thu, 12 Dec 2024 16:02:00 +0000 (08:02 -0800)]
models : fix typo in download-ggml-model.sh (#2623)

Introduced in #2589

ruby : Sync whisper.cpp and model download feature (#2617)
KITAITI Makoto [Mon, 9 Dec 2024 11:17:50 +0000 (20:17 +0900)]
ruby : Sync whisper.cpp and model download feature (#2617)

* Use C++17

* Add test for Pathname of model

* Make Whisper::Context#initialize accept Pathname

* Add shorthand for pre-converted models

* Update documents

* Add headings to API section in README [skip ci]

* Remove unused function

* Ignore a file that is no longer included

* Cosmetic fix

* Use conditional GET when getting model files

scripts : update to new build system
Georgi Gerganov [Mon, 9 Dec 2024 09:30:16 +0000 (11:30 +0200)]
scripts : update to new build system

devops : add cmake
Georgi Gerganov [Sun, 8 Dec 2024 21:09:26 +0000 (23:09 +0200)]
devops : add cmake

devops : update make commands
Georgi Gerganov [Sun, 8 Dec 2024 21:07:29 +0000 (23:07 +0200)]
devops : update make commands

ggml : remove old files (skip) (#0)
Georgi Gerganov [Sun, 8 Dec 2024 21:04:26 +0000 (23:04 +0200)]
ggml : remove old files (skip) (#0)

ggml : sync remnants (skip) (#0)
Georgi Gerganov [Sun, 8 Dec 2024 20:48:25 +0000 (22:48 +0200)]
ggml : sync remnants (skip) (#0)

scripts : remove amx from sync
Georgi Gerganov [Sun, 8 Dec 2024 20:48:14 +0000 (22:48 +0200)]
scripts : remove amx from sync

ci : disable FreeBSD builds [no ci]
Georgi Gerganov [Sun, 8 Dec 2024 13:52:57 +0000 (15:52 +0200)]
ci : disable FreeBSD builds [no ci]

readme : update build instructions
Georgi Gerganov [Sun, 8 Dec 2024 13:48:14 +0000 (15:48 +0200)]
readme : update build instructions

ci : disable CUDA and Android builds
Georgi Gerganov [Sun, 8 Dec 2024 13:36:01 +0000 (15:36 +0200)]
ci : disable CUDA and Android builds

ci : disable Obj-C build + fixes
Georgi Gerganov [Sun, 8 Dec 2024 11:35:35 +0000 (13:35 +0200)]
ci : disable Obj-C build + fixes

make : shim cmake
Georgi Gerganov [Fri, 6 Dec 2024 13:34:53 +0000 (15:34 +0200)]
make : shim cmake

talk-llama : sync llama.cpp
Georgi Gerganov [Thu, 5 Dec 2024 12:30:33 +0000 (14:30 +0200)]
talk-llama : sync llama.cpp

sync : ggml
Georgi Gerganov [Thu, 5 Dec 2024 12:29:18 +0000 (14:29 +0200)]
sync : ggml

ggml : add predefined list of CPU backend variants to build (llama/10626)
Diego Devesa [Wed, 4 Dec 2024 13:45:40 +0000 (14:45 +0100)]
ggml : add predefined list of CPU backend variants to build (llama/10626)

* ggml : add predefined list of CPU backend variants to build

* update CPU dockerfiles

ggml-cpu : fix HWCAP2_I8MM value (llama/10646)
Diego Devesa [Wed, 4 Dec 2024 13:40:44 +0000 (14:40 +0100)]
ggml-cpu : fix HWCAP2_I8MM value (llama/10646)

vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (llama/10642)
Jeff Bolz [Wed, 4 Dec 2024 07:28:59 +0000 (01:28 -0600)]
vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (llama/10642)
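
The trick: for a divisor d fixed at dispatch time, precompute a multiplier/shift pair on the host so the shader replaces `x / d` with a multiply and a shift. A host-side sketch of the classic round-up method (illustrative; the in-tree shader variant may differ in detail, and `unsigned __int128` assumes GCC/Clang):

```cpp
#include <cassert>
#include <cstdint>

// Precompute (m, s) such that x / d == uint32_t((x * m) >> s) for all
// 32-bit x. The GPU shader then does one multiply and one shift instead
// of a hardware divide.
struct fastdiv_t { uint64_t m; uint32_t s; };

fastdiv_t fastdiv_init(uint32_t d) {
    assert(d != 0);
    uint32_t L = 0;
    while ((1ull << L) < d) L++;            // L = ceil(log2(d))
    const uint32_t s = 32 + L;
    const uint64_t m =                      // m = ceil(2^s / d), fits in 33 bits
        (uint64_t)((((unsigned __int128)1 << s) + d - 1) / d);
    return { m, s };
}

uint32_t fastdiv(uint32_t x, fastdiv_t f) {
    return (uint32_t)(((unsigned __int128)x * f.m) >> f.s);
}
```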

SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend (llama/10584)
Nicolò Scipione [Wed, 4 Dec 2024 01:29:20 +0000 (02:29 +0100)]
SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend (llama/10584)

* [SYCL] Move to Compile Time backend selection on oneMKL Interface for NVIDIA backend

Move to compile-time backend selection to avoid latency at run time.
Apply it to all oneMKL GEMM calls, and only for the NVIDIA backend.

Signed-off-by: nscipione <redacted>
* Formatting

* Address PR comments to increase readability

---------

Signed-off-by: nscipione <redacted>
Avoid using __fp16 on ARM with old nvcc (llama/10616)
Frankie Robertson [Wed, 4 Dec 2024 00:41:37 +0000 (02:41 +0200)]
Avoid using __fp16 on ARM with old nvcc (llama/10616)

vulkan: optimize and reenable split_k (llama/10637)
Jeff Bolz [Tue, 3 Dec 2024 19:29:54 +0000 (13:29 -0600)]
vulkan: optimize and reenable split_k (llama/10637)

Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.

ggml: add `GGML_SET` Metal kernel + i32 CPU kernel (ggml/1037)
PAB [Wed, 4 Dec 2024 08:19:30 +0000 (09:19 +0100)]
ggml: add `GGML_SET` Metal kernel + i32 CPU kernel (ggml/1037)

* implemented cpu kernel

* add i32 test cases in test-backend-ops

* typedef `ggml_metal_kargs_set`

* implemented `kernel_set`

* memcpy

ggml : add `GGML_PAD_REFLECT_1D` operation (ggml/1034)
PAB [Tue, 3 Dec 2024 19:20:04 +0000 (20:20 +0100)]
ggml : add `GGML_PAD_REFLECT_1D` operation (ggml/1034)

* ggml_pad_reflect_1d defined in header

* implemented on CPU

* called the forward pass

* impl Metal kernel

* added Metal kernel

* added OP_PAD_REFLECT_1D in test-backend-ops.cpp

* add test-pad-reflect-1d test case

* test case support multiple backend

files : remove make artifacts
Georgi Gerganov [Tue, 3 Dec 2024 18:29:32 +0000 (20:29 +0200)]
files : remove make artifacts

common : fix compile warning
Georgi Gerganov [Tue, 3 Dec 2024 18:25:37 +0000 (20:25 +0200)]
common : fix compile warning

ggml-ci

ggml : move AMX to the CPU backend (llama/10570)
Diego Devesa [Tue, 3 Dec 2024 18:22:12 +0000 (20:22 +0200)]
ggml : move AMX to the CPU backend (llama/10570)

ggml : automatic selection of best CPU backend (llama/10606)

metal : small-batch mat-mul kernels (llama/10581)
Georgi Gerganov [Tue, 3 Dec 2024 09:52:33 +0000 (11:52 +0200)]
metal : small-batch mat-mul kernels (llama/10581)

* metal : small-batch mat-mul kernels

ggml-ci

* metal : add rest of types

ggml-ci

* metal : final adjustments

ggml-ci

* metal : add comments

ggml-ci

SYCL: Fix and switch to GGML_LOG system instead of fprintf (llama/10579)
Akarshan Biswas [Mon, 2 Dec 2024 07:04:11 +0000 (12:34 +0530)]
SYCL: Fix and switch to GGML_LOG system instead of fprintf (llama/10579)

* Switched to GGML_LOG

* Fix missing semicolon

ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0() (llama/10567)
Adrien Gallouët [Sat, 30 Nov 2024 17:13:18 +0000 (18:13 +0100)]
ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0() (llama/10567)

Signed-off-by: Adrien Gallouët <redacted>
vulkan: Dynamic subgroup size support for Q6_K mat_vec (llama/10536)
Eve [Sat, 30 Nov 2024 07:00:02 +0000 (07:00 +0000)]
vulkan: Dynamic subgroup size support for Q6_K mat_vec (llama/10536)

* subgroup 64 version with subgroup add. 15% faster

scalable version

tested for subgroup sizes 16-128

* check for subgroup multiple of 16 and greater than 16

* subgroup sizes are always a power of 2 (https://github.com/KhronosGroup/GLSL/issues/45)

* force 16 sequential threads per block

* make 16 subgroup size a constant

ggml : fix I8MM Q4_1 scaling factor conversion (llama/10562)
Georgi Gerganov [Fri, 29 Nov 2024 14:25:39 +0000 (16:25 +0200)]
ggml : fix I8MM Q4_1 scaling factor conversion (llama/10562)

ggml-ci

ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (llama/10580)
Shupei Fan [Fri, 29 Nov 2024 13:49:02 +0000 (21:49 +0800)]
ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (llama/10580)

sycl : offload of get_rows set to 0 (llama/10432)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 12:38:45 +0000 (12:38 +0000)]
sycl : offload of get_rows set to 0 (llama/10432)

sycl : Reroute permuted mul_mats through oneMKL (llama/10408)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 09:49:43 +0000 (09:49 +0000)]
sycl : Reroute permuted mul_mats through oneMKL (llama/10408)

This PR fixes the failing MUL_MAT tests for the sycl backend.

CANN: RoPE operator optimization (llama/10563)
Chenguang Li [Fri, 29 Nov 2024 06:46:55 +0000 (14:46 +0800)]
CANN: RoPE operator optimization (llama/10563)

* [cann] RoPE operator optimization

* [CANN] Code Formatting

---------

Co-authored-by: noemotiovon <redacted>
vulkan: get the first command buffer submitted sooner (llama/10499)
Jeff Bolz [Fri, 29 Nov 2024 06:18:02 +0000 (00:18 -0600)]
vulkan: get the first command buffer submitted sooner (llama/10499)

This is an incremental improvement over #9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.

With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.
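
A sketch of the ramp-up described here, with hypothetical types and a stubbed `submit_batch` (the second part of the change, reducing dry-run overhead, is not shown):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct node_t {};  // stand-in for a graph node

void submit_batch(const node_t * /*nodes*/, size_t /*count*/) {
    // record + submit one command buffer for these nodes (stubbed here)
}

void run_graph(const std::vector<node_t> & nodes) {
    size_t batch = 20;             // small first submit so the GPU starts early
    const size_t max_batch = 100;  // steady-state nodes per submit
    for (size_t i = 0; i < nodes.size();) {
        const size_t n = std::min(batch, nodes.size() - i);
        submit_batch(nodes.data() + i, n);
        i += n;
        batch = std::min(batch * 2, max_batch);  // ramp toward 100 nodes/submit
    }
}
```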

ggml : remove redundant copyright notice + update authors
Georgi Gerganov [Thu, 28 Nov 2024 18:46:40 +0000 (20:46 +0200)]
ggml : remove redundant copyright notice + update authors

ggml : fix row condition for i8mm kernels (llama/10561)
Georgi Gerganov [Thu, 28 Nov 2024 12:56:37 +0000 (14:56 +0200)]
ggml : fix row condition for i8mm kernels (llama/10561)

ggml-ci

cmake : fix ARM feature detection (llama/10543)
Georgi Gerganov [Thu, 28 Nov 2024 12:56:23 +0000 (14:56 +0200)]
cmake : fix ARM feature detection (llama/10543)

ggml-ci

ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541)
Shupei Fan [Thu, 28 Nov 2024 12:52:03 +0000 (20:52 +0800)]
ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541)

* ggml-cpu: support IQ4_NL_4_4 by runtime repack

* ggml-cpu: add __ARM_FEATURE_DOTPROD guard

kompute : improve backend to pass test_backend_ops (llama/10542)
Sergio López [Thu, 28 Nov 2024 11:51:38 +0000 (12:51 +0100)]
kompute : improve backend to pass test_backend_ops (llama/10542)

* kompute: op_unary: reject unsupported parameters

Signed-off-by: Sergio Lopez <redacted>
* kompute: softmax: implement ALiBi support

Signed-off-by: Sergio Lopez <redacted>
* kompute: rope: implement neox and phi3 support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q4_k permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_[q4_0|q4_1|q8_0] permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_f16 permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q6_k permuted support

Signed-off-by: Sergio Lopez <redacted>
---------

Signed-off-by: Sergio Lopez <redacted>
CANN: Fix SOC_TYPE compile bug (llama/10519)
leo-pony [Thu, 28 Nov 2024 07:25:24 +0000 (15:25 +0800)]
CANN: Fix SOC_TYPE compile bug (llama/10519)

* CANN: Fix the build failure on Ascend310P in two cases:
1) SOC_TYPE is specified manually
2) Some unusual compile environments

* Update the CANN backend news content: support F16 and F32 data type models for the Ascend 310P NPU.

* fix CANN compile failure: the assert in the Ascend kernel function isn't supported on some CANN versions

CANN: ROPE operator optimization (llama/10540)
Chenguang Li [Thu, 28 Nov 2024 06:24:46 +0000 (14:24 +0800)]
CANN: ROPE operator optimization (llama/10540)

* [cann] ROPE operator optimization

Co-authored-by: noemotiovon <redacted>
Add some minimal optimizations for CDNA (llama/10498)
uvos [Wed, 27 Nov 2024 16:10:08 +0000 (17:10 +0100)]
Add some minimal optimizations for CDNA (llama/10498)

* Add some minimal optimizations for CDNA

* ggml_cuda: set launch bounds also for GCN as it helps there too

metal : fix group_norm support condition (llama/0)
Georgi Gerganov [Wed, 27 Nov 2024 09:22:14 +0000 (11:22 +0200)]
metal : fix group_norm support condition (llama/0)

vulkan: define all quant data structures in types.comp (llama/10440)
Jeff Bolz [Wed, 27 Nov 2024 07:32:54 +0000 (01:32 -0600)]
vulkan: define all quant data structures in types.comp (llama/10440)

vulkan: Handle GPUs with less shared memory (llama/10468)
Jeff Bolz [Wed, 27 Nov 2024 07:30:27 +0000 (01:30 -0600)]
vulkan: Handle GPUs with less shared memory (llama/10468)

There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. #10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.
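
A sketch of the selection logic under these assumptions (tile sizes are illustrative; the real shaders pick from their own tile configurations):

```cpp
#include <vulkan/vulkan.h>

struct tile_config { unsigned m, n; };

tile_config pick_tile(VkPhysicalDevice dev, bool & mul_mat_id_on_gpu) {
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(dev, &props);
    const uint32_t shmem = props.limits.maxComputeSharedMemorySize;

    mul_mat_id_on_gpu = shmem > 16384;      // 16 KB: not enough for mul_mat_id
    if (shmem >= 49152) return {128, 128};  // large tile (illustrative numbers)
    return {64, 64};                        // smaller tile for <= 32 KB devices
}
```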

vulkan: further optimize q5_k mul_mat_vec (llama/10479)
Jeff Bolz [Wed, 27 Nov 2024 07:21:59 +0000 (01:21 -0600)]
vulkan: further optimize q5_k mul_mat_vec (llama/10479)

vulkan: skip integer div/mod in get_offsets for batch_idx==0 (llama/10506)
Jeff Bolz [Wed, 27 Nov 2024 07:08:54 +0000 (01:08 -0600)]
vulkan: skip integer div/mod in get_offsets for batch_idx==0 (llama/10506)

vulkan: optimize Q2_K and Q3_K mul_mat_vec (llama/10459)
Jeff Bolz [Wed, 27 Nov 2024 07:00:50 +0000 (01:00 -0600)]
vulkan: optimize Q2_K and Q3_K mul_mat_vec (llama/10459)

mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516)
R0CKSTAR [Tue, 26 Nov 2024 16:00:41 +0000 (00:00 +0800)]
mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516)

Signed-off-by: Xiaodong Ye <redacted>
vulkan: fix group_norm (llama/10496)
Jeff Bolz [Tue, 26 Nov 2024 15:45:05 +0000 (09:45 -0600)]
vulkan: fix group_norm (llama/10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes https://github.com/leejet/stable-diffusion.cpp/issues/439.
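
The bug class, illustrated (a sketch, not the shader code): the end of each workgroup's slice has to be clamped to the element count.

```cpp
#include <algorithm>

void norm_range(unsigned group, unsigned per_group, unsigned total,
                unsigned & start, unsigned & end) {
    start = group * per_group;
    end   = std::min(start + per_group, total);  // clamp; don't assume a full slice
}
```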

cmake : enable warnings in llama (llama/10474)
Georgi Gerganov [Tue, 26 Nov 2024 12:18:08 +0000 (14:18 +0200)]
cmake : enable warnings in llama (llama/10474)

* cmake : enable warnings in llama

ggml-ci

* cmake : add llama_get_flags and respect LLAMA_FATAL_WARNINGS

* cmake : get_flags -> ggml_get_flags

* speculative-simple : fix warnings

* cmake : reuse ggml_get_flags

ggml-ci

* speculative-simple : fix compile warning

ggml-ci

ggml-cpu: cmake add arm64 cpu feature check for macos (llama/10487)
Charles Xu [Tue, 26 Nov 2024 11:37:05 +0000 (12:37 +0100)]
ggml-cpu: cmake add arm64 cpu feature check for macos (llama/10487)

* ggml-cpu: cmake add arm64 cpu feature check for macos

* use vmmlaq_s32 for compile option i8mm check

CANN: Improve the Inferencing Performance for Ascend NPU Device (llama/10454)
Shanshan Shen [Tue, 26 Nov 2024 10:08:37 +0000 (18:08 +0800)]
CANN: Improve the Inferencing Performance for Ascend NPU Device (llama/10454)

* improve inference performance for the Ascend NPU.

Co-authored-by: Frank Mai <redacted>
* some modification after review

* some modifications after review

* restore some modifications

* restore some modifications

---------

Co-authored-by: shanshan shen <redacted>
Co-authored-by: Frank Mai <redacted>
CANN: RoPE and CONCAT operator optimization (llama/10488)
Chenguang Li [Tue, 26 Nov 2024 09:31:05 +0000 (17:31 +0800)]
CANN: RoPE and CONCAT operator optimization (llama/10488)

Co-authored-by: noemotiovon <redacted>
vulkan: Fix a vulkan-shaders-gen argument parsing error (llama/10484)
Junil Kim [Tue, 26 Nov 2024 01:47:20 +0000 (10:47 +0900)]
vulkan: Fix a vulkan-shaders-gen argument parsing error (llama/10484)

vulkan-shaders-gen was not parsing the --no-clean argument correctly:
the previous code only parsed arguments that take a value, and
--no-clean takes none. With this commit, arguments without values are
parsed correctly as well.
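
A sketch of the corrected pattern with hypothetical names (flags known to take no value are consumed as a single token):

```cpp
#include <map>
#include <set>
#include <string>

std::map<std::string, std::string> parse_args(int argc, char ** argv) {
    const std::set<std::string> valueless = { "--no-clean" };
    std::map<std::string, std::string> args;
    for (int i = 1; i < argc; i++) {
        const std::string arg = argv[i];
        if (valueless.count(arg)) {
            args[arg] = "";         // flag only, consume one token
        } else if (i + 1 < argc) {
            args[arg] = argv[++i];  // key/value pair, consume two tokens
        }
    }
    return args;
}
```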

metal : enable mat-vec kernels for bs <= 4 (llama/10491)
Georgi Gerganov [Mon, 25 Nov 2024 19:49:31 +0000 (21:49 +0200)]
metal : enable mat-vec kernels for bs <= 4 (llama/10491)

llama : accept a list of devices to use to offload a model (llama/10497)
Diego Devesa [Mon, 25 Nov 2024 18:30:06 +0000 (19:30 +0100)]
llama : accept a list of devices to use to offload a model (llama/10497)

* llama : accept a list of devices to use to offload a model

* accept `--dev none` to completely disable offloading

* fix dev list with dl backends

* rename env parameter to LLAMA_ARG_DEVICE for consistency

ggml : add support for dynamic loading of backends (llama/10469)
Diego Devesa [Mon, 25 Nov 2024 14:13:39 +0000 (15:13 +0100)]
ggml : add support for dynamic loading of backends (llama/10469)

* ggml : add support for dynamic loading of backends

---------

Co-authored-by: Georgi Gerganov <redacted>
metal : minor code formatting
Georgi Gerganov [Mon, 25 Nov 2024 13:08:04 +0000 (15:08 +0200)]
metal : minor code formatting

ggml : do not use ARM features not included in the build (llama/10457)
Diego Devesa [Sat, 23 Nov 2024 13:41:12 +0000 (14:41 +0100)]
ggml : do not use ARM features not included in the build (llama/10457)

CANN: Support Ascend310P to accelerate F32 and F16 Model (llama/10216)
leo-pony [Fri, 22 Nov 2024 06:07:20 +0000 (14:07 +0800)]
CANN: Support Ascend310P to accelerate F32 and F16 Model (llama/10216)

* CANN Support Ascend310P to accelerate F32 and F16 Model

* Add compile option soc type macro ASCEND_310P to ggml-cann lib

* Remove unused code

* Remove the hard-coded Ascend soc_type compile option in CMakeLists.txt

cuda : optimize argmax (llama/10441)
Diego Devesa [Thu, 21 Nov 2024 17:18:50 +0000 (18:18 +0100)]
cuda : optimize argmax (llama/10441)

* cuda : optimize argmax

* remove unused parameter

ggml-ci

* fixup : use full warps

ggml-ci

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* fix ub

* ggml : check ne00 <= INT32_MAX in argmax and argsort

---------

Co-authored-by: Johannes Gäßler <redacted>
vulkan: predicate max operation in soft_max shaders/soft_max (llama/10437)
Jeff Bolz [Wed, 20 Nov 2024 19:47:36 +0000 (13:47 -0600)]
vulkan: predicate max operation in soft_max shaders/soft_max (llama/10437)

Fixes #10434

vulkan: copy iq4_nl LUT into shared memory (llama/10409)
Jeff Bolz [Wed, 20 Nov 2024 07:40:18 +0000 (01:40 -0600)]
vulkan: copy iq4_nl LUT into shared memory (llama/10409)