git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Christian Kastner [Tue, 27 May 2025 11:18:39 +0000 (13:18 +0200)]
ggml-cpu: x86 feature detection is specific to x86 (llama/13811)

Diego Devesa [Tue, 27 May 2025 11:05:18 +0000 (04:05 -0700)]
ggml : allow CUDA graphs when using pipeline parallelism (llama/13814)

Georgi Gerganov [Mon, 26 May 2025 19:14:52 +0000 (22:14 +0300)]
cuda : avoid cuGetErrorString (llama/13791)

ggml-ci

Akarshan Biswas [Mon, 26 May 2025 15:40:36 +0000 (21:10 +0530)]
SYCL: Add non contiguous support in RMS_NORM and NORM kernels (llama/13611)

* SYCL: Add non contiguous input support to norm kernel

* refactor and add RMS_NORM non contiguous input support

ggml-ci

* restore subgroup reduction for multi-subgroup thread blocks in norm kernels

* Swap grid dims of nsamples and nrows

ggml-ci

* Revert "Swap grid dims of nsamples and nrows"

This reverts commit 43be2d657fec7f7fba54e2cd154106bc0fc45adf.

* revert changes that were not required
ggml-ci

* address review comments: make it more SYCL-like

* Use a common function to calculate offset

* remove wrap around logic for handling broadcasts

* remove static from calculate_offset fn and use ceil_div
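
A minimal sketch of what such a `ceil_div` helper typically looks like in ggml backends (illustrative, not the exact SYCL code), rounding an integer division up when computing grid sizes:

```cpp
// Integer division rounded up; used e.g. to compute how many work-groups
// are needed to cover n elements with block_size items each.
static constexpr int ceil_div(const int a, const int b) {
    return (a + b - 1) / b;
}

static_assert(ceil_div(10, 4) == 3, "10 elements need 3 blocks of 4");
```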

Romain Biessy [Mon, 26 May 2025 08:28:53 +0000 (10:28 +0200)]
sycl: Add more debug prints (llama/13640)

Jeff Bolz [Mon, 26 May 2025 04:02:07 +0000 (23:02 -0500)]
vulkan: mark IM2COL as supporting non-contig (llama/13783)

Bizhao Shi [Mon, 26 May 2025 02:20:18 +0000 (10:20 +0800)]
CANN: Add the basic supports of Flash Attention kernel (llama/13627)

* cann: add the basic FA support

* cann: update the readme

* cann: update the FlashAttention with PSEShift

* cann: update the input parameters in FA

* cann: update the alibi with max_bias

* cann: add the constraints of softcap

* cann: update the docs CANN.md

* cann: update the docs CANN.md

* cann: fix typo of CANN.md

* cann: add some comments and update the CANN.md

* cann: update the CANN.md

* cann: update the inner precise for fusedInferAttention

* cann: update the constraints of flash_attn_ext on ggml-cann.cpp

* cann: clean the whitespace

* cann: clean the whitespace

* cann: add a new endline

Akarshan Biswas [Sun, 25 May 2025 07:08:37 +0000 (12:38 +0530)]
SYCL: revert "sycl: simplify bin_bcast_kernel (ggml/13383)" (llama/13752)

Temporarily reverted due to a failing fp16 DIV operation

This reverts commit 02cdd2d8b092b5a4bb18e013c6887ce49ba20ac5.

ggml-ci

Diego Devesa [Sat, 24 May 2025 20:26:47 +0000 (13:26 -0700)]
ggml-cpu : set openmp wait time if not set (llama/13758)

Xuan-Son Nguyen [Sat, 24 May 2025 11:06:47 +0000 (13:06 +0200)]
ggml : add ggml_gelu_erf() CUDA kernel (llama/13719)

* ggml : add ggml_gelu_erf() CUDA kernel

* missing semicolon

Johannes Gäßler [Sat, 24 May 2025 09:46:19 +0000 (11:46 +0200)]
CUDA: fix race condition in FA vector kernels (llama/13742)

Chenguang Li [Fri, 23 May 2025 08:47:53 +0000 (16:47 +0800)]
CANN: Support MUL_MAT_ID for q8_0 and q4_0 (llama/13705)

* [CANN] Support MUL_MAT_ID Q8 && Q4

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Xuan-Son Nguyen [Fri, 23 May 2025 06:12:48 +0000 (08:12 +0200)]
ggml : fix the order of ggml_unary_op (llama/13718)

Jeff Bolz [Fri, 23 May 2025 04:45:02 +0000 (00:45 -0400)]
vulkan: support CPY from any type to itself (llama/13695)

Reuse the f16/f32 copy shaders, and just scale the number of elements
according to the type size.
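
A rough sketch of the scaling idea (names here are illustrative, not the actual shader dispatch code): copying a type to itself is just a byte copy, so the existing f16 copy shader can be reused by recomputing how many f16-sized units it must move.

```cpp
#include <cstddef>
#include <cstdint>

// Number of f16-sized units the f16 copy shader must process to move
// n_elements values of a type that is type_size bytes wide.
static size_t copy_units(const size_t n_elements, const size_t type_size) {
    const size_t total_bytes = n_elements * type_size;
    return total_bytes / sizeof(uint16_t); // assumes total_bytes is even
}
```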

Jeff Bolz [Fri, 23 May 2025 04:33:45 +0000 (00:33 -0400)]
vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (llama/13696)

Judd [Fri, 23 May 2025 04:33:08 +0000 (12:33 +0800)]
use LOG_WARN to replace `std::cerr` (llama/13657)

Nicolò Scipione [Thu, 22 May 2025 11:54:43 +0000 (13:54 +0200)]
sycl : Remove waits from function calls (llama/13702)

* removes the waits in async memcpy functions

Ewan Crawford [Thu, 22 May 2025 08:24:09 +0000 (09:24 +0100)]
SYCL: Avoid using with SYCL-Graph for unsupported nodes (llama/13587)

Currently, on the SYCL CUDA backend, running
`GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0` hits two
operations that throw an exception from the blocking waits during queue
recording.

* `-o CONCAT` : Use of blocking waits on a queue that's being recorded https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/concat.cpp#L185-L187
* `-o MUL_MAT_ID`: Blocking wait on a recording queue for a copy to host memory https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/ggml-sycl.cpp#L3072-L3074

We've noticed that `ggml-cuda.cu` has the
[check_node_graph_compatibility_and_refresh_copy_ops](https://github.com/ggml-org/llama.cpp/blob/39e73ae0d69f882d7e29cecc6dd8f5052fca6731/ggml/src/ggml-cuda/ggml-cuda.cu#L2458-L2458)
method for checking whether a graph can be used, even when graphs are
enabled. I've taken a similar approach in this PR by adding a method to
`ggml-sycl.cpp` that checks whether a graph can be used for the given
operations, even if the user has asked for graphs to be enabled.
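
A hedged sketch of what such a compatibility check can look like (the function name and the exact set of rejected ops are illustrative, not the actual ggml-sycl.cpp code):

```cpp
#include "ggml.h"

// Fall back to regular submission if any node in the graph is an op that is
// known to perform a blocking wait while the queue is being recorded.
static bool sycl_graph_compatible(struct ggml_cgraph * cgraph) {
    for (int i = 0; i < ggml_graph_n_nodes(cgraph); ++i) {
        switch (ggml_graph_node(cgraph, i)->op) {
            case GGML_OP_CONCAT:     // blocking waits during queue recording
            case GGML_OP_MUL_MAT_ID: // blocking copy to host memory
                return false;
            default:
                break;
        }
    }
    return true;
}
```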

Henry Linjamäki [Wed, 21 May 2025 23:21:45 +0000 (02:21 +0300)]
opencl: Add support for multiple devices (llama/12622)

* opencl: Add support for multiple devices

... but limited to one platform. A platform with a GPU will be preferred.

Additionally:

* Filter out devices that lack capabilities needed by the backend
  implementation (half support, OpenCL 2.0+, etc).

* Make ggml_backend_opencl_reg() thread-safe.

* fixup: fix an error in sync_with_other_backends

... when there is only one OpenCL device available.
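
A sketch of the capability-filtering idea using the standard OpenCL host API (the fp16 check shown is one of the criteria mentioned above; the function is illustrative, not the actual backend code):

```cpp
#include <CL/cl.h>
#include <string>
#include <vector>

// Keep only devices that advertise fp16 support via the cl_khr_fp16 extension.
static std::vector<cl_device_id> usable_devices(cl_platform_id platform) {
    cl_uint n = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &n);
    std::vector<cl_device_id> devs(n);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, n, devs.data(), nullptr);

    std::vector<cl_device_id> out;
    for (cl_device_id d : devs) {
        size_t sz = 0;
        clGetDeviceInfo(d, CL_DEVICE_EXTENSIONS, 0, nullptr, &sz);
        std::string ext(sz, '\0');
        clGetDeviceInfo(d, CL_DEVICE_EXTENSIONS, sz, &ext[0], nullptr);
        if (ext.find("cl_khr_fp16") != std::string::npos) {
            out.push_back(d);
        }
    }
    return out;
}
```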

Henry Linjamäki [Wed, 21 May 2025 20:21:17 +0000 (23:21 +0300)]
opencl: fix couple crashes (llama/12795)

* opencl: fix couple crashes

* fix kernel launches failing on devices which do not support
  non-uniform work-groups. When non-uniform work-groups are not
  supported, set `local_work_size` to NULL (= let the driver choose the
  work-group sizes). This patch does not cover everything - just the
  cases tested by test-backend-ops.

* fix sub-buffer creation failing due to `cl_buffer_region::origin` not
  being aligned to `CL_DEVICE_MEM_BASE_ADDR_ALIGN`.

* OpenCL: query non-uniform WG sizes only on OpenCL 3.0+
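
For the alignment part of the fix, a minimal sketch (illustrative, not the actual patch): `CL_DEVICE_MEM_BASE_ADDR_ALIGN` is reported in bits, and a sub-buffer origin must be a multiple of it in bytes. For the work-group part, the fix is simply to pass `NULL` as `local_work_size` to `clEnqueueNDRangeKernel` on devices without non-uniform work-group support.

```cpp
#include <CL/cl.h>

// Round a sub-buffer origin down to the device's base address alignment.
static size_t align_origin(cl_device_id dev, size_t origin) {
    cl_uint align_bits = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MEM_BASE_ADDR_ALIGN,
                    sizeof(align_bits), &align_bits, nullptr);
    const size_t align_bytes = align_bits / 8;
    return align_bytes ? origin - origin % align_bytes : origin;
}
```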

Xuan-Son Nguyen [Wed, 21 May 2025 14:26:33 +0000 (16:26 +0200)]
ggml : add ggml_gelu_erf() (llama/13667)

* ggml : add ggml_gelu_na (not approximated)

* fix naming order

* rename na --> erf

* apply review suggestions

* revert naming order
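
For reference, the exact (erf-based) GELU that this op computes, as opposed to the common tanh approximation; a minimal scalar sketch, not the actual ggml kernel:

```cpp
#include <cmath>

// GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
static float gelu_erf_ref(const float x) {
    return 0.5f * x * (1.0f + std::erf(x / std::sqrt(2.0f)));
}
```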

R0CKSTAR [Wed, 21 May 2025 01:58:49 +0000 (09:58 +0800)]
musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (llama/13647)

* musa: fix build warning (unused parameter)

Signed-off-by: Xiaodong Ye <redacted>
* musa: upgrade MUSA SDK version to rc4.0.1

Signed-off-by: Xiaodong Ye <redacted>
* musa: use mudnn::Unary::IDENTITY op to accelerate D2D memory copy

Signed-off-by: Xiaodong Ye <redacted>
* Update ggml/src/ggml-cuda/cpy.cu

Co-authored-by: Johannes Gäßler <redacted>
* musa: remove MUDNN_CHECK_GEN and use CUDA_CHECK_GEN instead in MUDNN_CHECK

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Eve [Tue, 20 May 2025 21:35:16 +0000 (21:35 +0000)]
vulkan: fix warnings (llama/13626)

* small fixes

* remove ifdef

Johannes Gäßler [Tue, 20 May 2025 12:45:07 +0000 (14:45 +0200)]
CUDA: skip fully masked-out KV in FA vec kernel (llama/13584)

* CUDA: skip fully masked-out KV in FA vec kernel

Svetlozar Georgiev [Tue, 20 May 2025 09:34:15 +0000 (10:34 +0100)]
sycl: disable reorder for sycl mulmat (llama/13536)

Georgi Gerganov [Tue, 20 May 2025 07:41:40 +0000 (10:41 +0300)]
metal : fix typo in FA kernel comments (llama/13651)

Nicolò Scipione [Tue, 20 May 2025 00:54:43 +0000 (02:54 +0200)]
sycl : Overcoming workaround for mmap() allocation on Windows (llama/13482)

* Remove mmap workaround on windows

After some testing I found that mmap is supported on Windows and for
many GPUs on Linux, so I removed the workaround for Windows since it is
not necessary.

* Update llama-bench README

The SYCL backend introduced a workaround that allows running
llama-bench without specifying the `--mmp 0` flag.

0cc4m [Mon, 19 May 2025 15:54:08 +0000 (17:54 +0200)]
Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (llama/13607)

Georgi Gerganov [Tue, 27 May 2025 14:06:49 +0000 (17:06 +0300)]
sync : ggml

Daniel Bevenius [Tue, 27 May 2025 08:53:50 +0000 (10:53 +0200)]
docs : convert README_sycl.md to utf8 format [no ci] (#3191)

This commit updates the README_sycl.md file to use UTF-8 encoding.

The motivation for this is that while the file displays correctly on
GitHub, it fails to render with tools that expect UTF-8 encoding, for
example when using `grip` to view the file locally.

Daniel Bevenius [Tue, 27 May 2025 03:51:47 +0000 (05:51 +0200)]
node : enable no_prints to suppress all output (#3189)

This commit enables the node addon to suppress all output, including
the transcription result, when the no_prints parameter is set to true.

The motivation for this is that the node addon provides a fulfillment
handler/success callback to process the transcription result, and it
can be useful to disable printing of the result to the console so that
the user can handle it in their own way.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3176

matteng1 [Mon, 26 May 2025 05:57:39 +0000 (07:57 +0200)]
talk-llama : fix for swedish umlauts + expose model inference settings in talk-llama.cpp (#3187)

Quick fix so that Swedish umlauts are no longer removed.

* Update talk-llama.cpp

Expose model inference settings to the user instead of hard-coding them. The defaults are unchanged.

* Update examples/talk-llama/talk-llama.cpp

Co-authored-by: Georgi Gerganov <redacted>
KITAITI Makoto [Fri, 23 May 2025 08:38:26 +0000 (17:38 +0900)]
docs : fix VAD section heading levels (#3186)

Daniel Bevenius [Fri, 23 May 2025 03:48:08 +0000 (05:48 +0200)]
ci : use dynamic libopenblas.dll for window-blas (#3177)

* ci : use dynamic libopenblas.dll for window-blas

This commit updates the windows-blas job to use the dynamic (can load
different kernels depending on the CPU arch) libopenblas.dll instead of
the "static" openblas.dll that gets installed by vcpkg.

The motivation for this change is that there have been reports of
performance drops in later versions, specifically related to BLAS.
Please see the links below for more details.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3166
Refs: https://github.com/ggml-org/whisper.cpp/issues/2666#issuecomment-2885978811

Sacha Arbonel [Thu, 22 May 2025 08:03:04 +0000 (10:03 +0200)]
server : Add k6 Load Testing Script (#3175)

* add load testing script and update README for k6 integration

Daniel Bevenius [Thu, 22 May 2025 05:49:29 +0000 (07:49 +0200)]
docs : add VAD model download instructions [no ci] (#3180)

Alpaim [Thu, 22 May 2025 03:49:44 +0000 (06:49 +0300)]
docs : replace typo "]"with ")" in README (#3179)

Daniel Bevenius [Wed, 21 May 2025 11:23:20 +0000 (13:23 +0200)]
whisper : remove redundant assignments (#3178)

This commit removes some redundant assignments in the function
`whisper_exp_compute_token_level_timestamps`.

The motivation for this is that tokens[j] and token are references to
the same object, which can be a little confusing when reading the code.

Jugal Haresh Sheth [Tue, 20 May 2025 09:58:25 +0000 (10:58 +0100)]
whisper : update CMakeLists.txt to handle deprecated gpu Warnings (#3163)

* Fix CMakeLists.txt to handle deprecated gpu Warnings

* Conditionally apply -Wno-deprecated-gpu-targets only when GGML_CUDA is enabled

* Conditionally apply -Wno-deprecated-gpu-targets only when GGML_CUDA is enabled and not MSVC

---------

Co-authored-by: Jugal Sheth <redacted>
Daniel Bevenius [Mon, 19 May 2025 15:59:43 +0000 (17:59 +0200)]
ruby : add GGML_SYCL_DNN option to ruby bindings (#3172)

This commit adds the `GGML_SYCL_DNN` option to the Ruby bindings for
the GGML library. This option was added to ggml in
commit 5e7e07758a5f3172380500e173ca71f679bbef1e ("sycl: use oneDNN for
matrices multiplication").

The motivation for this change is to enable the CI build to pass.

Georgi Gerganov [Mon, 19 May 2025 10:39:12 +0000 (13:39 +0300)]
talk-llama : sync llama.cpp

ggml-ci

Georgi Gerganov [Mon, 19 May 2025 10:38:44 +0000 (13:38 +0300)]
sync : ggml

ggml-ci

Chenguang Li [Mon, 19 May 2025 06:21:17 +0000 (14:21 +0800)]
CANN: Support MOE Model MUL_MAT_ID (llama/13042)

Signed-off-by: noemotiovon <redacted>
Gilad S. [Sat, 17 May 2025 18:26:43 +0000 (21:26 +0300)]
cmake: use the current build config for vulkan-shaders-gen (llama/13595)

* fix: use the current build config for `vulkan-shaders-gen`

* fix: only pass a valid build type to `--config`

Jeff Bolz [Sat, 17 May 2025 07:14:55 +0000 (16:14 +0900)]
vulkan: move common FA code to flash_attn_base.comp (llama/13556)

* vulkan: move common FA code to flash_attn_base.comp

* vulkan: move common FA index/stride setup code to flash_attn_base.comp

* build fix

Jeff Bolz [Sat, 17 May 2025 06:35:47 +0000 (15:35 +0900)]
vulkan: use scalar FA rather than coopmat2 when N==1 (llama/13554)

Georgi Gerganov [Fri, 16 May 2025 17:32:58 +0000 (20:32 +0300)]
metal : add FA-vec kernel for head size 64 (llama/13583)

ggml-ci

Łukasz Ślusarczyk [Fri, 16 May 2025 10:15:29 +0000 (12:15 +0200)]
sycl : fixed compilation warnings (llama/13582)

Diego Devesa [Thu, 15 May 2025 17:13:11 +0000 (10:13 -0700)]
gguf : use ggml log system (llama/13571)

* gguf : use ggml log system

* llama : remove unnecessary new lines in exception messages

Atharva Dubey [Thu, 15 May 2025 15:39:52 +0000 (16:39 +0100)]
sycl: simplify bin_bcast_kernel (llama/13383)

Svetlozar Georgiev [Thu, 15 May 2025 15:35:44 +0000 (16:35 +0100)]
sycl: reordered Q4_K MMVQ (llama/13109)

Łukasz Ślusarczyk [Thu, 15 May 2025 14:53:41 +0000 (16:53 +0200)]
sycl: use oneDNN for matrices multiplication (llama/12972)

Yibo Cai [Wed, 14 May 2025 19:53:52 +0000 (03:53 +0800)]
arm64: optimize q6_k_q8_k kernel with i8mm (llama/13519)

This PR improves the q6_k_q8_k GEMM kernel with the arm64 i8mm
instruction.

Tested on neoverse-n2 with a llama3 8b q6_k quantized model.
- 40% ~ 54% S_PP uplift for all batch sizes
- 16% ~ 47% S_TG uplift for batch size 4 and above

Perplexity doesn't change with this PR.

```
// tested on neoverse-n2
$ llama-batched-bench \
      -m Meta-Llama-3-8B-Instruct-Q6_K.gguf \
      --no-mmap -fa \
      -c 8192 -b 4096 -ub 512 -npp 128 -ntg 128 \
      -npl 1,2,4,8,16,32 \
      -t 64

---------------------------------------------------------------------
|    PP |     TG |    B |       S_PP t/s      |       S_TG t/s      |
|       |        |      | original |  this pr | original |  this pr |
|-------|--------|------|----------|----------|----------|----------|
|   128 |    128 |    1 |    78.52 |   109.18 |    18.63 |    18.88 |
|   128 |    128 |    2 |    84.62 |   123.94 |    34.54 |    36.92 |
|   128 |    128 |    4 |    84.36 |   122.49 |    52.65 |    61.32 |
|   128 |    128 |    8 |    90.52 |   138.87 |    63.46 |    84.41 |
|   128 |    128 |   16 |    90.11 |   138.56 |    71.04 |   101.33 |
|   128 |    128 |   32 |    89.81 |   137.79 |    75.14 |   110.47 |
---------------------------------------------------------------------
```
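
For context, a hedged sketch of the i8mm building block (illustrative, not the actual q6_k_q8_k kernel): the smmla instruction, exposed as the `vmmlaq_s32` intrinsic, treats its int8 operands as 2x8 matrices and accumulates the 2x2 int32 product a*b^T, i.e. 16 multiply-adds per instruction.

```cpp
#include <arm_neon.h> // requires i8mm, e.g. -march=armv8.2-a+i8mm

// acc (2x2 int32) += a (2x8 int8) * b^T (8x2 int8)
static int32x4_t mma_2x2(int32x4_t acc, int8x16_t a, int8x16_t b) {
    return vmmlaq_s32(acc, a, b);
}
```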

Johannes Gäßler [Wed, 14 May 2025 14:41:02 +0000 (16:41 +0200)]
CUDA: fix crash on large batch size for quant. MoE (llama/13537)

Johannes Gäßler [Wed, 14 May 2025 14:08:20 +0000 (16:08 +0200)]
CUDA: faster Deepseek FA, add Turing support (llama/13435)

bandoti [Wed, 14 May 2025 10:53:57 +0000 (07:53 -0300)]
cmake: simplify vulkan shader test logic (llama/13263)

Jeff Bolz [Wed, 14 May 2025 09:55:26 +0000 (18:55 +0900)]
vulkan: KHR_coopmat flash attention (llama/13506)

This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons, so I haven't done it. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may come from other optimizations like
staging through shared memory, or splitting by rows.

Jeff Bolz [Wed, 14 May 2025 04:15:50 +0000 (13:15 +0900)]
vulkan: workaround FA compile failures on macos (llama/13517)

Georgi Gerganov [Tue, 13 May 2025 15:04:39 +0000 (18:04 +0300)]
metal : use FA-vec kernel up to batch size 20 (llama/13496)

* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

ggml-ci

* metal : use FA-vec kernel up to batch size 20

ggml-ci

Georgi Gerganov [Tue, 13 May 2025 15:04:00 +0000 (18:04 +0300)]
metal : optimize multi-sequence FA vec kernel (llama/13493)

* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

ggml-ci

Dan Johansson [Tue, 13 May 2025 15:02:28 +0000 (17:02 +0200)]
ggml-cpu: Update KleidiAI to v1.6 and fix include directives (llama/13509)

Signed-off-by: Dan Johansson <redacted>
Johannes Gäßler [Mon, 19 May 2025 07:33:35 +0000 (09:33 +0200)]
mnist: fix segmentation fault (ggml/1227)

Diego Devesa [Mon, 19 May 2025 01:30:13 +0000 (18:30 -0700)]
ggml : fix apple OS check in ggml_print_backtrace (ggml/1229)

Daniel Tang [Sat, 17 May 2025 23:06:26 +0000 (19:06 -0400)]
ggml : Fix missing backtrace on Linux (ggml/1228)

* Modern Linux defaults /proc/sys/kernel/yama/ptrace_scope to 1
* Fixed lldb attach
* Simplify by having the child do ggml_print_backtrace_symbols
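
Background for the first bullet, as a small sketch (not the ggml code): Yama's ptrace_scope sysctl restricts who may ptrace-attach to a process, and with the modern default of 1 an attach is only allowed from an ancestor or after the target opts in via prctl(PR_SET_PTRACER).

```cpp
#include <cstdio>

// Read the current Yama ptrace scope; 0 = classic permissive behavior,
// 1 = restricted (the modern default), -1 = unknown / Yama not present.
static int ptrace_scope() {
    int v = -1;
    if (FILE * f = std::fopen("/proc/sys/kernel/yama/ptrace_scope", "r")) {
        if (std::fscanf(f, "%d", &v) != 1) {
            v = -1;
        }
        std::fclose(f);
    }
    return v;
}
```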

Daniel Bevenius [Mon, 19 May 2025 10:17:18 +0000 (12:17 +0200)]
examples : add vad-speech-segments to win warns [no ci] (#3170)

This commit adds vad-speech-segments to the list of examples with
disabled MSVC warnings.

Daniel Bevenius [Fri, 16 May 2025 06:50:53 +0000 (08:50 +0200)]
vad : return early if no vad segments are detected (#3158)

This commit adds a check to `whisper_full_with_state`: if no VAD
segments are detected, the function returns early.

The motivation for this is that if no VAD segments are detected, the
function will not have any samples to process, which can happen if an
audio sample does not contain any speech. I did not test this previously
and only discovered it when updating the stream example.

Daniel Bevenius [Fri, 16 May 2025 05:53:26 +0000 (07:53 +0200)]
vad : store VAD context in whisper_state (#3156)

* vad : store VAD context in whisper_state

This commit stores the VAD context in the whisper_state structure,
allowing for better management and reuse of the VAD context across
multiple calls to the whisper_vad function.

The motivation for this change is that when updating the stream example
I noticed that the VAD context was being re-initialized every time the
whisper_vad function was called. This involved loading the VAD model,
which is expensive and unnecessary if the context can be reused.

Storing this in whisper_state seems to follow a pattern similar to how
whisper_coreml_context and whisper_openvino_context are stored.

* vad : free vad_context in whisper_free_state
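
A hypothetical sketch of the caching pattern described above; every name here (vad_model_ctx, load_vad_model, state_slice) is illustrative, not an actual whisper.cpp declaration:

```cpp
struct vad_model_ctx { /* model weights, buffers, ... */ };

static vad_model_ctx * load_vad_model(const char * /*path*/) {
    return new vad_model_ctx(); // expensive in reality: loads the model file
}

struct state_slice {
    vad_model_ctx * vad_ctx = nullptr; // cached across whisper_vad calls
};

static vad_model_ctx * get_vad_ctx(state_slice & s, const char * model_path) {
    if (s.vad_ctx == nullptr) {
        s.vad_ctx = load_vad_model(model_path); // initialize once
    }
    return s.vad_ctx; // reused afterwards; freed together with the state
}
```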

Daniel Bevenius [Thu, 15 May 2025 12:28:10 +0000 (14:28 +0200)]
whisper : add build_*/ to .gitignore [no ci] (#3157)

This commit adds `build_*/` to `.gitignore` to ignore all build
directories that start with `build_`.

The motivation for this is that the Go bindings create a directory
named build_go, which is not ignored by the current .gitignore. I was
not sure if changing this to build-go could affect existing users, so I
opted to update .gitignore instead.

Daniel Bevenius [Wed, 14 May 2025 17:21:48 +0000 (19:21 +0200)]
examples : add --print-confidence option to cli (#3150)

* examples : add --print-confidence option to cli

This commit adds a new command-line option `--print-confidence` to the
whisper-cli. When enabled, this option prints the confidence level of each
token in the transcribed text using ANSI formatting codes.

The confidence levels are represented using different styles:
```console
main: confidence: highlighted (low confidence), underlined (medium), dim (high confidence)
```

Refs: https://github.com/ggml-org/whisper.cpp/issues/3135
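
An illustrative mapping from a token's probability to the ANSI styles named above; the thresholds are invented for the sketch and are not whisper-cli's actual values:

```cpp
#include <cstdio>

static const char * confidence_style(const float p) {
    if (p < 0.33f) return "\033[7m"; // reverse video ("highlighted"): low
    if (p < 0.66f) return "\033[4m"; // underline: medium
    return "\033[2m";                // dim: high confidence
}

// usage: std::printf("%s%s\033[0m", confidence_style(p), token_text);
```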

Daniel Bevenius [Wed, 14 May 2025 14:47:18 +0000 (16:47 +0200)]
vad : add download-vad-model scripts (#3149)

* vad : add download-vad-model scripts

This commit adds a script to download VAD models.

* vad : add vad model download script for windows [no ci]

Refs: https://github.com/ggml-org/whisper.cpp/issues/3146

Daniel Bevenius [Wed, 14 May 2025 13:22:05 +0000 (15:22 +0200)]
server : add --flash-attn usage output (#3152)

This commit adds the `--flash-attn` option to the usage output of the
server example.

The motivation for this change is that while it is possible to set this
option, it is not printed in the usage output.

Georgi Gerganov [Tue, 13 May 2025 10:20:19 +0000 (13:20 +0300)]
talk-llama : sync llama.cpp

ggml-ci

Georgi Gerganov [Tue, 13 May 2025 10:11:24 +0000 (13:11 +0300)]
whisper : update to ggml-backend changes (#0)

ggml-ci

Georgi Gerganov [Tue, 13 May 2025 10:10:17 +0000 (13:10 +0300)]
sync : ggml

ggml-ci

Xuan-Son Nguyen [Tue, 13 May 2025 10:10:08 +0000 (13:10 +0300)]
ggml : add mrope kernel for metal (llama/13457)

Georgi Gerganov [Tue, 13 May 2025 10:09:20 +0000 (13:09 +0300)]
metal : optimize MoE for large batches (llama/13388)

lhez [Mon, 12 May 2025 20:13:49 +0000 (13:13 -0700)]
opencl: remove unnecessary assert for `add` (llama/13257)

Johannes Gäßler [Mon, 12 May 2025 12:44:49 +0000 (14:44 +0200)]
llama/ggml: add LLM training support (llama/10544)

* llama/ggml: add LLM training support

more compact progress bar

llama_save_model_to_file

llama_opt_param_filter

ggml_graph_dup force_grads

refactor ggml_opt, fix test-opt

* remove logits_all

* refactor CUDA implementation for ACC

* reset graph at beginning of opt period

Dan Johansson [Mon, 12 May 2025 11:06:19 +0000 (13:06 +0200)]
ggml-cpu: Integrate fp32=bf16xbf16 SME KleidiAI kernel (llama/13053)

* ggml-cpu: Integrate fp32=bf16xbf16 SME KleidiAI kernel

Signed-off-by: Dan Johansson <redacted>
* * code review fixes

Signed-off-by: Dan Johansson <redacted>
* * adds a comment that clarifies barrier usage

Signed-off-by: Dan Johansson <redacted>
---------

Signed-off-by: Dan Johansson <redacted>
Co-authored-by: Charles Xu <redacted>
Johannes Gäßler [Mon, 12 May 2025 08:51:21 +0000 (10:51 +0200)]
CUDA: fix misaligned synchronization in FA (llama/13469)

Atharva Dubey [Mon, 12 May 2025 05:15:32 +0000 (06:15 +0100)]
enable dpcpp nightly builds with libraries (llama/13406)

Johannes Gäßler [Sun, 11 May 2025 14:09:33 +0000 (16:09 +0200)]
CUDA: fix crash with partial offloading of MoE (llama/13439)

David Huang [Sun, 11 May 2025 12:18:39 +0000 (20:18 +0800)]
Add `--no-op-offload` to improve `-ot` pp perf in MoE models like llama4 400B (llama/13386)

Johannes Gäßler [Sat, 10 May 2025 20:22:48 +0000 (22:22 +0200)]
CUDA: fix race conditions FlashAttention kernels (llama/13438)

Johannes Gäßler [Sat, 10 May 2025 07:16:52 +0000 (09:16 +0200)]
CUDA: fix FlashAttention on Turing (llama/13415)

Jeff Bolz [Sat, 10 May 2025 06:07:07 +0000 (23:07 -0700)]
vulkan: scalar flash attention implementation (llama/13324)

* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA

Alberto Cabrera Pérez [Fri, 9 May 2025 15:34:08 +0000 (16:34 +0100)]
sycl : implementation of reordered Q4_0 MMVQ for Intel GPUs (llama/12858)

* sycl : Implemented reorder Q4_0 mmvq

Signed-off-by: Alberto Cabrera <redacted>
* sycl : Fixed mmvq being called when reorder is disabled

* sycl : Improved comments in the quants header

Signed-off-by: Alberto Cabrera <redacted>
* Use static_assert

* safe_div -> ceil_div

* Clarify qi comment

* change the reorder tensor from init to execute OP

* dbg

* Undo changes to test-backend-ops

* Refactor changes on top of q4_0 reorder fix

* Missing Reverts

* Refactored opt_for_reorder logic to simplify code path

* Explicit inlining and unroll

* Renamed mul_mat_algo enum for consistency

---------

Signed-off-by: Alberto Cabrera <redacted>
Co-authored-by: romain.biessy <redacted>
Johannes Gäßler [Fri, 9 May 2025 11:34:58 +0000 (13:34 +0200)]
CUDA: FA support for Deepseek (Ampere or newer) (llama/13306)

* CUDA: FA support for Deepseek (Ampere or newer)

* do loop unrolling via C++ template
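
A generic sketch of the loop-unrolling-via-template idea from the second bullet (illustrative, not the actual CUDA kernel code): expanding an index_sequence instantiates the loop body once per iteration with a compile-time constant index.

```cpp
#include <cstddef>
#include <type_traits>
#include <utility>

template <typename F, std::size_t... Is>
static void unroll_impl(F && f, std::index_sequence<Is...>) {
    // Fold over the comma operator: emits f(0), f(1), ..., f(N-1).
    (f(std::integral_constant<std::size_t, Is>{}), ...);
}

template <std::size_t N, typename F>
static void unroll(F && f) {
    unroll_impl(std::forward<F>(f), std::make_index_sequence<N>{});
}
```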

Johannes Gäßler [Fri, 9 May 2025 10:14:04 +0000 (12:14 +0200)]
CUDA: fix crash on large batch size for MoE models (llama/13384)

Radoslav Gerganov [Fri, 9 May 2025 07:31:07 +0000 (10:31 +0300)]
rpc : add rpc_msg_set_tensor_hash_req (llama/13353)

* rpc : add rpc_msg_set_tensor_hash_req

Use a dedicated struct for the request of RPC_CMD_SET_TENSOR_HASH which
makes the code cleaner.

* fix
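
A hedged sketch of what a dedicated request struct can look like (field names and types are illustrative, not the actual rpc message layout); a fixed-size struct makes serialization and validation of the request straightforward compared to an ad-hoc byte stream:

```cpp
#include <cstdint>

struct rpc_msg_set_tensor_hash_req {
    uint64_t tensor_id; // handle of the remote tensor
    uint64_t offset;    // write offset within the tensor's data
    uint64_t hash;      // hash identifying the cached payload
};
```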

Jeff Bolz [Fri, 9 May 2025 07:23:41 +0000 (02:23 -0500)]
vulkan: Allow up to 4096 elements for mul_mat_id row_ids (llama/13326)

This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf:

GGML_ASSERT(nei0 * nei1 <= 3072);

The tensor is 8 x 512, i.e. 4096 row ids, so the array size is increased to accommodate it.

Alberto Cabrera Pérez [Thu, 8 May 2025 09:08:01 +0000 (10:08 +0100)]
sycl: addressing non-contiguous src1 mul_mats (nc and batched) (llama/13343)

* sycl: fixed non-contiguous src1 mul_mats (nc and batched)

* Fixed wrong static_cast inside kernel

Daniel Bevenius [Tue, 13 May 2025 10:31:00 +0000 (12:31 +0200)]
examples : add VAD speech segments example (#3147)

This commit adds an example that demonstrates how to use a VAD (Voice
Activity Detection) model to segment an audio file into speech segments.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3144

Daniel Bevenius [Tue, 13 May 2025 04:04:05 +0000 (06:04 +0200)]
vad : remove shortform for --vad option in cli.cpp (#3145)

This commit removes the shortform for the --vad option in cli.cpp.

The motivation for this is that `-v` is often used for verbose or
version in many tools, and this might cause confusion.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3065#issuecomment-2873243334

Tomer Schlesinger [Tue, 13 May 2025 04:03:50 +0000 (06:03 +0200)]
docs : update README.md for whisper.objc app (#2569)

Daniel Bevenius [Mon, 12 May 2025 14:10:11 +0000 (16:10 +0200)]
vad : add initial Voice Activity Detection (VAD) support (#3065)

* vad : add initial Voice Activity Detection (VAD) support

This commit adds support for Voice Activity Detection (VAD). When
enabled, this feature processes the audio input and detects speech
segments. This information is then used to reduce the number of samples
that need to be processed by whisper_full.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3003

---------

Co-authored-by: Georgi Gerganov <redacted>
Daniel Bevenius [Mon, 12 May 2025 12:40:17 +0000 (14:40 +0200)]
whisper : remove dummy commit comment [no ci] (#3143)

This commit removes a dummy comment that was added by
commit 589b408 ("ci : dummy commit to trigger CI").

Daniel Bevenius [Mon, 12 May 2025 08:56:39 +0000 (10:56 +0200)]
docs : fix -owts flag typo karaoke section [no ci] (#3142)

Daniel Bevenius [Mon, 12 May 2025 08:43:04 +0000 (10:43 +0200)]
cli : print color scheme info for --print-colors (#3141)

This commit adds a description of the color scheme used in the CLI
when the --print-colors option is enabled.

The motivation for this is that it is not immediately clear what the
color scheme is when using the CLI with the --print-colors option.

Example output:
```console
$ ./build/bin/whisper-cli -f samples/jfk.wav --print-colors
...

main: color scheme: red (low confidence), yellow (medium), green (high confidence)

[00:00:00.000 --> 00:00:11.000]   And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.
```
The description will not be displayed if the `--no-prints` option is
set.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3135

Simon Booth [Mon, 12 May 2025 07:06:51 +0000 (08:06 +0100)]
docs : update Readme to recommend same Openvino as Python tools (#3138)