git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
2 months agoSYCL: Add fp16 type support to unary op kernels (llama/12788)
Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (llama/12788)

* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code

2 months agoggml: fix compilation error s390x (llama/12848)
Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (llama/12848)

* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------

Co-authored-by: Aleksei Nikiforov <redacted>
2 months agocpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running...
cmdr2 [Fri, 11 Apr 2025 06:44:19 +0000 (12:14 +0530)]
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190)

2 months agoCANN: Support more ops (llama/12841)
Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (llama/12841)

* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <redacted>
2 months agoFixes #12823 (llama/12830)
Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (llama/12830)

* Including limits file on AIX

* Fixes #12823

2 months agoggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)
Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)

error: unknown type name '_Bool'

2 months agoggml-impl.h: fix build on POWER9 (llama/12855)
Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (llama/12855)

error: ISO C++17 does not allow 'register' storage class specifier

2 months agoCANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)
Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)

* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <redacted>
2 months agovulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)
Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
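
The restructuring itself lives in the coopmat2 GLSL shader, but the pattern is general: decode the per-block scale data once, stage it in a fast local buffer, and let the unrolled inner loop read already-decoded values instead of re-fetching and re-decoding them every iteration. A rough C++ sketch of that hoisting, with hypothetical names and layout (`BlockScales`, `decode_scales` and the 16-byte stride are illustrative, not the shader's symbols or the real q4_k format):
```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical decoded per-block scales (illustrative, not the real q4_k layout).
struct BlockScales {
    float d[8];
    float m[8];
};

// Hypothetical decode of a packed 16-byte scale section of one block.
static BlockScales decode_scales(const uint8_t * packed) {
    BlockScales s{};
    for (int i = 0; i < 8; ++i) {
        s.d[i] = packed[i]     / 16.0f;
        s.m[i] = packed[i + 8] / 16.0f;
    }
    return s;
}

// One outer iteration per block: decode the scales once (the shader stages them
// in shared memory), so the unrolled inner loop only reads decoded values.
static void dequant_blocks(float * out, const uint8_t * scales, const uint8_t * q,
                           size_t n_blocks, int block_size) {
    for (size_t b = 0; b < n_blocks; ++b) {
        const BlockScales s = decode_scales(scales + b * 16);   // once per block
        for (int i = 0; i < block_size; ++i) {                  // unrolled on the GPU
            const int g = i * 8 / block_size;                    // sub-block index
            out[b * block_size + i] = q[b * block_size + i] * s.d[g] - s.m[g];
        }
    }
}
```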

2 months agovulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

2 months agocuda : add f32 to bf16 copy op (llama/12806)
Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (llama/12806)

This allows BF16 KV-cache on CUDA.

2 months agollama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)
Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)

* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci

2 months agoggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)

fix #1186

2 months agoggml : add bilinear upscale support (ggml/1185)
Diego Devesa [Wed, 9 Apr 2025 10:32:13 +0000 (12:32 +0200)]
ggml : add bilinear upscale support (ggml/1185)

2 months agoggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
Diego Devesa [Wed, 9 Apr 2025 10:31:34 +0000 (12:31 +0200)]
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)

* ggml : add more generic ggml_custom op

* ggml : remove deprecated custom ops

2 months agoRevert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor...
Neo Zhang Jianyu [Tue, 8 Apr 2025 07:03:21 +0000 (15:03 +0800)]
Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)

* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"

This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* rm tail space

2 months agoopencl: better identify Adreno GPU (llama/12760)
lhez [Mon, 7 Apr 2025 20:22:54 +0000 (13:22 -0700)]
opencl: better identify Adreno GPU (llama/12760)

2 months agocuda : fix HIP and MUSA BF16 (llama/0)
Georgi Gerganov [Mon, 7 Apr 2025 10:18:07 +0000 (13:18 +0300)]
cuda : fix HIP and MUSA BF16 (llama/0)

ggml-ci

2 months agosycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama...
zhouwg [Mon, 7 Apr 2025 15:22:57 +0000 (23:22 +0800)]
sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734)

2 months agoCANN: fix typo in ggml-cann (llama/12733)
zhouwg [Mon, 7 Apr 2025 11:34:14 +0000 (19:34 +0800)]
CANN: fix typo in ggml-cann (llama/12733)

2 months agoCANN: Refactor to reduce duplicate code (llama/12731)
hipudding [Mon, 7 Apr 2025 09:10:36 +0000 (17:10 +0800)]
CANN: Refactor to reduce duplicate code (llama/12731)

* CANN: Refactor to reduce duplicate code

* CANN: fix review comment

2 months agomusa: fix compilation warnings in mp_22/31 (llama/12780)
R0CKSTAR [Sun, 6 Apr 2025 13:23:54 +0000 (21:23 +0800)]
musa: fix compilation warnings in mp_22/31 (llama/12780)

Signed-off-by: Xiaodong Ye <redacted>
2 months agovulkan: fix NaN issue in flash attention shader (llama/12776)
Jeff Bolz [Sun, 6 Apr 2025 09:03:47 +0000 (04:03 -0500)]
vulkan: fix NaN issue in flash attention shader (llama/12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
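
The likely failure mode (my reading of the fix, not spelled out in the commit message): with the running maximum seeded at -inf, a fully masked row keeps every score at -inf, and the streaming-softmax rescale computes exp(-inf - (-inf)), which is NaN. Seeding with -FLT_MAX/2 keeps the difference finite. A small C++ illustration:
```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>
#include <algorithm>

// Streaming-softmax style max update for a single fully masked score (-inf).
static float rescale_factor(float m_init, float score) {
    const float m_new = std::max(m_init, score);
    return std::exp(m_init - m_new);   // factor used to rescale the running sums
}

int main() {
    const float masked = -INFINITY;
    // -inf as the initial maximum: -inf - (-inf) is NaN, which then propagates.
    std::printf("init -inf       -> %f\n", rescale_factor(-INFINITY,     masked));
    // -FLT_MAX/2 as the initial maximum: the difference stays finite (here 0 -> 1).
    std::printf("init -FLT_MAX/2 -> %f\n", rescale_factor(-FLT_MAX / 2, masked));
    return 0;
}
```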

2 months agovulkan: Use unclamped loads for flash attention mask (llama/12720)
Jeff Bolz [Sun, 6 Apr 2025 08:47:13 +0000 (03:47 -0500)]
vulkan: Use unclamped loads for flash attention mask (llama/12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

2 months agoVulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)
0cc4m [Sat, 5 Apr 2025 16:04:03 +0000 (18:04 +0200)]
Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)

2 months agosycl: allow ggml-sycl configuration and compilation using Visual Studio project/solut...
Nicolò Scipione [Fri, 4 Apr 2025 14:00:46 +0000 (16:00 +0200)]
sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625)

2 months agocmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
Ronny Brendel [Fri, 4 Apr 2025 13:12:40 +0000 (15:12 +0200)]
cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)

fixes error for compiler paths with spaces

2 months agovulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
Jeff Bolz [Fri, 4 Apr 2025 05:54:35 +0000 (00:54 -0500)]
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
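
A rough host-side C++ sketch of the hybrid wait (hypothetical function and fence names; the actual implementation lives in the ggml Vulkan backend):
```cpp
#include <vulkan/vulkan.h>
#include <immintrin.h>   // _mm_pause (x86)
#include <cstdint>

// Block on the "almost_ready" fence (submitted ~80% of the way through the
// graph), then spin on the final fence so the wakeup bubble of waitForFences
// is hidden behind the remaining GPU work.
static void wait_hybrid(VkDevice device, VkFence almost_ready, VkFence final_fence) {
    vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);

    while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
        _mm_pause();   // polite spin until the last submissions complete
    }
}
```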

2 months agovulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)
Jeff Bolz [Fri, 4 Apr 2025 05:53:20 +0000 (00:53 -0500)]
vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)

2 months agoCUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
Gaurav Garg [Thu, 3 Apr 2025 16:20:29 +0000 (21:50 +0530)]
CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)

* Prefer vector flash decoding kernel for Gemma models

Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in gen-phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
2 months agovulkan: Fix missing cmake logic for dot product extension (llama/12721)
Jeff Bolz [Thu, 3 Apr 2025 15:08:26 +0000 (10:08 -0500)]
vulkan: Fix missing cmake logic for dot product extension (llama/12721)

2 months agofix MUSA compiler warning (llama/12704)
a3sh [Thu, 3 Apr 2025 07:32:55 +0000 (15:32 +0800)]
fix MUSA compiler warning (llama/12704)

* fix MUSA compiler warning

* replace (void) with GGML_UNUSED

2 months agoCANN: Support operator SIN COS ARGMAX (llama/12709)
Chenguang Li [Thu, 3 Apr 2025 07:18:08 +0000 (15:18 +0800)]
CANN: Support operator SIN COS ARGMAX (llama/12709)

* [CANN]support sin cos argmax

Signed-off-by: noemotiovon <redacted>
* [CANN]codestyle adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN]Remove redundant code

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
2 months agoSimplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
Alan Gray [Thu, 3 Apr 2025 01:31:15 +0000 (02:31 +0100)]
Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)

* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers

Previously there was complexity in the CUDA graphs implementation due
to frequently changing parameters to the copy kernels associated with
the K and V cache pointers. This patch simplifies things by using
indirection so that those parameters no longer change frequently,
avoiding the need for frequent graph updates (see the sketch at the end
of this entry).

Fixes #12152

* Addressed comments

* fix HIP builds

* properly sync to stream

* removed ggml_cuda_cpy_fn_ptrs

* move stream sync before free

* guard to only use indirection with graphs

* style fixes

* check for errors

---------

Co-authored-by: slaren <redacted>
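
As a host-side C++ sketch of the indirection idea (hypothetical names, not the actual ggml-cuda code): the recorded copy kernels take a fixed device buffer of destination pointers as their parameter and dereference it at run time, so only that buffer's contents change between graph launches.
```cpp
#include <cuda_runtime.h>
#include <vector>

// Destination pointers for the K/V copy kernels of each layer (hypothetical).
struct kv_copy_dest {
    void * k_dst;
    void * v_dst;
};

static kv_copy_dest * dev_dests = nullptr;   // fixed address baked into the graph

void init_indirection(int n_layers) {
    cudaMalloc((void **) &dev_dests, n_layers * sizeof(kv_copy_dest));
}

// Called before each launch of the captured graph: refresh the pointers in
// place. The copy kernels take `dev_dests` (a constant address) as their
// parameter and dereference it at run time, so the frequent graph updates
// for changed kernel parameters are no longer needed.
void update_kv_dests(const std::vector<kv_copy_dest> & dests, cudaStream_t stream) {
    cudaMemcpyAsync(dev_dests, dests.data(), dests.size() * sizeof(kv_copy_dest),
                    cudaMemcpyHostToDevice, stream);
}
```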
2 months agoCANN: Fix failed test cases (llama/12708)
hipudding [Thu, 3 Apr 2025 00:49:51 +0000 (08:49 +0800)]
CANN: Fix failed test cases (llama/12708)

* CANN: Fix memory waste in aclnn_tensor

* CANN: fix backend ops fail

* CANN: fix acl_tensor memory alloc.

* CANN: format

* CANN: remove trailing whitespace

2 months agoopencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)
lhez [Thu, 3 Apr 2025 00:01:42 +0000 (17:01 -0700)]
opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)

2 months agovulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
Jeff Bolz [Wed, 2 Apr 2025 19:25:08 +0000 (14:25 -0500)]
vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)

When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
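
Splitting the KV range means each split produces a partial (max, sum, unnormalized output) triple that must be merged with the usual streaming-softmax rescaling. A scalar C++ sketch of that combine step (hypothetical structures; one output row of dimension D):
```cpp
#include <cfloat>
#include <cmath>
#include <vector>

// Partial flash-attention result for one split of the KV range.
struct fa_partial {
    float m;                  // running max of the scores in this split
    float l;                  // sum of exp(score - m)
    std::vector<float> o;     // unnormalized output, size D
};

// Merge the split_k partials into the final attention output of dimension D.
std::vector<float> combine_splits(const std::vector<fa_partial> & parts, int D) {
    float m = -FLT_MAX / 2, l = 0.0f;
    std::vector<float> o(D, 0.0f);

    for (const fa_partial & p : parts) {
        const float m_new = std::fmax(m, p.m);
        const float a = std::exp(m - m_new);     // rescale accumulated state
        const float b = std::exp(p.m - m_new);   // rescale incoming partial
        for (int d = 0; d < D; ++d) {
            o[d] = o[d] * a + p.o[d] * b;
        }
        l = l * a + p.l * b;
        m = m_new;
    }
    for (int d = 0; d < D; ++d) {
        o[d] /= l;   // final softmax normalization
    }
    return o;
}
```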

2 months agocmake: remove caching from vulkan coopmat checks (llama/12719)
bandoti [Wed, 2 Apr 2025 17:56:26 +0000 (14:56 -0300)]
cmake: remove caching from vulkan coopmat checks (llama/12719)

2 months agovulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)
Jeff Bolz [Wed, 2 Apr 2025 17:40:32 +0000 (12:40 -0500)]
vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.

2 months agoVulkan: Fix mmq int dot float cache size (llama/12722)
0cc4m [Wed, 2 Apr 2025 17:12:30 +0000 (19:12 +0200)]
Vulkan: Fix mmq int dot float cache size (llama/12722)

2 months agollama : add option to override model tensor buffers (llama/11397)
Diego Devesa [Wed, 2 Apr 2025 12:52:01 +0000 (14:52 +0200)]
llama : add option to override model tensor buffers (llama/11397)

* llama : add option to override tensor buffers

* ggml : fix possible underflow in ggml_nbytes

2 months agoggml : simplify Arm fp16 CPU logic (ggml/1177)
Georgi Gerganov [Mon, 7 Apr 2025 09:25:15 +0000 (12:25 +0300)]
ggml : simplify Arm fp16 CPU logic (ggml/1177)

* ggml : simplify Arm fp16 CPU logic

ggml-ci

* cont : bring back CUDA/MUSA checks

ggml-ci

2 months agoCUDA: don't convert BF16 weights to FP32 (ggml/1174)
Sigbjørn Skjæret [Fri, 4 Apr 2025 19:05:12 +0000 (21:05 +0200)]
CUDA: don't convert BF16 weights to FP32 (ggml/1174)

* add bf16 support

* use convert_from_bf16_cuda instead of convert_unary_cuda for f32

* revert 7ec5085

* move functionality into convert_unary with constexpr

2 months agocoreml : set convert_to="mlprogram" in convert
Daniel Bevenius [Wed, 23 Apr 2025 06:24:38 +0000 (08:24 +0200)]
coreml : set convert_to="mlprogram" in convert

* coreml : skip model load in convert-whisper-to-coreml.py

This commit updates the conversion process for Whisper models to use the
"mlprogram" format instead of "neuralnetwork".

The motivation for this change is that when using the "neuralnetwork"
format the underlying model produced is based on protobuf and my
understanding is that there are limitations to this format, such as
sizes of strings and the complexity of the model.

Currently when trying to convert larger models such as large-v3 the
conversion fails but succeeds for smaller models.

The "mlprogram" format is a more recent addition to CoreML and is
designed to be more flexible and powerful, allowing for more complex
models and larger data types. This seems to work for larger and smaller
models alike, and unless there are considerations that I'm not aware of,
I think this is what we should be using moving forward.
The error that is generated for large models is the following:
```console
Running MIL backend_neuralnetwork pipeline: 100%|█████████| 9/9 [00:00<00:00, 35.44 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|███████████| 5641/5641 [03:31<00:00, 26.65 ops/s]
Traceback (most recent call last):
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 322, in <module>
    encoder = convert_encoder(hparams, encoder, quantize=args.quantize)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 255, in convert_encoder
    model = ct.convert(
            ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
              ^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 186, in mil_convert
    return _mil_convert(
           ^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 245, in _mil_convert
    return modelClass(
           ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 489, in __init__
    self.__proxy__, self._spec, self._framework_error = self._get_proxy_and_spec(
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 550, in _get_proxy_and_spec
    _MLModelProxy(
ValueError: basic_string
```

Refs: https://github.com/ggml-org/whisper.cpp/issues/3012

2 months agoci : disable freeBSD job in build.yml (#3064)
Daniel Bevenius [Tue, 22 Apr 2025 09:07:54 +0000 (11:07 +0200)]
ci : disable freeBSD job in build.yml (#3064)

This commit disables the FreeBSD job in build.yml of the GitHub Actions
workflow.

The motivation for this is that this job seems to stall and timeout from
time to time, taking up to 6 hours to complete/cancel.

2 months agoexamples : add HEAPU8 to exported runtime methods (#3062)
Daniel Bevenius [Sun, 20 Apr 2025 17:40:25 +0000 (19:40 +0200)]
examples : add HEAPU8 to exported runtime methods (#3062)

This commit adds `HEAPU8` to the list of exported methods.

The motivation for this commit is that currently this is causing an
error on Windows systems where HEAPU8 is undefined, which results in the
following error message in the web console:
```console
main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42
```

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3059

2 months agoruby : make Ruby bindings installed with build options (#3056)
KITAITI Makoto [Thu, 17 Apr 2025 09:49:58 +0000 (18:49 +0900)]
ruby : make Ruby bindings installed with build options (#3056)

* Fix signature of URI.new's return value

* Use path instead of string | _ToPath

* Add document comment to RBS

* Remove unnecessary build flags

* Remove unnecessary line

* Remove files have become unnecessary

* Make gem install accept build options for whisper.cpp

* Add instructions for build options in README

* Add methods for check to Options

* Test build options

* Rename: configs -> options

* Add assert_installed assertion

* Use assert_installed

* Remove unused attribute

* Extract dependency check logic as Dependencies class

* Update README

* Add WHISPER_FFMPEG option

* Test extra build options only on local test

* Bump version to 1.3.2 [skip ci]

2 months agowhisper : add no_context parameter to whisper_params (#3045)
Sacha Arbonel [Wed, 16 Apr 2025 04:24:38 +0000 (06:24 +0200)]
whisper : add no_context parameter to whisper_params (#3045)

2 months agoexamples : add FFmpeg v7.0 support to ffmpeg-transcode.cpp (#3038)
Fujimoto Seiji [Tue, 15 Apr 2025 04:09:00 +0000 (13:09 +0900)]
examples : add FFmpeg v7.0 support to ffmpeg-transcode.cpp (#3038)

FFmpeg introduced a new channel layout API that uses `AVChannelLayout`
interface in v6.0. It subsequently dropped the old bitmask-based API
in v7.0.

This updates decode_audio() to support the new channel layout API,
so that we can compile `whisper-cli` and `whisper-server` with FFmpeg
v7.0 or later.

Tested on Ubuntu 24.10 with FFmpeg v7.0.2.

Signed-off-by: Fujimoto Seiji <redacted>
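
For reference, a minimal C++ sketch of new-API usage, assuming FFmpeg 6.0+ headers are available (`make_mono_resampler` and the mono/s16 target are illustrative, not the actual decode_audio() code):
```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libswresample/swresample.h>
}

// Build a resampler that downmixes the decoder's channel layout to mono s16,
// using the AVChannelLayout API (added in FFmpeg 6.0; the old uint64_t
// channel_layout bitmask fields were removed in 7.0).
static SwrContext * make_mono_resampler(const AVCodecContext * dec, int out_rate) {
    AVChannelLayout mono;
    av_channel_layout_default(&mono, 1);   // 1 channel -> mono

    SwrContext * swr = nullptr;
    int ret = swr_alloc_set_opts2(&swr,
                                  &mono,           AV_SAMPLE_FMT_S16, out_rate,
                                  &dec->ch_layout, dec->sample_fmt,   dec->sample_rate,
                                  0, nullptr);
    if (ret < 0 || swr_init(swr) < 0) {
        swr_free(&swr);
        return nullptr;
    }
    return swr;
}
```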
2 months agoruby: use CMake in build process (#3043)
KITAITI Makoto [Mon, 14 Apr 2025 09:18:27 +0000 (18:18 +0900)]
ruby: use CMake in build process (#3043)

* Use CMake to build shared object

* Make Rakefile follow change of build process

* Add test for packaging

* Run CI for Ruby bindings almost always

because each CMakeLists.txt might affect Ruby bindings

* Enable PIC

* Bump Ruby version to 3.2 on CI

* Check libgomp

* Check dependency of whisper.cpp accurately

2 months agodocs : update README.md to note newer nvidia gpus (#3031)
Jeff Klassen [Fri, 11 Apr 2025 06:54:51 +0000 (00:54 -0600)]
docs : update README.md to note newer nvidia gpus (#3031)

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3030

2 months agoaddon.node : support max_context api for addon.node (#3025)
Lin Xiaodong [Fri, 11 Apr 2025 04:36:38 +0000 (12:36 +0800)]
addon.node : support max_context api for addon.node (#3025)

* feat: support max context

* feat: show api in test file

---------

Co-authored-by: linxiaodong <redacted>
2 months agowhisper : reduce delta_min from 1000ms to 100ms (#3028)
Georgi Gerganov [Fri, 11 Apr 2025 04:23:02 +0000 (07:23 +0300)]
whisper : reduce delta_min from 1000ms to 100ms (#3028)

ggml-ci

2 months agodocs : document how to use 'WHISPER_FFMPEG' build option (#3029)
Fujimoto Seiji [Thu, 10 Apr 2025 16:21:38 +0000 (01:21 +0900)]
docs : document how to use 'WHISPER_FFMPEG' build option (#3029)

FFmpeg integration was introduced in 1b51fdf by William Tambellini,
but not mentioned in the main documentation.

Add a short guide on how to enable the feature. Confirmed to work
on both Ubuntu 24.04 and Fedora 39.

Signed-off-by: Fujimoto Seiji <redacted>
2 months agodocs : fix README.md (#3024)
Ekaitz Zárraga [Wed, 9 Apr 2025 17:49:37 +0000 (19:49 +0200)]
docs : fix README.md (#3024)

2 months agoxcf : use check for visionos build version (#3021)
Daniel Bevenius [Wed, 9 Apr 2025 14:34:58 +0000 (16:34 +0200)]
xcf : use check for visionos build version (#3021)

This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.

This commit also uses xcrun for vtool so that the version of vtool in
the Xcode command line tools is used instead of the one in the system
path.

Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223

2 months agoruby : fix types of arguments for rb_get_kwargs in ruby_whisper_params.c (#3022)
Olli [Wed, 9 Apr 2025 11:49:25 +0000 (13:49 +0200)]
ruby : fix types of arguments for rb_get_kwargs in ruby_whisper_params.c (#3022)

Change param_names and values not to be references for rb_get_kwargs - so it can be compiled on ruby 3.3.6 and 3.4.1

2 months agoruby : Update uri.rb (#3016)
Olli [Tue, 8 Apr 2025 13:27:40 +0000 (15:27 +0200)]
ruby : Update uri.rb (#3016)

Bugfix: without this Pathname, the "/" operator wouldn't work and would throw an error

2 months agomodels : fix dead link to models in readme (#3006)
Greg Sadetsky [Sun, 6 Apr 2025 05:29:41 +0000 (01:29 -0400)]
models : fix dead link to models in readme (#3006)

2 months agoruby : change homepage URI in Ruby gemspec (#3007)
KITAITI Makoto [Sat, 5 Apr 2025 04:55:09 +0000 (13:55 +0900)]
ruby : change homepage URI in Ruby gemspec (#3007)

2 months agotests : add script to benchmark whisper.cpp on LibriSpeech corpus (#2999)
Fujimoto Seiji [Fri, 4 Apr 2025 16:51:26 +0000 (01:51 +0900)]
tests : add script to benchmark whisper.cpp on LibriSpeech corpus (#2999)

* tests : add script to benchmark whisper.cpp on LibriSpeech corpus

LibriSpeech is a widely-used benchmark dataset for training and
testing speech recognition models.

This adds a set of scripts to measure the recognition accuracy of
whisper.cpp models, following the common benchmark standards.

Signed-off-by: Fujimoto Seiji <redacted>
* Document how to prepare `whisper-cli` and model files

Feedback from Daniel Bevenius.

This adds a short code example how to prepare the `whisper-cli`
command, to make the initial setup step a little bit clearer.

Signed-off-by: Fujimoto Seiji <redacted>
* tests : Simplify how to set up Python environment

Based on a feedback from Georgi Gerganov.

Instead of setting up a virtual environment in Makefile, let users
set up the Python environment. This is better since users may have
their own preferred workflow/toolkit.

Signed-off-by: Fujimoto Seiji <redacted>
---------

Signed-off-by: Fujimoto Seiji <redacted>
2 months agowhisper : fix "bench-all outputs an invalid result on larger models" (#3002)
Fujimoto Seiji [Fri, 4 Apr 2025 15:36:19 +0000 (00:36 +0900)]
whisper : fix "bench-all outputs an invalid result on larger models" (#3002)

The benchmark script 'scripts/bench-all.sh' assumes that the 11th
field of the output line is a timestamp. This assumption does not
hold when the target model takes a bit longer to process.

Fix this issue by introducing an explicit whitespace to the output
lines of `whisper_print_timings()`.

Signed-off-by: Fujimoto Seiji <redacted>
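
The underlying issue is a fixed-width printf field that is completely consumed by large values, fusing two columns into one token and shifting the field count seen by awk; a literal space guarantees a separator. A tiny illustration (hypothetical width and label, not the actual whisper_print_timings() format string):
```cpp
#include <cstdio>

int main() {
    const double fast = 512.34, slow = 1234567.89;

    // Width 8 pads the fast case but is completely consumed by the slow case,
    // so the value fuses with whatever precedes it and awk sees one field less.
    std::printf("total =%8.2f ms\n", fast);   // "total =  512.34 ms"
    std::printf("total =%8.2f ms\n", slow);   // "total =1234567.89 ms"

    // An explicit space guarantees a field separator regardless of magnitude.
    std::printf("total = %8.2f ms\n", slow);  // "total = 1234567.89 ms"
    return 0;
}
```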
2 months agorename : ggerganov -> ggml-org (#3005)
Georgi Gerganov [Fri, 4 Apr 2025 13:11:52 +0000 (16:11 +0300)]
rename : ggerganov -> ggml-org (#3005)

2 months agoexamples : update server.py to match github pages app [no ci] (#3004)
Daniel Bevenius [Fri, 4 Apr 2025 08:23:53 +0000 (10:23 +0200)]
examples : update server.py to match github pages app [no ci] (#3004)

This commit updates examples/server.py which is used to serve the wasm
examples locally. The changes include:

- Added a redirect from the root URL to /whisper.cpp.
  So now accessing http://localhost:8000/ will redirect to
  http://localhost:8000/whisper.cpp/ which matches the url for the app
  deployed to github pages.

- Added custom handling to serve coi-serviceworker.js and avoid an
  error in the console. This file is not strictly necessary for the
  local server to work, as the headers are already provided, but it is
  nice to not have an error in the console.

- Fixed the shutdown of the server to ensure it exits cleanly
  on Ctrl+C. Previously it would continue to hang onto the port even
  after the process had exited.

2 months agowhisper.wasm : fix unknown language issue (#3000)
Daniel Bevenius [Thu, 3 Apr 2025 17:50:47 +0000 (19:50 +0200)]
whisper.wasm : fix unknown language issue (#3000)

* whisper.wasm : fix unknown language issue

This commit addresses an issue with whisper.wasm where the following
error was being displayed when running the application in github pages:
```
whisper_lang_id: unknown language 'д=␙c'
```

This turned out to be a memory corruption issue and further details
can be found in the reference issue below.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2998

2 months agoexamples : add new sources
Georgi Gerganov [Wed, 2 Apr 2025 12:24:02 +0000 (15:24 +0300)]
examples : add new sources

ggml-ci

2 months agosync : ggml
Georgi Gerganov [Wed, 2 Apr 2025 12:23:55 +0000 (15:23 +0300)]
sync : ggml

2 months agocpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)
cmdr2 [Wed, 2 Apr 2025 12:16:16 +0000 (17:46 +0530)]
cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)

* cpu: refactor SIMD mappings and vectorized op functions into separate files

* Fix warning for ggml_float to float

* Fix warnings

* cpu: move all the operations (except mul_mat) to a separate c++ file

* fix whitespace

* Update ggml/src/ggml-cpu/vec.h

Co-authored-by: Diego Devesa <redacted>
* Fix PR comments - use GGML_UNUSED, use cassert in ops.cpp

* Reverse the order of import for ops.h and vec.h, to match what was present in ggml-cpu.c previously

---------

Co-authored-by: Diego Devesa <redacted>
2 months agodocs : add xcframework section to README.md [no ci] (#2997)
Daniel Bevenius [Thu, 3 Apr 2025 07:06:53 +0000 (09:06 +0200)]
docs : add xcframework section to README.md [no ci] (#2997)

This adds a section to the README.md file that describes how to use the
XCFramework.

The motivation for this is that it is not obvious how to use the
XCFramework, and an example will help.
One thing to note is that the example is using the latest release
including the checksum. We are thinking about how we might automate
this in the future but for now this is a good start.

2 months agoreadme : update roadmap link
Georgi Gerganov [Wed, 2 Apr 2025 14:38:35 +0000 (17:38 +0300)]
readme : update roadmap link

2 months agorelease : v1.7.5 upstream/1.7.5
Georgi Gerganov [Wed, 2 Apr 2025 13:31:22 +0000 (16:31 +0300)]
release : v1.7.5

2 months agobench : update numbers [no ci] (#2993)
Georgi Gerganov [Wed, 2 Apr 2025 13:27:36 +0000 (16:27 +0300)]
bench : update numbers [no ci] (#2993)

2 months agosync : ggml
Georgi Gerganov [Wed, 2 Apr 2025 12:13:40 +0000 (15:13 +0300)]
sync : ggml

ggml-ci

2 months agoget_rows and dup optimization (llama/12671)
Chenguang Li [Wed, 2 Apr 2025 07:22:13 +0000 (15:22 +0800)]
get_rows and dup optimization (llama/12671)

* [CANN]get_rows and dup optimization.

Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN]GET_ROWS and CPY/DUP optimization

Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
Co-authored-by: hipudding <redacted>
2 months agoopencl : fix memory allocation size (llama/12649)
Junil Kim [Tue, 1 Apr 2025 16:54:34 +0000 (01:54 +0900)]
opencl : fix memory allocation size (llama/12649)

issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283

This patch ensures that the memory allocation size
does not exceed the maximum allocation size of the OpenCL device.
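
The relevant device limit is CL_DEVICE_MAX_MEM_ALLOC_SIZE. A minimal C++ sketch of querying it once and validating requested buffer sizes against it (hypothetical helpers, not the actual ggml OpenCL backend code):
```cpp
#include <CL/cl.h>
#include <cstdio>

// Query the device's maximum single-allocation size once at backend init.
static cl_ulong query_max_alloc(cl_device_id dev) {
    cl_ulong max_alloc = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE, sizeof(max_alloc), &max_alloc, nullptr);
    return max_alloc;
}

// Reject requests that exceed the device limit instead of letting
// clCreateBuffer fail (or silently misbehave) at run time.
static cl_mem create_buffer_checked(cl_context ctx, cl_ulong max_alloc, size_t size) {
    if ((cl_ulong) size > max_alloc) {
        std::fprintf(stderr, "requested %zu bytes exceeds CL_DEVICE_MAX_MEM_ALLOC_SIZE (%llu)\n",
                     size, (unsigned long long) max_alloc);
        return nullptr;
    }
    cl_int err = CL_SUCCESS;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, size, nullptr, &err);
    return err == CL_SUCCESS ? buf : nullptr;
}
```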

2 months agometal : use F32 prec in FA kernels (llama/12688)
Georgi Gerganov [Tue, 1 Apr 2025 11:57:19 +0000 (14:57 +0300)]
metal : use F32 prec in FA kernels (llama/12688)

* metal : use F32 prec in FA kernels

ggml-ci

* cont : fix FA vec kernel

ggml-ci

2 months agoFix clang warning in gguf_check_reserved_keys (llama/12686)
R0CKSTAR [Tue, 1 Apr 2025 11:12:53 +0000 (19:12 +0800)]
Fix clang warning in gguf_check_reserved_keys (llama/12686)

* Fix clang warning in gguf_check_reserved_keys

Signed-off-by: Xiaodong Ye <redacted>
* Fix typo

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months agovulkan: fix build when glslc doesn't support coopmat (llama/12683)
Wagner Bruna [Tue, 1 Apr 2025 09:38:07 +0000 (06:38 -0300)]
vulkan: fix build when glslc doesn't support coopmat (llama/12683)

2 months agoSYCL: Rename oneMKL to oneMath (llama/12192)
Romain Biessy [Tue, 1 Apr 2025 08:24:29 +0000 (10:24 +0200)]
SYCL: Rename oneMKL to oneMath (llama/12192)

* Rename oneMKL Interface to oneMath

* Use oneMath for Intel vendor

* Rename occurrences to mkl

* clang-format

* Silence verbose warnings

* Set oneMath HIP_TARGETS

* Fix silence warnings

* Remove step to build oneMath from build instructions

* Use fixed oneMath version

* Remove INTEL_CPU

* Fold CMake oneDNN conditions

* Use Intel oneMKL for Intel devices

* Improve CMake message

* Link against MKL::MKL_SYCL::BLAS only

* Move oneMath documentation to Nvidia and AMD sections

2 months agoSYCL: switch to SYCL namespace (llama/12674)
Akarshan Biswas [Tue, 1 Apr 2025 08:11:39 +0000 (13:41 +0530)]
SYCL: switch to SYCL namespace (llama/12674)

2 months agoggml : faster ssm scan (llama/10558)
a3sh [Mon, 31 Mar 2025 16:05:13 +0000 (00:05 +0800)]
ggml : faster ssm scan (llama/10558)

* faster ssm_scan

* delete unused comment

* clang format

* add space

* modify unnecessary calculations

* faster ssm conv implementation

* modify file name with dash

2 months agoVulkan: Add DP4A MMQ and Q8_1 quantization shader (llama/12135)
0cc4m [Mon, 31 Mar 2025 12:37:01 +0000 (14:37 +0200)]
Vulkan: Add DP4A MMQ and Q8_1 quantization shader (llama/12135)

* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version

2 months agocmake : fix whitespace (llama/0)
Georgi Gerganov [Mon, 31 Mar 2025 12:05:30 +0000 (15:05 +0300)]
cmake : fix whitespace (llama/0)

2 months agotests : remove gh label test-whisper-cli-tiny-en (#2988)
Daniel Bevenius [Wed, 2 Apr 2025 08:50:31 +0000 (10:50 +0200)]
tests : remove gh label test-whisper-cli-tiny-en (#2988)

This commit removes test-whisper-cli-tiny-en from the gh label.

The motivation for this change is that until recently the tests were
disabled. But now that they are enabled, some of the tests, specifically
the ci jobs that use sanitizers (e.g. thread-sanitizer), take a long time
to run as they are instrumented.
Some of these jobs also have matrices, which means that multiple jobs
are created that all run these tests.
The suggestion here is to limit the number of tests that are run in the
ci jobs to cut down the CI build time.

2 months agoexamples : clarify Core ML encoder model usage [no ci] (#2987)
Daniel Bevenius [Wed, 2 Apr 2025 06:32:14 +0000 (08:32 +0200)]
examples : clarify Core ML encoder model usage [no ci] (#2987)

This commit clarifies the usage of the Core ML encoder model in the
whisper.obj and whisper.swiftui examples.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783

2 months agoci : remove intermediate build on push to master (#2986)
Daniel Bevenius [Wed, 2 Apr 2025 06:29:28 +0000 (08:29 +0200)]
ci : remove intermediate build on push to master (#2986)

This commit removes the builds that happen on each push to master.

Refs: https://github.com/ggerganov/whisper.cpp/discussions/2983#discussioncomment-12691424

2 months agowhisper.objc : fix typo in README.md [no ci] (#2985)
Daniel Bevenius [Wed, 2 Apr 2025 06:26:57 +0000 (08:26 +0200)]
whisper.objc : fix typo in README.md [no ci] (#2985)

This commit fixes a typo in the README.md file of the whisper.objc
example.

Resolves: https://github.com/ggerganov/whisper.cpp/issues/2984

2 months agocoreml: fix Whisper to CoreML conversion by disabling SDPA [no ci] (#2979)
Daniel Bevenius [Tue, 1 Apr 2025 16:01:23 +0000 (18:01 +0200)]
coreml: fix Whisper to CoreML conversion by disabling SDPA [no ci] (#2979)

* coreml: fix Whisper to CoreML conversion by disabling SDPA

This commit disables the use of PyTorch's
`scaled_dot_product_attention` in the Whisper model to avoid
compatibility issues during CoreML conversion.
The issue occurs because coremltools requires PyTorch 2.5.0, but the
Whisper implementation may expect behavior from newer PyTorch versions.

By setting `MultiHeadAttention.use_sdpa = False`, we force Whisper to
use its fallback manual attention implementation, which works correctly
with PyTorch 2.5.0 during the tracing process.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783

* coreml: fix audio shape in whisper decoder conversion

This commit fixes the audio shape in the whisper decoder conversion
script.

The motivation for this is that the  audio shape was incorrect and
was causing the conversion to fail.

* coreml : set -e in generate-coreml-interface.sh

The commit sets the -e flag in the generate-coreml-interface.sh script
to make sure the script fails if any command fails.

* coreml : update generated encoder/decoder interfaces

This commit updates the generated encoder/decoder interfaces for the
whisper model which is the result of running the
generate-coreml-interface.sh script.

2 months agoci : add coreml job that converts base.en to coreml [no ci] (#2981)
Daniel Bevenius [Tue, 1 Apr 2025 15:04:32 +0000 (17:04 +0200)]
ci : add coreml job that converts base.en to coreml [no ci] (#2981)

* ci : add coreml job that converts base.en to coreml [no ci]

This commit adds a new job to the CI pipeline that downloads the base.en
model and converts it to CoreML format. The CoreML model is then packed
into a zip file and uploaded as an artifact.

This will only be done for pushes to master, releases, or pre-releases.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783

* coreml : remove publishing of coreml model

* ci : add GGML_OPENMP=OFF to ubuntu-22-gcc-sanitized

2 months agotests : re-enable tests [no ci] (#2977)
Daniel Bevenius [Mon, 31 Mar 2025 15:04:37 +0000 (17:04 +0200)]
tests : re-enable tests [no ci] (#2977)

This commit re-enables the tests in the build process which are
currently commented out.

It is possible to build the tests using `-DWHISPER_BUILD_TESTS=ON` and
then run a single test using:
```console
$ ctest -R test-whisper-cli-tiny.en --test-dir build
Internal ctest changing into directory: /home/danbev/work/ai/whisper-work/build
Test project /home/danbev/work/ai/whisper-work/build
    Start 2: test-whisper-cli-tiny.en
1/1 Test #2: test-whisper-cli-tiny.en .........   Passed    4.44 sec

100% tests passed, 0 tests failed out of 1

Label Time Summary:
en      =   4.44 sec*proc (1 test)
gh      =   4.44 sec*proc (1 test)
tiny    =   4.44 sec*proc (1 test)

Total Test time (real) =   4.44 sec
```

Some of the tests take a long time to run so it might not be a good idea
to enable them in CI, or perhaps we could only run a subset of the tests
in CI.

2 months agoandroid.java : re-add ggml source updates (#2975)
Daniel Bevenius [Mon, 31 Mar 2025 14:14:33 +0000 (16:14 +0200)]
android.java : re-add ggml source updates (#2975)

This commit updates the ggml source to include the new unary and binary
operations. I merged https://github.com/ggerganov/whisper.cpp/pull/2958
which seems to have overwritten the changes to the ggml source which
were added in https://github.com/ggerganov/whisper.cpp/pull/2972.

Sorry about this.

2 months agoci : re-enable freeBSD-latest job (#2973)
Daniel Bevenius [Mon, 31 Mar 2025 13:24:08 +0000 (15:24 +0200)]
ci : re-enable freeBSD-latest job (#2973)

This commit re-enables the freeBSD-latest job which has been commented
out.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2781

2 months agoci : re-enable android_java job (#2958)
Daniel Bevenius [Mon, 31 Mar 2025 13:14:24 +0000 (15:14 +0200)]
ci : re-enable android_java job (#2958)

This commit re-enables the android_java job in the CI workflow. The job
was disabled because of a failing build.

The motivation for this is that Commit
226d344f565ea6140e7c6a583bc300a64454af58 ("whisper.android.java : update
build with ggml source changes") addressed build issues and it should
now be possible to re-enable this job.

2 months agoandroid : add new ggml source files
Georgi Gerganov [Mon, 31 Mar 2025 11:38:43 +0000 (14:38 +0300)]
android : add new ggml source files

ggml-ci

2 months agoruby : add new ggml sources
Georgi Gerganov [Mon, 31 Mar 2025 11:19:25 +0000 (14:19 +0300)]
ruby : add new ggml sources

ggml-ci

2 months agosync : ggml
Georgi Gerganov [Mon, 31 Mar 2025 11:13:54 +0000 (14:13 +0300)]
sync : ggml

ggml-ci

2 months agoSYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)
Akarshan Biswas [Mon, 31 Mar 2025 09:25:24 +0000 (14:55 +0530)]
SYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)

* SYCL: Remove misleading ggml_sycl_op_flatten function

* remove trailing whitespace

* Fix L2 norm from rebase

* remove try catch block from element_wise.cpp

* remove comment from common.hp

* ggml-sycl.cpp: Add try catch sycl::exception block in compute_forward

* norm.cpp: remove try catch exception block

2 months agometal : use constexpr in FA kernels + fix typedef (llama/12659)
Georgi Gerganov [Sun, 30 Mar 2025 19:04:04 +0000 (22:04 +0300)]
metal : use constexpr in FA kernels + fix typedef (llama/12659)

* metal : use constexpr in FA kernels

ggml-ci

* cont

ggml-ci

* cont : fix typedef

ggml-ci

2 months agomusa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc...
R0CKSTAR [Sun, 30 Mar 2025 08:59:38 +0000 (16:59 +0800)]
musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (llama/12611)

* musa: fix all warnings

Signed-off-by: Xiaodong Ye <redacted>
* musa: enable -DLLAMA_FATAL_WARNINGS=ON in run.sh

Signed-off-by: Xiaodong Ye <redacted>
* musa: update ci doc (install ccache)

Signed-off-by: Xiaodong Ye <redacted>
* fix Windows build issue

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months agocmake : fix ccache conflict (llama/12522)
Jay [Sat, 29 Mar 2025 10:04:58 +0000 (18:04 +0800)]
cmake : fix ccache conflict (llama/12522)

If users have already set CMAKE_C_COMPILER_LAUNCHER globally, setting it
again in cmake leads to a conflict and the compile fails.

Signed-off-by: Jay <redacted>