git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
8 weeks ago rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)
Radoslav Gerganov [Fri, 25 Apr 2025 07:08:08 +0000 (10:08 +0300)]
rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)

RPC_CMD_SET_TENSOR always returns an empty response and we send this 4
times per token. We can improve TG speed if we don't wait for this empty
response.

The performance impact of this change depends on the network latency.
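As a rough illustration (not the actual ggml-rpc code; the helper and wire layout below are assumptions), the client-side idea amounts to writing the command and payload and returning immediately instead of blocking on a reply that is known to be empty:

```cpp
// Hypothetical sketch of a fire-and-forget send for a command whose reply is
// always empty; send_all() and the message layout are illustrative assumptions.
#include <cstddef>
#include <cstdint>
#include <sys/socket.h>
#include <sys/types.h>
#include <vector>

static bool send_all(int fd, const void * buf, size_t len) {
    const uint8_t * p = static_cast<const uint8_t *>(buf);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p   += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

bool rpc_send_set_tensor(int fd, uint8_t cmd, const std::vector<uint8_t> & payload) {
    uint64_t size = payload.size();
    if (!send_all(fd, &cmd,  sizeof(cmd)))  return false;
    if (!send_all(fd, &size, sizeof(size))) return false;
    if (!send_all(fd, payload.data(), payload.size())) return false;
    return true; // no recv() here: the empty response is not awaited
}
```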

8 weeks ago ggml : fix ggml_gallocr_ptr type (ggml/1205)
Diego Devesa [Wed, 30 Apr 2025 13:20:40 +0000 (15:20 +0200)]
ggml : fix ggml_gallocr_ptr type (ggml/1205)

8 weeks ago whisper : add check that target name exists (#3103)
Daniel Bevenius [Thu, 1 May 2025 08:05:24 +0000 (10:05 +0200)]
whisper : add check that target name exists (#3103)

This commit adds a check to make sure that the target exists before
trying to add compile options to ignore warnings when using MSVC.

The motivation for this is that the build is currently broken depending on
the CMake options provided. With this fix it should be possible to build
even if the targets are not actually available.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3090#issuecomment-2842760104

8 weeks ago server : add --no-gpu option to print usage output (#3098)
Daniel Bevenius [Thu, 1 May 2025 06:15:12 +0000 (08:15 +0200)]
server : add --no-gpu option to print usage output (#3098)

This commit adds the command line option `--no-gpu` to the server
example's print usage function.

The motivation for this is that this option is available and can be set,
but it is not displayed in the usage message.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3095

8 weeks ago ruby : ignore "Downloading" output in test_log_suppress (#3106)
Daniel Bevenius [Thu, 1 May 2025 06:12:48 +0000 (08:12 +0200)]
ruby : ignore "Downloading" output in test_log_suppress (#3106)

This commit adds a temporary fix to the `test_log_suppress` test in the
Ruby bindings.

The motivation for this change is that I suspect that the recent
migration of the models to HuggingFace Xet has changed the way HTTP
caching works for the models. This is causing the test in question to
fail. This is a temporary fix so that CI is not broken while we
investigate further.

8 weeks ago make : fix samples glob pattern (#3100)
Georgi Gerganov [Wed, 30 Apr 2025 11:21:51 +0000 (14:21 +0300)]
make : fix samples glob pattern (#3100)

8 weeks ago ggml : suppress Windows compiler warnings (#3075)
Daniel Bevenius [Tue, 29 Apr 2025 13:47:55 +0000 (15:47 +0200)]
ggml : suppress Windows compiler warnings (#3075)

* whisper: suppress Windows compiler warnings

This commit disables compiler warnings on Windows when using MSVC.

The motivation for these changes is that some compilers, for example
Windows MSVC, generate warnings for these conversions, and there are
quite a few of them. This makes it a little difficult to spot new
warnings that may be introduced, and it can also be a problem for
users/embedders of ggml, where these warnings are hard to separate
from their own warnings.

* squash! whisper: suppress Windows compiler warnings

Move ggml-related warnings into ggml. This commit also fixes the
indentation and adds missing whitespace to the if statement.

2 months ago whisper : fix grammar advance stack warning (#3087)
Daniel Bevenius [Mon, 28 Apr 2025 17:11:38 +0000 (19:11 +0200)]
whisper : fix grammar advance stack warning (#3087)

This commit addresses a warning that is present for Release builds:
```console
[ 30%] Building CXX object src/CMakeFiles/whisper.dir/whisper.cpp.o
In file included from /usr/include/c++/13/bits/stl_tree.h:63,
                 from /usr/include/c++/13/map:62,
                 from /home/danbev/work/ai/whisper.cpp/src/whisper-arch.h:5,
                 from /home/danbev/work/ai/whisper.cpp/src/whisper.cpp:2:
In static member function ‘static void std::__copy_move<false, false, std::random_access_iterator_tag>::__assign_one(_Tp*, _Up*) [with _Tp = const whisper_grammar_element*; _Up = const whisper_grammar_element* const]’,
    inlined from ‘static _Up* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(_Tp*, _Tp*, _Up*) [with _Tp = const whisper_grammar_element* const; _Up = const whisper_grammar_element*; bool _IsMove = false]’ at /usr/include/c++/13/bits/stl_algobase.h:440:20,
    inlined from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const whisper_grammar_element* const*; _OI = const whisper_grammar_element**]’ at /usr/include/c++/13/bits/stl_algobase.h:506:30,
    inlined from ‘_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const whisper_grammar_element* const*; _OI = const whisper_grammar_element**]’ at /usr/include/c++/13/bits/stl_algobase.h:533:42,
...
```
This warning is caused by the fact that the `stack` vector is empty
when it is passed to `new_stacks.push_back(stack);`.

The suggested fix is to use `new_stacks.emplace_back();` instead of
`new_stacks.push_back(stack);`.
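For illustration only (this is not the whisper.cpp source; the element type below is a stand-in), the difference between the two calls is that `push_back(stack)` copies the empty vector while `emplace_back()` constructs the new element in place:

```cpp
// Minimal sketch: push_back(stack) copies an (empty) vector, emplace_back()
// default-constructs the new element in place and avoids the copy.
#include <vector>

struct grammar_element { int type; int value; };

int main() {
    std::vector<std::vector<const grammar_element *>> new_stacks;
    std::vector<const grammar_element *> stack; // empty at this point

    new_stacks.push_back(stack);   // copies the empty vector (what the warning flagged)
    new_stacks.emplace_back();     // same end result, constructed in place
    return 0;
}
```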

2 months ago examples : expose language detection probabilities to server example (#3044)
Sacha Arbonel [Mon, 28 Apr 2025 16:25:45 +0000 (18:25 +0200)]
examples : expose language detection probabilities to server example (#3044)

* feat: expose language detection probabilities to server.cpp

* feat: enhance language detection output in server.cpp

* Remove empty spaces.

2 months ago whisper : remove empty .gitmodules file [no ci] (#3085)
Daniel Bevenius [Mon, 28 Apr 2025 13:52:05 +0000 (15:52 +0200)]
whisper : remove empty .gitmodules file [no ci] (#3085)

This commit removes the empty `.gitmodules` file from the repository.

The motivation for this is that the file is currently empty and the
project does not use any submodules at this time. Removing it mainly
reduces clutter in the repository and avoids confusion when the file is
seen in the repo.

2 months ago talk-llama : sync llama.cpp (#3084)
Georgi Gerganov [Mon, 28 Apr 2025 13:40:23 +0000 (16:40 +0300)]
talk-llama : sync llama.cpp (#3084)

ggml-ci

2 months ago ci : disable publishing of java binding [no ci] (#3086)
Daniel Bevenius [Mon, 28 Apr 2025 13:38:52 +0000 (15:38 +0200)]
ci : disable publishing of java binding [no ci] (#3086)

This commit disables the publishing of the Java binding to the Maven
repository.

The motivation for this is that this job was disabled for some time and
recently it was re-enabled, but the publishing of the Java binding
caused the build to fail and needs to be investigated further.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3079

2 months ago build : Add Moore Threads GPU support and update GitHub workflow for MUSA build ...
R0CKSTAR [Mon, 28 Apr 2025 08:06:41 +0000 (16:06 +0800)]
build : Add Moore Threads GPU support and update GitHub workflow for MUSA build (#3069)

* Update PATH for main/main-cuda container

Signed-off-by: Xiaodong Ye <redacted>
* Add Dockerfile for musa, .dockerignore and update CI

Signed-off-by: Xiaodong Ye <redacted>
* Add Moore Threads GPU Support in README.md and replace ./main with whisper-cli

Signed-off-by: Xiaodong Ye <redacted>
* Forward GGML_CUDA/GGML_MUSA to cmake in Makefile

Signed-off-by: Xiaodong Ye <redacted>
* Minor updates for PATH ENV in Dockerfiles

Signed-off-by: Xiaodong Ye <redacted>
* Address comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months ago examples : fix deprecated FFmpeg functions (#3073)
Pedro [Mon, 28 Apr 2025 04:16:50 +0000 (01:16 -0300)]
examples : fix deprecated FFmpeg functions (#3073)

* Fix deprecated FFmpeg functions and free packet

* avcodec_free_context

2 months ago ruby : add encoder begin callback related methods (#3076) upstream/latest
KITAITI Makoto [Fri, 25 Apr 2025 19:33:11 +0000 (04:33 +0900)]
ruby : add encoder begin callback related methods (#3076)

* Lazy run TestBase.whisper

* Fix indentation

* Remove disused GGML_HIP_UMA from Ruby

* Add encoder_begin_callback

* Comment out existing abort mechanism

* Add test for encoder_begin_callback

* Add signatures for encoder_begin_callback related methods

* Update gem date

2 months ago ci : enable bindings java job (#3070)
Daniel Bevenius [Fri, 25 Apr 2025 12:56:06 +0000 (14:56 +0200)]
ci : enable bindings java job (#3070)

* ci : re-enable bindings-java (java) job

This commit re-enables the job previously named `java` which was
disabled in the build.yml file.

The motivation for this is that we recently fixed a few issues in the
Java bindings and it should now be possible to build them on Windows.

Refs: https://github.com/ggerganov/whisper.cpp/pull/2949
Resolves: https://github.com/ggerganov/whisper.cpp/issues/2781

2 months ago ruby : add cmake option (#0)
Georgi Gerganov [Thu, 24 Apr 2025 17:38:43 +0000 (20:38 +0300)]
ruby : add cmake option (#0)

2 months ago cuda : fix unused variable compile warning (#0)
Georgi Gerganov [Thu, 24 Apr 2025 15:59:06 +0000 (18:59 +0300)]
cuda : fix unused variable compile warning (#0)

ggml-ci

2 months ago sync : ggml upstream/1.7.5+105
Georgi Gerganov [Thu, 24 Apr 2025 15:41:48 +0000 (18:41 +0300)]
sync : ggml

ggml-ci

2 months ago opencl : remove obsolete files (skip) (ggml/1200)
Georgi Gerganov [Thu, 24 Apr 2025 15:41:17 +0000 (18:41 +0300)]
opencl : remove obsolete files (skip) (ggml/1200)

2 months ago sync : ggml
Georgi Gerganov [Thu, 24 Apr 2025 15:41:36 +0000 (18:41 +0300)]
sync : ggml

2 months ago opencl: split ggml-opencl.cl into multiple files and cleanup (llama/12886)
lhez [Thu, 24 Apr 2025 14:46:49 +0000 (17:46 +0300)]
opencl: split ggml-opencl.cl into multiple files and cleanup (llama/12886)

---------

Co-authored-by: Shangqing Gu <redacted>
2 months ago ggml : fix trailing whitespaces (llama/0)
Georgi Gerganov [Thu, 24 Apr 2025 14:22:27 +0000 (17:22 +0300)]
ggml : fix trailing whitespaces (llama/0)

2 months ago CUDA: use switch statements in constexpr functions (llama/13095)
Johannes Gäßler [Thu, 24 Apr 2025 13:57:10 +0000 (15:57 +0200)]
CUDA: use switch statements in constexpr functions (llama/13095)

2 months ago metal : fix floating-point range of attention scores in FA kernels (llama/13090)
Georgi Gerganov [Thu, 24 Apr 2025 07:38:30 +0000 (10:38 +0300)]
metal : fix floating-point range of attention scores in FA kernels (llama/13090)

ggml-ci

2 months ago vulkan: matmul gcn tuning (llama/13016)
Eve [Thu, 24 Apr 2025 07:18:33 +0000 (07:18 +0000)]
vulkan: matmul gcn tuning (llama/13016)

* tune matmul for gcn

* this one is more power efficient

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <redacted>
* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <redacted>
2 months ago CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (llama/13014)
Johannes Gäßler [Tue, 22 Apr 2025 19:27:40 +0000 (21:27 +0200)]
CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (llama/13014)

* CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID

* fix logic for RoPE support, CUDA graphs

2 months ago ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871)
Diego Devesa [Mon, 21 Apr 2025 16:13:51 +0000 (18:13 +0200)]
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871)

* ggml : add SSE 4.2 variant for CPUs without AVX

* ggml : add x64 base ABI variant

2 months ago SYCL: Add non-contiguous support in ROPE (llama/12993)
Akarshan Biswas [Mon, 21 Apr 2025 13:43:30 +0000 (19:13 +0530)]
SYCL: Add non-contiguous support in ROPE (llama/12993)

ggml-ci

2 months ago vulkan: support noncontiguous rms_norm (llama/13031)
Jeff Bolz [Sun, 20 Apr 2025 08:50:02 +0000 (03:50 -0500)]
vulkan: support noncontiguous rms_norm (llama/13031)

2 months ago metal: add neg operator (llama/13029)
Jeffrey Morgan [Sun, 20 Apr 2025 05:28:40 +0000 (22:28 -0700)]
metal: add neg operator (llama/13029)

2 months ago SYCL: Refactor and enable FP16 in binary broadcast OPs (llama/12975)
Akarshan Biswas [Fri, 18 Apr 2025 13:57:56 +0000 (19:27 +0530)]
SYCL: Refactor and enable FP16 in binary broadcast OPs (llama/12975)

* SYCL: refactor move to a separate file

* Fix binbcast

* Remove duplicates

* fix include formatting

* fix typo

2 months ago rpc : add RPC_CMD_HELLO (llama/12955)
Radoslav Gerganov [Fri, 18 Apr 2025 07:13:42 +0000 (10:13 +0300)]
rpc : add RPC_CMD_HELLO (llama/12955)

Add RPC_CMD_HELLO for getting the version of the protocol implemented by
the server. Follow the semantic versioning rules at https://semver.org

Hopefully this brings a better user experience when we make breaking
changes at the protocol level and avoids issues like #12465
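A minimal sketch of the idea, with the field names and compatibility rule assumed rather than taken from the actual protocol definition:

```cpp
// Hypothetical version-handshake response and a semver-style compatibility
// check: only a matching major version is treated as compatible.
#include <cstdint>
#include <cstdio>

struct rpc_hello_response {
    uint8_t major; // incremented for breaking protocol changes
    uint8_t minor; // incremented for backwards-compatible additions
    uint8_t patch; // incremented for backwards-compatible fixes
};

static bool is_compatible(const rpc_hello_response & server, uint8_t client_major) {
    return server.major == client_major;
}

int main() {
    rpc_hello_response server = {1, 2, 0};
    std::printf("compatible with client v1.x: %s\n", is_compatible(server, 1) ? "yes" : "no");
    return 0;
}
```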

2 months ago graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)
Georgi Gerganov [Thu, 17 Apr 2025 15:16:36 +0000 (18:16 +0300)]
graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)

* graph : make mla compatible with FA

* metal : add exp FA kernels for DeepSeek models

ggml-ci

* llama : minor naming updates

ggml-ci

* ggml : disable FA for DS head sizes

* tests : add FA tests for MLA shapes

ggml-ci

2 months ago ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (llama/12970)
Alan Gray [Thu, 17 Apr 2025 13:19:42 +0000 (14:19 +0100)]
ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (llama/12970)

2 months ago CANN: Add support for async operator submission (llama/12864)
hipudding [Thu, 17 Apr 2025 12:34:16 +0000 (20:34 +0800)]
CANN: Add support for async operator submission (llama/12864)

Submit operators using asynchronous threads to improve performance.

Use the environment variable GGML_CANN_ASYNC_MODE to control whether
asynchronous submission is enabled. It is disabled by default.

Testing shows a 10%–20% performance improvement in scenarios with
small parameter sizes, especially in quantized models.
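For illustration only, a minimal sketch of an opt-in environment variable check of this kind; the variable name comes from the commit message, but the parsing rule below is an assumption:

```cpp
// Illustrative only: async submission stays disabled unless the environment
// variable is set (here, to anything other than "0").
#include <cstdio>
#include <cstdlib>
#include <cstring>

static bool cann_async_mode_enabled() {
    const char * v = std::getenv("GGML_CANN_ASYNC_MODE");
    return v != nullptr && std::strcmp(v, "0") != 0;
}

int main() {
    std::printf("async mode: %s\n", cann_async_mode_enabled() ? "on" : "off");
    return 0;
}
```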

2 months ago opencl: fix incorrect local_size index in profiling log (llama/12868)
kimminsu [Wed, 16 Apr 2025 21:25:57 +0000 (06:25 +0900)]
opencl: fix incorrect local_size index in profiling log (llama/12868)

2 months ago vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)
Jeff Bolz [Wed, 16 Apr 2025 18:37:25 +0000 (13:37 -0500)]
vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)

The grouped query attention optimization doesn't require a power-of-two ratio;
the only thing relying on it was the modulo operation written as a bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
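To illustrate the constraint being lifted (a generic note, not the shader code itself): `x & (r - 1)` only equals `x % r` when `r` is a power of two, so replacing the bitwise form removes the power-of-two requirement on the ratio:

```cpp
// Illustrative only: the bitwise-AND form of a modulo is valid solely for
// power-of-two divisors.
#include <cstdio>

int main() {
    unsigned r = 8;                                      // power of two
    std::printf("%u %u\n", 11u & (r - 1), 11u % r);      // 3 3  -> equal

    r = 6;                                               // not a power of two
    std::printf("%u %u\n", 2u & (r - 1), 2u % r);        // 0 2  -> different
    return 0;
}
```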

2 months ago CANN: Add 310P operator support check (llama/12962)
Chenguang Li [Wed, 16 Apr 2025 08:21:05 +0000 (16:21 +0800)]
CANN: Add 310P operator support check (llama/12962)

2 months ago metal : add FA-vec kernels for head size 96 (llama/12952)
Georgi Gerganov [Tue, 15 Apr 2025 11:45:05 +0000 (14:45 +0300)]
metal : add FA-vec kernels for head size 96 (llama/12952)

ggml-ci

2 months ago CANN: Add x86 build ci (llama/12950)
hipudding [Tue, 15 Apr 2025 11:08:55 +0000 (19:08 +0800)]
CANN: Add x86 build ci (llama/12950)

* CANN: Add x86 build ci

* CANN: fix code format

2 months ago CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)
David Huang [Tue, 15 Apr 2025 09:20:38 +0000 (17:20 +0800)]
CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)

Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.

2 months ago SYCL: Add ROPE vision kernel (llama/12887)
Akarshan Biswas [Tue, 15 Apr 2025 08:37:42 +0000 (14:07 +0530)]
SYCL: Add ROPE vision kernel (llama/12887)

* SYCL: Add ROPE vision kernel

* Add comment about rope mode

2 months ago ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)
Srihari-mcw [Tue, 15 Apr 2025 06:22:36 +0000 (11:52 +0530)]
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)

* Add AVX512 implementation of GEMM - q4kx8

* Update changes to remove unnecessary whitespaces

2 months ago CANN: Opt ROPE optimization (llama/12865)
Chenguang Li [Tue, 15 Apr 2025 02:09:35 +0000 (10:09 +0800)]
CANN: Opt ROPE optimization (llama/12865)

* [CANN]Opt ROPE optimization

* [CANN]Codestyle adjustment

* [CANN]Fix the ROPE precision issue

* [CANN]codestyle fix

* [CANN]add rope unsupport case

Signed-off-by: noemotiovon <redacted>
2 months ago CANN: Optimize CANN buffer pool memory management (llama/12875)
Xinpeng Dou [Tue, 15 Apr 2025 02:04:24 +0000 (10:04 +0800)]
CANN: Optimize CANN buffer pool memory management (llama/12875)

Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the VMM pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
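A hedged sketch of this selection order; the enum and function are illustrative, only the macro names come from the commit message:

```cpp
// Illustrative selection logic for the three pool kinds described above.
#include <cstdio>

enum class cann_pool_kind { vmm, prio_queue, legacy };

static cann_pool_kind select_pool(bool vmm_available) {
    (void) vmm_available; // may be unused depending on the build flags
#if !defined(GGML_CANN_DISABLE_VMM_POOL)
    if (vmm_available) {
        return cann_pool_kind::vmm;     // 1. VMM pool by default when usable
    }
#endif
#if defined(GGML_CANN_ENABLE_BUF_PRIO_POOL)
    return cann_pool_kind::prio_queue;  // 2. priority queue-based pool when enabled
#else
    return cann_pool_kind::legacy;      // 3. fall back to the traditional pool
#endif
}

int main() {
    std::printf("selected pool kind: %d\n", static_cast<int>(select_pool(true)));
    return 0;
}
```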

2 months ago SYCL: Fix im2col (llama/12910)
Akarshan Biswas [Mon, 14 Apr 2025 12:23:53 +0000 (17:53 +0530)]
SYCL: Fix im2col (llama/12910)

* SYCL: Fix im2col

* restore local workgroup size adjustments for large inputs

* restore format

2 months ago rpc : use ggml_context_ptr (llama/12938)
Radoslav Gerganov [Mon, 14 Apr 2025 10:59:34 +0000 (13:59 +0300)]
rpc : use ggml_context_ptr (llama/12938)

2 months ago ggml : Depthwise 2D convolution (ggml/1152)
Acly [Thu, 17 Apr 2025 12:16:45 +0000 (14:16 +0200)]
ggml : Depthwise 2D convolution (ggml/1152)

* ggml-cpu : kernels for faster depthwise 2D convolution

* fix compile: remove static after moving to ops.cpp

* add dilation for depthwise_conv_2d

* review: rename to ggml_conv_2d_dw_direct, remove redundant struct keywords, pass by ref, whitespace

* review: rename depthwise_conv_2d -> conv_2d_dw everywhere

2 months ago ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result...
SXX [Mon, 14 Apr 2025 05:47:55 +0000 (13:47 +0800)]
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773)

* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register

* simplifies the codebase by removing redundant functions

2 months ago ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)
Alan Gray [Sun, 13 Apr 2025 21:12:21 +0000 (22:12 +0100)]
ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)

Fixes #12798

2 months ago vulkan: use aligned loads for flash attention mask (llama/12853)
Jeff Bolz [Sat, 12 Apr 2025 08:44:48 +0000 (03:44 -0500)]
vulkan: use aligned loads for flash attention mask (llama/12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

2 months ago sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)
Ewan Crawford [Fri, 11 Apr 2025 13:32:14 +0000 (15:32 +0200)]
sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)

The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support update.

2 months ago SYCL: Add fp16 type support to unary op kernels (llama/12788)
Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (llama/12788)

* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code

2 months ago ggml: fix compilation error s390x (llama/12848)
Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (llama/12848)

* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------

Co-authored-by: Aleksei Nikiforov <redacted>
2 months ago cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal when running...
cmdr2 [Fri, 11 Apr 2025 06:44:19 +0000 (12:14 +0530)]
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal when running test-backend-ops with only the CPU backend (ggml/1190)

2 months ago CANN: Support more ops (llama/12841)
Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (llama/12841)

* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <redacted>
2 months ago Fixes #12823 (llama/12830)
Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (llama/12830)

* Including limits file on AIX

* Fixes #12823

2 months ago ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)
Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)

error: unknown type name '_Bool'

2 months ago ggml-impl.h: fix build on POWER9 (llama/12855)
Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (llama/12855)

error: ISO C++17 does not allow 'register' storage class specifier

2 months ago CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)
Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)

* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <redacted>
2 months ago vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)
Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

2 months ago vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

2 months ago cuda : add f32 to bf16 copy op (llama/12806)
Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (llama/12806)

This allows BF16 KV-cache on CUDA.

2 months ago llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)
Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)

* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci

2 months ago ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)

fix #1186

2 months ago ggml : add bilinear upscale support (ggml/1185)
Diego Devesa [Wed, 9 Apr 2025 10:32:13 +0000 (12:32 +0200)]
ggml : add bilinear upscale support (ggml/1185)

2 months ago ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
Diego Devesa [Wed, 9 Apr 2025 10:31:34 +0000 (12:31 +0200)]
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)

* ggml : add more generic ggml_custom op

* ggml : remove deprecated custom ops

2 months ago Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor...
Neo Zhang Jianyu [Tue, 8 Apr 2025 07:03:21 +0000 (15:03 +0800)]
Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)

* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"

This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* rm tail space

2 months ago opencl: better identify Adreno GPU (llama/12760)
lhez [Mon, 7 Apr 2025 20:22:54 +0000 (13:22 -0700)]
opencl: better identify Adreno GPU (llama/12760)

2 months ago cuda : fix HIP and MUSA BF16 (llama/0)
Georgi Gerganov [Mon, 7 Apr 2025 10:18:07 +0000 (13:18 +0300)]
cuda : fix HIP and MUSA BF16 (llama/0)

ggml-ci

2 months ago sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama...
zhouwg [Mon, 7 Apr 2025 15:22:57 +0000 (23:22 +0800)]
sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734)

2 months ago CANN: fix typo in ggml-cann (llama/12733)
zhouwg [Mon, 7 Apr 2025 11:34:14 +0000 (19:34 +0800)]
CANN: fix typo in ggml-cann (llama/12733)

2 months ago CANN: Refactor to reduce duplicate code (llama/12731)
hipudding [Mon, 7 Apr 2025 09:10:36 +0000 (17:10 +0800)]
CANN: Refactor to reduce duplicate code (llama/12731)

* CANN: Refactor to reduce duplicate code

* CANN: fix review comment

2 months ago musa: fix compilation warnings in mp_22/31 (llama/12780)
R0CKSTAR [Sun, 6 Apr 2025 13:23:54 +0000 (21:23 +0800)]
musa: fix compilation warnings in mp_22/31 (llama/12780)

Signed-off-by: Xiaodong Ye <redacted>
2 months ago vulkan: fix NaN issue in flash attention shader (llama/12776)
Jeff Bolz [Sun, 6 Apr 2025 09:03:47 +0000 (04:03 -0500)]
vulkan: fix NaN issue in flash attention shader (llama/12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
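As a generic illustration of why a finite sentinel helps (not the shader code itself): if the running maximum stays at -inf, a later subtraction against it produces NaN, whereas a finite value such as -FLT_MAX/2 keeps the arithmetic finite:

```cpp
// Illustrative only: -inf minus -inf is NaN; a finite sentinel avoids this.
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    float inf_init  = -INFINITY;
    float safe_init = -FLT_MAX / 2;
    std::printf("%f\n", inf_init  - inf_init);   // nan
    std::printf("%f\n", safe_init - safe_init);  // 0.000000
    return 0;
}
```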

2 months ago vulkan: Use unclamped loads for flash attention mask (llama/12720)
Jeff Bolz [Sun, 6 Apr 2025 08:47:13 +0000 (03:47 -0500)]
vulkan: Use unclamped loads for flash attention mask (llama/12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

2 months ago Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)
0cc4m [Sat, 5 Apr 2025 16:04:03 +0000 (18:04 +0200)]
Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)

2 months ago sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solut...
Nicolò Scipione [Fri, 4 Apr 2025 14:00:46 +0000 (16:00 +0200)]
sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625)

2 months ago cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
Ronny Brendel [Fri, 4 Apr 2025 13:12:40 +0000 (15:12 +0200)]
cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)

fixes error for compiler paths with spaces

2 months ago vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
Jeff Bolz [Fri, 4 Apr 2025 05:54:35 +0000 (00:54 -0500)]
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent of performance and also increases variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete; we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
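A hedged sketch of that hybrid wait, not the actual ggml-vulkan code; the function below just shows the pattern using the standard Vulkan API:

```cpp
// Block on an "almost ready" fence, then poll the final fence with _mm_pause()
// instead of paying the wake-up latency of a second blocking wait.
#include <vulkan/vulkan.h>
#include <immintrin.h>
#include <cstdint>

void wait_hybrid(VkDevice device, VkFence almost_ready, VkFence final_fence) {
    // blocking wait until most of the graph has executed
    vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);

    // spin on the last fence; _mm_pause() keeps the busy loop CPU-friendly
    while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
        _mm_pause();
    }
}
```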

2 months ago vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)
Jeff Bolz [Fri, 4 Apr 2025 05:53:20 +0000 (00:53 -0500)]
vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)

2 months ago CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
Gaurav Garg [Thu, 3 Apr 2025 16:20:29 +0000 (21:50 +0530)]
CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)

* Prefer vector flash decoding kernel for Gemma models

The vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in gen phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
2 months ago vulkan: Fix missing cmake logic for dot product extension (llama/12721)
Jeff Bolz [Thu, 3 Apr 2025 15:08:26 +0000 (10:08 -0500)]
vulkan: Fix missing cmake logic for dot product extension (llama/12721)

2 months ago fix MUSA compiler warning (llama/12704)
a3sh [Thu, 3 Apr 2025 07:32:55 +0000 (15:32 +0800)]
fix MUSA compiler warning (llama/12704)

* fix MUSA compiler warning

* replace (void) with GGML_UNUSED

2 months ago CANN: Support operator SIN COS ARGMAX (llama/12709)
Chenguang Li [Thu, 3 Apr 2025 07:18:08 +0000 (15:18 +0800)]
CANN: Support operator SIN COS ARGMAX (llama/12709)

* [CANN]support sin cos argmax

Signed-off-by: noemotiovon <redacted>
* [CANN]codestyle adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN]Remove redundant code

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
2 months ago Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
Alan Gray [Thu, 3 Apr 2025 01:31:15 +0000 (02:31 +0100)]
Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)

* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers

Previously there was complexity in the CUDA graphs implementation due to
frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies things by using indirection to avoid
such parameters changing frequently, avoiding the need for frequent
graph updates.

Fixes #12152

* Addressed comments

* fix HIP builds

* properly sync to stream

* removed ggml_cuda_cpy_fn_ptrs

* move stream sync before free

* guard to only use indirection with graphs

* style fixes

* check for errors

---------

Co-authored-by: slaren <redacted>
2 months ago CANN: Fix failed test cases (llama/12708)
hipudding [Thu, 3 Apr 2025 00:49:51 +0000 (08:49 +0800)]
CANN: Fix failed test cases (llama/12708)

* CANN: Fix memory waste in aclnn_tensor

* CANN: fix backend ops fail

* CANN: fix acl_tensor memory alloc.

* CANN: format

* CANN: remove trailing whitespace

2 months ago opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)
lhez [Thu, 3 Apr 2025 00:01:42 +0000 (17:01 -0700)]
opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)

2 months ago vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
Jeff Bolz [Wed, 2 Apr 2025 19:25:08 +0000 (14:25 -0500)]
vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)

When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.

2 months ago cmake: remove caching from vulkan coopmat checks (llama/12719)
bandoti [Wed, 2 Apr 2025 17:56:26 +0000 (14:56 -0300)]
cmake: remove caching from vulkan coopmat checks (llama/12719)

2 months ago vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)
Jeff Bolz [Wed, 2 Apr 2025 17:40:32 +0000 (12:40 -0500)]
vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.

2 months ago Vulkan: Fix mmq int dot float cache size (llama/12722)
0cc4m [Wed, 2 Apr 2025 17:12:30 +0000 (19:12 +0200)]
Vulkan: Fix mmq int dot float cache size (llama/12722)

2 months ago llama : add option to override model tensor buffers (llama/11397)
Diego Devesa [Wed, 2 Apr 2025 12:52:01 +0000 (14:52 +0200)]
llama : add option to override model tensor buffers (llama/11397)

* llama : add option to override tensor buffers

* ggml : fix possible underflow in ggml_nbytes

2 months ago ggml : simplify Arm fp16 CPU logic (ggml/1177)
Georgi Gerganov [Mon, 7 Apr 2025 09:25:15 +0000 (12:25 +0300)]
ggml : simplify Arm fp16 CPU logic (ggml/1177)

* ggml : simplify Arm fp16 CPU logic

ggml-ci

* cont : bring back CUDA/MUSA checks

ggml-ci

2 months ago CUDA: don't convert BF16 weights to FP32 (ggml/1174)
Sigbjørn Skjæret [Fri, 4 Apr 2025 19:05:12 +0000 (21:05 +0200)]
CUDA: don't convert BF16 weights to FP32 (ggml/1174)

* add bf16 support

* use convert_from_bf16_cuda instead of convert_unary_cuda for f32

* revert 7ec5085

* move functionality into convert_unary with constexpr

2 months ago coreml : set convert_to="mlprogram" in convert
Daniel Bevenius [Wed, 23 Apr 2025 06:24:38 +0000 (08:24 +0200)]
coreml : set convert_to="mlprogram" in convert

* coreml : skip model load in convert-whisper-to-coreml.py

This commit updates the conversion process for Whisper models to use the
"mlprogram" format instead of "neuralnetwork".

The motivation for this change is that when using the "neuralnetwork"
format the underlying model produced is based on protobuf and my
understanding is that there are limitations to this format, such as
sizes of strings and the complexity of the model.

Currently when trying to convert larger models such as large-v3 the
conversion fails but succeeds for smaller models.

The "mlprogram" format is a more recent addition to CoreML and is
designed to be more flexible and powerful, allowing for more complex
models and larger data types. This seems to work for larger and smaller
models alike and unless I'm there are considerations that I'm not aware
of I think this is what we should be using moving forward.
The error that is generated for large models is the following:
```console
Running MIL backend_neuralnetwork pipeline: 100%|█████████| 9/9 [00:00<00:00, 35.44 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|███████████| 5641/5641 [03:31<00:00, 26.65 ops/s]
Traceback (most recent call last):
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 322, in <module>
    encoder = convert_encoder(hparams, encoder, quantize=args.quantize)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 255, in convert_encoder
    model = ct.convert(
            ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
              ^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 186, in mil_convert
    return _mil_convert(
           ^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 245, in _mil_convert
    return modelClass(
           ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 489, in __init__
    self.__proxy__, self._spec, self._framework_error = self._get_proxy_and_spec(
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 550, in _get_proxy_and_spec
    _MLModelProxy(
ValueError: basic_string
```

Refs: https://github.com/ggml-org/whisper.cpp/issues/3012

2 months ago ci : disable freeBSD job in build.yml (#3064)
Daniel Bevenius [Tue, 22 Apr 2025 09:07:54 +0000 (11:07 +0200)]
ci : disable freeBSD job in build.yml (#3064)

This commit disables the FreeBSD job in build.yml of the GitHub Actions
workflow.

The motivation for this is that this job seems to stall and time out from
time to time, taking up to 6 hours to complete/cancel.

2 months ago examples : add HEAPU8 to exported runtime methods (#3062)
Daniel Bevenius [Sun, 20 Apr 2025 17:40:25 +0000 (19:40 +0200)]
examples : add HEAPU8 to exported runtime methods (#3062)

This commit adds `HEAPU8` to the list of exported methods.

The motivation for this commit is that currently this is causing an
error on Windows systems where HEAPU8 is undefined, which results in the
following error message in the web console:
```console
main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42
```

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3059

2 months ago ruby : make Ruby bindings installed with build options (#3056)
KITAITI Makoto [Thu, 17 Apr 2025 09:49:58 +0000 (18:49 +0900)]
ruby : make Ruby bindings installed with build options (#3056)

* Fix signature of URI.new's return value

* Use path instead of string | _ToPath

* Add document comment to RBS

* Remove unnecessary build flags

* Remove unnecessary line

* Remove files have become unnecessary

* Make gem install accept build options for whisper.cpp

* Add instructions for build options in README

* Add methods for check to Options

* Test build options

* Rename: configs -> options

* Add assert_installed assertion

* Use assert_installed

* Remove unused attribute

* Extract dependency check logic as Dependencies class

* Update README

* Add WHISPER_FFMPEG option

* Test extra build options only on local test

* Bump version to 1.3.2 [skip ci]