git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 months ago llama : add option to override model tensor buffers (#11397) upstream/0.0.5028
Diego Devesa [Wed, 2 Apr 2025 12:52:01 +0000 (14:52 +0200)]
llama : add option to override model tensor buffers (#11397)

* llama : add option to override tensor buffers

* ggml : fix possible underflow in ggml_nbytes
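The underflow mentioned in the second bullet can be sketched as follows. This is a simplified stand-in, not the actual `ggml_nbytes` code: a tensor's byte size is derived from the offset of its last element, i.e. a sum of `(ne[i] - 1) * nb[i]` terms, and once that subtraction is treated as an unsigned size, a zero-sized dimension wraps around to a huge value unless empty tensors are guarded explicitly.

```cpp
#include <cstddef>
#include <cstdint>

// Toy nbytes computation over 4 dimensions with contiguous strides.
// The names (toy_nbytes) and the guard placement are illustrative only.
size_t toy_nbytes(int64_t ne0, int64_t ne1, int64_t ne2, int64_t ne3, size_t elem_size) {
    const int64_t ne[4] = { ne0, ne1, ne2, ne3 };
    for (int i = 0; i < 4; i++) {
        if (ne[i] <= 0) {
            return 0; // empty tensor: avoid the (ne[i] - 1) underflow below
        }
    }
    // contiguous byte strides: nb[0] = elem_size, nb[i] = nb[i-1] * ne[i-1]
    size_t nb[4];
    nb[0] = elem_size;
    for (int i = 1; i < 4; i++) {
        nb[i] = nb[i - 1] * (size_t) ne[i - 1];
    }
    // offset of the last element, plus one element
    size_t nbytes = elem_size;
    for (int i = 0; i < 4; i++) {
        nbytes += (size_t)(ne[i] - 1) * nb[i];
    }
    return nbytes;
}
```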

2 months ago llama : refactor kv cache guard (#12695)
Georgi Gerganov [Wed, 2 Apr 2025 11:32:59 +0000 (14:32 +0300)]
llama : refactor kv cache guard (#12695)

* llama : refactor kv cache guard

ggml-ci

* cont : fix comment [no ci]

* llama : fix kv_cache restore logic

ggml-ci

* context : simplify kv cache updates

ggml-ci

* cont : better name [no ci]

* llama : fix llama_decode return code when could not find KV slot

ggml-ci

* context : change log err -> warn [no ci]

* kv-cache : add comment + warning

2 months ago vocab : BailingMoE : change possessive quantifiers to greedy (#12677)
Sigbjørn Skjæret [Wed, 2 Apr 2025 09:21:48 +0000 (11:21 +0200)]
vocab : BailingMoE : change possessive quantifiers to greedy (#12677)

2 months ago common : remove json.hpp from common.cpp (#12697)
Xuan-Son Nguyen [Wed, 2 Apr 2025 07:58:34 +0000 (09:58 +0200)]
common : remove json.hpp from common.cpp (#12697)

* common : remove json.hpp from common.cpp

* fix comment

2 months ago [CANN] get_rows and dup optimization (#12671)
Chenguang Li [Wed, 2 Apr 2025 07:22:13 +0000 (15:22 +0800)]
[CANN] get_rows and dup optimization (#12671)

* [CANN] get_rows and dup optimization.

Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN] GET_ROWS and CPY/DUP optimization

Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN] code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN] code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN] code style adjustment

Signed-off-by: noemotiovon <redacted>
* [CANN] code style adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
Co-authored-by: hipudding <redacted>
2 months ago common : refactor downloading system, handle mmproj with -hf option (#12694)
Xuan-Son Nguyen [Tue, 1 Apr 2025 21:44:05 +0000 (23:44 +0200)]
common : refactor downloading system, handle mmproj with -hf option (#12694)

* (wip) refactor downloading system [no ci]

* fix all examples

* fix mmproj with -hf

* gemma3: update readme

* only handle mmproj in llava example

* fix multi-shard download

* windows: fix problem with std::min and std::max

* fix 2

2 months ago opencl : fix memory allocation size (#12649)
Junil Kim [Tue, 1 Apr 2025 16:54:34 +0000 (01:54 +0900)]
opencl : fix memory allocation size (#12649)

issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283

This patch caps the memory allocation size so that it does not
exceed the maximum allocation size of the OpenCL device.
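One way to enforce such a limit can be sketched as below. This is a hypothetical illustration, not the actual patch: the device limit would be queried once via `clGetDeviceInfo` with `CL_DEVICE_MAX_MEM_ALLOC_SIZE`, and any requested buffer size clamped against it.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative clamp: 'device_max_alloc' stands in for the value reported by
// CL_DEVICE_MAX_MEM_ALLOC_SIZE; the function name is hypothetical.
size_t clamp_alloc_size(size_t requested, size_t device_max_alloc) {
    return std::min(requested, device_max_alloc);
}
```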

2 months ago llama : use LLM_KV_GENERAL_FILE_TYPE instead of gguf_find_key (#12672)
jklincn [Tue, 1 Apr 2025 12:54:28 +0000 (20:54 +0800)]
llama : use LLM_KV_GENERAL_FILE_TYPE instead of gguf_find_key (#12672)

2 months ago convert : BailingMoE : fix qkv split when head_dim is 0 (#12687)
Sigbjørn Skjæret [Tue, 1 Apr 2025 12:37:13 +0000 (14:37 +0200)]
convert : BailingMoE : fix qkv split when head_dim is 0 (#12687)

NOTE: Ling-lite-base is broken, see https://huggingface.co/inclusionAI/Ling-lite-base/discussions/2

2 months ago metal : use F32 prec in FA kernels (#12688)
Georgi Gerganov [Tue, 1 Apr 2025 11:57:19 +0000 (14:57 +0300)]
metal : use F32 prec in FA kernels (#12688)

* metal : use F32 prec in FA kernels

ggml-ci

* cont : fix FA vec kernel

ggml-ci

2 months ago Fix clang warning in gguf_check_reserved_keys (#12686)
R0CKSTAR [Tue, 1 Apr 2025 11:12:53 +0000 (19:12 +0800)]
Fix clang warning in gguf_check_reserved_keys (#12686)

* Fix clang warning in gguf_check_reserved_keys

Signed-off-by: Xiaodong Ye <redacted>
* Fix typo

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months ago vulkan: fix build when glslc doesn't support coopmat (#12683)
Wagner Bruna [Tue, 1 Apr 2025 09:38:07 +0000 (06:38 -0300)]
vulkan: fix build when glslc doesn't support coopmat (#12683)

2 months ago SYCL: Rename oneMKL to oneMath (#12192)
Romain Biessy [Tue, 1 Apr 2025 08:24:29 +0000 (10:24 +0200)]
SYCL: Rename oneMKL to oneMath (#12192)

* Rename oneMKL Interface to oneMath

* Use oneMath for Intel vendor

* Rename occurrences to mkl

* clang-format

* Silence verbose warnings

* Set oneMath HIP_TARGETS

* Fix silence warnings

* Remove step to build oneMath from build instructions

* Use fixed oneMath version

* Remove INTEL_CPU

* Fold CMake oneDNN conditions

* Use Intel oneMKL for Intel devices

* Improve CMake message

* Link against MKL::MKL_SYCL::BLAS only

* Move oneMath documentation to Nvidia and AMD sections

2 months ago SYCL: switch to SYCL namespace (#12674)
Akarshan Biswas [Tue, 1 Apr 2025 08:11:39 +0000 (13:41 +0530)]
SYCL: switch to SYCL namespace (#12674)

2 months ago convert : BailingMoE : avoid setting rope_dim to 0 (#12678)
Sigbjørn Skjæret [Mon, 31 Mar 2025 21:09:48 +0000 (23:09 +0200)]
convert : BailingMoE : avoid setting rope_dim to 0 (#12678)

2 months ago vocab : add special infill tokens for CodeLlama (#11850)
Daniel Bevenius [Mon, 31 Mar 2025 16:40:56 +0000 (18:40 +0200)]
vocab : add special infill tokens for CodeLlama (#11850)

* vocab : add special infill tokens for CodeLlama

The commit adds the following special tokens for CodeLlama infill:
- `▁<PRE>`
- `▁<SUF>`
- `▁<MID>`

The motivation for this is that currently the infill example uses
CodeLlama as a suggested model. But when using this model the following
error is generated:
```console
/llama.cpp-debug/examples/infill/infill.cpp:165: GGML_ASSERT(llama_vocab_fim_pre(vocab) >= 0) failed

Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
305251 Aborted                 (core dumped)
./build/bin/llama-infill -t 10 -ngl 0 -m models/codellama-13b.Q5_K_S.gguf \
  -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 20 \
  --in-prefix "def helloworld():\n    print(\"hell" \
  --in-suffix "\n   print(\"goodbye world\")\n    "
```

* squash! vocab : add special infill tokens for CodeLlama

Add _<EOT> as well.

2 months ago ggml : faster ssm scan (#10558)
a3sh [Mon, 31 Mar 2025 16:05:13 +0000 (00:05 +0800)]
ggml : faster ssm scan (#10558)

* faster ssm_scan

* delete unused comment

* clang format

* add space

* modify unnecessary calculations

* faster ssm conv implementation

* modify file name with dash

2 months ago convert : Qwerky : use lora_rank_tokenshift and lora_rank_decay if present (#12667)
Sigbjørn Skjæret [Mon, 31 Mar 2025 14:36:25 +0000 (16:36 +0200)]
convert : Qwerky : use lora_rank_tokenshift and lora_rank_decay if present (#12667)

2 months ago Vulkan: Add DP4A MMQ and Q8_1 quantization shader (#12135)
0cc4m [Mon, 31 Mar 2025 12:37:01 +0000 (14:37 +0200)]
Vulkan: Add DP4A MMQ and Q8_1 quantization shader (#12135)

* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version

2 months ago cmake : fix whitespace (#0)
Georgi Gerganov [Mon, 31 Mar 2025 12:05:30 +0000 (15:05 +0300)]
cmake : fix whitespace (#0)

2 months ago sync : ggml
Georgi Gerganov [Mon, 31 Mar 2025 11:59:21 +0000 (14:59 +0300)]
sync : ggml

ggml-ci

2 months ago cmake: improve Vulkan cooperative matrix support checks (whisper/2966)
Sandro Hanea [Mon, 31 Mar 2025 10:44:36 +0000 (12:44 +0200)]
cmake: improve Vulkan cooperative matrix support checks (whisper/2966)

Co-authored-by: Sandro Hanea <redacted>
2 months ago llava : proper description fix (#12668)
Sigbjørn Skjæret [Mon, 31 Mar 2025 09:28:30 +0000 (11:28 +0200)]
llava : proper description fix (#12668)

2 months ago SYCL: Remove misleading ggml_sycl_op_flatten function (#12387)
Akarshan Biswas [Mon, 31 Mar 2025 09:25:24 +0000 (14:55 +0530)]
SYCL: Remove misleading ggml_sycl_op_flatten function (#12387)

* SYCL: Remove misleading ggml_sycl_op_flatten function

* remove trailing whitespace

* Fix L2 norm from rebase

* remove try catch block from element_wise.cpp

* remove comment from common.hp

* ggml-sycl.cpp: Add try catch sycl::exception block in compute_forward

* norm.cpp: remove try catch exception block

2 months ago llava : fix clip loading GGUFs with missing description (#12660)
Sigbjørn Skjæret [Mon, 31 Mar 2025 09:07:07 +0000 (11:07 +0200)]
llava : fix clip loading GGUFs with missing description (#12660)

2 months ago tts : remove printfs (#12640)
marcoStocchi [Mon, 31 Mar 2025 08:20:30 +0000 (10:20 +0200)]
tts : remove printfs (#12640)

* tts.cpp : llama tokens console output is done using LOG_INF instead of printf(). Therefore the options '--log-disable' and '--log-file' now have a uniform impact on all output.

2 months ago llama : support BailingMoE (Ling) (#12634)
Sigbjørn Skjæret [Sun, 30 Mar 2025 20:21:03 +0000 (22:21 +0200)]
llama : support BailingMoE (Ling) (#12634)

2 months ago metal : use constexpr in FA kernels + fix typedef (#12659)
Georgi Gerganov [Sun, 30 Mar 2025 19:04:04 +0000 (22:04 +0300)]
metal : use constexpr in FA kernels + fix typedef (#12659)

* metal : use constexpr in FA kernels

ggml-ci

* cont

ggml-ci

* cont : fix typedef

ggml-ci

2 months ago llama : add Trillion 7B model support (#12556)
Juyoung Suk [Sun, 30 Mar 2025 18:38:33 +0000 (03:38 +0900)]
llama : add Trillion 7B model support (#12556)

* Support Trillion 7B

* Update llama.h

* Update llama.h

* Update llama-vocab.cpp for Trillion

* Update llama-vocab.cpp

2 months ago llama-chat : Add Yandex instruct model template support (#12621)
Sergei Vorobyov [Sun, 30 Mar 2025 18:12:03 +0000 (21:12 +0300)]
llama-chat : Add Yandex instruct model template support (#12621)

* add yandex template

* update yandex chat template

* fix tests

* adjust chat template

* fix style

* fix tool macro in template

* add clarify comment

---------

Co-authored-by: Sergei Vorobev <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
2 months ago musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc...
R0CKSTAR [Sun, 30 Mar 2025 08:59:38 +0000 (16:59 +0800)]
musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611)

* musa: fix all warnings

Signed-off-by: Xiaodong Ye <redacted>
* musa: enable -DLLAMA_FATAL_WARNINGS=ON in run.sh

Signed-off-by: Xiaodong Ye <redacted>
* musa: update ci doc (install ccache)

Signed-off-by: Xiaodong Ye <redacted>
* fix Windows build issue

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
2 months ago sync : ggml
Georgi Gerganov [Sat, 29 Mar 2025 13:37:54 +0000 (15:37 +0200)]
sync : ggml

ggml-ci

2 months ago cpu : rm unused variable (ggml/1166)
Xuan-Son Nguyen [Sat, 29 Mar 2025 10:59:56 +0000 (11:59 +0100)]
cpu : rm unused variable (ggml/1166)

2 months ago cpu: de-duplicate some of the operators and refactor (ggml/1144)
cmdr2 [Sat, 29 Mar 2025 06:07:13 +0000 (11:37 +0530)]
cpu: de-duplicate some of the operators and refactor (ggml/1144)

* cpu: de-duplicate some of the operators and refactor

* Fix PR comments

* Fix PR comments

2 months ago ggml : add logging for native build options/vars (whisper/2935)
Daniel Bevenius [Mon, 24 Mar 2025 08:53:38 +0000 (09:53 +0100)]
ggml : add logging for native build options/vars (whisper/2935)

This commit adds debug level logging for the native build options and
variables to ggml/CMakeLists.txt.

The motivation for this is that it can be useful to see the effective
result of `GGML_NATIVE`, `GGML_NATIVE_DEFAULT`, and `INS_ENB` for a
cmake build. I've found myself adding similar logging a few times now,
so I thought it might be a good idea to add this.

Example output when specifying `-DCMAKE_MESSAGE_LOG_LEVEL=DEBUG` while
running cmake:
```console
-- GGML_NATIVE         : OFF
-- GGML_NATIVE_DEFAULT : OFF
-- INS_ENB             : OFF
```

2 months ago examples : command.wasm updates (whisper/2904)
Daniel Bevenius [Thu, 20 Mar 2025 06:02:18 +0000 (07:02 +0100)]
examples : command.wasm updates (whisper/2904)

This commit updates the command.wasm example by adding a server.py script to make it easy to start a local HTTP server to try out the example. It also updates the build instructions and addresses some of the compiler warnings that were being generated.

* emscripten : fix TOTAL_STACK for wasm

This commit moves the TOTAL_STACK setting from the compile flags to the
linker flags. This is because the TOTAL_STACK setting is a linker
setting.

The motivation for this change is that currently the following warnings
are generated when building:
```console
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'TOTAL_STACK' [-Wunused-command-line-argument]
```

* examples : suppress C++17 deprecation warning for std::codecvt_utf8

This commit suppresses the C++17 deprecation warning for
std::codecvt_utf8 similar to what is done in
examples/talk-llama/unicode.cpp.

The motivation for this change is to suppress these warnings:
```console
/Users/danbev/work/ai/whisper-work/examples/common.cpp:251:31: warning: 'codecvt_utf8<wchar_t>' is deprecated [-Wdeprecated-declarations]
  251 |     std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      |                               ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/codecvt:193:28: note: 'codecvt_utf8<wchar_t>' has been explicitly marked deprecated here
  193 | class _LIBCPP_TEMPLATE_VIS _LIBCPP_DEPRECATED_IN_CXX17 codecvt_utf8 : public __codecvt_utf8<_Elem> {
      |                            ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:723:41: note: expanded from macro '_LIBCPP_DEPRECATED_IN_CXX17'
  723 | #    define _LIBCPP_DEPRECATED_IN_CXX17 _LIBCPP_DEPRECATED
      |                                         ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:688:49: note: expanded from macro '_LIBCPP_DEPRECATED'
  688 | #      define _LIBCPP_DEPRECATED __attribute__((__deprecated__))
      |                                                 ^
/Users/danbev/work/ai/whisper-work/examples/common.cpp:251:10: warning: 'wstring_convert<std::codecvt_utf8<wchar_t>>' is deprecated [-Wdeprecated-declarations]
  251 |     std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      |          ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/locale:3145:28: note: 'wstring_convert<std::codecvt_utf8<wchar_t>>' has been explicitly marked deprecated here
 3145 | class _LIBCPP_TEMPLATE_VIS _LIBCPP_DEPRECATED_IN_CXX17 wstring_convert {
      |                            ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:723:41: note: expanded from macro '_LIBCPP_DEPRECATED_IN_CXX17'
  723 | #    define _LIBCPP_DEPRECATED_IN_CXX17 _LIBCPP_DEPRECATED
      |                                         ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:688:49: note: expanded from macro '_LIBCPP_DEPRECATED'
  688 | #      define _LIBCPP_DEPRECATED __attribute__((__deprecated__))
      |                                                 ^
/Users/danbev/work/ai/whisper-work/examples/common.cpp:257:31: warning: 'codecvt_utf8<wchar_t>' is deprecated [-Wdeprecated-declarations]
  257 |     std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      |                               ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/codecvt:193:28: note: 'codecvt_utf8<wchar_t>' has been explicitly marked deprecated here
  193 | class _LIBCPP_TEMPLATE_VIS _LIBCPP_DEPRECATED_IN_CXX17 codecvt_utf8 : public __codecvt_utf8<_Elem> {
      |                            ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:723:41: note: expanded from macro '_LIBCPP_DEPRECATED_IN_CXX17'
  723 | #    define _LIBCPP_DEPRECATED_IN_CXX17 _LIBCPP_DEPRECATED
      |                                         ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:688:49: note: expanded from macro '_LIBCPP_DEPRECATED'
  688 | #      define _LIBCPP_DEPRECATED __attribute__((__deprecated__))
      |                                                 ^
/Users/danbev/work/ai/whisper-work/examples/common.cpp:257:10: warning: 'wstring_convert<std::codecvt_utf8<wchar_t>>' is deprecated [-Wdeprecated-declarations]
  257 |     std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      |          ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/locale:3145:28: note: 'wstring_convert<std::codecvt_utf8<wchar_t>>' has been explicitly marked deprecated here
 3145 | class _LIBCPP_TEMPLATE_VIS _LIBCPP_DEPRECATED_IN_CXX17 wstring_convert {
      |                            ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:723:41: note: expanded from macro '_LIBCPP_DEPRECATED_IN_CXX17'
  723 | #    define _LIBCPP_DEPRECATED_IN_CXX17 _LIBCPP_DEPRECATED
      |                                         ^
/Users/danbev/work/wasm/emsdk/upstream/emscripten/cache/sysroot/include/c++/v1/__config:688:49: note: expanded from macro '_LIBCPP_DEPRECATED'
  688 | #      define _LIBCPP_DEPRECATED __attribute__((__deprecated__))
      |                                                 ^
4 warnings generated.
```

* ggml : suppress double-promotion warning in GGML_F16x4_REDUCE

This commit adds a cast to `ggml_float` in the `GGML_F16x4_REDUCE` macro
to suppress a double-promotion warning.

Currently the following warning is generated when compiling the
command.wasm example:
```console
/whisper-work/src/ggml-cpu/ggml-cpu.c:1592:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
 1592 |     GGML_F16_VEC_REDUCE(sumf, sum);
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/danbev/work/ai/whisper-work/src/ggml-cpu/ggml-cpu.c:932:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
  932 | #define GGML_F16_VEC_REDUCE         GGML_F16x4_REDUCE
      |                                     ^
/Users/danbev/work/ai/whisper-work/src/ggml-cpu/ggml-cpu.c:920:44: note: expanded from macro 'GGML_F16x4_REDUCE'
  918 |     res = wasm_f32x4_extract_lane(x[0], 0) +       \
      |         ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  919 |           wasm_f32x4_extract_lane(x[0], 1) +       \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  920 |           wasm_f32x4_extract_lane(x[0], 2) +       \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~
  921 |           wasm_f32x4_extract_lane(x[0], 3);        \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/whisper-work/src/ggml-cpu/ggml-cpu.c:1640:9: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
 1640 |         GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/danbev/work/ai/whisper-work/src/ggml-cpu/ggml-cpu.c:932:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
  932 | #define GGML_F16_VEC_REDUCE         GGML_F16x4_REDUCE
      |                                     ^
/Users/danbev/work/ai/whisper-work/src/ggml-cpu/ggml-cpu.c:920:44: note: expanded from macro 'GGML_F16x4_REDUCE'
  918 |     res = wasm_f32x4_extract_lane(x[0], 0) +       \
      |         ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  919 |           wasm_f32x4_extract_lane(x[0], 1) +       \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  920 |           wasm_f32x4_extract_lane(x[0], 2) +       \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~
  921 |           wasm_f32x4_extract_lane(x[0], 3);        \
      |           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 warnings generated.
```
wasm_f32x4_extract_lane returns a 32-bit float and this is what the
addition is performed on. But there is an implicit conversion from
32-bit float to 64-bit double when the result is assigned to `res`,
which is of type `ggml_float`. My understanding here is that this is
intentional and adding a cast to `ggml_float` should suppress the
warning.
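The cast described above can be reduced to a minimal sketch; the typedef and function name below are stand-ins for the real `ggml_float`/`GGML_F16x4_REDUCE`, which live in the ggml sources:

```cpp
typedef double ggml_float_t; // stand-in for ggml_float (a double)

// The four SIMD lane values are summed in 32-bit float; the result is then
// assigned to the wider accumulator type. The explicit cast makes the
// float -> double conversion intentional, silencing -Wdouble-promotion.
ggml_float_t reduce_f32x4(float l0, float l1, float l2, float l3) {
    return (ggml_float_t)(l0 + l1 + l2 + l3);
}
```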

* emscripten : add -Wno-deprecated for emscripten builds

This commit adds -Wno-deprecated to the CMAKE_CXX_FLAGS for emscripten
builds.

The motivation for this is that currently a number of warnings are
generated, like the following:
```console
warning: JS library symbol '$print' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
warning: JS library symbol '$printErr' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
em++: warning: warnings in JS library compilation [-Wjs-compiler]
em++: warning: linker setting ignored during compilation: 'ENVIRONMENT' [-Wunused-command-line-argument]
warning: JS library symbol '$print' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
warning: JS library symbol '$printErr' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
em++: warning: warnings in JS library compilation [-Wjs-compiler]
warning: JS library symbol '$print' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
warning: JS library symbol '$printErr' is deprecated. Please open a bug if you have a continuing need for this symbol [-Wdeprecated]
em++: warning: warnings in JS library compilation [-Wjs-compiler]
em++: warning: linker setting ignored during compilation: 'ENVIRONMENT' [-Wunused-command-line-argument]
em++: warning: linker setting ignored during compilation: 'ENVIRONMENT' [-Wunused-command-line-argument]
```

The downside of this is that we might miss other deprecation warnings
in the future, so I'm not sure if this is acceptable. But it makes the
wasm example builds cleaner, without the warnings.

* examples : fix tautological-compare warning in stb_vorbis.c [no ci]

This commit applies a fix to address a tautological-compare warning
in stb_vorbis.c.

The motivation for this is that currently the following warning is
generated when compiling the command.wasm example:
```console
/Users/danbev/work/ai/whisper-work/examples/stb_vorbis.c:1404:75: warning: pointer comparison always evaluates to false [-Wtautological-compare]
 1404 |       if (f->stream_start + loc >= f->stream_end || f->stream_start + loc < f->stream_start) {
      |                                                                           ^
1 warning generated.
```

This fix was taken from an open pull request on the stb repository
that addresses this issue:
https://github.com/nothings/stb/pull/1746

* squash! examples : update command.wasm instructions [no ci]

This commit adds a Python script to serve the wasm examples built
in the `build-em` directory. Initially I thought that it would be enough
to start a simple python server but I did not notice that there was an
error in the browser console when I did that:
```console
command.js:1 Uncaught (in promise) DataCloneError: Failed to execute 'postMessage' on 'Worker': SharedArrayBuffer transfer requires self.crossOriginIsolated.
    at command.js:1:1206224
    at new Promise (<anonymous>)
    at loadWasmModuleToWorker (command.js:1:1204981)
    at Array.map (<anonymous>)
    at Object.loadWasmModuleToAllWorkers (command.js:1:1206428)
    at command.js:1:1204318
    at callRuntimeCallbacks (command.js:1:1202062)
    at preRun (command.js:1:6136)
    at run (command.js:1:1294094)
    at removeRunDependency (command.js:1:7046)
```
We need a few CORS headers to be set (SharedArrayBuffer requires
`self.crossOriginIsolated`, which in turn requires the
`Cross-Origin-Opener-Policy: same-origin` and
`Cross-Origin-Embedder-Policy: require-corp` response headers), and to
hopefully make this easy for users a Python script is added to the
examples directory. It should be able to serve all the wasm examples,
provided they have been built. command.wasm's README.md is updated to
reflect this change.

* examples : remove unused functions

This commit removes the unused functions convert_to_utf8 and
convert_to_wstring from examples/common.cpp.

* Revert "examples : fix tautological-compare warning in stb_vorbis.c [no ci]"

This reverts commit 8e3c47d96141c7675c985562ebdc705e839e338a.

We should not make this change here and instead when the upstream PR is
merged we can sync with it.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2784

2 months ago llama : fix non-causal mask for gemma 3 (#12615)
Xuan-Son Nguyen [Sat, 29 Mar 2025 23:07:37 +0000 (00:07 +0100)]
llama : fix non-causal mask for gemma 3 (#12615)

3 months ago llama : change cpu_buft_list order: ACCEL -> GPU host -> CPU extra -> CPU (#12632)
Djip007 [Sat, 29 Mar 2025 13:07:37 +0000 (14:07 +0100)]
llama : change cpu_buft_list order: ACCEL -> GPU host -> CPU extra -> CPU (#12632)

This allows using the GPU host buffer, when possible, over the CPU
repack buffer. This resolves issue #12498 without completely disabling
the CPU extra buffers.

Co-authored-by: philou <redacted>
3 months ago cmake : fix ccache conflict (#12522)
Jay [Sat, 29 Mar 2025 10:04:58 +0000 (18:04 +0800)]
cmake : fix ccache conflict (#12522)

If users have already set CMAKE_C_COMPILER_LAUNCHER globally, setting it in
cmake again will lead to a conflict and a compile failure.

Signed-off-by: Jay <redacted>
3 months ago CANN : remove clang-format in ggml-cann (#12607)
hipudding [Sat, 29 Mar 2025 10:03:28 +0000 (18:03 +0800)]
CANN : remove clang-format in ggml-cann (#12607)

3 months ago llama : fix incorrect Qwen2Moe ffn_moe_out graph callback (#12631)
Sigbjørn Skjæret [Fri, 28 Mar 2025 21:13:02 +0000 (22:13 +0100)]
llama : fix incorrect Qwen2Moe ffn_moe_out graph callback (#12631)

3 months ago metal : improve FA + improve MoE (#12612)
Georgi Gerganov [Fri, 28 Mar 2025 18:21:59 +0000 (20:21 +0200)]
metal : improve FA + improve MoE (#12612)

* ggml : FA with different K, V head sizes (CPU)

ggml-ci

* metal : add FA with HS=192

* metal : extend FA to support different K and V head sizes

ggml-ci

* metal : add FA vector kernels for heads K 192 and V 128

ggml-ci

* ggml : restrict op on other backends to equal head sizes

ggml-ci

* metal : optimize FA-vec kernel

ggml-ci

* metal : FA remove mq registers

* metal : improve MoE mul_mat_id condition

ggml-ci

* metal : fix comments + remove unnecessary addition

ggml-ci

* metal : avoid too much shared memory usage with mul_mat_id

ggml-ci

3 months ago vulkan: fix coopmat shader generation when cross-compiling (#12272)
Icenowy Zheng [Fri, 28 Mar 2025 17:51:06 +0000 (01:51 +0800)]
vulkan: fix coopmat shader generation when cross-compiling (#12272)

* vulkan: fix coopmat shader generation when cross-compiling

Previously the status of coopmat{,2} support wasn't passed to the
vulkan-shaders-gen project built on the host, which led to build
failures because the cross-compiling code expected coopmat{,2}
shaders that never got generated.

Fix this by passing the coopmat{,2} support status to the
vulkan-shaders subproject.

Signed-off-by: Icenowy Zheng <redacted>
* Only call coop-mat shaders once

* Fix whitespace

---------

Signed-off-by: Icenowy Zheng <redacted>
Co-authored-by: bandoti <redacted>
3 months ago llama: fix error on bad grammar (#12628)
Johannes Gäßler [Fri, 28 Mar 2025 17:08:52 +0000 (18:08 +0100)]
llama: fix error on bad grammar (#12628)

3 months ago server : include speculative decoding stats when timings_per_token is enabled (#12603)
Benson Wong [Fri, 28 Mar 2025 08:05:44 +0000 (01:05 -0700)]
server : include speculative decoding stats when timings_per_token is enabled (#12603)

* Include speculative decoding stats when timings_per_token is true

New fields added to the `timings` object:

  - draft_n           : number of draft tokens generated
  - draft_accepted_n  : number of draft tokens accepted
  - draft_accept_ratio: ratio of accepted/generated
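The ratio field is fully determined by the other two, which is presumably why a separate variable for it was later deemed redundant (see the bullet below). As a sketch, using the field names from the list above:

```cpp
// draft_n: draft tokens generated by the draft model;
// draft_accepted_n: those accepted by the target model.
// The acceptance ratio is derived from the two counters; the guard avoids
// division by zero when no draft tokens were produced.
double draft_accept_ratio(int draft_n, int draft_accepted_n) {
    return draft_n > 0 ? (double) draft_accepted_n / draft_n : 0.0;
}
```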

* Remove redundant draft_accept_ratio var

* add draft acceptance rate to server console output

3 months ago rpc : update README for cache usage (#12620)
Radoslav Gerganov [Fri, 28 Mar 2025 07:44:13 +0000 (09:44 +0200)]
rpc : update README for cache usage (#12620)

3 months ago llamafile : ppc64le GEMV forwarding for FP32. (#12594)
amritahs-ibm [Fri, 28 Mar 2025 07:43:22 +0000 (13:13 +0530)]
llamafile : ppc64le GEMV forwarding for FP32. (#12594)

This patch enables usage of MMA when one of the
dimensions of the matrix (i.e. either M or N) is 1. This
is useful in the case of token generation, where N < 2.

The concept of 'GEMV forwarding' is used: when one
of the matrices has a single row/column, its elements are
broadcast instead of using a packing routine to prepack
the matrix elements.

This change results in a 5% - 15% improvement in total
speed (i.e. all tokens / total time) across various batch
sizes, in comparison with the corresponding dot product
implementation.

The patch is tested with FP32 models of Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf on an IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
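The degenerate case the patch targets can be sketched as follows. This is only an illustration of the idea, not the llamafile MMA kernels: when N == 1, the matmul C = A * B collapses to a GEMV y = A * x, so the single column x is reused directly by every row dot product instead of going through a prepacking routine.

```cpp
#include <vector>

// A: M x K, row-major; x: K elements. Names are illustrative.
std::vector<float> gemv_forward(const std::vector<float> & A,
                                const std::vector<float> & x,
                                int M, int K) {
    std::vector<float> y(M, 0.0f);
    for (int m = 0; m < M; m++) {
        float sum = 0.0f;
        for (int k = 0; k < K; k++) {
            sum += A[m * K + k] * x[k]; // x is reused ("broadcast") for every row
        }
        y[m] = sum;
    }
    return y;
}
```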
3 months ago rpc : send hash when tensor data is above some fixed threshold (#12496)
Radoslav Gerganov [Fri, 28 Mar 2025 06:18:04 +0000 (08:18 +0200)]
rpc : send hash when tensor data is above some fixed threshold (#12496)

* rpc : send hash when tensor data is above some fixed threshold

ref #10095

* rpc : put cache under $HOME/.cache/llama.cpp

* try to fix win32 build

* another try to fix win32 build

* remove llama as dependency
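The protocol idea can be sketched as below. The threshold value and the FNV-1a hash are illustrative stand-ins, not what the rpc backend actually uses: above a fixed size, the client sends only a digest of the tensor data, and the server looks it up in its on-disk cache (placed under $HOME/.cache/llama.cpp) and requests the full payload only on a miss.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative cutoff; the real threshold in the PR may differ.
static const size_t HASH_THRESHOLD = 10 * 1024 * 1024;

bool should_send_hash(size_t nbytes) {
    return nbytes > HASH_THRESHOLD;
}

// FNV-1a 64-bit, used here only as a simple stand-in digest.
uint64_t fnv1a64(const uint8_t * data, size_t n) {
    uint64_t h = 1469598103934665603ULL;      // FNV offset basis
    for (size_t i = 0; i < n; i++) {
        h = (h ^ data[i]) * 1099511628211ULL; // FNV prime
    }
    return h;
}
```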

3 months ago server : Support listening on a unix socket (#12613)
Piotr [Thu, 27 Mar 2025 22:41:04 +0000 (23:41 +0100)]
server : Support listening on a unix socket (#12613)

* server : Bump cpp-httplib to include AF_UNIX windows support

Signed-off-by: Piotr Stankiewicz <redacted>
* server : Allow running the server example on a unix socket

Signed-off-by: Piotr Stankiewicz <redacted>
---------

Signed-off-by: Piotr Stankiewicz <redacted>
3 months ago media : add SVG logo [no ci] (#12616)
Georgi Gerganov [Thu, 27 Mar 2025 21:09:05 +0000 (23:09 +0200)]
media : add SVG logo [no ci] (#12616)

3 months ago opencl: add multi and vision rope, `gelu_quick` and `im2col` (#12600)
lhez [Thu, 27 Mar 2025 15:08:08 +0000 (08:08 -0700)]
opencl: add multi and vision rope, `gelu_quick` and `im2col` (#12600)

* opencl: add `im2col`

* opencl: add `gelu_quick`

* opencl: add mrope

* opencl: add vision rope

3 months ago llama : add PLM GGUF Conversion & Inference Support (#12457)
Si1w [Thu, 27 Mar 2025 10:49:15 +0000 (10:49 +0000)]
llama : add PLM GGUF Conversion & Inference Support (#12457)

* add edgellm model arch [conversation feature doesn't work]

* remove output.weight layer for edgellm arch

* [Model] update the name of the model

* update the name of model arch in convert gguf

* [Model] Refactor the model arch into llama-model

* [Bug] Fix the bug in create attn kv

* [Code] Fix editorconfig errors

* [Code] Remove Trailing whitespace

* [Code] Remove Trailing whitespace

* [Code] Change the order of model arch in list

* [Code] Fix flake8 Lint errors

* Remove trailing white space

* [Code] Remove  call in model arch

3 months ago model : restore support for T5Encoder (#12590)
HighDoping [Thu, 27 Mar 2025 10:43:33 +0000 (18:43 +0800)]
model : restore support for T5Encoder (#12590)

3 months ago convert : Support Qwen2_5_VLForConditionalGeneration (#12595)
Csaba Kecskemeti [Thu, 27 Mar 2025 10:11:23 +0000 (03:11 -0700)]
convert : Support Qwen2_5_VLForConditionalGeneration (#12595)

3 months ago sync : ggml
Georgi Gerganov [Thu, 27 Mar 2025 07:36:13 +0000 (09:36 +0200)]
sync : ggml

ggml-ci

3 months ago scripts : update sync + fix cmake merge
Georgi Gerganov [Thu, 27 Mar 2025 07:22:30 +0000 (09:22 +0200)]
scripts : update sync + fix cmake merge

ggml-ci

3 months ago sync : ggml
Georgi Gerganov [Thu, 27 Mar 2025 07:01:21 +0000 (09:01 +0200)]
sync : ggml

ggml-ci

3 months ago cmake : sync/merge PowerPC build commands (#0)
Georgi Gerganov [Thu, 27 Mar 2025 07:00:57 +0000 (09:00 +0200)]
cmake : sync/merge PowerPC build commands (#0)

3 months ago llamafile : ppc64le MMA implementation for Q4_0. (#12489)
amritahs-ibm [Thu, 27 Mar 2025 06:51:47 +0000 (12:21 +0530)]
llamafile : ppc64le MMA implementation for Q4_0. (#12489)

This change upstreams llamafile's CPU matrix
multiplication kernels for the ppc64le ISA using MMA
builtins. The patch handles matrix multiplication
between the quantised datatypes block_q4_0 and
block_q8_0.
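For reference, a scalar sketch of the block pair these kernels operate on (scales are kept as plain `float` here for clarity; ggml stores them as fp16, and the low nibbles hold elements 0..15 with the high nibbles holding elements 16..31):

```cpp
#include <cstdint>

// One block of 32 weights quantized to 4 bits with a shared scale (offset 8),
// and one block of 32 activations quantized to signed 8 bits with a scale.
struct block_q4_0 { float d; uint8_t qs[16]; };  // 32 x 4-bit quants
struct block_q8_0 { float d; int8_t  qs[32]; };  // 32 x 8-bit quants

// Scalar reference dot product: accumulate in int32, scale once at the end.
static float dot_q4_0_q8_0(const block_q4_0& a, const block_q8_0& b) {
    int32_t sum = 0;
    for (int i = 0; i < 16; ++i) {
        const int lo = (a.qs[i] & 0x0F) - 8;  // element i
        const int hi = (a.qs[i] >> 4)   - 8;  // element i + 16
        sum += lo * b.qs[i] + hi * b.qs[i + 16];
    }
    return sum * a.d * b.d;
}
```

The MMA kernels vectorize this inner loop across several blocks at once; the scalar form above is only the semantic reference.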

This change results in a 5% - 50% improvement
in total speed (i.e. all tokens/total time) across
various batch sizes.

The patch is tested with the Meta-Llama-3-8B,
Mistral-7B and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
3 months ago ggml : riscv: add 128-bit RVV support (#12530)
xctan [Thu, 27 Mar 2025 06:38:34 +0000 (14:38 +0800)]
ggml : riscv: add 128-bit RVV support (#12530)

* ggml : add 128-bit RVV support

* ggml : revert to old RVV 256+ q2_K, q3_K, q4_K, q6_K impl

* remove trailing whitespaces

* restructure vector length selection code

3 months ago llama : make loras compatible with repacking (#12593)
Georgi Gerganov [Thu, 27 Mar 2025 06:24:10 +0000 (08:24 +0200)]
llama : make loras compatible with repacking (#12593)

* llama : make loras compatible with repacking

ggml-ci

* cont : simplify

ggml-ci

* cont : add TODO [no ci]

3 months ago SYCL: implement memset ggml backend buffer interface (#12580)
Akarshan Biswas [Thu, 27 Mar 2025 01:46:00 +0000 (07:16 +0530)]
SYCL: implement memset ggml backend buffer interface (#12580)

* SYCL: implement memset ggml backend buffer interface

* use GGML_ABORT macro

* Do not wait for all queues to finish for memset operation

3 months ago HIP: Add support for RDNA4 targets (#12372)
Slobodan Josic [Wed, 26 Mar 2025 22:46:30 +0000 (23:46 +0100)]
HIP: Add support for RDNA4 targets (#12372)

3 months ago metal : refactor mat-vec code (#12569)
Georgi Gerganov [Wed, 26 Mar 2025 19:38:38 +0000 (21:38 +0200)]
metal : refactor mat-vec code (#12569)

* metal : refactor mat-vec code

ggml-ci

* metal : rename all_sum -> sum_all

ggml-ci

* metal : fix comments [no ci]

* metal : fix nr constant [no ci]

* metal : mv q6_K support nr0 > 1

ggml-ci

* metal : reduce register pressure

ggml-ci

* metal : fix typo [no ci]

* metal : reduce register pressure

ggml-ci

3 months ago upgrade to llguidance 0.7.10 (#12576)
Michał Moskal [Wed, 26 Mar 2025 18:06:09 +0000 (11:06 -0700)]
upgrade to llguidance 0.7.10 (#12576)

3 months ago clip: Fix llama-llava-clip-quantize-cli quantization error under CUDA backend (#12566)
Ivy233 [Wed, 26 Mar 2025 14:06:04 +0000 (22:06 +0800)]
clip: Fix llama-llava-clip-quantize-cli quantization error under CUDA backend (#12566)

* [Fix] Compiling clip-quantize-cli and running it in a CUDA environment caused ggml_fp16_to_fp32 to report an error when trying to access video memory; quantization needs to run on the CPU backend.
After the fix, it automatically runs on the CPU backend and is no longer bound to CUDA.

* [Fix] Roll back the signature and implementation of clip_model_load, and change the call in clip_model_quantize to clip_init.

3 months ago convert : fix squeeze for ssm_conv tensors (#12573)
Georgi Gerganov [Wed, 26 Mar 2025 12:21:05 +0000 (14:21 +0200)]
convert : fix squeeze for ssm_conv tensors (#12573)

* convert : fix squeeze for ssm_conv tensors

* convert : match ssm_conv tensors by type

---------

Co-authored-by: Francis Couture-Harpin <redacted>
3 months ago ggml : fix MUL_MAT_ID repack with Q8_K (#12544)
Georgi Gerganov [Wed, 26 Mar 2025 11:02:00 +0000 (13:02 +0200)]
ggml : fix MUL_MAT_ID repack with Q8_K (#12544)

* ggml : fix MUL_MAT_ID repack with Q8_K

ggml-ci

* ggml : improve repack templates

ggml-ci

3 months ago doc: [MUSA] minor changes (#12583)
R0CKSTAR [Wed, 26 Mar 2025 07:09:48 +0000 (15:09 +0800)]
doc: [MUSA] minor changes (#12583)

Signed-off-by: Xiaodong Ye <redacted>
3 months ago convert: fix Mistral3/Gemma3 model hparams init (#12571)
Sigbjørn Skjæret [Tue, 25 Mar 2025 22:03:10 +0000 (23:03 +0100)]
convert: fix Mistral3/Gemma3 model hparams init (#12571)

* Fix Mistral3/Gemma3 model hparams init

* set positional args correctly

* use existing hparams if passed

3 months ago run: de-duplicate fmt and format functions and optimize (#11596)
Eric Curtin [Tue, 25 Mar 2025 17:46:11 +0000 (17:46 +0000)]
run: de-duplicate fmt and format functions and optimize (#11596)

3 months ago ggml-cpu : update KleidiAI to v1.5.0 (#12568)
Dan Johansson [Tue, 25 Mar 2025 11:10:18 +0000 (12:10 +0100)]
ggml-cpu : update KleidiAI to v1.5.0 (#12568)

ggml-cpu : bug fix related to KleidiAI LHS packing

Signed-off-by: Dan Johansson <redacted>
3 months ago SYCL: disable Q4_0 reorder optimization (#12560)
Akarshan Biswas [Tue, 25 Mar 2025 10:40:18 +0000 (16:10 +0530)]
SYCL: disable Q4_0 reorder optimization (#12560)

ggml-ci

3 months ago docs : add build instructions for KleidiAI (#12563)
Dan Johansson [Tue, 25 Mar 2025 09:35:20 +0000 (10:35 +0100)]
docs : add build instructions for KleidiAI (#12563)

Signed-off-by: Dan Johansson <redacted>
3 months ago ci: [MUSA] add CI and update doc (#12562)
R0CKSTAR [Tue, 25 Mar 2025 07:45:08 +0000 (15:45 +0800)]
ci: [MUSA] add CI and update doc (#12562)

Signed-off-by: Xiaodong Ye <redacted>
3 months ago context : fix worst-case reserve outputs (#12545)
Georgi Gerganov [Tue, 25 Mar 2025 07:19:23 +0000 (09:19 +0200)]
context : fix worst-case reserve outputs (#12545)

ggml-ci

3 months ago ci: [SYCL] ggml-ci Use main GPU and enable sysman (#12547)
Akarshan Biswas [Mon, 24 Mar 2025 17:35:38 +0000 (23:05 +0530)]
ci: [SYCL] ggml-ci Use main GPU and enable sysman (#12547)

3 months ago opencl: simplify kernel embedding logic in cmakefile (#12503)
lhez [Mon, 24 Mar 2025 16:20:47 +0000 (09:20 -0700)]
opencl: simplify kernel embedding logic in cmakefile (#12503)

Co-authored-by: Max Krasnyansky <redacted>
3 months ago CI: fix SYCL build (#12546)
Akarshan Biswas [Mon, 24 Mar 2025 12:58:32 +0000 (18:28 +0530)]
CI: fix SYCL build (#12546)

3 months ago docs: update: improve the Fedora CUDA guide (#12536)
Tei Home [Mon, 24 Mar 2025 11:02:26 +0000 (19:02 +0800)]
docs: update: improve the Fedora CUDA guide (#12536)

* docs: update fedora-cuda guide

- Rename and place into Backend Folder.
- Update Host-Supplied Packages.
- Expand Recommended Users Section.

* docs: improve the flow of CUDA-FEDORA.md

3 months ago llama-vocab : add SuperBPE pre-tokenizer (#12532)
compilade [Mon, 24 Mar 2025 10:47:24 +0000 (06:47 -0400)]
llama-vocab : add SuperBPE pre-tokenizer (#12532)

3 months ago CUDA: Fix clang warnings (#12540)
R0CKSTAR [Mon, 24 Mar 2025 10:28:34 +0000 (18:28 +0800)]
CUDA: Fix clang warnings (#12540)

Signed-off-by: Xiaodong Ye <redacted>
3 months ago mmap : skip resource limit checks on AIX (#12541)
Prajwal B Mehendarkar [Mon, 24 Mar 2025 10:17:10 +0000 (15:47 +0530)]
mmap : skip resource limit checks on AIX (#12541)

3 months ago vulkan: fix mul_mat_vec failure in backend tests (#12529)
Jeff Bolz [Mon, 24 Mar 2025 06:56:17 +0000 (01:56 -0500)]
vulkan: fix mul_mat_vec failure in backend tests (#12529)

The OOB calculation could be wrong if the last iteration fell inside one of
the unrolled loops. Adjust the unroll counts to avoid this, and add a couple
of new backend tests that hit this failure on NVIDIA GPUs.
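The bug class here can be illustrated in plain C++ (this is a generic sketch, not the Vulkan shader): when a loop is unrolled by a factor U, the main loop must stop at the last full group of U elements, otherwise the final unrolled iteration reads out of bounds; a tail loop then covers the remainder.

```cpp
#include <vector>

// Sum with a manually unrolled main loop. The key line is n_main: it rounds
// the trip count down to a multiple of the unroll factor so the unrolled
// body never reads past the end of the array.
static float sum_unrolled(const std::vector<float>& v) {
    constexpr int U = 4;                  // unroll factor
    const int n = (int)v.size();
    const int n_main = n - n % U;         // last index covered by full groups
    float s = 0.0f;
    int i = 0;
    for (; i < n_main; i += U)            // unrolled body: U loads per check
        s += v[i] + v[i + 1] + v[i + 2] + v[i + 3];
    for (; i < n; ++i)                    // tail: the n % U leftover elements
        s += v[i];
    return s;
}
```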

3 months ago server : Add verbose output to OAI compatible chat endpoint. (#12246)
Marius Gerdes [Sun, 23 Mar 2025 18:30:26 +0000 (19:30 +0100)]
server : Add verbose output to OAI compatible chat endpoint. (#12246)

Add verbose output to server_task_result_cmpl_final::to_json_oaicompat_chat_stream, making it conform with server_task_result_cmpl_final::to_json_oaicompat_chat, as well as the other to_json methods.

3 months ago install : add macports (#12518)
Lars Sonchocky-Helldorf [Sun, 23 Mar 2025 08:21:48 +0000 (09:21 +0100)]
install : add macports (#12518)

MacPorts section added

3 months ago llama : gemma3 : use output tensor if it exists in model weight (#12506)
Xuan-Son Nguyen [Sat, 22 Mar 2025 22:28:19 +0000 (23:28 +0100)]
llama : gemma3 : use output tensor if it exists in model weight (#12506)

* llama : gemma3 : use output tensor if it exists in model weight

* also add to the llm_tensor_names

3 months ago ggml : fix quantized cpy op (#12310)
Georgi Gerganov [Sat, 22 Mar 2025 14:23:26 +0000 (16:23 +0200)]
ggml : fix quantized cpy op (#12310)

* ggml : fix quantized cpy op

ggml-ci

* tests : add cpy tests for all types

ggml-ci

* tests : add BF16 copy tests

ggml-ci

* tests : fix loop for same-type copy

ggml-ci

* tests : add option to permute the dst tensor

ggml-ci

3 months ago musa: refine compute capability (#12493)
R0CKSTAR [Sat, 22 Mar 2025 09:11:37 +0000 (17:11 +0800)]
musa: refine compute capability (#12493)

* musa: refine compute capability

Signed-off-by: Xiaodong Ye <redacted>
* Address review comments

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
3 months ago vulkan: Optimize mul_mat_vec p021 and nc shaders (#12505)
Jeff Bolz [Sat, 22 Mar 2025 08:40:11 +0000 (03:40 -0500)]
vulkan: Optimize mul_mat_vec p021 and nc shaders (#12505)

* tests: add mul_mat perf/functional tests for p021/nc vulkan shaders

* vulkan: Optimize mul_mat_vec p021 and nc shaders.

These shaders are used in attention calculations, and when the KV cache grows
large they start to dominate the run time. For the nc shader (which is called
with large 'k' dimension), use unrolling and vector loads. For the p021 shader
(which is called with large 'm' and small 'k' dimensions), take advantage of
grouped query attention to reuse loads from the A matrix for the whole group,
and reduce the number of workgroups (too much overhead from tiny dispatches).

Using subgroupAdd in the p021 shader also helps, so use it conditionally.
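The grouped-query-attention structure exploited here can be sketched in a few lines (illustrative host-side C++, not shader code): several query heads map onto one KV head, so a row loaded for that KV head can be reused across the whole group instead of being reloaded per query head.

```cpp
// In GQA, n_q_heads is a multiple of n_kv_heads; consecutive query heads
// form groups that all attend against the same KV head's cache rows.
static int kv_head_for(int q_head, int n_q_heads, int n_kv_heads) {
    const int group = n_q_heads / n_kv_heads;  // query heads per KV head
    return q_head / group;                     // shared KV head index
}
```

Because every head in a group returns the same index, one load of the A-matrix (KV) data serves `group` query heads, which is the reuse the p021 shader takes advantage of.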

3 months ago Vulkan: RTE rounding for cpy to quant (#12480)
stduhpf [Fri, 21 Mar 2025 19:34:50 +0000 (20:34 +0100)]
Vulkan: RTE rounding for cpy to quant (#12480)

* Vulkan: RTE rounding for cpy to quant

Co-Authored-By: Jeff Bolz <redacted>
* remove trailing whitespace

* avoid duplicating pipeline_cpy_f32_quant

* fix copypasting issue

* remove duplicated code

---------

Co-authored-by: Jeff Bolz <redacted>
3 months ago vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (#12472)
Eve [Fri, 21 Mar 2025 19:27:47 +0000 (19:27 +0000)]
vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (#12472)

3 months ago model : do not repack if a GPU device is present (#12498)
Georgi Gerganov [Fri, 21 Mar 2025 14:14:29 +0000 (16:14 +0200)]
model : do not repack if a GPU device is present (#12498)

ggml-ci

3 months ago chore : cleanup llama_model_loader::TENSOR_ usage (#12492)
Sigbjørn Skjæret [Fri, 21 Mar 2025 09:21:36 +0000 (10:21 +0100)]
chore : cleanup llama_model_loader::TENSOR_ usage (#12492)

3 months ago llama-tts : avoid crashes related to bad model file paths (#12482)
marcoStocchi [Fri, 21 Mar 2025 09:12:45 +0000 (10:12 +0100)]
llama-tts : avoid crashes related to bad model file paths (#12482)

3 months ago [SYCL] Fix build on Windows when ccache enabled (#9954) (#9976)
蕭澧邦 [Fri, 21 Mar 2025 06:58:47 +0000 (14:58 +0800)]
[SYCL] Fix build on Windows when ccache enabled (#9954) (#9976)

* [SYCL] Fix build on Windows when ccache enabled (#9954)

* take effect only on windows and force it to icl

---------

Co-authored-by: Romain Biessy <redacted>
3 months ago sycl: cleanup oneDNN related code (#12097)
Svetlozar Georgiev [Fri, 21 Mar 2025 02:15:56 +0000 (02:15 +0000)]
sycl: cleanup oneDNN related code (#12097)

3 months ago webui : Prevent rerendering on textarea input (#12299)
Woof Dog [Thu, 20 Mar 2025 14:57:43 +0000 (14:57 +0000)]
webui : Prevent rerendering on textarea input (#12299)

* webui: Make textarea uncontrolled to eliminate devastating lag

* Update index.html.gz

* use signal-style implementation

* rm console log

* no duplicated savedInitValue set

---------

Co-authored-by: Xuan Son Nguyen <redacted>
3 months ago llama : make Qwen2MoE QKV bias optional (#12477)
Sigbjørn Skjæret [Thu, 20 Mar 2025 11:49:59 +0000 (12:49 +0100)]
llama : make Qwen2MoE QKV bias optional (#12477)

3 months ago ggml : block interleaving support for Q4_K quantization for x86 AVX2 architecture...
Srihari-mcw [Thu, 20 Mar 2025 11:35:34 +0000 (17:05 +0530)]
ggml : block interleaving support for Q4_K quantization for x86 AVX2 architecture (#12332)

* Add block interleaving support for Q4_K quantization

* Remove whitespaces and fix CI/CD issues

* Update pointer of bsums from int16_t to const int16_t

* Add vector version of quantize_q8_K_4x8 function

* Update code formatting based on review comments