git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Bernhard M. Wiedemann [Fri, 24 Jan 2025 11:21:35 +0000 (12:21 +0100)]
cmake : avoid -march=native when reproducible build is wanted (llama/11366)
See https://reproducible-builds.org/ for why this is good
and https://reproducible-builds.org/specs/source-date-epoch/
for the definition of this variable.
Without this patch, compiling on different machines produced different binaries, which made verification of results difficult.
Fixes: #11317
This patch was done while working on reproducible builds for openSUSE.
amd-dwang [Thu, 23 Jan 2025 07:14:28 +0000 (15:14 +0800)]
Vulkan-run-test: fix mmq_wg_denoms (llama/11343)
This appears to be a copy-and-paste error: *mmq_wg_denoms should be used
together with *warptile_mmq, instead of wg_denoms.
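A hedged C++ sketch of the pairing invariant the fix restores; the names and values here are illustrative, not the actual Vulkan backend tables:
```
#include <array>
#include <cstdint>

// Illustrative only: keep each wg_denoms table next to the warptile it
// belongs with, so the MMQ pipeline cannot pick up the non-MMQ denominators.
struct warptile_config {
    std::array<uint32_t, 3> wg_denoms;
    std::array<uint32_t, 9> warptile;
};

static const warptile_config cfg_plain = { {128, 128, 1}, {} };  // wg_denoms + warptile
static const warptile_config cfg_mmq   = { { 32,  32, 1}, {} };  // mmq_wg_denoms + warptile_mmq
```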
Jeff Bolz [Thu, 23 Jan 2025 07:07:50 +0000 (01:07 -0600)]
vulkan: sort shaders for more deterministic binary (llama/11315)
Fixes #11306.
Jeff Bolz [Thu, 23 Jan 2025 07:01:17 +0000 (01:01 -0600)]
vulkan: fix diag_mask_inf (llama/11323)
With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs.
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.
Radoslav Gerganov [Tue, 21 Jan 2025 13:06:41 +0000 (15:06 +0200)]
rpc : better caching of the base buffer pointer (llama/11331)
There is no need to use a map; just store the base pointer in the buffer
context.
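A minimal sketch of the idea, with hypothetical names (the actual RPC buffer context in ggml differs):
```
#include <cstdint>

// Cache the remote base pointer in the buffer context when the buffer is
// created, instead of keeping a map on the side.
struct rpc_buffer_context {
    uint64_t remote_ptr;  // buffer handle on the RPC server
    void *   base_ptr;    // base pointer, fetched once and cached here
};

static void * rpc_buffer_get_base(const rpc_buffer_context & ctx) {
    return ctx.base_ptr;  // no map lookup on the hot path
}
```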
Georgi Gerganov [Tue, 21 Jan 2025 06:48:13 +0000 (08:48 +0200)]
metal : fix out-of-bounds write (llama/11314)
ggml-ci
Jeff Bolz [Mon, 20 Jan 2025 16:38:32 +0000 (10:38 -0600)]
vulkan: fix coopmat2 validation failures (llama/11284)
The mul mat and flash attention shaders were loading f32 types directly into
the A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert; this is not in
the inner loop and is cheap enough. For mul mat, it's more efficient to do
this conversion in a separate pass and have the input(s) be f16.
coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.
Nicolò Scipione [Sun, 19 Jan 2025 13:33:34 +0000 (14:33 +0100)]
SYCL: Introducing memory host pool (llama/11251)
* Implement host pool for matrix_info
Create a new memory pool on the host to store the matrix_info memory
locations needed to launch gemm_batch from oneMKL/oneMath; a rough sketch
follows this entry.
Remove complex support in gemm_batch since it is not used in llama.cpp.
* Remove unnecessary headers and cast
* Reorder member variable to avoid warning on initialization
* Formatting
* Remove unused variable
* Address PR review feedback - remove warning
---------
Signed-off-by: nscipione <redacted>
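A rough sketch of the host-pool idea, assuming hypothetical names (the real pool lives in the SYCL backend and holds the matrix_info metadata that gemm_batch reads from host memory):
```
#include <cstddef>
#include <vector>

// One host-side pool, grown on demand and reused across gemm_batch
// launches, so the metadata is not reallocated for every call.
struct host_pool {
    std::vector<unsigned char> storage;

    void * alloc(std::size_t size) {
        if (storage.size() < size) {
            storage.resize(size);  // grow once, keep for later launches
        }
        return storage.data();
    }
};
```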
Georgi Gerganov [Sat, 18 Jan 2025 14:18:15 +0000 (16:18 +0200)]
cmake : add sanitizer flags for llama.cpp (llama/11279)
* cmake : add sanitizer flags for llama.cpp
ggml-ci
* tests : fix compile warnings
ggml-ci
* cmake : move sanitizer flags to llama_add_compile_flags
ggml-ci
* cmake : move llama.cpp compile flags to top level lists
ggml-ci
* cmake : apply only sanitizer flags at top level
ggml-ci
* tests : fix gguf context use in same_tensor_data
* gguf-test: tensor data comparison
* dummy : trigger ggml-ci
* unicode : silence gcc warnings
ggml-ci
* ci : use sanitizer builds only in Debug mode
ggml-ci
* cmake : add status messages [no ci]
---------
Co-authored-by: Johannes Gäßler <redacted>
Jeff Bolz [Sat, 18 Jan 2025 08:26:50 +0000 (02:26 -0600)]
vulkan: fix coopmat2 flash attention for non-contiguous inputs (llama/11281)
Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.
Add noncontiguous FA tests in test-backend-ops.
Fixes #11268.
Radoslav Gerganov [Fri, 17 Jan 2025 08:57:09 +0000 (10:57 +0200)]
rpc : early register backend devices (llama/11262)
Register RPC devices early and do not propagate RPC specifics into the
llama model structures.
ref: #10609
Jeff Bolz [Thu, 16 Jan 2025 21:47:10 +0000 (15:47 -0600)]
vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (llama/11166)
* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl
Shaders are based on cpy.cu.
* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32
* ggml: copy q->f32 assumes some contiguity in the destination
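For orientation, a CPU-side sketch of the f32 -> q8_0 direction these shaders implement. The block layout follows ggml's q8_0 (32 values plus one per-block scale), but the scale is kept as f32 here to stay self-contained, whereas ggml stores it as fp16:
```
#include <cmath>
#include <cstdint>

constexpr int QK8_0 = 32;  // ggml's q8_0 block size

struct block_q8_0_sketch {  // sketch: real ggml stores the scale as fp16
    float  d;               // per-block scale
    int8_t qs[QK8_0];       // quantized values
};

static void quantize_block_q8_0(const float * x, block_q8_0_sketch * y) {
    float amax = 0.0f;      // absolute max of the block
    for (int i = 0; i < QK8_0; i++) {
        amax = std::fmax(amax, std::fabs(x[i]));
    }
    const float d  = amax / 127.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    y->d = d;
    for (int i = 0; i < QK8_0; i++) {
        y->qs[i] = (int8_t) std::lround(x[i] * id);
    }
}
```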
Jeff Bolz [Thu, 16 Jan 2025 21:23:49 +0000 (15:23 -0600)]
vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (llama/11206)
Do masking on whole dwords, fetch all scales at once.
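A scalar illustration of the dword-masking idea (the actual shader works on the packed 6-bit q4_k/q5_k scales; plain bytes are used here for clarity):
```
#include <cstdint>

// One 32-bit load replaces four byte loads; the individual scales are
// then extracted with masks and shifts.
static inline void extract_four_scales(uint32_t dword, uint8_t s[4]) {
    s[0] = (uint8_t)( dword        & 0xFF);
    s[1] = (uint8_t)((dword >>  8) & 0xFF);
    s[2] = (uint8_t)((dword >> 16) & 0xFF);
    s[3] = (uint8_t)((dword >> 24) & 0xFF);
}
```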
Jeff Bolz [Thu, 16 Jan 2025 21:16:39 +0000 (15:16 -0600)]
vulkan: optimize coopmat2 q2_k dequant function (llama/11130)
Johannes Gäßler [Thu, 16 Jan 2025 15:43:38 +0000 (16:43 +0100)]
CUDA: backwards pass for misc. ops, add tests (llama/11257)
* CUDA: backwards pass for misc. ops, add tests
* remove restrict from pointers
fj-y-saito [Thu, 16 Jan 2025 09:11:49 +0000 (18:11 +0900)]
ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (llama/11227)
* Add SVE support for q4_K_q8_K
* Update ggml/src/ggml-cpu/ggml-cpu-quants.c
change to use K_SCALE_SIZE
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Eve [Wed, 15 Jan 2025 19:50:13 +0000 (19:50 +0000)]
vulkan: scale caching for k quants + misc fixes (llama/11081)
* q6_k scale caching
* 16 bit unpack
* q4_k test (slow)
* revert it
* q3_k
* q2_k
* little stuff
* try precalculating products of a and q2_k scales
* Revert "try precalculating products of a and q2_k scales"
This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.
* unpack should be u16, add vim swap to gitignore (about time)
* better q4_k scales
* q5_k
* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations
* q2_k better dequant
* q3_k optimizations
* q3_k use hmask simd from cpu avx version
* make the caches happy
* q3_k separate out calculation
* q2_k separate out
* little stuff
* use calc_superblock everywhere
* q2_k optimize scale calculation
* more barriers
Junil Kim [Wed, 15 Jan 2025 13:17:42 +0000 (22:17 +0900)]
fix: ggml: fix vulkan-shaders-gen build (llama/10448)
* fix: ggml: fix vulkan-shaders-gen build
The vulkan-shaders-gen target was not being built correctly when
cross-compiling. Other outputs need to be built for the cross-compile
target, but vulkan-shaders-gen needs to be built for the host.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
Use configure_file to generate host_toolchain.cmake from template
* fix: ggml: Fix compile error
Fix compile error not finding vulkan-shaders-gen
* fix: vulkan-shaders-gen build and path handling
Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation
* fix: improve host compiler detection in vulkan shader build
Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation
* refactor: Simplify CMake function for detecting host compiler
Simplified the CMake function to improve the process of detecting the host compiler.
* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt
Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)
* refactor: Rename host_toolchain.cmake.in
- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in
* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Johannes Gäßler [Wed, 15 Jan 2025 11:51:37 +0000 (12:51 +0100)]
RoPE: fix back, CUDA support for back + noncont. (llama/11240)
* RoPE: fix back, CUDA support for back + noncont.
* fix comments reg. non-cont. RoPE support [no-ci]
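For context, a minimal C++ sketch of the rotation RoPE applies to each embedding pair; the backward pass is the same rotation with the angle negated. This is simplified and ignores the non-contiguous layouts the commit also addresses:
```
#include <cmath>

// Rotate one (x0, x1) pair by theta = pos * freq; pass backward = true
// to apply the inverse rotation used by the backward pass.
static void rope_pair(float & x0, float & x1, float pos, float freq, bool backward) {
    const float theta = (backward ? -pos : pos) * freq;
    const float c = std::cos(theta);
    const float s = std::sin(theta);
    const float r0 = x0 * c - x1 * s;
    const float r1 = x0 * s + x1 * c;
    x0 = r0;
    x1 = r1;
}
```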
Akarshan Biswas [Wed, 15 Jan 2025 03:20:17 +0000 (08:50 +0530)]
SYCL: Add gated linear attention kernel (llama/11175)
* SYCL: Add Gated Linear attention kernel
* glahpp: add a space at the end of file
* gla: Put the barrier inside the main logic loop
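For reference, a common textbook formulation of the gated linear attention recurrence, with state S_t, data-dependent gate alpha_t, and per-step key/value/query k_t, v_t, q_t; this is the standard form, not necessarily the exact variant this kernel implements:
```
S_t = \operatorname{diag}(\alpha_t)\, S_{t-1} + k_t v_t^{\top}, \qquad o_t = S_t^{\top} q_t
```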
William Tambellini [Thu, 23 Jan 2025 19:59:08 +0000 (11:59 -0800)]
ggml : add option to not print stack on abort (ggml/1081)
* Add option to not print stack on abort
Add option/envvar to disable stack printing on abort.
Also link some unittests with Threads to fix link errors on
ubuntu/g++11.
* Update ggml/src/ggml.c
---------
Co-authored-by: Diego Devesa <redacted>
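A minimal sketch of the option described above; the environment variable name below is illustrative, not the one the patch actually defines:
```
#include <cstdio>
#include <cstdlib>

// Skip printing the stack when the user asked for quiet aborts.
static void maybe_print_backtrace() {
    if (std::getenv("EXAMPLE_NO_BACKTRACE") != nullptr) {  // illustrative name
        return;
    }
    std::fprintf(stderr, "(stack trace would be printed here)\n");
}
```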
issixx [Fri, 17 Jan 2025 12:29:08 +0000 (21:29 +0900)]
ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065)
Some threads kept looping and failed to terminate properly after an abort during CPU execution.
Co-authored-by: issi <redacted>
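The usual shape of such a fix is to make every worker loop poll a shared abort flag; a self-contained sketch, not the actual ggml-cpu code:
```
#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> abort_requested{false};

// Each worker checks the flag on every iteration, so a thread that never
// touches the aborting node still terminates promptly.
static void worker(int n_chunks) {
    for (int i = 0; i < n_chunks; i++) {
        if (abort_requested.load(std::memory_order_relaxed)) {
            return;
        }
        // ... compute chunk i ...
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; i++) {
        pool.emplace_back(worker, 1000000);
    }
    abort_requested.store(true);  // e.g. raised by an abort callback
    for (auto & t : pool) {
        t.join();
    }
    return 0;
}
```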
Georgi Gerganov [Mon, 3 Feb 2025 14:32:48 +0000 (16:32 +0200)]
ci : dummy commit to trigger CI
KITAITI Makoto [Tue, 21 Jan 2025 07:39:54 +0000 (16:39 +0900)]
ruby : Make context accept initial parameters, API to retrieve a segment and more (#2749)
* Fix type signature for Whisper.log_set
* Use cache file for model when offline
* Extract ruby_whisper_transcribe() into a file
* Extract Whisper::Error
* Use FileList for ext/*.{c,cpp,h}
* Extract Whisper::Segment
* Extract Whisper::Model
* Extract Whisper::Params
* Extract Whisper::Context
* Extract log_callback function
* Write base code in C rather than C++
* Use chdir instead of Dir.chdir in Rakefile
* Define alloc func for Whisper::Model
* Define Whisper::Params' callback and user data reader
* Add test for Whisper::Params.new with keyword arguments
* Make Whisper::Params.new accept keyword arguments
* Update type signatures
* Update README
* Update CLEAN targets
* Fix document comment for Whisper::Params#new_segment_callback=
* Use macro to define params
* Fix dependency of build task
* Set Whisper.finalize_log_callback visibility to private
* Make Whisper::Context#full and full_parallel return self
* Add test for Whisper::Context#full_get_segment
* Add Whisper::Context#full_get_segment
* Update signatures
* Update README
* Fix signature
* Replace #initialize with .new in signature file [skip ci]
* Fix potential overflow
Corey Earwood [Sat, 18 Jan 2025 10:06:06 +0000 (03:06 -0700)]
whisper.objc : fix build and CI
Georgi Gerganov [Tue, 14 Jan 2025 07:53:50 +0000 (09:53 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Tue, 14 Jan 2025 07:50:06 +0000 (09:50 +0200)]
sync : ggml
Johannes Gäßler [Tue, 14 Jan 2025 07:31:07 +0000 (09:31 +0200)]
GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030)
ggml-ci
lhez [Tue, 14 Jan 2025 07:24:03 +0000 (09:24 +0200)]
ggml : add opencl backend (skip) (llama/10693)
---------
Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>
Andreas Kieslinger [Mon, 13 Jan 2025 15:45:53 +0000 (16:45 +0100)]
cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (llama/11042)
* Refactor: Moves cuda graph executable update step to separate function.
* Refactor: Moves cuda graph update check to separate function.
* Refactor: Moves cuda graph maintenance (update or adjusting copy parameters) to separate function for improved readability.
* Fix: Adds missing reference to maintain_cuda_graph() definition.
* Refactor: Improves structure and abstractions by moving CUDA graph evaluation and capture to its own function.
* Refactor: Moves node graph checks and copy ops into individual function for improved readability.
* Refactor: Removes code permanently excluded from compilation to increase readability.
* Style: Adds missing newline
* Style: Consolidates several neighboring '#ifdef USE_CUDA_GRAPH' into a single one
* Refactor: Makes 'cuda_graph_update_required' a local variable
* remove double lines between functions
---------
Co-authored-by: slaren <redacted>
Radoslav Gerganov [Mon, 13 Jan 2025 11:31:41 +0000 (13:31 +0200)]
ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (llama/11211)
Build fails when using HIP and GGML_BACKEND_DL:
```
/usr/bin/ld: ../ggml/src/libggml.so: undefined reference to `ggml_backend_cuda_reg'
collect2: error: ld returned 1 exit status
```
This patch fixes this.
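The shape of the fix, sketched with stand-in code: the direct reference to the CUDA registration symbol must be compiled only when the backend is actually linked in, not when backends are loaded dynamically:
```
#include <cstdio>

static void register_backend(const char * name) { std::printf("register %s\n", name); }

#if defined(GGML_USE_CUDA) && !defined(GGML_BACKEND_DL)
extern "C" void * ggml_backend_cuda_reg(void);  // stand-in declaration; resolves only when linked statically
#endif

int main() {
#if defined(GGML_USE_CUDA) && !defined(GGML_BACKEND_DL)
    register_backend("cuda (static)");
#else
    register_backend("cuda (dynamically loaded, no link-time symbol)");
#endif
    return 0;
}
```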
0cc4m [Fri, 10 Jan 2025 05:39:33 +0000 (06:39 +0100)]
Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (llama/11161)
* Vulkan: Remove float16 use in shaders
* Fix validation error about subgroup_size_control extension
Molly Sophia [Fri, 10 Jan 2025 01:58:08 +0000 (09:58 +0800)]
llama: add support for QRWKV6 model architecture (llama/11001)
* WIP: Add support for RWKV6Qwen2
Signed-off-by: Molly Sophia <redacted>
* RWKV: Some graph simplification
Signed-off-by: Molly Sophia <redacted>
* Add support for RWKV6Qwen2 with cpu and cuda GLA
Signed-off-by: Molly Sophia <redacted>
* RWKV6[QWEN2]: Concat lerp weights together to reduce cpu overhead
Signed-off-by: Molly Sophia <redacted>
* Fix some typos
Signed-off-by: Molly Sophia <redacted>
* code format changes
Signed-off-by: Molly Sophia <redacted>
* Fix wkv test & add gla test
Signed-off-by: Molly Sophia <redacted>
* Fix cuda warning
Signed-off-by: Molly Sophia <redacted>
* Update README.md
Signed-off-by: Molly Sophia <redacted>
* Update ggml/src/ggml-cuda/gla.cu
Co-authored-by: Georgi Gerganov <redacted>
* Fix fused lerp weights loading with RWKV6
Signed-off-by: Molly Sophia <redacted>
* better sanity check skipping for QRWKV6 in llama-quant
thanks @compilade
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: compilade <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: compilade <redacted>
Akarshan Biswas [Fri, 10 Jan 2025 00:13:03 +0000 (05:43 +0530)]
SYCL: Refactor ggml_sycl_compute_forward (llama/11121)
* SYCL: refactor ggml_sycl_compute_forward
* SYCL: add back GGML_UNUSED(dst) to ggml_sycl_cpy
* SYCL: add function name to noop debug
* SYCL: Some device info print refactoring and add details of XMX availability
hydai [Wed, 8 Jan 2025 20:03:28 +0000 (04:03 +0800)]
fix: add missing msg in static_assert (llama/11143)
Signed-off-by: hydai <redacted>
amritahs-ibm [Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)]
llamafile : ppc64le MMA INT8 implementation (llama/10912)
This change upstreams llamafile's CPU matrix multiplication kernels for
ppc64le, using MMA builtins for the quantized int8 datatype.
This change results in a 10% - 70% improvement in total speed (i.e., all
tokens / total time) across various batch sizes.
The patch was tested with the Meta-Llama-3-8B, Mistral-7B, and
Llama-2-7B-chat-hf models on an IBM POWER10 machine.
Signed-off-by: Amrita H S <redacted>
Mathieu Baudier [Wed, 8 Jan 2025 08:18:13 +0000 (09:18 +0100)]
Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (llama/11117)
* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.
* Perform Vulkan extensions checks in a more sensible order
* Remove unnecessary #ifdef directive
ag2s20150909 [Wed, 8 Jan 2025 08:17:29 +0000 (16:17 +0800)]
fix: Vulkan shader gen binary path when Cross-compiling (llama/11096)
* fix: Vulkan shader gen binary path when cross compiling
Johannes Gäßler [Tue, 7 Jan 2025 17:01:58 +0000 (18:01 +0100)]
GGUF: C++ refactor, backend support, misc fixes (llama/11030)
* GGUF: C++ refactor, backend support, misc fixes
remove ggml_tensor.backend
update CODEOWNERS [no ci]
remove gguf_get_data from API
revise GGUF API data types
Diego Devesa [Tue, 7 Jan 2025 15:11:57 +0000 (16:11 +0100)]
ggml-backend : only offload from host buffers (fix) (llama/11124)
Diego Devesa [Tue, 7 Jan 2025 11:38:05 +0000 (12:38 +0100)]
ggml-backend : only offload from host buffers (llama/11120)
Radoslav Gerganov [Tue, 7 Jan 2025 06:37:02 +0000 (08:37 +0200)]
rpc : code cleanup (llama/11107)
Remove duplicated macros, use GGML_LOG_ERROR for errors
Akarshan Biswas [Tue, 7 Jan 2025 06:26:07 +0000 (11:56 +0530)]
SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6 (llama/11087)
* SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6
* Revert "SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6"
This reverts commit f62dc45f318e48d375e7734b34cbddee81deed52.
* Reland: Use get_multi_ptr instead of deprecated get_pointer in wkv6
Johannes Gäßler [Mon, 6 Jan 2025 01:33:52 +0000 (02:33 +0100)]
CUDA: add BF16 support (llama/11093)
* CUDA: add BF16 support
0cc4m [Sat, 4 Jan 2025 20:09:59 +0000 (21:09 +0100)]
Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (llama/11074)
* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver
* Add (TM) to AMD name check
matt23654 [Sat, 4 Jan 2025 16:10:30 +0000 (16:10 +0000)]
Support for models with non-512-aligned tensors over RPC. (llama/11047)
* Added init tensor calling code
* Added get_alloc_size forwarding
* Cleaned up and improved type/error handling.
* fix: remove trailing whitespaces.
* Cleanup and use GGML error logging functions.
* Handle potentially dangerous edge cases.
* Apply suggestions from code review
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
Gilad S. [Sat, 4 Jan 2025 08:17:31 +0000 (10:17 +0200)]
fix: Vulkan shader gen binary path (llama/11037)
Radoslav Gerganov [Sun, 5 Jan 2025 07:50:37 +0000 (09:50 +0200)]
ggml : allow loading backend with env variable (ggml/1059)
ref: #1058
Georgi Gerganov [Tue, 14 Jan 2025 07:42:16 +0000 (09:42 +0200)]
scripts : sync opencl, gguf
Georgi Gerganov [Mon, 13 Jan 2025 11:11:37 +0000 (13:11 +0200)]
whisper : fix gpu device selection (#2728)
Georgi Gerganov [Mon, 13 Jan 2025 06:57:33 +0000 (08:57 +0200)]
server : fix build (#2718)
Georgi Gerganov [Mon, 13 Jan 2025 06:55:48 +0000 (08:55 +0200)]
talk-llama : sync llama.cpp (#2709)
NETZkultur GmbH [Mon, 13 Jan 2025 06:55:21 +0000 (07:55 +0100)]
server : generate unique tmp filenames (#2718)
# Summary
This Merge Request adds a mechanism to generate unique filenames for FFmpeg conversions in whisper_server.cpp. Previously, a single fixed filename was used (e.g., whisper-server-tmp.wav), which could result in unexpected file overwrites under certain circumstances. By generating a unique filename per request, any risk of overwriting temporary files is eliminated.
# Background / Motivation
• Problem: Relying on a static filename for temporary audio files may lead to overwrites if multiple operations occur simultaneously or if the same filename is reused.
• Goal: Dynamically generate unique filenames, ensuring each request or operation uses an isolated temporary file.
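A minimal sketch of per-request unique temporary filenames; names are illustrative and the server's actual scheme may differ:
```
#include <cstdint>
#include <random>
#include <sstream>
#include <string>

// Each request gets its own temporary WAV name, so concurrent FFmpeg
// conversions can never clobber each other.
static std::string unique_tmp_filename() {
    static std::random_device rd;
    std::mt19937_64 gen(rd());
    std::uniform_int_distribution<std::uint64_t> dist;
    std::ostringstream oss;
    oss << "whisper-server-" << std::hex << dist(gen) << ".wav";
    return oss.str();
}
```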
Sandro Hanea [Thu, 9 Jan 2025 14:21:07 +0000 (15:21 +0100)]
whisper : add whisper_full_get_segment_no_speech_prob_from_state (#2716)
Jayant [Tue, 7 Jan 2025 11:20:51 +0000 (12:20 +0100)]
readme : add docker instructions (#2711)
I found the Docker instructions in the README.md useful, along with the notes on the differences between Docker variants such as ffmpeg and CUDA support. However, this section was removed in v1.7.4, and I would vote to bring it back.
This pull request adds that section back.
Adam Jones [Mon, 6 Jan 2025 13:17:57 +0000 (13:17 +0000)]
docs: Fix main -> whisper-cli in download scripts (#2707)
Georgi Gerganov [Mon, 6 Jan 2025 13:13:48 +0000 (15:13 +0200)]
release : v1.7.4
Georgi Gerganov [Mon, 6 Jan 2025 08:46:10 +0000 (10:46 +0200)]
ci : cont
Georgi Gerganov [Mon, 6 Jan 2025 07:29:10 +0000 (09:29 +0200)]
ci : fix ubuntu runner names
Yusuf Redžić [Sat, 4 Jan 2025 08:47:41 +0000 (09:47 +0100)]
cli : fix segfault on missing argument (#2700)
Georgi Gerganov [Fri, 3 Jan 2025 14:24:02 +0000 (16:24 +0200)]
ci : fix arm builds
Georgi Gerganov [Fri, 3 Jan 2025 12:11:23 +0000 (14:11 +0200)]
sync : ggml
ggml-ci
Georgi Gerganov [Fri, 3 Jan 2025 12:11:20 +0000 (14:11 +0200)]
ggml : do not install metal source when embed library (ggml/1054)
Georgi Gerganov [Fri, 3 Jan 2025 09:26:14 +0000 (11:26 +0200)]
metal : avoid uint (llama/11019)
Srihari-mcw [Tue, 31 Dec 2024 14:23:33 +0000 (19:53 +0530)]
ggml : fixes for AVXVNNI instruction set with MSVC and Clang (llama/11027)
* Fixes for clang AVX VNNI
* enable AVX VNNI and alder lake build for MSVC
* Apply suggestions from code review
---------
Co-authored-by: slaren <redacted>
Jeff Bolz [Mon, 30 Dec 2024 17:27:11 +0000 (11:27 -0600)]
vulkan: optimize mul_mat for small values of N (llama/10991)
Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.
Share some code for reducing the result values to memory in mul_mat_vec_base.
Jeff Bolz [Sun, 29 Dec 2024 09:16:34 +0000 (03:16 -0600)]
vulkan: im2col and matmul optimizations for stable diffusion (llama/10942)
* tests: Add im2col perf tests
* vulkan: optimize im2col, more elements per thread
* vulkan: increase small tile size for NV_coopmat2
* vulkan: change im2col to 512 elements per workgroup
Jeff Bolz [Sun, 29 Dec 2024 08:35:11 +0000 (02:35 -0600)]
vulkan: Use push constant offset to handle misaligned descriptors (llama/10987)
Eve [Thu, 26 Dec 2024 15:54:44 +0000 (10:54 -0500)]
vulkan: multi-row k quants (llama/10846)
* multi row k quant shaders!
* better row selection
* more row choices
* readjust row selection
* rm_kq=2 by default
Peter [Thu, 26 Dec 2024 13:59:11 +0000 (00:59 +1100)]
examples, ggml : fix GCC compiler warnings (llama/10983)
Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers] (emitted for all struct field except first)
Djip007 [Tue, 24 Dec 2024 17:54:49 +0000 (18:54 +0100)]
ggml : more perfo with llamafile tinyblas on x86_64 (llama/10714)
* more performance with llamafile tinyblas on x86_64.
- add bf16 support
- change dispatch strategy (thanks: https://github.com/ikawrakow/ik_llama.cpp/pull/71)
- reduce memory bandwidth
simple tinyblas dispatch and more cache friendly
* tinyblas dynamic dispatching
* sgemm: add M blocks.
* - git 2.47 uses short ids of length 9.
- show-progress is not part of GNU Wget2
* remove unstable test
Diego Devesa [Tue, 24 Dec 2024 03:05:27 +0000 (04:05 +0100)]
ggml : use wstring for backend search paths (llama/10960)
ggml-ci
Diego Devesa [Tue, 24 Dec 2024 03:05:17 +0000 (04:05 +0100)]
ggml : fix arm enabled features check (llama/10961)
Diego Devesa [Mon, 23 Dec 2024 19:25:52 +0000 (20:25 +0100)]
ggml : fix const usage in SSE path (llama/10962)
yuri@FreeBSD [Mon, 23 Dec 2024 00:20:11 +0000 (16:20 -0800)]
ggml : fix run-time on FreeBSD in get_executable_path() (llama/10948)
Jeff Bolz [Sun, 22 Dec 2024 09:44:01 +0000 (03:44 -0600)]
vulkan: build fixes for 32b (llama/10927)
* vulkan: build fixes for 32b
Should fix #10923
* vulkan: initialize some buffer/offset variables
Jeff Bolz [Sat, 21 Dec 2024 07:04:45 +0000 (01:04 -0600)]
vulkan: optimize coopmat2 dequant functions (llama/10855)
Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.
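A scalar analogue of the trick (illustrative, not the shader code): load 16 bits covering two elements, decode both, and select the wanted one only at the end, so neighboring lanes can share the load and decode:
```
#include <cstdint>

static float dequant_pair_select(const uint16_t * packed, int idx, float scale) {
    const uint16_t w = packed[idx / 2];            // one 16-bit load, two elements
    const float lo = (float)(int8_t)(w & 0xFF) * scale;
    const float hi = (float)(int8_t)(w >> 8)   * scale;
    return (idx & 1) ? hi : lo;                    // component extracted late
}
```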
Adrien Gallouët [Fri, 20 Dec 2024 23:33:37 +0000 (00:33 +0100)]
ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (llama/10874)
* ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0()
Signed-off-by: Adrien Gallouët <redacted>
* ggml-cpu: format code
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Akarshan Biswas [Fri, 20 Dec 2024 15:31:28 +0000 (21:01 +0530)]
SYCL: Migrate away from deprecated ggml_tensor->backend (llama/10840)
* Migrate to tensor->buffer for checking backend buffer type: 1
* SYCL: common.cpp try to migrate away from tensor->backend
* SYCL: fix assertions and add proper comments
* SYCL: remove extra space
* SYCL: Add back static to ggml_backend_buffer_is_sycl_split function
* SYCL: Add pragma directive to suppress warning spam
* SYCL: Integrate debug logs with GGML_LOG and other fixes
* Revert "SYCL: Integrate debug logs with GGML_LOG and other fixes"
This reverts commit 2607b7de0f0d2f4f1f690226f86fa861aa39cb97.
Let's keep the current SYCL specific logging mechanism for now
* SYCL: Use GGML_SYCL_DEBUG after reverting
* SYCL: reg_get_proc_address func, update to the current func signature
* SYCL: Refactor SYCL buffer checks in ggml_sycl_cpy_tensor_2d
Diego Devesa [Fri, 20 Dec 2024 12:31:28 +0000 (13:31 +0100)]
ggml : add test for SVE and disable when it fails (llama/10906)
Adrien Gallouët [Thu, 19 Dec 2024 13:20:41 +0000 (14:20 +0100)]
ggml: fix arm build with gcc (llama/10895)
Signed-off-by: Adrien Gallouët <redacted>
Diego Devesa [Wed, 18 Dec 2024 22:21:42 +0000 (23:21 +0100)]
ggml : fix arm build (llama/10890)
* ggml: GGML_NATIVE uses -mcpu=native on ARM
Signed-off-by: Adrien Gallouët <redacted>
* ggml: Show detected features with GGML_NATIVE
Signed-off-by: Adrien Gallouët <redacted>
* remove msvc support, add GGML_CPU_ARM_ARCH option
* disable llamafile in android example
* march -> mcpu, skip adding feature macros
ggml-ci
---------
Signed-off-by: Adrien Gallouët <redacted>
Co-authored-by: Adrien Gallouët <redacted>
Georgi Gerganov [Wed, 18 Dec 2024 17:27:21 +0000 (19:27 +0200)]
tts : add OuteTTS support (llama/10784)
* server : add "tokens" output
ggml-ci
* server : output embeddings for all tokens when pooling = none
ggml-ci
* server : be explicit about the pooling type in the tests
ggml-ci
* server : do not normalize embeddings when there is no pooling
ggml-ci
* llama : add OuteTTS support (wip)
* wip
* extract features
* first conv
* group norm
* resnet conv
* resnet
* attn
* pos net
* layer norm
* convnext
* head
* hann window
* fix n_embd + remove llama.cpp hacks
* compute hann window
* fft
* spectrum processing
* clean-up
* tts : receive input text and generate codes
* clip : fix new conv name
* tts : minor fix
* tts : add header + minor fixes
ggml-ci
* tts : add mathematical constant
ggml-ci
* tts : fix sampling + cut initial noise
* tts : fixes
* tts : update default samplers
ggml-ci
* tts : text pre-processing
* tts : outetts-voc -> wavtokenizer-dec
* tts : remove hardcoded constants
ggml-ci
* tts : fix tensor shapes
* llama : refactor wavtokenizer tensors
ggml-ci
* cont
ggml-ci
* cont [no ci]
* llama : update WavTokenizer to non-causal attn
* llama : handle no-vocab detokenization
* tts : add Python example for OuteTTS (wip)
* tts : extend python example to generate spectrogram
ggml-ci
* server : fix rebase artifacts
* tts : enable "return_tokens" in Python example
ggml-ci
* tts : minor fixes
* common : support HF download for vocoder
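Of the building blocks listed above, the Hann window is compact enough to sketch; a minimal C++ version of the standard periodic definition (the actual implementation may differ in detail):
```
#include <cmath>
#include <vector>

// w[n] = 0.5 * (1 - cos(2*pi*n / N)), the periodic Hann window.
static std::vector<float> hann_window(int N) {
    const float pi = 3.14159265358979f;
    std::vector<float> w(N);
    for (int n = 0; n < N; n++) {
        w[n] = 0.5f * (1.0f - std::cos(2.0f * pi * (float) n / (float) N));
    }
    return w;
}
```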
Johannes Gäßler [Tue, 17 Dec 2024 18:09:35 +0000 (19:09 +0100)]
tests: add tests for GGUF (llama/10830)
Daniel Bevenius [Thu, 19 Dec 2024 02:50:12 +0000 (03:50 +0100)]
ggml : improve inputs log sched_print_assignments (ggml/1053)
This commit attempts to improve the log message for the inputs of the
splits in the sched_print_assignments function.
The motivation for this change is that currently, even if there are no
inputs, a colon is displayed at the end of the line, which can be a little
confusing when reading the output, as it could be interpreted to mean that
the lines below are inputs when they are in fact nodes. With this change
the colon will only be printed if there actually are inputs.
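The change in spirit, as a tiny sketch with illustrative names:
```
#include <cstdio>

// Print the input list header only when there are inputs, so a bare
// trailing colon can no longer be mistaken for one.
static void print_split(int split_id, int n_inputs) {
    std::printf("split %d", split_id);
    if (n_inputs > 0) {
        std::printf(": %d inputs", n_inputs);
    }
    std::printf("\n");
}
```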
Samuel Durante [Thu, 2 Jan 2025 10:05:38 +0000 (07:05 -0300)]
readme : fix real-time audio input example build instructions (#2692)
Alter [Thu, 2 Jan 2025 10:05:09 +0000 (10:05 +0000)]
objc : rename ggml-cpu-aarch64.c to .cpp (#2687)
Konosuke Sakai [Thu, 2 Jan 2025 10:03:02 +0000 (19:03 +0900)]
docs : replace Core ML with OpenVINO (#2686)
Georgi Gerganov [Tue, 31 Dec 2024 09:46:17 +0000 (11:46 +0200)]
make : fix "main" -> "whisper-cli"
Nikolaj Olsson [Tue, 31 Dec 2024 09:11:42 +0000 (10:11 +0100)]
ci : re-enable Windows cublas build (#2676)
* Enable Windows cublas build
* Re-add v12 cuda
KITAITI Makoto [Mon, 30 Dec 2024 12:26:35 +0000 (21:26 +0900)]
ruby : Fix of C++ header guard name, model URI support, type signature and more (#2683)
* Add test to make Whisper::Context.new accept URI string
* Add test to make Whisper::Context.new accept URI
* Make Whisper::Context.new accept URI string and URI
* Update README
Revert "Fix argument of rb_undefine_finalizer"
* Fix typos
* Add type signature file
* Assign literal to const variable
* Load Whisper::Model::URI from Init_whisper
* Simplify .gitignore
* Don't load whisper.so from whisper/model/uri.rb
* Use each_with_object instead of each
* Add Development section to README
* Rename header guard to conform to C++ naming convention
Georgi Gerganov [Mon, 30 Dec 2024 11:00:18 +0000 (13:00 +0200)]
examples : handle "main.exe" deprecation
Andreas Lubbe [Tue, 24 Dec 2024 07:30:07 +0000 (08:30 +0100)]
cli : add --suppress_nst support (#2664)
Andreas Lubbe [Tue, 24 Dec 2024 07:29:19 +0000 (08:29 +0100)]
cli : add no_speech_thold (#2663)
Georgi Gerganov [Mon, 23 Dec 2024 19:22:10 +0000 (21:22 +0200)]
cmake : remove hardcoded install rpath
Georgi Gerganov [Sun, 22 Dec 2024 13:32:05 +0000 (15:32 +0200)]
server : fix help print
KITAITI Makoto [Sat, 21 Dec 2024 19:52:06 +0000 (04:52 +0900)]
ruby : bug fix on callbacks and no_speech_prob (#2656)
* Don't generate documentation on test
* Move .startup to TestBase class
* Extract new_segment_callback as a function
* Extract progress_callback as a function
* Extract abort_callback as a function
* Extract register_callbacks as a function
* Call callbacks in Whisper::Context#full and #full_parallel
* Fix README
* Handle the cases where content-size is nil and a TTY is not available
* Add tests for no_speech_prob
* Add Whisper::Context#full_get_segment_no_speech_prob and Whisper::Segment#no_speech_prob
Sacha Arbonel [Sat, 21 Dec 2024 15:00:08 +0000 (16:00 +0100)]
server : add no-speech threshold parameter and functionality (#2654)
Georgi Gerganov [Sat, 21 Dec 2024 10:54:35 +0000 (12:54 +0200)]
whisper : rename suppress_non_speech_tokens to suppress_nst (#2653)
Sacha Arbonel [Sat, 21 Dec 2024 10:05:05 +0000 (11:05 +0100)]
server : add option to suppress non-speech tokens (#2649)
* The parameter will suppress non-speech tokens like [LAUGH], [SIGH], etc. from the output when enabled.
* add to whisper_params_parse
* add missing param