git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Georgi Gerganov [Thu, 23 May 2024 14:17:43 +0000 (17:17 +0300)]
ggml : silence UB sanitizer error during iq2_xxs quantization (llama/0)
Georgi Gerganov [Thu, 23 May 2024 07:00:44 +0000 (10:00 +0300)]
ggml : remove ggml_flash_attn and ggml_flash_ff (llama/7463)
ggml-ci
Georgi Gerganov [Thu, 23 May 2024 07:00:21 +0000 (10:00 +0300)]
ggml : drop support for QK_K=64 (llama/7473)
* ggml : drop support for QK_K=64
ggml-ci
* opencl : restore QK_K=256 define
0cc4m [Thu, 23 May 2024 06:59:59 +0000 (08:59 +0200)]
Update vulkan rope implementation to support frequency factors (llama/7475)
Johannes Gäßler [Wed, 22 May 2024 22:31:20 +0000 (00:31 +0200)]
CUDA: fix FA out-of-bounds reads (llama/7479)
Johannes Gäßler [Wed, 22 May 2024 15:58:25 +0000 (17:58 +0200)]
CUDA: fix FA out-of-bounds writes (llama/7465)
Georgi Gerganov [Wed, 22 May 2024 09:36:37 +0000 (12:36 +0300)]
cuda : fix compile warning (llama/7454)
Johannes Gäßler [Wed, 22 May 2024 08:24:29 +0000 (10:24 +0200)]
CUDA: remove incorrect precision check (llama/7454)
Georgi Gerganov [Wed, 22 May 2024 08:01:35 +0000 (11:01 +0300)]
cuda : fix rope + add tests (llama/7452)
* cuda : fix rope pos data
ggml-ci
* ggml : drop mode & 1 == 1 support for ggml_rope
ggml-ci
* ggml : support freq_factors for f16 rope (CPU)
ggml-ci
* tests : add rope tests using frequency factors
ggml-ci
liuwei-git [Tue, 21 May 2024 20:28:32 +0000 (04:28 +0800)]
llama : add phi3 128K model support (llama/7225)
* add phi3 128k support in convert-hf-to-gguf
* add phi3 128k support in cuda
* address build warnings on llama.cpp
* adjust index value in cuda long rope freq factors
* add long rope support in ggml cpu backend
* make freq factors only depend on ctx size
* remove unused rope scaling type 'su' from gguf converter
* fix lint warnings on convert-hf-to-gguf.py
* set to the short freq factor when context size is smaller than the trained context size
* add one line of comments
* metal : support rope freq_factors
* ggml : update ggml_rope_ext API to support freq. factors
* backends : add dev messages to support rope freq. factors
* minor : style
* tests : update to use new rope API
* backends : fix pragma semicolons
* minor : cleanup
* llama : move rope factors from KV header to tensors
* llama : remove tmp assert
* cuda : fix compile warning
* convert : read/write n_head_kv
* llama : fix uninitialized tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 21 May 2024 20:03:42 +0000 (23:03 +0300)]
metal : handle F16 inf values, fix FA partial offload (llama/7434)
ggml-ci
Johannes Gäßler [Tue, 21 May 2024 17:27:12 +0000 (19:27 +0200)]
CUDA: fix unused warning in mmq.cu (llama/7442)
Johannes Gäßler [Tue, 21 May 2024 14:02:12 +0000 (16:02 +0200)]
CUDA: deduplicate mmq code (llama/7397)
Radoslav Gerganov [Mon, 20 May 2024 13:36:55 +0000 (16:36 +0300)]
rpc : track allocated buffers (llama/7411)
* rpc : track allocated buffers
ref: #7407
* rpc : pack rpc_tensor tightly
AidanBeltonS [Mon, 20 May 2024 11:08:23 +0000 (12:08 +0100)]
Update SYCL upscale operation (llama/7321)
* Update SYCL upscale operation
* Formatting
* Remove messages
Herman Semenov [Mon, 20 May 2024 07:33:21 +0000 (07:33 +0000)]
ggml-opencl, llama: using reserve() if count already known (llama/7272)
junchao-loongson [Mon, 20 May 2024 07:19:21 +0000 (15:19 +0800)]
ggml : add loongarch lsx and lasx support (llama/6454)
* add loongarch lsx and lasx optimize code
* Add loongarch compilation support to makefile
* revert stb_image.h
* opt bytes_from_nibbles_32 and sum_i16_pairs_float
* fix undeclared
* format code
* update
* update 2
---------
Co-authored-by: Jinyang He <redacted>
Srihari-mcw [Mon, 20 May 2024 02:18:39 +0000 (19:18 -0700)]
Add provisions for windows support for BF16 code including CMake provision for enabling AVX512_BF16 (llama/7258)
0cc4m [Sun, 19 May 2024 15:19:53 +0000 (17:19 +0200)]
Vulkan Embedding Fix (llama/7360)
* Fix empty Vulkan host buffers
Add fp32 fp16 matmul shader
Fix matmul shader alignment
* Remove deprecated tensor->backend uses
* Fix Vulkan validation errors on embedding models with no offloaded layers
* Fix Vulkan llava segfault when not offloading layers
slaren [Sun, 19 May 2024 15:08:46 +0000 (17:08 +0200)]
ggml : fix another case of quants nans (llama/7387)
Johannes Gäßler [Sun, 19 May 2024 14:46:13 +0000 (16:46 +0200)]
ggml: implement quantized KV cache for FA (llama/7372)
slaren [Sun, 19 May 2024 12:19:37 +0000 (14:19 +0200)]
cuda : clear error after buffer allocation failure (llama/7376)
fraxy-v [Sat, 18 May 2024 22:44:42 +0000 (01:44 +0300)]
Capture CUDA logging output (llama/7298)
* logging: output capture in cuda module
* fix compile error
* fix: vsnprintf terminates with 0, string use not correct
* post review
* Update llama.cpp
Co-authored-by: slaren <redacted>
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Sat, 18 May 2024 10:40:39 +0000 (13:40 +0300)]
android : use "ci-android" branch for CI (llama/7341)
* android : use "ci-android" branch for CI
* ggml : disable SIMD exp and silu for 32-bit ARM
ggml-ci
* android : do not fetch, use add_subdirectory instead
* cmake : provide binary dir
Johannes Gäßler [Sat, 18 May 2024 10:36:25 +0000 (12:36 +0200)]
CUDA: deduplicate FlashAttention code (llama/7352)
Engininja2 [Sat, 18 May 2024 08:05:17 +0000 (02:05 -0600)]
cuda : add half2 __shfl_xor() for ROCm 5.5 (llama/7263)
0cc4m [Sat, 18 May 2024 06:10:58 +0000 (08:10 +0200)]
Update and fix Vulkan soft_max and argsort implementations (llama/7237)
* Update and fix Vulkan softmax implementation
* Update and fix Vulkan argsort implementation
slaren [Sat, 18 May 2024 00:39:54 +0000 (02:39 +0200)]
ggml : fix quants nans when all the group weights are very close to zero (llama/7313)
Johannes Gäßler [Fri, 17 May 2024 16:54:52 +0000 (18:54 +0200)]
CUDA: faster large batch FA without tensor cores (llama/7314)
Radoslav Gerganov [Fri, 17 May 2024 14:25:44 +0000 (17:25 +0300)]
rpc : set SO_REUSEADDR for the server socket (llama/7320)
ref: #7293
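The entry above sets SO_REUSEADDR on the RPC server socket; a minimal Python sketch of the same idea (the real server is C++ in ggml-rpc, and the helper name here is illustrative):

```python
import socket

def make_rpc_server_socket(host="127.0.0.1", port=0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEADDR lets a restarted server rebind its port immediately,
    # instead of failing while old connections linger in TIME_WAIT
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen()
    return s
```

Without the option, restarting the server right after a crash typically fails with "Address already in use".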
Herman Semenov [Fri, 17 May 2024 07:08:49 +0000 (07:08 +0000)]
ggml-quants, llama : removed excess checks (llama/7274)
Justine Tunney [Fri, 17 May 2024 06:58:52 +0000 (02:58 -0400)]
ggml : rewrite silu and softmax for cpu (llama/7154)
This change upstreams llamafile's vectorized expf() functions. This lets
us compute softmax and silu more accurately than the short[65536] lookup
table that GGML previously used to make this operation go faster. We can
support aarch64 and sse2+ with a worst-case rounding error of 2 ulp. It
makes `make -j8 tests && ./tests/test-backend-ops -o SOFT_MAX -b CPU perf`
go 1.5x faster for SSE2+FMA, 1.9x faster for AVX2+FMA and 2.1x on AVX512
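The entry above computes exp directly instead of using a lookup table; a scalar Python sketch of the two operations involved (the real code is vectorized C, this only illustrates the math):

```python
import math

def silu(x):
    # SiLU: x * sigmoid(x), computed with a real exp call
    return x / (1.0 + math.exp(-x))

def softmax(xs):
    # subtract the max first so exp() never overflows
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]
```

The max subtraction is what keeps softmax finite even for large logits; the speedup in the commit comes from vectorizing the exp itself.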
Radoslav Gerganov [Wed, 15 May 2024 12:29:07 +0000 (15:29 +0300)]
rpc : add command line arg for specifying backend memory
ref: #7293
Max Krasnyansky [Thu, 16 May 2024 02:47:36 +0000 (19:47 -0700)]
Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (llama/7191)
* logging: add proper checks for clang to avoid errors and warnings with VA_ARGS
* build: add CMake Presets and toolchain files for Windows ARM64
* matmul-int8: enable matmul-int8 with MSVC and fix Clang warnings
* ci: add support for optimized Windows ARM64 builds with MSVC and LLVM
* matmul-int8: fixed typos in q8_0_q8_0 matmuls
Co-authored-by: Georgi Gerganov <redacted>
* matmul-int8: remove unnecessary casts in q8_0_q8_0
---------
Co-authored-by: Georgi Gerganov <redacted>
kunnis [Wed, 15 May 2024 17:59:12 +0000 (12:59 -0500)]
ggml : use dynamic thread scheduling for matrix multiplication (llama/6915)
* Just reordering some structs.
* Adding in the calls to mm_pause
* Passing around the state
* Renaming and moving a bunch of variables around.
* Extracting the logic to its own function.
* Moving some variable definitions into the chunk function.
* Moving some variables around
* moving src1_cont inside
* Moving row_size
* adding the current_chunk
* Reorg the code.
* Formatting to match the orig patch
* starting to setup the chunking variables
* Starting the buildup of the loop
* The yield shouldn't be necessary.
* adding the looping structure based on the chunk configuration.
* Add in the re-chunking code.
* Making it much more likely to rechunk.
* disable resizing if numa is enabled.
* Updating comments with what we've learned.
* Fix formatting
* Couple more formatting fixes.
* More style fixes.
* Fix Warnings
* Going with unused because there's conditional logic that needs it.
* Update ggml.c
* Update ggml.c
---------
agray3 [Wed, 15 May 2024 13:44:49 +0000 (14:44 +0100)]
Avoid unnecessarily disabling CUDA graphs (llama/7302)
As discussed in PR #6766, CUDA graphs were being disabled in the presence of long prompts.
This fixes the issue by preventing the consecutive-update counter from incrementing unnecessarily
for tokens for which CUDA graphs are disabled due to batch size > 1.
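The counter logic described above can be sketched as follows (function name, return shape, and the limit are all illustrative, not the real ggml-cuda code):

```python
def track_graph_updates(counter, graph_enabled, graph_updated, limit=4):
    # Returns (new_counter, disable_graphs). The key point of the fix:
    # when graphs are off for this token (batch size > 1), leave the
    # consecutive-update counter untouched rather than letting it grow
    # and permanently disable CUDA graphs for the whole run.
    if not graph_enabled:
        return counter, False
    if graph_updated:
        counter += 1
    else:
        counter = 0
    return counter, counter >= limit
```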
slaren [Wed, 15 May 2024 13:08:48 +0000 (15:08 +0200)]
ggml : tag ggml_tensor::backend as deprecated (llama/7290)
AidanBeltonS [Wed, 15 May 2024 12:26:30 +0000 (13:26 +0100)]
Add missing " (llama/7303)
John Balis [Wed, 15 May 2024 08:52:33 +0000 (03:52 -0500)]
ggml : add `ggml_upscale_ext` (ggml/814)
* initial commit with CPU implementation of upscale to shape and test, cuda implementation next
* experimental commit to see if dst shape is correct
* test version
* test
* removed unnecessary params
* refactor
* fixed tests
* ggml : metal impl + cleanup + sycl dev warnings
* patched ggml_upscale cuda op to handle non-contiguous tensors, added test for non-contiguous behavior
* metal : fix upscale op to support nb00 + style
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sun, 16 Jun 2024 09:41:42 +0000 (12:41 +0300)]
scripts : update sync
Borislav Stanimirov [Thu, 13 Jun 2024 10:16:07 +0000 (13:16 +0300)]
whisper : use ggml-cuda in mel calc, set appropriate device (#2236)
* whisper : use ggml-cuda in mel calc, set appropriate device
* whisper : forbid cuda mel calc on devices with compute < 600, workaround for #2230
Georgi Gerganov [Tue, 11 Jun 2024 16:14:38 +0000 (19:14 +0300)]
cuda : fix HIPBLAS build (#2234)
Georgi Gerganov [Tue, 11 Jun 2024 14:39:01 +0000 (17:39 +0300)]
cuda : fix bounds check for src0 rows in MMVQ kernel (#2231)
* cuda : fix bounds check for src0 rows in MMVQ kernel
* Update ggml-cuda/mmvq.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Tue, 11 Jun 2024 14:21:30 +0000 (17:21 +0300)]
ci : fix CUDA builds (#2232)
Borislav Stanimirov [Mon, 10 Jun 2024 18:51:32 +0000 (21:51 +0300)]
whisper : auto-grow working areas for mel_calc_cuda (#2227)
* whisper : auto-grow working areas for mel_calc_cuda, fixes #2226
* whisper : only calculate mel spectrogram on GPU if audio is <= 5 min
Georgi Gerganov [Mon, 10 Jun 2024 07:59:36 +0000 (10:59 +0300)]
whisper : free whisper_mel instances (#2220)
Georgi Gerganov [Thu, 6 Jun 2024 15:51:36 +0000 (18:51 +0300)]
whisper : whisper_state/backend fixes (#2217)
* whisper : fixes
* ci : WHISPER_CUBLAS -> WHISPER_CUDA
Borislav Stanimirov [Thu, 6 Jun 2024 13:20:46 +0000 (16:20 +0300)]
whisper : calculate mel spectrogram directly into a ggml_tensor (#2208)
* whisper : calculate mel spectrogram directly into a ggml_tensor
* whisper : remove unused temp buffer from state
* whisper : fix not initializing wstate.embd_enc
Borislav Stanimirov [Tue, 4 Jun 2024 06:32:23 +0000 (09:32 +0300)]
whisper : add CUDA-specific computation mel spectrograms (#2206)
* whisper : use polymorphic class to calculate mel spectrogram
* whisper : add cuda-specific mel spectrogram calculation
* whisper : conditionally compile cufftGetErrorString to avoid warnings
* build : add new files to makefile
* ruby : add new files to conf script
* build : fix typo in makefile
* whisper : suppress cub warning for deprecated C++ std in whisper-mel-cuda
Borislav Stanimirov [Fri, 31 May 2024 08:37:29 +0000 (11:37 +0300)]
whisper : remove `speed_up` and `phase_vocoder*` functions (#2198)
* whisper : fix cast warning
* whisper : remove phase_vocoder functions, ref #2195
* whisper : remove speed_up from whisper_full_params, closes #2195
Martin Delille [Thu, 30 May 2024 12:43:28 +0000 (14:43 +0200)]
readme : add conan badge (#2196)
* Add conan badge
* Fix markdown formatting
Carlos Zoido [Thu, 30 May 2024 12:06:15 +0000 (14:06 +0200)]
readme : add install instructions for Conan (#2189)
Borislav Stanimirov [Wed, 29 May 2024 16:09:21 +0000 (19:09 +0300)]
whisper: use global cache for sin/cos vals and Hann window (#2194)
- also rename Hanning to Hann as it's named after Julius von Hann
as per Wikipedia
Georgi Gerganov [Mon, 27 May 2024 07:35:09 +0000 (10:35 +0300)]
release : v1.6.2
Georgi Gerganov [Mon, 27 May 2024 07:20:25 +0000 (10:20 +0300)]
Revert "whisper : remove extra backend instance (huh?)" (#2182)
This reverts commit 4caa64b73ed4c0e71097c865b0f6a9c136b007c6.
Daniel Valdivia [Sat, 25 May 2024 07:46:22 +0000 (00:46 -0700)]
server : fix typo (#2181)
A simple comment typo, PR can be dismissed
Todd [Wed, 22 May 2024 20:02:52 +0000 (16:02 -0400)]
ruby : update bindings (#2154)
* update library files
* update whispercpp
* not needed for gem
Georgi Gerganov [Tue, 21 May 2024 15:44:37 +0000 (18:44 +0300)]
release : v1.6.1
William Tambellini [Tue, 21 May 2024 15:31:41 +0000 (08:31 -0700)]
examples : add support for decoding input with ffmpeg (Linux) (#2133)
- search for ffmpeg libs/headers at cmake time
- added ffmpeg-transcode.cpp into libcommon if ffmpeg on
- hooked ffmpeg transcoding in common read_wav(...)
- passed test:
./main -m ggml-base.en.bin -f samples/jfk.mp3
Pedro Probst [Mon, 20 May 2024 06:08:48 +0000 (03:08 -0300)]
node : add flash_attn param (#2170)
Tamotsu Takahashi [Sun, 19 May 2024 08:49:26 +0000 (17:49 +0900)]
ci: Update build.yml to suppress warnings about node.js versions (#2166)
* Update actions to suppress warnings about old node.js
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
* Update actions/upload-artifact, specify android cmdline-tools-version
* Use java 20
gradle 8.1 complains against 21
https://docs.gradle.org/current/userguide/compatibility.html
Georgi Gerganov [Wed, 15 May 2024 06:59:48 +0000 (09:59 +0300)]
release : v1.6.0
Georgi Gerganov [Wed, 15 May 2024 06:38:19 +0000 (09:38 +0300)]
whisper : use flash attention (#2152)
* whisper : use flash attention in the encoder
* whisper : add kv_pad
* whisper : remove extra backend instance (huh?)
* whisper : use FA for cross-attention
* whisper : use FA for self-attention
* whisper : simplify encoder FA
* whisper : add flash_attn runtime parameter
* scripts : add bench log
* scripts : add M1 Pro bench log
petterreinholdtsen [Tue, 14 May 2024 18:32:41 +0000 (20:32 +0200)]
talk-llama : reject runs without required arguments (#2153)
* Extended talk-llama example to reject runs without required arguments.
Print warning and exit if models are not specified on the command line.
* Update examples/talk-llama/talk-llama.cpp
* Update examples/talk-llama/talk-llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 14 May 2024 16:16:32 +0000 (19:16 +0300)]
sync : ggml
Georgi Gerganov [Tue, 14 May 2024 16:09:30 +0000 (19:09 +0300)]
metal : support FA without mask + add asserts (llama/7278)
* ggml : fa without mask + add asserts
ggml-ci
* metal : support non-contiguous KV
ggml-ci
Radoslav Gerganov [Tue, 14 May 2024 11:27:19 +0000 (14:27 +0300)]
ggml : add RPC backend (llama/6829)
* ggml : add RPC backend
The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc).
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
Neo Zhang [Mon, 13 May 2024 10:11:26 +0000 (18:11 +0800)]
rm wait() (llama/7233)
Johannes Gäßler [Sun, 12 May 2024 17:40:45 +0000 (19:40 +0200)]
CUDA: add FP32 FlashAttention vector kernel (llama/7188)
* CUDA: add FP32 FlashAttention vector kernel
* fixup! CUDA: add FP32 FlashAttention vector kernel
* fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
* fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
Georgi Gerganov [Tue, 14 May 2024 16:15:35 +0000 (19:15 +0300)]
scripts : sync ggml-rpc
thewh1teagle [Tue, 14 May 2024 06:43:41 +0000 (09:43 +0300)]
whisper : fix model path encoding in windows (#2086)
* fix: model path encoding in windows
* fix: convert model path to wide string only for MSVC compiler
Georgi Gerganov [Mon, 13 May 2024 12:33:46 +0000 (15:33 +0300)]
server : return utf-8 (#2138)
Pedro Probst [Mon, 13 May 2024 12:22:23 +0000 (09:22 -0300)]
node : add audio_ctx and audio buffer params (#2123)
* node : add audio_ctx param
* node : support passing audio buffer directly
* node : parse audio_ctx in index.js
---------
Co-authored-by: Georgi Gerganov <redacted>
aldorof [Mon, 13 May 2024 12:18:43 +0000 (08:18 -0400)]
cmake : fix HIP/ROCm build (#2102)
valVk [Mon, 13 May 2024 12:15:43 +0000 (15:15 +0300)]
node : add additional params (#2000)
* Add additional params to addon.node
* Add comma_in_time as parameter
* Fix tests
Mark Karpelès [Mon, 13 May 2024 12:13:19 +0000 (21:13 +0900)]
js : remove un-needed request header from fetchRemote (#2119)
Georgi Gerganov [Mon, 13 May 2024 12:09:35 +0000 (15:09 +0300)]
cmake : fix metal embed sources path (#2110)
Daniel Ziegenberg [Mon, 13 May 2024 12:00:19 +0000 (14:00 +0200)]
main : dont print timings with --no-prints (#2108)
Signed-off-by: Daniel Ziegenberg <redacted>
Daniel Ziegenberg [Mon, 13 May 2024 11:59:44 +0000 (13:59 +0200)]
main : add options for temperature control (#2088)
Add two options:
```
-tp, --temperature N [0.00 ] The sampling temperature, between 0 and 1
-tpi, --temperature-inc N [0.20 ] The increment of temperature, between 0 and 1
```
The sampling temperature, between 0 and 1. Higher values like 0.8 will
make the output more random, while lower values like 0.2 will make it
more focused and deterministic. If set to 0, the model will use log
probability to automatically increase the temperature until certain
thresholds are hit.
Signed-off-by: Daniel Ziegenberg <redacted>
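Temperature scaling as described above can be sketched in a few lines of Python (the helper name is illustrative; whisper.cpp implements this in C++):

```python
import math

def apply_temperature(logits, temperature):
    # temperature < 1 sharpens the distribution (more deterministic),
    # temperature > 1 flattens it (more random); temperature 0 is handled
    # by the caller as greedy argmax decoding
    if temperature <= 0.0:
        raise ValueError("use greedy decoding for temperature 0")
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    es = [math.exp(s - m) for s in scaled]
    z = sum(es)
    return [e / z for e in es]
```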
Georgi Gerganov [Mon, 13 May 2024 11:43:43 +0000 (14:43 +0300)]
whisper : switch back to F32 mask (#0)
zhangjixiong [Mon, 13 May 2024 11:30:03 +0000 (19:30 +0800)]
whisper.android : update example, add field to print timestamp (#2072)
Xingchen Song(宋星辰) [Mon, 13 May 2024 11:29:39 +0000 (19:29 +0800)]
cmake : fix json INTERFACE library (#2069)
mashizora [Mon, 13 May 2024 08:55:32 +0000 (16:55 +0800)]
main : fix double quote escaping in csv output (#2090)
Georgi Gerganov [Mon, 13 May 2024 08:01:07 +0000 (11:01 +0300)]
metal : tune soft_max number of threads (#0)
Georgi Gerganov [Mon, 13 May 2024 07:41:33 +0000 (10:41 +0300)]
whisper : remove old flash attn code (#0)
Georgi Gerganov [Sun, 12 May 2024 17:36:31 +0000 (20:36 +0300)]
ggml : try fix ppc64 (#0)
Georgi Gerganov [Sun, 12 May 2024 17:55:57 +0000 (20:55 +0300)]
ggml : remove obsolete alibi code (skipme) (#0)
Georgi Gerganov [Sun, 12 May 2024 17:12:46 +0000 (20:12 +0300)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sun, 12 May 2024 16:23:22 +0000 (19:23 +0300)]
sync : ggml
Hong Bo PENG [Sun, 12 May 2024 09:17:18 +0000 (17:17 +0800)]
ggml : optimize for ppc64le using VSX intrinsics (ggml/784)
* optimize for ppc64le using VSX intrinsics
* 1. code clean up by removing comments about overflow concern.
2. fix typo in suffix of scaling.
* Continue to fix typo in suffix of scaling for QK_K <> 256
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sat, 11 May 2024 13:57:53 +0000 (16:57 +0300)]
metal : fix indent (ggml/0)
Georgi Gerganov [Sat, 11 May 2024 13:50:54 +0000 (16:50 +0300)]
ggml : restore sigmoid decl order (ggml/0)
Georgi Gerganov [Sat, 11 May 2024 13:25:50 +0000 (16:25 +0300)]
ggml : resolve merge (ggml/0)
ggml-ci
Georgi Gerganov [Sat, 11 May 2024 07:32:41 +0000 (10:32 +0300)]
ggml : full ALiBi support (llama/7192)
* ggml : full ALiBi support
* ggml : update ggml_soft_max_ext() CUDA, SYCL
* ggml : ggml_flash_attn_ext() support ALiBi (CPU)
* ggml : ggml_flash_attn_ext() support ALiBi (Metal)
* ggml : fix warning
* ggml : ggml_flash_attn_ext() support ALiBi (CUDA)
ggml-ci
* ggml : fix assert message
* vulkan : add dev notes
* ggml : require mask when using ALiBi
ggml-ci
* convert : fix convert for refact models
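The entry above folds ALiBi into the soft_max/flash-attention ops; a minimal Python sketch of the standard ALiBi formula (Press et al.), not the ggml code itself:

```python
def alibi_slopes(n_heads):
    # per-head slopes: a geometric sequence with ratio 2^(-8/n);
    # valid as written when n_heads is a power of two
    base = 2.0 ** (-8.0 / n_heads)
    return [base ** (i + 1) for i in range(n_heads)]

def alibi_bias(slope, q_pos, k_pos):
    # linear distance penalty added to the attention score before softmax
    return -slope * (q_pos - k_pos)
```

Because the bias is added to attention scores, supporting it inside `ggml_soft_max_ext()` and `ggml_flash_attn_ext()` avoids materializing a separate bias tensor.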
Georgi Gerganov [Fri, 10 May 2024 15:20:10 +0000 (18:20 +0300)]
metal : fix flash attention kernel requirements (llama/7169)
* metal : fix flash attention kernel requirements
ggml-ci
* metal : fix ggml_metal_supports_op
ggml-ci
Ouadie EL FAROUKI [Fri, 10 May 2024 00:32:15 +0000 (01:32 +0100)]
Minor arithmetic improvement to mmvq wrapper kernel (llama/7172)
0cc4m [Thu, 9 May 2024 18:39:54 +0000 (20:39 +0200)]
Vulkan Bugfixes and Improvements (llama/7084)
* Modify mat mat mul shader for mul_mat_id, modify mat vec mul shaders for single call batch operation
* Further work towards MoE, disabled for now
* Disable MoE code (not ready yet), fix a number of bugs in shaders and Vulkan code
* Add softmax with f16 mask and pos buffer support
* Disable mul_mat_id shaders for now
* Fix flake8
* Fix validation errors caused by empty buffers on larger batch sizes
Johannes Gäßler [Thu, 9 May 2024 12:32:02 +0000 (14:32 +0200)]
CUDA: generalize FP16 fattn vec kernel (llama/7061)
* CUDA: generalize FP16 fattn vec kernel
* disable unsupported head sizes for AMD in test
* try AMD fix
* fix batch size 2-8
* partially revert changes
Albert Jin [Thu, 9 May 2024 09:34:37 +0000 (17:34 +0800)]
opencl : alignment size converted from bits to bytes (llama/7090)
* opencl alignment size should be converted from bits to bytes
Reference: https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_API.html#CL_DEVICE_MEM_BASE_ADDR_ALIGN
> Alignment requirement (in bits) for sub-buffer offsets.
* Update ggml-opencl.cpp for readability using division instead of shift
Co-authored-by: Jared Van Bortel <redacted>
---------
Co-authored-by: Jared Van Bortel <redacted>
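The conversion the fix above performs is a single division, since the OpenCL spec reports CL_DEVICE_MEM_BASE_ADDR_ALIGN in bits while buffer offsets are measured in bytes (helper name is illustrative):

```python
def base_addr_align_bytes(align_bits):
    # CL_DEVICE_MEM_BASE_ADDR_ALIGN is an alignment requirement in *bits*;
    # sub-buffer offset alignment in bytes needs a division by 8
    return align_bits // 8
```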
agray3 [Wed, 8 May 2024 20:55:49 +0000 (21:55 +0100)]
Introduction of CUDA Graphs to LLama.cpp (llama/6766)
* DRAFT: Introduction of CUDA Graphs to LLama.cpp
* Fix issues raised in comments
* Tidied to now only use CUDA runtime (not mixed with driver calls)
* disable for multi-gpu and batch size > 1
* Disable CUDA graphs for old GPU arch and with env var
* added missing CUDA_CHECKs
* Addressed comments
* further addressed comments
* limit to GGML_ALLOW_CUDA_GRAPHS defined in llama.cpp cmake
* Added more comprehensive graph node checking
* With mechanism to fall back if graph capture fails
* Revert "With mechanism to fall back if graph capture fails"
This reverts commit eb9f15fb6fcb81384f732c4601a5b25c016a5143.
* Fall back if graph capture fails and address other comments
* - renamed GGML_ALLOW_CUDA_GRAPHS to GGML_CUDA_USE_GRAPHS
- rename env variable to disable CUDA graphs to GGML_CUDA_DISABLE_GRAPHS
- updated Makefile build to enable CUDA graphs
- removed graph capture failure checking in ggml_cuda_error
using a global variable to track this is not thread safe, but I am also not satisfied with checking an error by string
if this is necessary to work around some issues with graph capture with e.g. cuBLAS, we can pass the ggml_backend_cuda_context to the error checking macro and store the result in the context
- fixed several resource leaks
- fixed issue with zero node graphs
- changed fixed size arrays to vectors
- removed the count of number of evaluations before start capturing, and instead changed the capture mode to relaxed
- removed the check for multiple devices so that it is still possible to use a single device, instead checks for split buffers to disable cuda graphs with -sm row
- changed the op for checking batch size to GGML_OP_ADD, should be more reliable than GGML_OP_SOFT_MAX
- code style fixes
- things to look into
- VRAM usage of the cudaGraphExec_t, if it is significant we may need to make it optional
- possibility of using cudaStreamBeginCaptureToGraph to keep track of which ggml graph nodes correspond to which cuda graph nodes
* fix build without cuda graphs
* remove outdated comment
* replace minimum cc value with a constant
---------
Co-authored-by: slaren <redacted>