git.djapps.eu Git - pkg/ggml/sources/ggml/log
Georgi Gerganov [Wed, 29 May 2024 09:58:18 +0000 (12:58 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Wed, 29 May 2024 09:58:00 +0000 (12:58 +0300)]
examples : adapt to new ggml_concat (#0)
zhouwg [Wed, 29 May 2024 02:09:31 +0000 (10:09 +0800)]
ggml : fix typo in ggml.c (llama/7603)
Meng, Hengyu [Tue, 28 May 2024 23:00:24 +0000 (07:00 +0800)]
Align GEMM dispatch (llama/7566)
* align GEMM dispatch
Georgi Gerganov [Tue, 28 May 2024 19:22:50 +0000 (22:22 +0300)]
sycl : fix assert (llama/7563)
k.h.lai [Tue, 28 May 2024 17:25:08 +0000 (01:25 +0800)]
vulkan: properly initialize vulkan devices for LLAMA_SPLIT_MODE_NONE (llama/7552)
Radoslav Gerganov [Tue, 28 May 2024 15:13:36 +0000 (18:13 +0300)]
rpc : resource management rework (llama/7562)
* rpc : resource management rework
* address review comments
Neo Zhang [Tue, 28 May 2024 09:53:37 +0000 (17:53 +0800)]
fix ggml_sycl_mul_mat_id() to match the change of api (llama/7436)
* fix mul_mat_id to match the change of api
* rm comment
* rm unused or duplicated code, rename as review comment
Georgi Gerganov [Tue, 28 May 2024 08:04:19 +0000 (11:04 +0300)]
ggml : generalize GGML_OP_CONCAT (llama/7563)
* ggml : generalize GGML_OP_CONCAT (WIP)
ggml-ci
* tests : add dim != 2 tests
* metal : generalize concat kernel
* tests : naming
* cuda : generalize concat kernel
ggml-ci
* sycl : add warning and assert
* ggml : fix op params handling
* metal : bugfix kernel
ggml-ci
* ggml : reimplement CPU and Metal
* cuda : add asserts
ggml-ci
* ggml : fix ptrs
ggml-ci
Djip007 [Mon, 27 May 2024 23:40:47 +0000 (01:40 +0200)]
update HIP_UMA #7399 (llama/7414)
* update HIP_UMA #7399
add use of hipMemAdviseSetCoarseGrain when LLAMA_HIP_UMA is enabled (see the sketch after this entry).
- get x2 on prompt eval and x1.5 on token gen with rocm6.0 on ryzen 7940HX iGPU (780M/gfx1103)
* simplify code, more consistent style
---------
Co-authored-by: slaren <redacted>
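A minimal sketch of the advice call referred to above, assuming a managed (UMA) allocation; the helper name and the use of device 0 are illustrative, not the actual ggml HIP code:
```
// Sketch only: allocate a managed buffer and advise the HIP runtime to use
// coarse-grained coherence, which avoids fine-grained cache synchronization
// on APUs such as the 780M/gfx1103 mentioned above.
#include <hip/hip_runtime.h>

static void * alloc_uma_buffer(size_t size) {
    void * ptr = nullptr;
    if (hipMallocManaged(&ptr, size) != hipSuccess) {
        return nullptr;
    }
    // Device 0 is assumed here purely for illustration.
    (void) hipMemAdvise(ptr, size, hipMemAdviseSetCoarseGrain, 0);
    return ptr;
}
```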
agray3 [Mon, 27 May 2024 17:33:42 +0000 (18:33 +0100)]
Allow multiple copy function pointers for CUDA graph kernel param updates (llama/7565)
CUDA graphs require parameter updates to kernels associated with
GGML_OP_CPY nodes. Previously the implementation only checked for a
single CUDA kernel in such nodes, but this caused a bug in cases where
2 such kernels exist. This fixes the issue by using a vector to allow
multiple function pointers to be stored and checked against.
Fixes #7942
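A minimal sketch of the pattern this commit describes, assuming a small state struct; all names are illustrative rather than the actual ggml-cuda types:
```
// Sketch: instead of remembering a single copy-kernel function pointer,
// keep every pointer seen while building the CUDA graph and later check
// whether a node's kernel is one of them.
#include <vector>
#include <algorithm>

struct cuda_graph_state {
    std::vector<void *> ggml_cpy_fn_ptrs;   // all GGML_OP_CPY kernels seen for this graph

    void record(void * fn) {
        if (std::find(ggml_cpy_fn_ptrs.begin(), ggml_cpy_fn_ptrs.end(), fn) == ggml_cpy_fn_ptrs.end()) {
            ggml_cpy_fn_ptrs.push_back(fn);
        }
    }

    bool is_cpy_kernel(void * fn) const {
        return std::find(ggml_cpy_fn_ptrs.begin(), ggml_cpy_fn_ptrs.end(), fn) != ggml_cpy_fn_ptrs.end();
    }
};
```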
AidanBeltonS [Mon, 27 May 2024 16:34:51 +0000 (17:34 +0100)]
Fix q_xxs using mul_mat_q (llama/7459)
AidanBeltonS [Mon, 27 May 2024 12:34:09 +0000 (13:34 +0100)]
Add freq factors (llama/7495)
Georgi Gerganov [Mon, 27 May 2024 09:10:19 +0000 (12:10 +0300)]
metal : add GGML_OP_REPEAT kernels (llama/7557)
ggml-ci
Georgi Gerganov [Mon, 27 May 2024 07:38:39 +0000 (10:38 +0300)]
metal : disable FA kernel for HS=256 (llama/7556)
ggml-ci
Georgi Gerganov [Sun, 26 May 2024 15:35:23 +0000 (18:35 +0300)]
ggml : restore ggml_rope_xpos_inplace (#0)
ggml-ci
Georgi Gerganov [Sun, 26 May 2024 15:00:48 +0000 (18:00 +0300)]
sync : llama.cpp
ggml-ci
Masaya, Kato [Sat, 25 May 2024 08:42:31 +0000 (17:42 +0900)]
ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (llama/7433)
* Add SVE support for q4_0_q8_0 q8_0_q8_0
* remove ifdef
Georgi Gerganov [Thu, 23 May 2024 14:17:43 +0000 (17:17 +0300)]
ggml : silence UB sanitizer error during iq2_xxs quantization (llama/0)
Georgi Gerganov [Thu, 23 May 2024 07:00:44 +0000 (10:00 +0300)]
ggml : remove ggml_flash_attn and ggml_flash_ff (llama/7463)
ggml-ci
Georgi Gerganov [Thu, 23 May 2024 07:00:21 +0000 (10:00 +0300)]
ggml : drop support for QK_K=64 (llama/7473)
* ggml : drop support for QK_K=64
ggml-ci
* opencl : restore QK_K=256 define
0cc4m [Thu, 23 May 2024 06:59:59 +0000 (08:59 +0200)]
Update vulkan rope implementation to support frequency factors (llama/7475)
Johannes Gäßler [Wed, 22 May 2024 22:31:20 +0000 (00:31 +0200)]
CUDA: fix FA out-of-bounds reads (llama/7479)
Johannes Gäßler [Wed, 22 May 2024 15:58:25 +0000 (17:58 +0200)]
CUDA: fix FA out-of-bounds writes (llama/7465)
Georgi Gerganov [Wed, 22 May 2024 09:36:37 +0000 (12:36 +0300)]
cuda : fix compile warning (llama/7454)
Johannes Gäßler [Wed, 22 May 2024 08:24:29 +0000 (10:24 +0200)]
CUDA: remove incorrect precision check (llama/7454)
Georgi Gerganov [Wed, 22 May 2024 08:01:35 +0000 (11:01 +0300)]
cuda : fix rope + add tests (llama/7452)
* cuda : fix rope pos data
ggml-ci
* ggml : drop mode & 1 == 1 support for ggml_rope
ggml-ci
* ggml : support freq_factors for f16 rope (CPU)
ggml-ci
* tests : add rope tests using frequency factors
ggml-ci
liuwei-git [Tue, 21 May 2024 20:28:32 +0000 (04:28 +0800)]
llama : add phi3 128K model support (llama/7225)
* add phi3 128k support in convert-hf-to-gguf
* add phi3 128k support in cuda
* address build warnings on llama.cpp
* adjust index value in cuda long rope freq factors
* add long rope support in ggml cpu backend
* make freq factors only depend on ctx size
* remove unused rope scaling type 'su' from gguf converter
* fix lint warnings on convert-hf-to-gguf.py
* set to the short freq factor when context size is smaller than trained context size
* add one line of comments
* metal : support rope freq_factors
* ggml : update ggml_rope_ext API to support freq. factors
* backends : add dev messages to support rope freq. factors
* minor : style
* tests : update to use new rope API
* backends : fix pragma semicolons
* minor : cleanup
* llama : move rope factors from KV header to tensors
* llama : remove tmp assert
* cuda : fix compile warning
* convert : read/write n_head_kv
* llama : fix uninitialized tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 21 May 2024 20:03:42 +0000 (23:03 +0300)]
metal : handle F16 inf values, fix FA partial offload (llama/7434)
ggml-ci
Johannes Gäßler [Tue, 21 May 2024 17:27:12 +0000 (19:27 +0200)]
CUDA: fix unused warning in mmq.cu (llama/7442)
Johannes Gäßler [Tue, 21 May 2024 14:02:12 +0000 (16:02 +0200)]
CUDA: deduplicate mmq code (llama/7397)
Radoslav Gerganov [Mon, 20 May 2024 13:36:55 +0000 (16:36 +0300)]
rpc : track allocated buffers (llama/7411)
* rpc : track allocated buffers
ref: #7407
* rpc : pack rpc_tensor tightly
AidanBeltonS [Mon, 20 May 2024 11:08:23 +0000 (12:08 +0100)]
Update SYCL upscale operation (llama/7321)
* Update SYCL upscale operation
* Formatting
* Remove messages
Herman Semenov [Mon, 20 May 2024 07:33:21 +0000 (07:33 +0000)]
ggml-opencl, llama: using reserve() if count already known (llama/7272)
junchao-loongson [Mon, 20 May 2024 07:19:21 +0000 (15:19 +0800)]
ggml : add loongarch lsx and lasx support (llama/6454)
* add loongarch lsx and lasx optimize code
* Add loongarch compilation support to makefile
* revert stb_image.h
* opt bytes_from_nibbles_32 and sum_i16_pairs_float
* fix undeclared
* format code
* update
* update 2
---------
Co-authored-by: Jinyang He <redacted>
Srihari-mcw [Mon, 20 May 2024 02:18:39 +0000 (19:18 -0700)]
Add provisions for windows support for BF16 code including CMake provision for enabling AVX512_BF16 (llama/7258)
0cc4m [Sun, 19 May 2024 15:19:53 +0000 (17:19 +0200)]
Vulkan Embedding Fix (llama/7360)
* Fix empty Vulkan host buffers
Add fp32 fp16 matmul shader
Fix matmul shader alignment
* Remove deprecated tensor->backend uses
* Fix Vulkan validation errors on embedding models with no offloaded layers
* Fix Vulkan llava segfault when not offloading layers
slaren [Sun, 19 May 2024 15:08:46 +0000 (17:08 +0200)]
ggml : fix another case of quants nans (llama/7387)
Johannes Gäßler [Sun, 19 May 2024 14:46:13 +0000 (16:46 +0200)]
ggml: implement quantized KV cache for FA (llama/7372)
slaren [Sun, 19 May 2024 12:19:37 +0000 (14:19 +0200)]
cuda : clear error after buffer allocation failure (llama/7376)
fraxy-v [Sat, 18 May 2024 22:44:42 +0000 (01:44 +0300)]
Capture CUDA logging output (llama/7298)
* logging: output capture in cuda module
* fix compile error
* fix: vsnprintf terminates with 0, string use not correct
* post review
* Update llama.cpp
Co-authored-by: slaren <redacted>
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Sat, 18 May 2024 10:40:39 +0000 (13:40 +0300)]
android : use "ci-android" branch for CI (llama/7341)
* android : use "ci-android" branch for CI
* ggml : disable SIMD exp and silu for 32-bit ARM
ggml-ci
* android : do not fetch, use add_subdirectory instead
* cmake : provide binary dir
Johannes Gäßler [Sat, 18 May 2024 10:36:25 +0000 (12:36 +0200)]
CUDA: deduplicate FlashAttention code (llama/7352)
Engininja2 [Sat, 18 May 2024 08:05:17 +0000 (02:05 -0600)]
cuda : add half2 __shfl_xor() for ROCm 5.5 (llama/7263)
0cc4m [Sat, 18 May 2024 06:10:58 +0000 (08:10 +0200)]
Update and fix Vulkan soft_max and argsort implementations (llama/7237)
* Update and fix Vulkan softmax implementation
* Update and fix Vulkan argsort implementation
slaren [Sat, 18 May 2024 00:39:54 +0000 (02:39 +0200)]
ggml : fix quants nans when all the group weights are very close to zero (llama/7313)
Johannes Gäßler [Fri, 17 May 2024 16:54:52 +0000 (18:54 +0200)]
CUDA: faster large batch FA without tensor cores (llama/7314)
Radoslav Gerganov [Fri, 17 May 2024 14:25:44 +0000 (17:25 +0300)]
rpc : set SO_REUSEADDR for the server socket (llama/7320)
ref: #7293
Herman Semenov [Fri, 17 May 2024 07:08:49 +0000 (07:08 +0000)]
ggml-quants, llama : removed excess checks (llama/7274)
Justine Tunney [Fri, 17 May 2024 06:58:52 +0000 (02:58 -0400)]
ggml : rewrite silu and softmax for cpu (llama/7154)
This change upstreams llamafile's vectorized expf() functions. This lets
us compute softmax and silu more accurately than the short[65536] lookup
table that GGML previously used to make this operation go faster. We can
support aarch64 and sse2+ with a worst-case rounding error of 2 ulp. It
makes `make -j8 tests && ./tests/test-backend-ops -o SOFT_MAX -b CPU perf`
go 1.5x faster for SSE2+FMA, 1.9x faster for AVX2+FMA and 2.1x on AVX512
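For reference, a plain scalar version of the softmax computation the commit vectorizes (illustrative only; the real change replaces the lookup table with hand-written SIMD expf kernels):
```
// Scalar reference of soft_max over a float row. The commit speeds up the
// expf() calls with vectorized implementations instead of the old 16-bit table.
#include <algorithm>
#include <cmath>
#include <cstddef>

static void soft_max_ref(float * x, size_t n) {
    float max_val = x[0];
    for (size_t i = 1; i < n; ++i) max_val = std::max(max_val, x[i]);

    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        x[i] = expf(x[i] - max_val);   // previously approximated via a short[65536] lookup
        sum += x[i];
    }
    const float inv_sum = 1.0f / sum;
    for (size_t i = 0; i < n; ++i) x[i] *= inv_sum;
}
```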
Radoslav Gerganov [Wed, 15 May 2024 12:29:07 +0000 (15:29 +0300)]
rpc : add command line arg for specifying backend memory
ref: #7293
Max Krasnyansky [Thu, 16 May 2024 02:47:36 +0000 (19:47 -0700)]
Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (llama/7191)
* logging: add proper checks for clang to avoid errors and warnings with VA_ARGS
* build: add CMake Presets and toolchain files for Windows ARM64
* matmul-int8: enable matmul-int8 with MSVC and fix Clang warnings
* ci: add support for optimized Windows ARM64 builds with MSVC and LLVM
* matmul-int8: fixed typos in q8_0_q8_0 matmuls
Co-authored-by: Georgi Gerganov <redacted>
* matmul-int8: remove unnecessary casts in q8_0_q8_0
---------
Co-authored-by: Georgi Gerganov <redacted>
kunnis [Wed, 15 May 2024 17:59:12 +0000 (12:59 -0500)]
ggml : use dynamic thread scheduling for matrix multiplication (llama/6915)
* Just reordering some structs.
* Adding in the calls to mm_pause
* Passing around the state
* Renaming and moving a bunch of variables around.
* Extracting the logic to its own function.
* Moving some variable definitions into the chunk function.
* Moving some variables around
* moving src1_cont inside
* Moving row_size
* adding the current_chunk
* Reorg the code.
* Formatting to match the orig patch
* starting to setup the chunking variables
* Starting the buildup of the loop
* The yield shouldn't be necessary.
* adding the looping structure based on the chunk configuration.
* Add in the re-chunking code.
* Making it much more likely to rechunk.
* disable resizing if numa is enabled.
* Updating comments with what we've learned.
* Fix formatting
* Couple more formatting fixes.
* More style fixes.
* Fix Warnings
* Going with unused because there's conditional logic that needs it.
* Update ggml.c
* Update ggml.c
---------
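A rough sketch of dynamic chunk scheduling with an atomic counter and a pause hint, as the bullets above describe; the structure and names are assumptions, not the actual ggml.c code:
```
// Sketch: each thread atomically grabs the next chunk of output rows instead
// of using a fixed static split, so faster threads naturally take more work.
#include <atomic>
#include <cstdint>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>
#endif

static std::atomic<int64_t> current_chunk{0};

static void mul_mat_worker(int64_t n_chunks, void (*process_chunk)(int64_t)) {
    for (;;) {
        const int64_t chunk = current_chunk.fetch_add(1, std::memory_order_relaxed);
        if (chunk >= n_chunks) {
            break;                // no chunks left for this thread
        }
        process_chunk(chunk);     // compute one block of rows of the result
    }
}

static void spin_wait_hint() {
    // Threads that have to wait (e.g. at a barrier) issue a pause hint on x86,
    // which is what the "calls to mm_pause" bullet refers to.
#if defined(__x86_64__) || defined(_M_X64)
    _mm_pause();
#endif
}
```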
agray3 [Wed, 15 May 2024 13:44:49 +0000 (14:44 +0100)]
Avoid unnecessarily disabling CUDA graphs (llama/7302)
As discussed in PR #6766, CUDA graphs were being disabled in the presence of long prompts.
This fixes the issue by preventing the consecutive-update counter from incrementing unnecessarily
for tokens for which CUDA graphs are disabled due to batch size > 1.
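A rough sketch of the heuristic being fixed, assuming a simple counter struct; the names and the threshold are illustrative, not the actual ggml-cuda logic:
```
// Sketch: only count a "consecutive update" when a CUDA graph was actually in
// use for that token, so long prompts (batch size > 1, graphs disabled) no
// longer push the counter past the limit that permanently disables graphs.
struct cuda_graph_ctx {
    int  number_consecutive_updates = 0;
    bool disable_graphs             = false;
};

static void note_graph_update(cuda_graph_ctx & ctx, bool graph_used_this_token) {
    if (!graph_used_this_token) {
        return;  // batched/prompt tokens do not count against the heuristic
    }
    if (++ctx.number_consecutive_updates > 4) {  // threshold is illustrative
        ctx.disable_graphs = true;
    }
}
```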
slaren [Wed, 15 May 2024 13:08:48 +0000 (15:08 +0200)]
ggml : tag ggml_tensor::backend as deprecated (llama/7290)
AidanBeltonS [Wed, 15 May 2024 12:26:30 +0000 (13:26 +0100)]
Add missing " (llama/7303)
Andrei [Sat, 25 May 2024 12:42:35 +0000 (08:42 -0400)]
cmake : add Vulkan build (#730)
* Add vulkan shaders from llama.cpp
* Add vulkan build to cmake
* remove autogenerated shaders file
* Update sync scripts
* Remove stale vulkan shaders file
* Add up-to-date shaders file
compilade [Fri, 24 May 2024 20:58:29 +0000 (16:58 -0400)]
gguf : use Qn_K for k-quants instead of KQn (#837)
Brian [Sun, 19 May 2024 08:05:26 +0000 (18:05 +1000)]
gguf.md: add sharding to naming convention (#826)
* gguf.md: add sharding to naming convention [no ci]
* gguf.md: Add note on using gguf metadata for model name, version and expert count [no ci]
* gguf.md: Tighten up wording and add regex example [no ci]
* gguf.md: json output for expertcount and shard is numerical [no ci]
Andrei [Fri, 17 May 2024 15:05:08 +0000 (11:05 -0400)]
Add ggml rpc to cmake (#827)
Brian [Fri, 17 May 2024 06:09:01 +0000 (16:09 +1000)]
gguf.md: Add GGUF Naming Convention Section (#822)
* gguf.md: Add GGUF Naming Convention Section
* gguf.md: add BF16
* gguf.md: GGUF Filename Parsing Strategy
* gguf.md: include tensor type table and historical context
* gguf.md: minor corrections
* gguf.md: more detailed breakdown of tensor type mapping
* gguf.md: use Encoding Scheme name instead
* gguf.md: minor correction to overall naming convention
* gguf.md: simplify GGUF Naming Convention
John Balis [Wed, 15 May 2024 08:52:33 +0000 (03:52 -0500)]
ggml : add `ggml_upscale_ext` (#814)
* initial commit with CPU implementation of upscale to shape and test, cuda implementation next
* experimental commit to see if dst shape is correct
* test version
* test
* removed unnecessary params
* refactor
* fixed tests
* ggml : metal impl + cleanup + sycl dev warnings
* patched ggml_upscale cuda op to handle non-contiguous tensors, added test for non-contiguous behavior
* metal : fix upscale op to support nb00 + style
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 15 May 2024 07:37:39 +0000 (10:37 +0300)]
sync : whisper.cpp
Georgi Gerganov [Wed, 15 May 2024 06:38:19 +0000 (09:38 +0300)]
whisper : use flash attention (whisper/2152)
* whisper : use flash attention in the encoder
* whisper : add kv_pad
* whisper : remove extra backend instance (huh?)
* whisper : use FA for cross-attention
* whisper : use FA for self-attention
* whisper : simplify encoder FA
* whisper : add flash_attn runtime parameter
* scripts : add bench log
* scripts : add M1 Pro bench log
Georgi Gerganov [Tue, 14 May 2024 16:13:34 +0000 (19:13 +0300)]
sync : llama.cpp
Georgi Gerganov [Tue, 14 May 2024 16:09:30 +0000 (19:09 +0300)]
metal : support FA without mask + add asserts (llama/7278)
* ggml : fa without mask + add asserts
ggml-ci
* metal : support non-contiguous KV
ggml-ci
Radoslav Gerganov [Tue, 14 May 2024 11:27:19 +0000 (14:27 +0300)]
ggml : add RPC backend (llama/6829)
* ggml : add RPC backend
The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc).
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
Neo Zhang [Mon, 13 May 2024 10:11:26 +0000 (18:11 +0800)]
rm wait() (llama/7233)
Johannes Gäßler [Sun, 12 May 2024 17:40:45 +0000 (19:40 +0200)]
CUDA: add FP32 FlashAttention vector kernel (llama/7188)
* CUDA: add FP32 FlashAttention vector kernel
* fixup! CUDA: add FP32 FlashAttention vector kernel
* fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
* fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
Georgi Gerganov [Tue, 14 May 2024 16:12:05 +0000 (19:12 +0300)]
scripts : sync ggml-rpc
Georgi Gerganov [Tue, 14 May 2024 12:09:31 +0000 (15:09 +0300)]
sync : whisper.cpp
ggml-ci
thewh1teagle [Tue, 14 May 2024 06:43:41 +0000 (09:43 +0300)]
whisper : fix model path encoding in windows (whisper/2086)
* fix: model path encoding in windows
* fix: convert model path to wide string only for MSVC compiler
Daniel Ziegenberg [Mon, 13 May 2024 12:00:19 +0000 (14:00 +0200)]
main : dont print timings with --no-prints (whisper/2108)
Signed-off-by: Daniel Ziegenberg <redacted>
Daniel Ziegenberg [Mon, 13 May 2024 11:59:44 +0000 (13:59 +0200)]
main : add options for temperature control (whisper/2088)
Add two options:
```
-tp, --temperature N [0.00 ] The sampling temperature, between 0 and 1
-tpi, --temperature-inc N [0.20 ] The increment of temperature, between 0 and 1
```
The sampling temperature, between 0 and 1. Higher values like 0.8 will
make the output more random, while lower values like 0.2 will make it
more focused and deterministic. If set to 0, the model will use log
probability to automatically increase the temperature until certain
thresholds are hit.
Signed-off-by: Daniel Ziegenberg <redacted>
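A minimal sketch of what the temperature parameter does to the token distribution (illustrative; not the actual whisper.cpp sampler):
```
// Sketch: logits are divided by the temperature before softmax, so values
// like 0.2 sharpen the distribution (more deterministic) while values like
// 0.8 flatten it (more random).
#include <algorithm>
#include <cmath>
#include <vector>

static std::vector<float> softmax_with_temperature(std::vector<float> logits, float temperature) {
    const float t = std::max(temperature, 1e-6f);  // guard against division by zero
    const float max_logit = *std::max_element(logits.begin(), logits.end());
    float sum = 0.0f;
    for (float & l : logits) {
        l = expf((l - max_logit) / t);
        sum += l;
    }
    for (float & l : logits) {
        l /= sum;
    }
    return logits;
}
```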
Georgi Gerganov [Mon, 13 May 2024 11:43:43 +0000 (14:43 +0300)]
whisper : switch back to F32 mask (whisper/0)
mashizora [Mon, 13 May 2024 08:55:32 +0000 (16:55 +0800)]
main : fix double quote escaping in csv output (whisper/2090)
Georgi Gerganov [Mon, 13 May 2024 08:01:07 +0000 (11:01 +0300)]
metal : tune soft_max number of threads (whisper/0)
Georgi Gerganov [Mon, 13 May 2024 07:41:33 +0000 (10:41 +0300)]
whisper : remove old flash attn code (whisper/0)
Georgi Gerganov [Sun, 12 May 2024 17:36:31 +0000 (20:36 +0300)]
ggml : try fix ppc64 (whisper/0)
Przemysław Pawełczyk [Wed, 8 May 2024 15:33:43 +0000 (17:33 +0200)]
ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128)
goldwaving [Sun, 28 Apr 2024 17:36:12 +0000 (15:06 -0230)]
Remove unnecessary memory reallocation in fft (whisper/2080)
fft_out needs to be twice the frame_size, not the frame_step. It is resized in fft() anyway, but this change prevents an unnecessary reallocation.
n_fft must match the mel filter size, so it is best not to calculate it from the frame size.
We only need to get the magnitudes for half the spectrum since the other half is a mirror and not used in the mel filter loop later.
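A minimal sketch of the half-spectrum magnitude computation described above, assuming interleaved (re, im) FFT output; names are illustrative, not the actual whisper.cpp code:
```
// Sketch: for a real-valued input of length n_fft, bins above n_fft/2 mirror
// the lower half, so only the first n_fft/2 + 1 magnitudes are needed by the
// mel filter bank.
#include <vector>

static std::vector<float> half_spectrum_magnitudes(const std::vector<float> & fft_out, int n_fft) {
    // fft_out is assumed to hold interleaved (re, im) pairs, i.e. 2 * n_fft floats.
    std::vector<float> mag(n_fft / 2 + 1);
    for (int k = 0; k <= n_fft / 2; ++k) {
        const float re = fft_out[2 * k + 0];
        const float im = fft_out[2 * k + 1];
        mag[k] = re * re + im * im;   // squared magnitude, as consumed by the mel filters
    }
    return mag;
}
```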
Georgi Gerganov [Wed, 24 Apr 2024 11:45:27 +0000 (14:45 +0300)]
whisper : more prominent log message for sub-1s audio (whisper/2065)
Georgi Gerganov [Wed, 17 Apr 2024 09:23:47 +0000 (12:23 +0300)]
main : pass nullptr when regex is empty (whisper/2070)
Ikko Eltociear Ashimine [Mon, 15 Apr 2024 16:40:27 +0000 (01:40 +0900)]
whisper : update grammar-parser.cpp (whisper/2058)
preceeding -> preceding
Hong Bo PENG [Sun, 12 May 2024 09:17:18 +0000 (17:17 +0800)]
ggml : optimize for ppc64le using VSX intrinsics (#784)
* optimize for ppc64le using VSX intrinsics
* 1. code clean up by removing comments about overflow concern.
2. fix typo in suffix of scaling.
* Continue to fix typo in suffix of scaling for QK_K <> 256
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sat, 11 May 2024 18:37:29 +0000 (21:37 +0300)]
cuda : remove old alibi sources (#0)
Georgi Gerganov [Sat, 11 May 2024 13:57:53 +0000 (16:57 +0300)]
metal : fix indent (#0)
Georgi Gerganov [Sat, 11 May 2024 13:50:54 +0000 (16:50 +0300)]
ggml : restore sigmoid decl order (#0)
Georgi Gerganov [Sat, 11 May 2024 13:47:45 +0000 (16:47 +0300)]
tests : restore unary tests (#0)
Georgi Gerganov [Sat, 11 May 2024 13:42:01 +0000 (16:42 +0300)]
mnist : clean whitespace
ggml-ci
Georgi Gerganov [Sat, 11 May 2024 13:25:50 +0000 (16:25 +0300)]
ggml : resolve merge (#0)
ggml-ci
Georgi Gerganov [Sat, 11 May 2024 13:20:42 +0000 (16:20 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Sat, 11 May 2024 07:32:41 +0000 (10:32 +0300)]
ggml : full ALiBi support (llama/7192)
* ggml : full ALiBi support
* ggml : update ggml_soft_max_ext() CUDA, SYCL
* ggml : ggml_flash_attn_ext() support ALiBi (CPU)
* ggml : ggml_flash_attn_ext() support ALiBi (Metal)
* ggml : fix warning
* ggml : ggml_flash_attn_ext() support ALiBi (CUDA)
ggml-ci
* ggml : fix assert message
* vulkan : add dev notes
* ggml : require mask when using ALiBi
ggml-ci
* convert : fix convert for refact models
Georgi Gerganov [Fri, 10 May 2024 15:20:10 +0000 (18:20 +0300)]
metal : fix flash attention kernel requirements (llama/7169)
* metal : fix flash attention kernel requirements
ggml-ci
* metal : fix ggml_metal_supports_op
ggml-ci
Ouadie EL FAROUKI [Fri, 10 May 2024 00:32:15 +0000 (01:32 +0100)]
Minor arithmetic improvement to mmvq wrapper kernel (llama/7172)
0cc4m [Thu, 9 May 2024 18:39:54 +0000 (20:39 +0200)]
Vulkan Bugfixes and Improvements (llama/7084)
* Modify mat mat mul shader for mul_mat_id, modify mat vec mul shaders for single call batch operation
* Further work towards MoE, disabled for now
* Disable MoE code (not ready yet), fix a number of bugs in shaders and Vulkan code
* Add softmax with f16 mask and pos buffer support
* Disable mul_mat_id shaders for now
* Fix flake8
* Fix validation errors caused by empty buffers on larger batch sizes
Johannes Gäßler [Thu, 9 May 2024 12:32:02 +0000 (14:32 +0200)]
CUDA: generalize FP16 fattn vec kernel (llama/7061)
* CUDA: generalize FP16 fattn vec kernel
* disable unsupported head sizes for AMD in test
* try AMD fix
* fix batch size 2-8
* partially revert changes
Albert Jin [Thu, 9 May 2024 09:34:37 +0000 (17:34 +0800)]
opencl : alignment size converted from bits to bytes (llama/7090)
* opencl alignment size should be converted from bits to bytes (see the sketch after this entry)
Reference: https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_API.html#CL_DEVICE_MEM_BASE_ADDR_ALIGN
> Alignment requirement (in bits) for sub-buffer offsets.
* Update ggml-opencl.cpp for readability using division instead of shift
Co-authored-by: Jared Van Bortel <redacted>
---------
Co-authored-by: Jared Van Bortel <redacted>
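A minimal sketch of the query-and-convert step referenced above; error handling is omitted for brevity:
```
// CL_DEVICE_MEM_BASE_ADDR_ALIGN reports the alignment in *bits*, so it must be
// divided by 8 before being used as a byte alignment.
#include <CL/cl.h>

static size_t get_mem_base_addr_align_bytes(cl_device_id device) {
    cl_uint align_bits = 0;
    clGetDeviceInfo(device, CL_DEVICE_MEM_BASE_ADDR_ALIGN, sizeof(align_bits), &align_bits, nullptr);
    return (size_t) align_bits / 8;   // bits -> bytes
}
```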
agray3 [Wed, 8 May 2024 20:55:49 +0000 (21:55 +0100)]
Introduction of CUDA Graphs to LLama.cpp (llama/6766)
* DRAFT: Introduction of CUDA Graphs to LLama.cpp
* Fix issues raised in comments
* Tidied to now only use CUDA runtime (not mixed with driver calls)
* disable for multi-gpu and batch size > 1
* Disable CUDA graphs for old GPU arch and with env var
* added missing CUDA_CHECKs
* Addressed comments
* further addressed comments
* limit to GGML_ALLOW_CUDA_GRAPHS defined in llama.cpp cmake
* Added more comprehensive graph node checking
* With mechanism to fall back if graph capture fails
* Revert "With mechanism to fall back if graph capture fails"
This reverts commit eb9f15fb6fcb81384f732c4601a5b25c016a5143.
* Fall back if graph capture fails and address other comments
* - renamed GGML_ALLOW_CUDA_GRAPHS to GGML_CUDA_USE_GRAPHS
- rename env variable to disable CUDA graphs to GGML_CUDA_DISABLE_GRAPHS
- updated Makefile build to enable CUDA graphs
- removed graph capture failure checking in ggml_cuda_error
using a global variable to track this is not thread safe, but I am also not satisfied with checking an error by string
if this is necessary to work around some issues with graph capture with e.g. cuBLAS, we can pass the ggml_backend_cuda_context to the error checking macro and store the result in the context
- fixed several resource leaks
- fixed issue with zero node graphs
- changed fixed size arrays to vectors
- removed the count of number of evaluations before start capturing, and instead changed the capture mode to relaxed
- removed the check for multiple devices so that it is still possible to use a single device, instead checks for split buffers to disable cuda graphs with -sm row
- changed the op for checking batch size to GGML_OP_ADD, should be more reliable than GGML_OP_SOFT_MAX
- code style fixes
- things to look into
- VRAM usage of the cudaGraphExec_t, if it is significant we may need to make it optional
- possibility of using cudaStreamBeginCaptureToGraph to keep track of which ggml graph nodes correspond to which cuda graph nodes
* fix build without cuda graphs
* remove outdated comment
* replace minimum cc value with a constant
---------
Co-authored-by: slaren <redacted>
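A simplified sketch of the capture/instantiate/launch cycle this commit introduces, using relaxed capture mode as noted in the bullets above; error handling, graph updates and the fallback path are omitted, and the helper shape is an assumption rather than the actual ggml-cuda code:
```
// Sketch: capture the kernel launches of one evaluation into a CUDA graph,
// instantiate it once, then replay it on subsequent tokens.
#include <cuda_runtime.h>

static void eval_with_graph(cudaStream_t stream, void (*enqueue_all_kernels)(cudaStream_t)) {
    static cudaGraph_t     graph      = nullptr;
    static cudaGraphExec_t graph_exec = nullptr;

    if (graph_exec == nullptr) {
        // Relaxed capture mode, as the notes above mention.
        cudaStreamBeginCapture(stream, cudaStreamCaptureModeRelaxed);
        enqueue_all_kernels(stream);            // the usual per-token kernel launches
        cudaStreamEndCapture(stream, &graph);
        cudaGraphInstantiate(&graph_exec, graph, nullptr, nullptr, 0);
    }
    cudaGraphLaunch(graph_exec, stream);        // replay the captured work
}
```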
Gilad S [Wed, 8 May 2024 19:08:10 +0000 (22:08 +0300)]
metal : use `vm_allocate` instead of `posix_memalign` on macOS (llama/7078)
* fix: use `malloc` instead of `posix_memalign` in `ggml-metal.m` to make it not crash Electron processes
* fix: typo
* fix: use `vm_allocate` instead of `posix_memalign`
* fix: don't call `newBufferWithBytesNoCopy` with `NULL` when `ggml_metal_host_malloc` returns `NULL`
* fix: use `vm_allocate` only on macOS