git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Georgi Gerganov [Tue, 4 Jun 2024 18:23:20 +0000 (21:23 +0300)]
ggml : remove OpenCL (llama/7735)
ggml-ci
Georgi Gerganov [Tue, 4 Jun 2024 07:01:09 +0000 (10:01 +0300)]
ggml : prevent builds with -ffinite-math-only (llama/7726)
This enforces a check that -fno-finite-math-only was set and that the compiler is
not operating in finite-math mode. During the rewrite of silu and softmax for CPU
(#7154), an issue emerged where the result observed with >1 slot was
nondeterministic, as found by @JohannesGaessler. @LostRuins narrowed the problem
down to -ffinite-math-only, which was theorized to make SiLU return NaN or other
garbage instead of flushing small values to 0. @jart proposed a fix that
@ggerganov then implemented in this commit.
ref https://github.com/ggerganov/llama.cpp/pull/7154#issuecomment-2145661825
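For context, a minimal sketch of how such a build-time guard can be expressed in C
(the actual check in ggml may differ):

    // GCC and Clang define __FINITE_MATH_ONLY__ to 1 under -ffinite-math-only.
    #if defined(__FINITE_MATH_ONLY__) && __FINITE_MATH_ONLY__
    #error "must not be compiled with -ffinite-math-only: SiLU/softmax rely on NaN and inf semantics"
    #endif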
Radoslav Gerganov [Mon, 3 Jun 2024 17:03:26 +0000 (20:03 +0300)]
llama : offload to RPC in addition to other backends (llama/7640)
* llama : offload to RPC in addition to other backends
* - fix copy_tensor being called on the src buffer instead of the dst buffer
- always initialize views in the view_src buffer
- add RPC backend to Makefile build
- add endpoint to all RPC object names
* add rpc-server to Makefile
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Masaya, Kato [Mon, 3 Jun 2024 15:14:15 +0000 (00:14 +0900)]
ggml : use OpenMP as a thread pool (llama/7606)
* ggml: Added OpenMP for multi-threads processing
* ggml : Limit the number of threads used to avoid deadlock
* update shared state n_threads in parallel region
* clear numa affinity for main thread even with openmp
* enable openmp by default
* fix msvc build
* disable openmp on macos
* ci : disable openmp with thread sanitizer
* Update ggml.c
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
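As a rough illustration of the approach (a stand-in for ggml's internals, not the
actual code), dispatching work over an OpenMP parallel region looks like this:

    #include <omp.h>

    // Run work(ith, nth, userdata) on up to n_threads threads. The runtime may
    // grant fewer threads, so the shared thread count is re-read inside the
    // parallel region, mirroring the "update shared state n_threads" fix above.
    void compute_parallel(int n_threads, void (*work)(int, int, void *), void * userdata) {
        #pragma omp parallel num_threads(n_threads)
        {
            int nth = omp_get_num_threads();
            work(omp_get_thread_num(), nth, userdata);
        }
    }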
0cc4m [Mon, 3 Jun 2024 08:59:14 +0000 (10:59 +0200)]
Vulkan Mixture of Experts (MoE) support (llama/7628)
* Finish Vulkan mul_mat_id implementation
* Add Vulkan sum_rows and div ops
* Fix MUL_MAT_ID matrix matrix shader
* Fix MUL_MAT_ID matrix vector shader dispatch size
* Fix MUL_MAT_ID matrix vector shader and dispatch code
* Update Vulkan CPU offload for MUL_MAT_ID
* Fix crash when using split mode none and setting a main GPU
woachk [Mon, 3 Jun 2024 05:32:16 +0000 (07:32 +0200)]
kompute : implement op_getrows_f32 (llama/6403)
op_getrows_f32 is required since https://github.com/ggerganov/llama.cpp/pull/6122
for the Vulkan w/ Kompute backend to be functional.
As such, implement this op to make this backend functional again.
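For reference, the op's semantics are a row gather; a hypothetical scalar version
(the Kompute shader's layout will differ):

    // Copy the rows of src selected by indices into dst.
    void get_rows_f32(const float * src, int n_cols,
                      const int * indices, int n_rows, float * dst) {
        for (int i = 0; i < n_rows; i++) {
            for (int j = 0; j < n_cols; j++) {
                dst[i*n_cols + j] = src[indices[i]*n_cols + j];
            }
        }
    }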
Dave Airlie [Sun, 2 Jun 2024 21:59:54 +0000 (07:59 +1000)]
fix bug introduced in using calloc (llama/7701)
compilade pointed this out on the previous MR
Johannes Gäßler [Sat, 1 Jun 2024 21:26:10 +0000 (23:26 +0200)]
Fix FlashAttention debug test, FP32 assert (llama/7684)
Johannes Gäßler [Sat, 1 Jun 2024 13:47:04 +0000 (15:47 +0200)]
CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (llama/7681)
Johannes Gäßler [Sat, 1 Jun 2024 06:44:14 +0000 (08:44 +0200)]
CUDA: quantized KV support for FA vec (llama/7527)
* CUDA: quantized KV support for FA vec
* try CI fix
* fix commented-out kernel variants
* add q8_0 q4_0 tests
* fix nwarps > batch size
* split fattn compile via extern templates
* fix flake8
* fix metal tests
* fix cmake
* make generate_cu_files.py executable
* add autogenerated .cu files
* fix AMD
* error if type_v != FP16 and not flash_attn
* remove obsolete code
Georgi Gerganov [Fri, 31 May 2024 11:17:10 +0000 (14:17 +0300)]
ggml : fix loongson compile warnings (llama/7537)
* ggml : fix loongson compile warnings
ggml-ci
* Fix loongarch quantize test fail.
Fix unexpected error introduced during rebase code.
* tests : disable json test due to lack of python on the CI node
ggml-ci
---------
Co-authored-by: junchao-loongson <redacted>
Chris Elrod [Thu, 30 May 2024 11:32:55 +0000 (07:32 -0400)]
faster avx512 exp implementation (llama/7551)
* faster avx512 exp implementation
* x->r
* improve accuracy, handle special cases
* remove `e`
junchao-loongson [Thu, 30 May 2024 09:30:10 +0000 (17:30 +0800)]
ggml : fix loongarch build (O2 issue) (llama/7636)
Georgi Gerganov [Wed, 29 May 2024 19:20:40 +0000 (22:20 +0300)]
metal : remove invalid asserts (llama/7617)
Georgi Gerganov [Wed, 29 May 2024 17:45:25 +0000 (20:45 +0300)]
metal : add missing asserts (llama/7617)
Georgi Gerganov [Wed, 29 May 2024 17:17:31 +0000 (20:17 +0300)]
ggml : fix YARN + add tests + add asserts (llama/7617)
* tests : add rope tests
ggml-ci
* ggml : fixes (hopefully)
ggml-ci
* tests : add non-cont tests
ggml-ci
* cuda : add asserts for rope/norm + fix DS2
ggml-ci
* ggml : assert contiguousness
* tests : reduce RoPE tests
ggml-ci
Georgi Gerganov [Wed, 29 May 2024 12:38:26 +0000 (15:38 +0300)]
cuda : non-cont concat support (llama/7610)
* tests : add non-cont concat tests
* cuda : non-cont concat support
ggml-ci
Radoslav Gerganov [Wed, 29 May 2024 11:45:44 +0000 (14:45 +0300)]
llama-bench : add support for the RPC backend (llama/7435)
slaren [Wed, 29 May 2024 11:36:39 +0000 (13:36 +0200)]
ggml : use atomic_flag for critical section (llama/7598)
* ggml : use atomic_flag for critical section
* add windows shims
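The underlying idiom is a spinlock built on C11's atomic_flag; a minimal sketch
(the Windows shims map these calls onto MSVC equivalents):

    #include <stdatomic.h>

    static atomic_flag g_lock = ATOMIC_FLAG_INIT;

    static void critical_section_enter(void) {
        while (atomic_flag_test_and_set(&g_lock)) {
            // spin until the flag is released
        }
    }

    static void critical_section_leave(void) {
        atomic_flag_clear(&g_lock);
    }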
Georgi Gerganov [Wed, 29 May 2024 09:58:00 +0000 (12:58 +0300)]
examples : adapt to new ggml_concat (ggml/0)
zhouwg [Wed, 29 May 2024 02:09:31 +0000 (10:09 +0800)]
ggml : fix typo in ggml.c (llama/7603)
Meng, Hengyu [Tue, 28 May 2024 23:00:24 +0000 (07:00 +0800)]
Align GEMM dispatch (llama/7566)
* align GEMM dispatch
Georgi Gerganov [Tue, 28 May 2024 19:22:50 +0000 (22:22 +0300)]
sycl : fix assert (llama/7563)
k.h.lai [Tue, 28 May 2024 17:25:08 +0000 (01:25 +0800)]
vulkan: properly initialize vulkan devices for LLAMA_SPLIT_MODE_NONE (llama/7552)
Radoslav Gerganov [Tue, 28 May 2024 15:13:36 +0000 (18:13 +0300)]
rpc : resource management rework (llama/7562)
* rpc : resource management rework
* address review comments
Neo Zhang [Tue, 28 May 2024 09:53:37 +0000 (17:53 +0800)]
fix ggml_sycl_mul_mat_id() to match the change of api (llama/7436)
* fix mul_mat_id to match the change of api
* rm comment
* rm unused or duplicated code, rename as review comment
Georgi Gerganov [Tue, 28 May 2024 08:04:19 +0000 (11:04 +0300)]
ggml : generalize GGML_OP_CONCAT (llama/7563)
* ggml : generalize GGML_OP_CONCAT (WIP)
ggml-ci
* tests : add dim != 2 tests
* metal : generalize concat kernel
* tests : naming
* cuda : generalize concat kernel
ggml-ci
* sycl : add warning and assert
* ggml : fix op params handling
* metal : bugfix kernel
ggml-ci
* ggml : reimplement CPU and Metal
* cuda : add asserts
ggml-ci
* ggml : fix ptrs
ggml-ci
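The generalized op concatenates along any dimension; a 2D reference sketch of the
semantics (the real op handles 4D and non-contiguous tensors):

    // Concatenate row-major a (ar x ac) and b (br x bc) into dst along dim:
    // dim 0 stacks rows (requires ac == bc), dim 1 stacks columns (requires ar == br).
    void concat_f32_2d(const float * a, int ar, int ac,
                       const float * b, int br, int bc,
                       float * dst, int dim) {
        if (dim == 0) {
            for (int i = 0; i < ar*ac; i++) dst[i] = a[i];
            for (int i = 0; i < br*bc; i++) dst[ar*ac + i] = b[i];
        } else {
            for (int r = 0; r < ar; r++) {
                for (int c = 0; c < ac; c++) dst[r*(ac + bc) + c]      = a[r*ac + c];
                for (int c = 0; c < bc; c++) dst[r*(ac + bc) + ac + c] = b[r*bc + c];
            }
        }
    }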
Djip007 [Mon, 27 May 2024 23:40:47 +0000 (01:40 +0200)]
update HIP_UMA #7399 (llama/7414)
* update HIP_UMA #7399
add use of hipMemAdviseSetCoarseGrain when LLAMA_HIP_UMA is enabled.
- gives 2x on prompt eval and 1.5x on token gen with ROCm 6.0 on a Ryzen 7940HX iGPU (780M/gfx1103)
* simplify code, more consistent style
---------
Co-authored-by: slaren <redacted>
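For the HIP_UMA change above, the advise call is roughly as follows (a sketch
with error handling omitted; see the HIP runtime docs for hipMemAdvise):

    #include <hip/hip_runtime.h>

    static void * alloc_uma_coarse(size_t size) {
        void * ptr = NULL;
        hipMallocManaged(&ptr, size, hipMemAttachGlobal);
        // mark the allocation coarse-grained so the iGPU can cache it aggressively
        hipMemAdvise(ptr, size, hipMemAdviseSetCoarseGrain, 0 /* device id */);
        return ptr;
    }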
agray3 [Mon, 27 May 2024 17:33:42 +0000 (18:33 +0100)]
Allow multiple copy function pointers for CUDA graph kernel param updates (llama/7565)
CUDA graphs require parameter updates to kernels associated with
GGML_OP_CPY nodes. Previously the implementation only checked for a
single CUDA kernel in such nodes, but this caused a bug in cases where
2 such kernels exist. This fixes the issue by using a vector to allow
multiple function pointers to be stored and checked against.
Fixes #7942
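Illustratively (names hypothetical, not the actual implementation), the fix
amounts to tracking every copy-kernel pointer seen during capture instead of a
single one:

    #define MAX_CPY_KERNELS 16

    static void * cpy_fn_ptrs[MAX_CPY_KERNELS];
    static int    n_cpy_fns = 0;

    // Remember each distinct GGML_OP_CPY kernel so graph param updates can be
    // checked against all of them, not just the first one encountered.
    static void track_cpy_fn(void * fn) {
        for (int i = 0; i < n_cpy_fns; i++) {
            if (cpy_fn_ptrs[i] == fn) return;
        }
        if (n_cpy_fns < MAX_CPY_KERNELS) {
            cpy_fn_ptrs[n_cpy_fns++] = fn;
        }
    }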
AidanBeltonS [Mon, 27 May 2024 16:34:51 +0000 (17:34 +0100)]
Fix q_xxs using mul_mat_q (llama/7459)
AidanBeltonS [Mon, 27 May 2024 12:34:09 +0000 (13:34 +0100)]
Add freq factors (llama/7495)
Georgi Gerganov [Mon, 27 May 2024 09:10:19 +0000 (12:10 +0300)]
metal : add GGML_OP_REPEAT kernels (llama/7557)
ggml-ci
Georgi Gerganov [Mon, 27 May 2024 07:38:39 +0000 (10:38 +0300)]
metal : disable FA kernel for HS=256 (llama/7556)
ggml-ci
Georgi Gerganov [Sun, 26 May 2024 15:35:23 +0000 (18:35 +0300)]
ggml : restore ggml_rope_xpos_inplace (ggml/0)
ggml-ci
Masaya, Kato [Sat, 25 May 2024 08:42:31 +0000 (17:42 +0900)]
ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (llama/7433)
* Add SVE support for q4_0_q8_0 q8_0_q8_0
* remove ifdef
Georgi Gerganov [Thu, 23 May 2024 14:17:43 +0000 (17:17 +0300)]
ggml : silence UB sanitizer error during iq2_xxs quantization (llama/0)
Georgi Gerganov [Thu, 23 May 2024 07:00:44 +0000 (10:00 +0300)]
ggml : remove ggml_flash_attn and ggml_flash_ff (llama/7463)
ggml-ci
Georgi Gerganov [Thu, 23 May 2024 07:00:21 +0000 (10:00 +0300)]
ggml : drop support for QK_K=64 (llama/7473)
* ggml : drop support for QK_K=64
ggml-ci
* opencl : restore QK_K=256 define
0cc4m [Thu, 23 May 2024 06:59:59 +0000 (08:59 +0200)]
Update vulkan rope implementation to support frequency factors (llama/7475)
Johannes Gäßler [Wed, 22 May 2024 22:31:20 +0000 (00:31 +0200)]
CUDA: fix FA out-of-bounds reads (llama/7479)
Johannes Gäßler [Wed, 22 May 2024 15:58:25 +0000 (17:58 +0200)]
CUDA: fix FA out-of-bounds writes (llama/7465)
Georgi Gerganov [Wed, 22 May 2024 09:36:37 +0000 (12:36 +0300)]
cuda : fix compile warning (llama/7454)
Johannes Gäßler [Wed, 22 May 2024 08:24:29 +0000 (10:24 +0200)]
CUDA: remove incorrect precision check (llama/7454)
Georgi Gerganov [Wed, 22 May 2024 08:01:35 +0000 (11:01 +0300)]
cuda : fix rope + add tests (llama/7452)
* cuda : fix rope pos data
ggml-ci
* ggml : drop mode & 1 == 1 support for ggml_rope
ggml-ci
* ggml : support freq_factors for f16 rope (CPU)
ggml-ci
* tests : add rope tests using frequency factors
ggml-ci
liuwei-git [Tue, 21 May 2024 20:28:32 +0000 (04:28 +0800)]
llama : add phi3 128K model support (llama/7225)
* add phi3 128k support in convert-hf-to-gguf
* add phi3 128k support in cuda
* address build warnings on llama.cpp
* adjust index value in cuda long rope freq factors
* add long rope support in ggml cpu backend
* make freq factors only depend on ctx size
* remove unused rope scaling type 'su' from gguf converter
* fix lint warnings on convert-hf-to-gguf.py
* set the short freq factor when the context size is smaller than the trained context size
* add one line of comments
* metal : support rope freq_factors
* ggml : update ggml_rope_ext API to support freq. factors
* backends : add dev messages to support rope freq. factors
* minor : style
* tests : update to use new rope API
* backends : fix pragma semicolons
* minor : cleanup
* llama : move rope factors from KV header to tensors
* llama : remove tmp assert
* cuda : fix compile warning
* convert : read/write n_head_kv
* llama : fix uninitialized tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 21 May 2024 20:03:42 +0000 (23:03 +0300)]
metal : handle F16 inf values, fix FA partial offload (llama/7434)
ggml-ci
Johannes Gäßler [Tue, 21 May 2024 17:27:12 +0000 (19:27 +0200)]
CUDA: fix unused warning in mmq.cu (llama/7442)
Johannes Gäßler [Tue, 21 May 2024 14:02:12 +0000 (16:02 +0200)]
CUDA: deduplicate mmq code (llama/7397)
Radoslav Gerganov [Mon, 20 May 2024 13:36:55 +0000 (16:36 +0300)]
rpc : track allocated buffers (llama/7411)
* rpc : track allocated buffers
ref: #7407
* rpc : pack rpc_tensor tightly
AidanBeltonS [Mon, 20 May 2024 11:08:23 +0000 (12:08 +0100)]
Update SYCL upscale operation (llama/7321)
* Update SYCL upscale operation
* Formatting
* Remove messages
Herman Semenov [Mon, 20 May 2024 07:33:21 +0000 (07:33 +0000)]
ggml-opencl, llama : use reserve() when the count is already known (llama/7272)
junchao-loongson [Mon, 20 May 2024 07:19:21 +0000 (15:19 +0800)]
ggml : add loongarch lsx and lasx support (llama/6454)
* add loongarch lsx and lasx optimize code
* Add loongarch compilation support to makefile
* revert stb_image.h
* opt bytes_from_nibbles_32 and sum_i16_pairs_float
* fix undeclared
* format code
* update
* update 2
---------
Co-authored-by: Jinyang He <redacted>
Srihari-mcw [Mon, 20 May 2024 02:18:39 +0000 (19:18 -0700)]
Add provisions for windows support for BF16 code including CMake provision for enabling AVX512_BF16 (llama/7258)
0cc4m [Sun, 19 May 2024 15:19:53 +0000 (17:19 +0200)]
Vulkan Embedding Fix (llama/7360)
* Fix empty Vulkan host buffers
Add fp32 fp16 matmul shader
Fix matmul shader alignment
* Remove deprecated tensor->backend uses
* Fix Vulkan validation errors on embedding models with no offloaded layers
* Fix Vulkan llava segfault when not offloading layers
slaren [Sun, 19 May 2024 15:08:46 +0000 (17:08 +0200)]
ggml : fix another case of quants nans (llama/7387)
Johannes Gäßler [Sun, 19 May 2024 14:46:13 +0000 (16:46 +0200)]
ggml: implement quantized KV cache for FA (llama/7372)
slaren [Sun, 19 May 2024 12:19:37 +0000 (14:19 +0200)]
cuda : clear error after buffer allocation failure (llama/7376)
fraxy-v [Sat, 18 May 2024 22:44:42 +0000 (01:44 +0300)]
Capture CUDA logging output (llama/7298)
* logging: output capture in cuda module
* fix compile error
* fix: vsnprintf null-terminates its output, so the string was not being used correctly
* post review
* Update llama.cpp
Co-authored-by: slaren <redacted>
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Sat, 18 May 2024 10:40:39 +0000 (13:40 +0300)]
android : use "ci-android" branch for CI (llama/7341)
* android : use "ci-android" branch for CI
* ggml : disable SIMD exp and silu for 32-bit ARM
ggml-ci
* android : do not fetch, use add_subdirectory instead
* cmake : provide binary dir
Johannes Gäßler [Sat, 18 May 2024 10:36:25 +0000 (12:36 +0200)]
CUDA: deduplicate FlashAttention code (llama/7352)
Engininja2 [Sat, 18 May 2024 08:05:17 +0000 (02:05 -0600)]
cuda : add half2 __shfl_xor() for ROCm 5.5 (llama/7263)
0cc4m [Sat, 18 May 2024 06:10:58 +0000 (08:10 +0200)]
Update and fix Vulkan soft_max and argsort implementations (llama/7237)
* Update and fix Vulkan softmax implementation
* Update and fix Vulkan argsort implementation
slaren [Sat, 18 May 2024 00:39:54 +0000 (02:39 +0200)]
ggml : fix quants nans when all the group weights are very close to zero (llama/7313)
Johannes Gäßler [Fri, 17 May 2024 16:54:52 +0000 (18:54 +0200)]
CUDA: faster large batch FA without tensor cores (llama/7314)
Radoslav Gerganov [Fri, 17 May 2024 14:25:44 +0000 (17:25 +0300)]
rpc : set SO_REUSEADDR for the server socket (llama/7320)
ref: #7293
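The standard POSIX idiom, applied before bind(), so the server can restart while
the old socket still lingers in TIME_WAIT:

    #include <sys/socket.h>

    static int enable_reuseaddr(int sockfd) {
        int flag = 1;
        return setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &flag, sizeof(flag));
    }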
Herman Semenov [Fri, 17 May 2024 07:08:49 +0000 (07:08 +0000)]
ggml-quants, llama : removed excess checks (llama/7274)
Justine Tunney [Fri, 17 May 2024 06:58:52 +0000 (02:58 -0400)]
ggml : rewrite silu and softmax for cpu (llama/7154)
This change upstreams llamafile's vectorized expf() functions. This lets
us compute softmax and silu more accurately than the short[65536] lookup
table that GGML previously used to make this operation go faster. We can
support aarch64 and sse2+ with a worst-case rounding error of 2 ulp. It makes
make -j8 tests && ./tests/test-backend-ops -o SOFT_MAX -b CPU perf
go 1.5x faster for SSE2+FMA, 1.9x faster for AVX2+FMA and 2.1x on AVX512.
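A scalar sketch of the underlying technique (the upstreamed code vectorizes this
for SSE2/AVX2/AVX512/NEON and uses a tuned minimax polynomial rather than this
simple Taylor fit):

    #include <math.h>

    // exp(x) = 2^k * exp(r), where x = k*ln2 + r and |r| <= ln2/2.
    // Special cases (NaN, +/-inf, overflow) are omitted here.
    static float expf_sketch(float x) {
        const float LOG2E = 1.442695041f;
        const float LN2   = 0.6931471806f;
        float k = roundf(x*LOG2E);
        float r = x - k*LN2;
        float p = 1.0f + r*(1.0f + r*(0.5f + r*(1.0f/6 + r*(1.0f/24))));
        return ldexpf(p, (int) k);
    }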
Radoslav Gerganov [Wed, 15 May 2024 12:29:07 +0000 (15:29 +0300)]
rpc : add command line arg for specifying backend memory
ref: #7293
Max Krasnyansky [Thu, 16 May 2024 02:47:36 +0000 (19:47 -0700)]
Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (llama/7191)
* logging: add proper checks for clang to avoid errors and warnings with VA_ARGS
* build: add CMake Presets and toolchain files for Windows ARM64
* matmul-int8: enable matmul-int8 with MSVC and fix Clang warnings
* ci: add support for optimized Windows ARM64 builds with MSVC and LLVM
* matmul-int8: fixed typos in q8_0_q8_0 matmuls
Co-authored-by: Georgi Gerganov <redacted>
* matmul-int8: remove unnecessary casts in q8_0_q8_0
---------
Co-authored-by: Georgi Gerganov <redacted>
kunnis [Wed, 15 May 2024 17:59:12 +0000 (12:59 -0500)]
ggml : use dynamic thread scheduling for matrix multiplication (llama/6915)
* Just reordering some structs.
* Adding in the calls to mm_pause
* Passing around the state
* Renaming and moving a bunch of variables around.
* Extracting the logic to its own function.
* Moving some variable definitions into the chunk function.
* Moving some variables around
* moving src1_cont inside
* Moving row_size
* adding the current_chunk
* Reorg the code.
* Formatting to match the orig patch
* starting to setup the chunking variables
* Starting the buildup of the loop
* The yield shouldn't be necessary.
* adding the looping structure based on the chunk configuration.
* Add in the re-chunking code.
* Making it much more likely to rechunk.
* disable resizing if numa is enabled.
* Updating comments with what we've learned.
* Fix formatting
* Couple more formatting fixes.
* More style fixes.
* Fix Warnings
* Going with unused because there's conditional logic that needs it.
* Update ggml.c
* Update ggml.c
---------
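The core idea is work-stealing-style chunking: threads pull the next chunk index
from a shared atomic counter instead of using a fixed per-thread split. A minimal
sketch (names hypothetical):

    #include <stdatomic.h>

    static atomic_int g_current_chunk; // reset to 0 before each matmul

    void worker(int n_chunks, void (*process_chunk)(int)) {
        for (;;) {
            int chunk = atomic_fetch_add(&g_current_chunk, 1);
            if (chunk >= n_chunks) {
                break;
            }
            process_chunk(chunk); // fast threads keep pulling work
        }
    }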
agray3 [Wed, 15 May 2024 13:44:49 +0000 (14:44 +0100)]
Avoid unnecessarily disabling CUDA graphs (llama/7302)
As discussed in PR #6766, CUDA graphs were being disabled in the presence of long prompts.
This fixes the issue by preventing the consecutive update counter from incrementing
unnecessarily for tokens in which CUDA graphs are disabled due to batch size > 1.
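Sketched as a heuristic (all names hypothetical), the change makes batch-vetoed
tokens leave the consecutive-update counter untouched:

    #define MAX_CONSECUTIVE_UPDATES 4

    int should_use_cuda_graph(int batch_size, int params_changed, int * consecutive_updates) {
        if (batch_size > 1) {
            return 0; // graphs off for this token, but the counter is not bumped
        }
        if (params_changed) {
            (*consecutive_updates)++;
        } else {
            *consecutive_updates = 0;
        }
        return *consecutive_updates < MAX_CONSECUTIVE_UPDATES;
    }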
slaren [Wed, 15 May 2024 13:08:48 +0000 (15:08 +0200)]
ggml : tag ggml_tensor::backend as deprecated (llama/7290)
AidanBeltonS [Wed, 15 May 2024 12:26:30 +0000 (13:26 +0100)]
Add missing " (llama/7303)
John Balis [Wed, 15 May 2024 08:52:33 +0000 (03:52 -0500)]
ggml : add `ggml_upscale_ext` (ggml/814)
* initial commit with CPU implementation of upscale to shape and test, cuda implementation next
* experimental commit to see if dst shape is correct
* test version
* test
* removed unnecessary params
* refactor
* fixed tests
* ggml : metal impl + cleanup + sycl dev warnings
* patched ggml_upscale cuda op to handle non-contiguous tensors, added test for non-contiguous behavior
* metal : fix upscale op to support nb00 + style
---------
Co-authored-by: Georgi Gerganov <redacted>
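A hypothetical 2D nearest-neighbor reference for the upscale-to-shape semantics
(the ggml op works on 4D tensors and has CUDA/Metal/SYCL implementations):

    // Resize src (sw x sh) to dst (dw x dh) by nearest-neighbor sampling.
    void upscale_nearest_f32(const float * src, int sw, int sh,
                             float * dst, int dw, int dh) {
        for (int y = 0; y < dh; y++) {
            for (int x = 0; x < dw; x++) {
                int sx = x*sw/dw;
                int sy = y*sh/dh;
                dst[y*dw + x] = src[sy*sw + sx];
            }
        }
    }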
Georgi Gerganov [Sun, 16 Jun 2024 09:41:42 +0000 (12:41 +0300)]
scripts : update sync
Borislav Stanimirov [Thu, 13 Jun 2024 10:16:07 +0000 (13:16 +0300)]
whisper : use ggml-cuda in mel calc, set appropriate device (#2236)
* whisper : use ggml-cuda in mel calc, set appropriate device
* whisper : forbid cuda mel calc on devices with compute < 600, workaround for #2230
Georgi Gerganov [Tue, 11 Jun 2024 16:14:38 +0000 (19:14 +0300)]
cuda : fix HIPBLAS build (#2234)
Georgi Gerganov [Tue, 11 Jun 2024 14:39:01 +0000 (17:39 +0300)]
cuda : fix bounds check for src0 rows in MMVQ kernel (#2231)
* cuda : fix bounds check for src0 rows in MMVQ kernel
* Update ggml-cuda/mmvq.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Tue, 11 Jun 2024 14:21:30 +0000 (17:21 +0300)]
ci : fix CUDA builds (#2232)
Borislav Stanimirov [Mon, 10 Jun 2024 18:51:32 +0000 (21:51 +0300)]
whisper : auto-grow working areas for mel_calc_cuda (#2227)
* whisper : auto-grow working areas for mel_calc_cuda, fixes #2226
* whisper : only calculate mel spectrogram on GPU if audio is <= 5 min
Georgi Gerganov [Mon, 10 Jun 2024 07:59:36 +0000 (10:59 +0300)]
whisper : free whisper_mel instances (#2220)
Georgi Gerganov [Thu, 6 Jun 2024 15:51:36 +0000 (18:51 +0300)]
whisper : whisper_state/backend fixes (#2217)
* whisper : fixes
* ci : WHISPER_CUBLAS -> WHISPER_CUDA
Borislav Stanimirov [Thu, 6 Jun 2024 13:20:46 +0000 (16:20 +0300)]
whisper : calculate mel spectrogram directly into a ggml_tensor (#2208)
* whisper : calculate mel spectrogram directly into a ggml_tensor
* whisper : remove unused temp buffer from state
* whisper : fix not initializing wstate.embd_enc
Borislav Stanimirov [Tue, 4 Jun 2024 06:32:23 +0000 (09:32 +0300)]
whisper : add CUDA-specific computation mel spectrograms (#2206)
* whisper : use polymorphic class to calculate mel spectrogram
* whisper : add cuda-specific mel spectrogram calculation
* whisper : conditionally compile cufftGetErrorString to avoid warnings
* build : add new files to makefile
* ruby : add new files to conf script
* build : fix typo in makefile
* whisper : suppress cub warning for deprecated C++ std in whisper-mel-cuda
Borislav Stanimirov [Fri, 31 May 2024 08:37:29 +0000 (11:37 +0300)]
whisper : remove `speed_up` and `phase_vocoder*` functions (#2198)
* whisper : fix cast warning
* whisper : remove phase_vocoder functions, ref #2195
* whisper : remove speed_up from whisper_full_params, closes #2195
Martin Delille [Thu, 30 May 2024 12:43:28 +0000 (14:43 +0200)]
readme : add conan badge (#2196)
* Add conan badge
* Fix markdown formatting
Carlos Zoido [Thu, 30 May 2024 12:06:15 +0000 (14:06 +0200)]
readme : add install instructions for Conan (#2189)
Borislav Stanimirov [Wed, 29 May 2024 16:09:21 +0000 (19:09 +0300)]
whisper: use global cache for sin/cos vals and Hann window (#2194)
- also rename Hanning to Hann, as the window is named after Julius von Hann (per Wikipedia)
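For reference, the Hann window is w[n] = 0.5*(1 - cos(2*pi*n/N)); a sketch of a
process-wide cache (the names and the 400-sample FFT size are assumptions based
on whisper.cpp's defaults):

    #include <math.h>

    #define N_FFT 400

    static float g_hann[N_FFT];
    static int   g_hann_init = 0;

    static const float * hann_window(void) {
        if (!g_hann_init) {
            for (int n = 0; n < N_FFT; n++) {
                g_hann[n] = 0.5f*(1.0f - cosf(2.0f*3.14159265f*n/N_FFT));
            }
            g_hann_init = 1;
        }
        return g_hann;
    }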
Georgi Gerganov [Mon, 27 May 2024 07:35:09 +0000 (10:35 +0300)]
release : v1.6.2
Georgi Gerganov [Mon, 27 May 2024 07:20:25 +0000 (10:20 +0300)]
Revert "whisper : remove extra backend instance (huh?)" (#2182)
This reverts commit 4caa64b73ed4c0e71097c865b0f6a9c136b007c6.
Daniel Valdivia [Sat, 25 May 2024 07:46:22 +0000 (00:46 -0700)]
server : fix typo (#2181)
A simple comment typo, PR can be dismissed
Todd [Wed, 22 May 2024 20:02:52 +0000 (16:02 -0400)]
ruby : update bindings (#2154)
* update library files
* update whispercpp
* not needed for gem
Georgi Gerganov [Tue, 21 May 2024 15:44:37 +0000 (18:44 +0300)]
release : v1.6.1
William Tambellini [Tue, 21 May 2024 15:31:41 +0000 (08:31 -0700)]
examples : add support for decoding input with ffmpeg (Linux) (#2133)
- search for ffmpeg libs/headers at cmake time
- added ffmpeg-transcode.cpp into libcommon if ffmpeg on
- hooked ffmpeg transcoding in common read_wav(...)
- passed test:
./main -m ggml-base.en.bin -f samples/jfk.mp3
Pedro Probst [Mon, 20 May 2024 06:08:48 +0000 (03:08 -0300)]
node : add flash_attn param (#2170)
Tamotsu Takahashi [Sun, 19 May 2024 08:49:26 +0000 (17:49 +0900)]
ci: Update build.yml to suppress warnings about node.js versions (#2166)
* Update actions to suppress warnings about old node.js
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
* Update actions/upload-artifact, specify android cmdline-tools-version
* Use Java 20
Gradle 8.1 complains about Java 21:
https://docs.gradle.org/current/userguide/compatibility.html
Georgi Gerganov [Wed, 15 May 2024 06:59:48 +0000 (09:59 +0300)]
release : v1.6.0
Georgi Gerganov [Wed, 15 May 2024 06:38:19 +0000 (09:38 +0300)]
whisper : use flash attention (#2152)
* whisper : use flash attention in the encoder
* whisper : add kv_pad
* whisper : remove extra backend instance (huh?)
* whisper : use FA for cross-attention
* whisper : use FA for self-attention
* whisper : simplify encoder FA
* whisper : add flash_attn runtime parameter
* scripts : add bench log
* scripts : add M1 Pro bench log
petterreinholdtsen [Tue, 14 May 2024 18:32:41 +0000 (20:32 +0200)]
talk-llama : reject runs without required arguments (#2153)
* Extended talk-llama example to reject runs without required arguments.
Print warning and exit if models are not specified on the command line.
* Update examples/talk-llama/talk-llama.cpp
* Update examples/talk-llama/talk-llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 14 May 2024 16:16:32 +0000 (19:16 +0300)]
sync : ggml