git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Georgi Gerganov [Tue, 9 Jan 2024 08:42:06 +0000 (10:42 +0200)]
ggml : fix vld1q_s8_x4 32-bit compat (llama/4828)
* ggml : fix vld1q_s8_x4 32-bit compat
ggml-ci
* ggml : fix 32-bit ARM compat (cont)
ggml-ci
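For context, compat shims of this kind typically emulate the missing multi-vector load with plain vld1q_s8 loads on toolchains (e.g. older 32-bit ARM GCC) that lack the intrinsic. A minimal sketch; the exact name and placement in ggml are assumptions:

```cpp
#include <arm_neon.h>

// Fallback for toolchains that do not provide the vld1q_s8_x4 intrinsic:
// perform four plain 16-byte loads into the x4 structure.
static inline int8x16x4_t ggml_vld1q_s8_x4(const int8_t * ptr) {
    int8x16x4_t res;
    res.val[0] = vld1q_s8(ptr +  0);
    res.val[1] = vld1q_s8(ptr + 16);
    res.val[2] = vld1q_s8(ptr + 32);
    res.val[3] = vld1q_s8(ptr + 48);
    return res;
}
```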
Johannes Gäßler [Tue, 9 Jan 2024 07:58:55 +0000 (08:58 +0100)]
CUDA: faster softmax via shared memory + fp16 math (llama/4742)
Georgi Gerganov [Thu, 11 Jan 2024 07:34:59 +0000 (09:34 +0200)]
metal : fix deprecation warning (ggml/690)
Timothy Cronin [Thu, 11 Jan 2024 07:27:48 +0000 (02:27 -0500)]
ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693)
Jack Mousseau [Wed, 10 Jan 2024 14:19:19 +0000 (06:19 -0800)]
metal : wrap each operation in debug group (ggml/690)
leejet [Wed, 10 Jan 2024 13:13:42 +0000 (21:13 +0800)]
ggml : change GGML_MAX_NAME at compile time (ggml/682)
* change GGML_MAX_NAME to 128
* allow controlling the value of GGML_MAX_NAME through external macro definitions
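The override mechanism described above is presumably the usual guarded define, along these lines:

```cpp
// Default of 128, overridable from the build system,
// e.g. with -DGGML_MAX_NAME=256.
#ifndef GGML_MAX_NAME
#define GGML_MAX_NAME 128
#endif
```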
Halalaluyafail3 [Tue, 9 Jan 2024 16:16:37 +0000 (11:16 -0500)]
Fix execlp call (ggml/689)
NULL can be an integer constant expression with the value zero; in that case the behavior is undefined, because an argument of the wrong type is passed through the variable arguments.
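Concretely, the variadic argument list of execlp must be terminated by a null pointer of pointer type; a minimal illustration of the correct call:

```cpp
#include <unistd.h>

// The sentinel must be cast: on ABIs where NULL expands to a plain
// integer 0, passing it through varargs is undefined behavior.
int spawn_ls(void) {
    return execlp("ls", "ls", "-l", (char *)NULL);
}
```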
Kawrakow [Mon, 8 Jan 2024 15:02:32 +0000 (16:02 +0100)]
SOTA 2-bit quants (llama/4773)
* iq2_xxs: basics
* iq2_xxs: scalar and AVX2 dot products
Needed to change Q8_K to have quants in the -127...127 range, else the IQ2_XXS AVX implementation becomes very awkward (a plausible reading of this constraint is sketched after this entry). The alternative would have been to use Q8_0 instead. Perhaps I'll change it later; for now this is what we have.
* iq2_xxs: ARM_NEON dot product
Somehow strangely slow (112 ms/token).
* iq2_xxs: WIP Metal
Dequantize works, something is still wrong with the
dot product.
* iq2_xxs: Metal dot product now works
We have
PP-512 = 475 t/s
TG-128 = 47.3 t/s
Not the greatest performance, but not complete garbage either.
* iq2_xxs: slightly faster dot product
TG-128 is now 48.4 t/s
* iq2_xxs: slightly faster dot product
TG-128 is now 50.9 t/s
* iq2_xxs: even faster Metal dot product
TG-128 is now 54.1 t/s.
Strangely enough, putting the signs lookup table
into shared memory has a bigger impact than the
grid values being in shared memory.
* iq2_xxs: dequantize CUDA kernel - fix conflict with master
* iq2_xxs: quantized CUDA dot product (MMVQ)
We get TG-128 = 153.1 t/s
* iq2_xxs: slightly faster CUDA dot product
TG-128 is now at 155.1 t/s.
* iq2_xxs: add to llama ftype enum
* iq2_xxs: fix MoE on Metal
* Fix missing MMQ ops when on hipBLAS
I had put the ggml_supports_mmq call at the wrong place.
* Fix bug in quantize_row_iq2_xxs
The 0.25f factor was missing.
Great detective work by @ggerganov!
* Fixing tests
* PR suggestion
---------
Co-authored-by: Iwan Kawrakow <redacted>
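On the -127...127 note above: one plausible reading (an assumption, not taken from the actual kernels) is that IQ2_XXS applies separately stored signs to the q8 values, and negating an int8 only stays in range when -128 never occurs. A scalar sketch of just that constraint:

```cpp
#include <cstdint>

// Scalar illustration only, not the AVX2 kernel: applying a separately
// stored sign to a q8 quant by negation. With quants clamped to
// [-127, 127] the negation can never overflow int8; -128 would.
static inline int8_t apply_sign(int8_t q, bool negative) {
    return negative ? (int8_t)(-q) : q;
}
```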
Johannes Gäßler [Sun, 7 Jan 2024 16:24:08 +0000 (17:24 +0100)]
CUDA: fixed redundant value dequantization (llama/4809)
Konstantin Zhuravlyov [Sun, 7 Jan 2024 06:52:42 +0000 (01:52 -0500)]
ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (llama/4787)
Georgi Gerganov [Fri, 5 Jan 2024 13:18:21 +0000 (15:18 +0200)]
ggml : do not sched_yield when calling BLAS (llama/4761)
* ggml : do not sched_yield when calling BLAS
ggml-ci
* ggml : fix do_yield logic
ggml-ci
* ggml : simplify do_yield logic
ggml-ci
Georgi Gerganov [Thu, 4 Jan 2024 08:12:26 +0000 (10:12 +0200)]
ggml : include stdlib.h before intrin.h (llama/4736)
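A trivial illustration of the include order the commit title describes:

```cpp
// The fix is purely about ordering: on the affected toolchains,
// stdlib.h must come before intrin.h.
#include <stdlib.h>
#include <intrin.h>
```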
Alexandru Mariuti [Wed, 10 Jan 2024 16:12:06 +0000 (17:12 +0100)]
swift : checkout ggml commit instead of branch (#1750)
RhinoDevel [Wed, 10 Jan 2024 14:15:28 +0000 (15:15 +0100)]
talk-llama : add optional Piper TTS support (#1749)
Add a commented-out command to optionally use Piper (https://github.com/rhasspy/piper) as the text-to-speech solution for the talk-llama example. Piper voices sound almost like real people, which is a big improvement over something like espeak.
Emmanuel Schmidbauer [Mon, 8 Jan 2024 22:39:51 +0000 (17:39 -0500)]
server : add request path option (#1741)
Georgi Gerganov [Mon, 8 Jan 2024 14:41:28 +0000 (16:41 +0200)]
main : add cli option to disable system prints (#1740)
Georgi Gerganov [Sun, 7 Jan 2024 11:35:14 +0000 (13:35 +0200)]
server : fix server temperature + add temperature_inc (#1729)
* server : fix server temperature + add temperature_inc
* server : change dashes to underscores in parameter names
Georgi Gerganov [Sat, 6 Jan 2024 15:22:57 +0000 (17:22 +0200)]
talk-llama : sync latest llama.cpp
Georgi Gerganov [Fri, 5 Jan 2024 15:11:27 +0000 (17:11 +0200)]
release : v1.5.4
Erik Scholz [Fri, 5 Jan 2024 15:00:00 +0000 (16:00 +0100)]
fix : cuda order of synchronization when setting a buffer (ggml/679)
* fix : cuda order of synchronization when setting a buffer
* also sync before memcpy
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Fri, 5 Jan 2024 14:30:52 +0000 (16:30 +0200)]
metal : switch back to default.metallib (ggml/681)
ggml-ci
Georgi Gerganov [Fri, 5 Jan 2024 13:36:04 +0000 (15:36 +0200)]
ggml : fix q2_k bpw in comments (ggml/680)
Yajing Tang [Thu, 4 Jan 2024 14:28:30 +0000 (06:28 -0800)]
coreml : fix ANE optimized encoder (#1716)
Georgi Gerganov [Thu, 4 Jan 2024 12:47:42 +0000 (14:47 +0200)]
whisper.swiftui : add .gitignore
Georgi Gerganov [Thu, 4 Jan 2024 11:37:25 +0000 (13:37 +0200)]
whisper : reset the "batched" timings (#1721)
Georgi Gerganov [Wed, 3 Jan 2024 17:36:33 +0000 (19:36 +0200)]
release : v1.5.3
Ashraful Islam [Wed, 3 Jan 2024 17:30:26 +0000 (11:30 -0600)]
swift : update Package.swift to use ggml as package dependency (#1701)
* updates Package.swift to use ggml as dependency
* cleans up the Package.swift file by removing redundant source files
* updates ggml url src to ggerganov
Finn Voorhees [Wed, 3 Jan 2024 13:39:43 +0000 (08:39 -0500)]
ggml : add error handling to graph_compute (#1714)
Georgi Gerganov [Wed, 3 Jan 2024 12:18:46 +0000 (14:18 +0200)]
cuda : simplify expression
Co-authored-by: slaren <redacted>
Georgi Gerganov [Wed, 3 Jan 2024 11:01:44 +0000 (13:01 +0200)]
cuda : mark I16 and I32 ops as unsupported
ggml-ci
Georgi Gerganov [Wed, 3 Jan 2024 09:35:46 +0000 (11:35 +0200)]
metal : add kernel_get_rows_i32
ggml-ci
Georgi Gerganov [Tue, 2 Jan 2024 19:07:47 +0000 (21:07 +0200)]
metal : optimize ggml_mul_mat_id (faster Mixtral PP) (llama/4725)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
* metal : optimizing ggml_mul_mat_id (wip)
* metal : minor fix
* metal : opt mul_mm_id
Georgi Gerganov [Tue, 2 Jan 2024 08:57:44 +0000 (10:57 +0200)]
metal : enable shader debugging (cmake option) (llama/4705)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
ggml-ci
Georgi Gerganov [Sun, 31 Dec 2023 09:43:31 +0000 (11:43 +0200)]
ggml : add ggml_vdotq_s32 alias (llama/4715)
ggml-ci
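The alias presumably selects the native SDOT instruction where available and a widening-multiply fallback elsewhere; a sketch along those lines (the name follows the commit title, the fallback details are assumptions):

```cpp
#include <arm_neon.h>

#if defined(__ARM_FEATURE_DOTPROD)
    // native 8-bit dot product
    #define ggml_vdotq_s32(acc, a, b) vdotq_s32(acc, a, b)
#else
    // fallback: widen to i16 products, pairwise-add into i32 lanes
    static inline int32x4_t ggml_vdotq_s32(int32x4_t acc, int8x16_t a, int8x16_t b) {
        const int16x8_t p0 = vmull_s8(vget_low_s8 (a), vget_low_s8 (b));
        const int16x8_t p1 = vmull_s8(vget_high_s8(a), vget_high_s8(b));
        return vaddq_s32(acc, vaddq_s32(vpaddlq_s16(p0), vpaddlq_s16(p1)));
    }
#endif
```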
Johannes Gäßler [Sat, 30 Dec 2023 12:52:01 +0000 (13:52 +0100)]
CUDA: fixed tensor cores not being used on RDNA3 (llama/4697)
automaticcat [Sat, 30 Dec 2023 08:07:48 +0000 (15:07 +0700)]
ggml : add ggml_cpu_has_avx_vnni() (llama/4589)
* feat: add avx_vnni based on intel documents
* ggml: add avx vnni based on intel document
* llama: add avx vnni information display
* docs: add more details about using oneMKL and oneAPI for intel processors
* docs: add more details about using oneMKL and oneAPI for intel processors
* docs: add more details about using oneMKL and oneAPI for intel processors
* docs: add more details about using oneMKL and oneAPI for intel processors
* docs: add more details about using oneMKL and oneAPI for intel processors
* Update ggml.c
Fix indentation update
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Fri, 29 Dec 2023 22:12:53 +0000 (23:12 +0100)]
CUDA: fix tensor core logic for Pascal and HIP (llama/4682)
hydai [Fri, 29 Dec 2023 16:31:19 +0000 (00:31 +0800)]
cuda: fix vmm oom issue on NVIDIA AGX Orin (llama/4687)
Signed-off-by: hydai <redacted>
Guillaume Wenzek [Fri, 29 Dec 2023 17:07:03 +0000 (18:07 +0100)]
ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639)
* add more int ops
* ggml_compute_forward_dup_bytes
* add tests
* PR comments
* tests : minor indentations
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 3 Jan 2024 09:42:42 +0000 (11:42 +0200)]
scripts : fix sync order + metal sed
Andreu Huguet [Tue, 2 Jan 2024 16:50:04 +0000 (17:50 +0100)]
examples : fix WASM Stack Overflow (#1713)
Fix for the following error, which appears when running the WASM example with newer versions:
"""
RuntimeError: Aborted(Stack overflow! Stack cookie has been overwritten at 0x12be2b10, expected hex dwords 0x89BACDFE and 0x2135467, but received 0x00000000 0x00000000)
"""
bobqianic [Sat, 30 Dec 2023 21:12:31 +0000 (21:12 +0000)]
docker : fix the publishing of the CUDA Docker image (#1704)
Georgi Gerganov [Fri, 29 Dec 2023 13:00:46 +0000 (15:00 +0200)]
scripts : do not sync commits from this repo
Tamotsu Takahashi [Fri, 29 Dec 2023 10:23:27 +0000 (19:23 +0900)]
ci : build with CLBlast + ggml-opencl use GGML_API (#1576)
* Build with CLBlast
* Declare GGML_API
After rebasing, examples/talk-llama failed:
"D:\a\whisper.cpp\whisper.cpp\build\ALL_BUILD.vcxproj" (build target) (1) ->
"D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj" (default target) (14) ->
(Link target) ->
llama.obj : error LNK2019: unresolved external symbol ggml_cl_free_data referenced in function "public: __cdecl llama_model::~llama_model(void)" (??1llama_model@@QEAA@XZ) [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
llama.obj : error LNK2019: unresolved external symbol ggml_cl_transform_tensor referenced in function "public: void __cdecl llama_model_loader::load_all_data(struct ggml_context *,void (__cdecl*)(float,void *),void *,struct llama_mlock *)" (?load_all_data@llama_model_loader@@QEAAXPEAUggml_context@@P6AXMPEAX@Z1PEAUllama_mlock@@@Z) [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
D:\a\whisper.cpp\whisper.cpp\build\bin\Release\talk-llama.exe : fatal error LNK1120: 2 unresolved externals [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
bobqianic [Fri, 29 Dec 2023 09:38:35 +0000 (09:38 +0000)]
whisper : replace `tensor->n_dims` with `ggml_n_dims(tensor)` (#1694)
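The mechanical change is presumably a one-for-one swap of the removed struct field for the accessor:

```cpp
#include "ggml.h"

// before: code read the struct field directly (removed upstream)
//     int n = tensor->n_dims;
// after: go through the accessor instead
static int tensor_rank(const struct ggml_tensor * tensor) {
    return ggml_n_dims(tensor);
}
```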
Georgi Gerganov [Fri, 29 Dec 2023 09:30:47 +0000 (11:30 +0200)]
sync : ggml (VMM, sync-ggml-am, dotprod ARM fixes, CUDA fixes) (#1691)
* scripts : add sync-ggml-am.sh
* sync : ggml (VMM, ARM dot prod fix, etc.)
* build : fix CUDA build
* ggml : fix some mul mat cases + add tests for src1 F16
https://github.com/ggerganov/ggml/commit/dbd02958fa4f46898f68ca29c27ddcdc58a06f98
Dimo [Fri, 29 Dec 2023 09:14:32 +0000 (10:14 +0100)]
download : fix large q5 model name (#1695)
Fixed a typo in the large-v3-q5-0 model name to match the HF link.
bobqianic [Sat, 23 Dec 2023 12:02:58 +0000 (12:02 +0000)]
whisper : Replace WHISPER_PRINT_DEBUG with WHISPER_LOG_DEBUG (#1681)
Georgi Gerganov [Fri, 22 Dec 2023 15:53:39 +0000 (17:53 +0200)]
sync : ggml (ggml_scale, ggml_row_size, etc.) (#1677)
* sync : ggml
* sync : llama.cpp
* talk-llama : fix obsolete param
* ggml-alloc : fix ggml_tallocr_is_own
* talk.wasm : update to new ggml
* ggml : fix type punning in ggml_scale
* ggml : cuda jetson + arm quants warnings
Chaoqun [Fri, 22 Dec 2023 11:16:02 +0000 (19:16 +0800)]
docker : Dockerize whisper.cpp (#1674)
* build: add dockerfile for ci
* ci: add action to build/push docker image
* fix: lowercase repository to fix ci
* ci: update cuBLAS flag
* build: install curl and ffmpeg in image
* docs: add docker section
* fix: improve args check when downloading the model
bobqianic [Thu, 21 Dec 2023 22:39:46 +0000 (22:39 +0000)]
CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 (#1672)
bobqianic [Thu, 21 Dec 2023 20:48:52 +0000 (20:48 +0000)]
examples : Revert CMakeLists.txt for talk-llama (#1669)
bobqianic [Thu, 21 Dec 2023 13:44:04 +0000 (13:44 +0000)]
cmake : set default CUDA architectures (#1667)
Alfredo Montesinos [Tue, 19 Dec 2023 10:40:14 +0000 (04:40 -0600)]
bench.py : add different large models (#1655)
Add the different large v1, v2, v3 models to the benchmark.
Georgi Gerganov [Thu, 14 Dec 2023 20:00:47 +0000 (22:00 +0200)]
wchess : update README.md
Georgi Gerganov [Thu, 14 Dec 2023 15:56:11 +0000 (17:56 +0200)]
release : v1.5.2
Georgi Gerganov [Thu, 14 Dec 2023 15:51:14 +0000 (17:51 +0200)]
wchess : update readme
fraxy-v [Thu, 14 Dec 2023 13:58:26 +0000 (15:58 +0200)]
wchess : whisper assisted chess (#1595)
* wchess: whisper assisted chess
* wchess: fix allowed moves in check
* wchess: touchstart, touchend events
* wchess: css, disabled button
* wchess : html touches
* wchess : minor fixes and code style
* wchess : bump encoder context to 1280
* wchess : index.html
* wchess : fix CI warnings
* wchess : add array header
* wchess : build static library
* wchess : display grammar
* wchess : update UX
* wchess : add comment
* wchess : add README
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 13 Dec 2023 19:55:03 +0000 (21:55 +0200)]
sync : ggml (Metal fixes, new ops, tests) (#1633)
* sync : ggml (Metal fixes, new ops, tests)
* cuda : fix bin bcast when src1 and dst have different types
Kreijstal [Tue, 12 Dec 2023 11:35:00 +0000 (12:35 +0100)]
cmake : target Windows 8 or above for PrefetchVirtualMemory in talk-llama (#1617)
Since we use PrefetchVirtualMemory, we specify that we target Windows 8 or above; otherwise other compilers will refuse to use the PrefetchVirtualMemory API (I understand it is loaded dynamically, but the header definition has this limitation).
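For reference, PrefetchVirtualMemory is only declared when the Windows API target is 8 or newer, so the fix boils down to raising _WIN32_WINNT before the Windows headers are pulled in (or setting the equivalent compile definition in CMake); a sketch:

```cpp
// PrefetchVirtualMemory is declared only for Windows 8+ targets
// (_WIN32_WINNT >= 0x0602), so raise the API level first; in CMake the
// equivalent is a compile definition such as _WIN32_WINNT=0x0602.
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0602 // Windows 8
#endif
#include <windows.h>

static void prefetch_range(void * addr, size_t len) {
    WIN32_MEMORY_RANGE_ENTRY range = { addr, len };
    PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
}
```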
Kreijstal [Sun, 10 Dec 2023 17:47:52 +0000 (18:47 +0100)]
cmake : Fix bug in httplib.h for mingw (#1615)
Fix bug in httplib.h for mingw; see https://github.com/yhirose/cpp-httplib/issues/1669
Finn Voorhees [Fri, 8 Dec 2023 11:50:50 +0000 (11:50 +0000)]
metal : fix `ggml_metal_log` vargs (#1606)
Georgi Gerganov [Fri, 8 Dec 2023 11:43:37 +0000 (13:43 +0200)]
whisper.objc : disable timestamps for real-time transcription
Georgi Gerganov [Fri, 8 Dec 2023 11:43:03 +0000 (13:43 +0200)]
whisper : more debug messages + fix fallback logic
Georgi Gerganov [Fri, 8 Dec 2023 11:39:32 +0000 (13:39 +0200)]
metal : fix soft_max kernel src1 argument (#1602)
Georgi Gerganov [Thu, 7 Dec 2023 20:27:19 +0000 (22:27 +0200)]
sync : ggml (new ops, new backend, etc) (#1602)
* sync : ggml (new ops, new backend, etc)
* whisper : remove obsolete broadcasting code
* ggml : remove backend self-registers + fix ggml_concat + n_task logic
* metal : fix assert
* metal : print resource path
* whisper : fix bug if metal init fails
Oleg Sidorov [Tue, 5 Dec 2023 21:01:45 +0000 (22:01 +0100)]
server : pass max-len argument to the server (#1574)
This commit fixes the missing parameter binding for max-len between the input
arguments and wparams.
Finn Voorhees [Tue, 5 Dec 2023 01:14:26 +0000 (01:14 +0000)]
ios : Remove `#if arch(arm)` check for using Metal (#1561)
Digipom [Sun, 3 Dec 2023 14:15:28 +0000 (09:15 -0500)]
ggml : Fix 32-bit compiler warning (#1575)
Warning about %lu on 32-bit targets. Updated to %zu.
Georgi Gerganov [Fri, 1 Dec 2023 21:57:52 +0000 (23:57 +0200)]
ggml : re-enable blas for src0 != F32 (#1583)
Aleksander Andrzejewski [Thu, 30 Nov 2023 23:44:26 +0000 (00:44 +0100)]
Server : Add support for .vtt format to Whisper server (#1578)
- The code comes from examples/main
- The output mimetype is set to text/vtt
Example usage:
```shell
curl 127.0.0.1:8080/inference \
-H "Content-Type: multipart/form-data" \
-F file="@samples/jfk.wav" \
-F temperature="0.2" \
-F response-format="vtt"
```
Oleg Sidorov [Tue, 28 Nov 2023 13:42:58 +0000 (14:42 +0100)]
server : backport .srt output format (#1565)
This commit adds a support of .srt format to Whisper server. The code is
effectively backported from examples/main. The output mimetype is set to
application/x-subrip as per https://en.wikipedia.org/wiki/SubRip.
Example usage:
curl 127.0.0.1:8080/inference \
-H "Content-Type: multipart/form-data" \
-F file="@<file-path>" \
-F temperature="0.2" \
-F response-format="srt"
Gregor Jasny [Tue, 28 Nov 2023 13:41:49 +0000 (14:41 +0100)]
cmake : install required ggml.h header (#1568)
Kasumi [Tue, 28 Nov 2023 09:55:20 +0000 (17:55 +0800)]
server : set default CORS headers to allow all (#1567)
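The server example is built on cpp-httplib, where default headers can be installed once for every response; a sketch of what an allow-all setup could look like (the exact header set used by the server is an assumption):

```cpp
#include "httplib.h"

// Install permissive CORS headers on every response the server sends.
void set_cors_defaults(httplib::Server & svr) {
    svr.set_default_headers({
        {"Access-Control-Allow-Origin",  "*"},
        {"Access-Control-Allow-Methods", "GET, POST, OPTIONS"},
        {"Access-Control-Allow-Headers", "Content-Type"},
    });
}
```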
Hang [Mon, 27 Nov 2023 10:04:08 +0000 (18:04 +0800)]
readme : update help (#1560)
bobqianic [Mon, 27 Nov 2023 10:03:16 +0000 (10:03 +0000)]
CI : Add CUDA 11.8.0 support (#1554)
* try to fix cublas build in CI
* add multiple cuda-toolkit version
* Update build.yml
* Disable CUDA-toolkit 10.2.89
bobqianic [Mon, 27 Nov 2023 09:35:37 +0000 (09:35 +0000)]
CI : Rectify the Clang-Related workflow issues (#1551)
* fix bugs in workflow
* fix missing clang in workflow
* Update build.yml
Ismatulla Mansurov [Mon, 27 Nov 2023 09:28:34 +0000 (02:28 -0700)]
server : automatically convert audio on the server (#1539)
* server : automatically convert audio on the server
* server : remove redundant comments
* server : automatic conversion refactor
* server : update server readme
* server : remove unnecessary comments and tabs
* server : put back remove calling
* server : apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* server : check ffmpeg before the server launch
* server : fix indentation
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* server : fix function call typo
* server : fix function call typo
* server : add warning in readme
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 24 Nov 2023 11:13:12 +0000 (13:13 +0200)]
whisper : remove trailing whitespaces
Georgi Gerganov [Fri, 24 Nov 2023 10:41:55 +0000 (12:41 +0200)]
release : v1.5.1
Georgi Gerganov [Fri, 24 Nov 2023 10:37:08 +0000 (12:37 +0200)]
metal : add backend function to check device family support (#1547)
Georgi Gerganov [Fri, 24 Nov 2023 10:36:21 +0000 (12:36 +0200)]
cuda : sync some minor stuff from llama.cpp (#1548)
Georgi Gerganov [Fri, 24 Nov 2023 07:45:10 +0000 (09:45 +0200)]
whisper : fix typo
ecneladis [Fri, 24 Nov 2023 07:35:02 +0000 (08:35 +0100)]
server : add --print-realtime param (#1541)
* server : add --print-realtime param
* Fix duplicate realtime output
bradmit [Fri, 24 Nov 2023 07:33:13 +0000 (18:33 +1100)]
whisper : add whisper_lang_str_full (#1546)
* Update whisper.h
add whisper_lang_fullstr to retrieve the full language name
* Update whisper.cpp
add whisper_lang_fullstr to return the full language name
* fullstr -> str_full
---------
Co-authored-by: Georgi Gerganov <redacted>
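A hedged usage sketch for the new API, assuming it parallels the existing whisper_lang_id / whisper_lang_str helpers:

```cpp
#include <cstdio>
#include "whisper.h"

int main() {
    // resolve a language code, then print its full name
    const int id = whisper_lang_id("en");
    std::printf("%s\n", whisper_lang_str_full(id));
    return 0;
}
```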
Okabintaro [Thu, 23 Nov 2023 18:59:36 +0000 (19:59 +0100)]
fix(server): typo in temperature parameter (#1545)
Also fixed another typo in comments.
sandrohanea [Thu, 23 Nov 2023 18:20:53 +0000 (19:20 +0100)]
metal : fix build (#1544)
Georgi Gerganov [Thu, 23 Nov 2023 15:20:33 +0000 (17:20 +0200)]
readme : add server example
Gleicon Moraes [Wed, 22 Nov 2023 16:08:11 +0000 (13:08 -0300)]
go : fixed Makefile for macOS ARM 64 (#1530)
* Fixed the Makefile for macOS ARM 64 based on https://github.com/ggerganov/whisper.cpp/issues/1344, plus proper ggml-metal env var setting
* conditional to fix broken non-macos compilation
* spaces -> tab
* make : fix whitespaces
---------
Co-authored-by: Georgi Gerganov <redacted>
Felix [Wed, 22 Nov 2023 08:23:36 +0000 (09:23 +0100)]
Change temp file name for server application (#1535)
Avoid removing a file of the same name if one already exists in the current working directory.
Georgi Gerganov [Tue, 21 Nov 2023 20:27:22 +0000 (22:27 +0200)]
bench : pass memcpy threads from cli
Georgi Gerganov [Tue, 21 Nov 2023 20:07:30 +0000 (22:07 +0200)]
bench : multi-thread memcpy (#1534)
Felix [Tue, 21 Nov 2023 19:36:10 +0000 (20:36 +0100)]
Close file after writing in server application (#1533)
Fix a mistake that left the file open while reading it again as WAV.
Georgi Gerganov [Tue, 21 Nov 2023 15:30:43 +0000 (17:30 +0200)]
server : add video to readme
Felix [Mon, 20 Nov 2023 19:40:24 +0000 (20:40 +0100)]
server : add a REST Whisper server example with OAI-like API (#1380)
* Add first draft of server
* Added json support and base funcs for server.cpp
* Add more user input via api-request; also some cleanup
* Add request params and load post function; also some general cleanup
* Remove unused function
* Add readme
* Add exception handlers
* Update examples/server/server.cpp
* make : add server target
* Add magic curl syntax
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
M. A. Ali [Mon, 20 Nov 2023 18:52:27 +0000 (20:52 +0200)]
whisper : update example in whisper.h (#1529)
Update the example in the header; the previous examples were deprecated.
Georgi Gerganov [Mon, 20 Nov 2023 11:16:38 +0000 (13:16 +0200)]
sdl : fix audio callback (#1523)
Georgi Gerganov [Mon, 20 Nov 2023 11:16:11 +0000 (13:16 +0200)]
whisper : reuse whisper_decode_with_state (#1521)
Tamotsu Takahashi [Sun, 19 Nov 2023 10:43:22 +0000 (19:43 +0900)]
ci : redistribute CUDA DLLs (#1522)
see https://docs.nvidia.com/cuda/eula/index.html#attachment-a
sandrohanea [Sun, 19 Nov 2023 09:25:30 +0000 (10:25 +0100)]
whisper : fix with_state methods to use the correct state (#1519)
Co-authored-by: Sandro Hanea <redacted>