git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Finn Voorhees [Wed, 3 Jan 2024 13:39:43 +0000 (08:39 -0500)]
ggml : add error handling to graph_compute (#1714)
Georgi Gerganov [Wed, 3 Jan 2024 12:18:46 +0000 (14:18 +0200)]
cuda : simplify expression
Co-authored-by: slaren <redacted>
Georgi Gerganov [Wed, 3 Jan 2024 11:01:44 +0000 (13:01 +0200)]
cuda : mark I16 and I32 ops as unsupported
ggml-ci
Georgi Gerganov [Wed, 3 Jan 2024 09:35:46 +0000 (11:35 +0200)]
metal : add kernel_get_rows_i32
ggml-ci
Georgi Gerganov [Tue, 2 Jan 2024 19:07:47 +0000 (21:07 +0200)]
metal : optimize ggml_mul_mat_id (faster Mixtral PP) (llama/4725)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
* metal : optimizing ggml_mul_mat_id (wip)
* metal : minor fix
* metal : opt mul_mm_id
Georgi Gerganov [Tue, 2 Jan 2024 08:57:44 +0000 (10:57 +0200)]
metal : enable shader debugging (cmake option) (llama/4705)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
ggml-ci
Georgi Gerganov [Sun, 31 Dec 2023 09:43:31 +0000 (11:43 +0200)]
ggml : add ggml_vdotq_s32 alias (llama/4715)
ggml-ci
Johannes Gäßler [Sat, 30 Dec 2023 12:52:01 +0000 (13:52 +0100)]
CUDA: fixed tensor cores not being used on RDNA3 (llama/4697)
automaticcat [Sat, 30 Dec 2023 08:07:48 +0000 (15:07 +0700)]
ggml : add ggml_cpu_has_avx_vnni() (llama/4589)
* feat: add avx_vnni based on intel documents
* ggml: add avx vnni based on intel document
* llama: add avx vnni information display
* docs: add more details about using oneMKL and oneAPI for intel processors
* Update ggml.c
Fix indentation update
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Fri, 29 Dec 2023 22:12:53 +0000 (23:12 +0100)]
CUDA: fix tensor core logic for Pascal and HIP (llama/4682)
hydai [Fri, 29 Dec 2023 16:31:19 +0000 (00:31 +0800)]
cuda: fix vmm oom issue on NVIDIA AGX Orin (llama/4687)
Signed-off-by: hydai <redacted>
Guillaume Wenzek [Fri, 29 Dec 2023 17:07:03 +0000 (18:07 +0100)]
ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639)
* add more int ops
* ggml_compute_forward_dup_bytes
* add tests
* PR comments
* tests : minor indentations
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 3 Jan 2024 09:42:42 +0000 (11:42 +0200)]
scripts : fix sync order + metal sed
Andreu Huguet [Tue, 2 Jan 2024 16:50:04 +0000 (17:50 +0100)]
examples : fix WASM Stack Overflow (#1713)
Fix for problem:
"""
RuntimeError: Aborted(Stack overflow! Stack cookie has been overwritten at 0x12be2b10, expected hex dwords 0x89BACDFE and 0x2135467, but received 0x00000000 0x00000000)
"""
That appears when executing the WASM example with the newer versions.
bobqianic [Sat, 30 Dec 2023 21:12:31 +0000 (21:12 +0000)]
docker : fix the publishing of the CUDA Docker image (#1704)
Georgi Gerganov [Fri, 29 Dec 2023 13:00:46 +0000 (15:00 +0200)]
scripts : do not sync commits from this repo
Tamotsu Takahashi [Fri, 29 Dec 2023 10:23:27 +0000 (19:23 +0900)]
ci : build with CLBlast + ggml-opencl use GGML_API (#1576)
* Build with CLBlast
* Declare GGML_API
After rebasing, examples/talk-llama failed:
"D:\a\whisper.cpp\whisper.cpp\build\ALL_BUILD.vcxproj" (build target) (1) ->
"D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj" (default target) (14) ->
(Link target) ->
llama.obj : error LNK2019: unresolved external symbol ggml_cl_free_data referenced in function "public: __cdecl llama_model::~llama_model(void)" (??1llama_model@@QEAA@XZ) [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
llama.obj : error LNK2019: unresolved external symbol ggml_cl_transform_tensor referenced in function "public: void __cdecl llama_model_loader::load_all_data(struct ggml_context *,void (__cdecl*)(float,void *),void *,struct llama_mlock *)" (?load_all_data@llama_model_loader@@QEAAXPEAUggml_context@@P6AXMPEAX@Z1PEAUllama_mlock@@@Z) [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
D:\a\whisper.cpp\whisper.cpp\build\bin\Release\talk-llama.exe : fatal error LNK1120: 2 unresolved externals [D:\a\whisper.cpp\whisper.cpp\build\examples\talk-llama\talk-llama.vcxproj]
bobqianic [Fri, 29 Dec 2023 09:38:35 +0000 (09:38 +0000)]
whisper : replace `tensor->n_dims` with `ggml_n_dims(tensor)` (#1694)
Georgi Gerganov [Fri, 29 Dec 2023 09:30:47 +0000 (11:30 +0200)]
sync : ggml (VMM, sync-ggml-am, dotprod ARM fixes, CUDA fixes) (#1691)
* scripts : add sync-ggml-am.sh
* sync : ggml (VMM, ARM dot prod fix, etc.)
* build : fix CUDA build
* ggml : fix some mul mat cases + add tests for src1 F16
https://github.com/ggerganov/ggml/commit/dbd02958fa4f46898f68ca29c27ddcdc58a06f98
Dimo [Fri, 29 Dec 2023 09:14:32 +0000 (10:14 +0100)]
download : fix large q5 model name (#1695)
fixed typo in large-v3-q5-0 model name to match HF link
bobqianic [Sat, 23 Dec 2023 12:02:58 +0000 (12:02 +0000)]
whisper : Replace WHISPER_PRINT_DEBUG with WHISPER_LOG_DEBUG (#1681)
Georgi Gerganov [Fri, 22 Dec 2023 15:53:39 +0000 (17:53 +0200)]
sync : ggml (ggml_scale, ggml_row_size, etc.) (#1677)
* sync : ggml
* sync : llama.cpp
* talk-llama : fix obsolete param
* ggml-alloc : fix ggml_tallocr_is_own
* talk.wasm : update to new ggml
* ggml : fix type punning in ggml_scale
* ggml : cuda jetson + arm quants warnings
Chaoqun [Fri, 22 Dec 2023 11:16:02 +0000 (19:16 +0800)]
docker : Dockerize whisper.cpp (#1674)
* build: add dockerfile for ci
* ci: add action to build/push docker image
* fix: lowercase repository to fix ci
* ci: update cuBLAS flag
* build: install curl and ffmpeg in image
* docs: add docker section
* fix: improve args check when download model
bobqianic [Thu, 21 Dec 2023 22:39:46 +0000 (22:39 +0000)]
CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 (#1672)
bobqianic [Thu, 21 Dec 2023 20:48:52 +0000 (20:48 +0000)]
examples : Revert CMakeLists.txt for talk-llama (#1669)
bobqianic [Thu, 21 Dec 2023 13:44:04 +0000 (13:44 +0000)]
cmake : set default CUDA architectures (#1667)
Alfredo Montesinos [Tue, 19 Dec 2023 10:40:14 +0000 (04:40 -0600)]
bench.py : add different large models (#1655)
Add the different large v1, v2, v3 models to the benchmark.
Georgi Gerganov [Thu, 14 Dec 2023 20:00:47 +0000 (22:00 +0200)]
wchess : update README.md
Georgi Gerganov [Thu, 14 Dec 2023 15:56:11 +0000 (17:56 +0200)]
release : v1.5.2
Georgi Gerganov [Thu, 14 Dec 2023 15:51:14 +0000 (17:51 +0200)]
wchess : update readme
fraxy-v [Thu, 14 Dec 2023 13:58:26 +0000 (15:58 +0200)]
wchess : whisper assisted chess (#1595)
* wchess: whisper assisted chess
* wchess: fix allowed moves in check
* wchess: touchstart, touchend events
* wchess: css, disabled button
* wchess : html touches
* wchess : minor fixes and code style
* wchess : bump encoder context to 1280
* wchess : index.html
* wchess : fix CI warnings
* wchess : add array header
* wchess : build static library
* wchess : display grammar
* wchess : update UX
* wchess : add comment
* wchess : add README
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 13 Dec 2023 19:55:03 +0000 (21:55 +0200)]
sync : ggml (Metal fixes, new ops, tests) (#1633)
* sync : ggml (Metal fixes, new ops, tests)
* cuda : fix bin bcast when src1 and dst have different types
Kreijstal [Tue, 12 Dec 2023 11:35:00 +0000 (12:35 +0100)]
cmake : target windows 8 or above for prefetchVirtualMemory in llama-talk (#1617)
Since we use PrefetchVirtualMemory, we specify that we target Windows 8 or above; otherwise other compilers will refuse to use the PrefetchVirtualMemory API. (I understand it is loaded dynamically, but the header definition has this limitation.)
Kreijstal [Sun, 10 Dec 2023 17:47:52 +0000 (18:47 +0100)]
cmake : Fix bug in httplib.h for mingw (#1615)
Fix bug in httplib.h for mingw, please see https://github.com/yhirose/cpp-httplib/issues/1669
Finn Voorhees [Fri, 8 Dec 2023 11:50:50 +0000 (11:50 +0000)]
metal : fix `ggml_metal_log` vargs (#1606)
Georgi Gerganov [Fri, 8 Dec 2023 11:43:37 +0000 (13:43 +0200)]
whisper.objc : disable timestamps for real-time transcription
Georgi Gerganov [Fri, 8 Dec 2023 11:43:03 +0000 (13:43 +0200)]
whisper : more debug messages + fix fallback logic
Georgi Gerganov [Fri, 8 Dec 2023 11:39:32 +0000 (13:39 +0200)]
metal : fix soft_max kernel src1 argument (#1602)
Georgi Gerganov [Thu, 7 Dec 2023 20:27:19 +0000 (22:27 +0200)]
sync : ggml (new ops, new backend, etc) (#1602)
* sync : ggml (new ops, new backend, etc)
* whisper : remove obsolete broadcasting code
* ggml : remove backend self-registers + fix ggml_concat + n_task logic
* metal : fix assert
* metal : print resource path
* whisper : fix bug if metal init fails
Oleg Sidorov [Tue, 5 Dec 2023 21:01:45 +0000 (22:01 +0100)]
server : pass max-len argument to the server (#1574)
This commit fixes the missing parameter binding for max-len between the input
arguments and wparams.
Finn Voorhees [Tue, 5 Dec 2023 01:14:26 +0000 (01:14 +0000)]
ios : Remove `#if arch(arm)` check for using Metal (#1561)
Digipom [Sun, 3 Dec 2023 14:15:28 +0000 (09:15 -0500)]
ggml : Fix 32-bit compiler warning (#1575)
Warning about %lu on 32-bit targets. Updated to %zu.
Georgi Gerganov [Fri, 1 Dec 2023 21:57:52 +0000 (23:57 +0200)]
ggml : re-enable blas for src0 != F32 (#1583)
Aleksander Andrzejewski [Thu, 30 Nov 2023 23:44:26 +0000 (00:44 +0100)]
Server : Add support for .vtt format to Whisper server (#1578)
- The code comes from examples/main
- The output mimetype is set to text/vtt
Example usage:
```shell
curl 127.0.0.1:8080/inference \
-H "Content-Type: multipart/form-data" \
-F file="@samples/jfk.wav" \
-F temperature="0.2" \
-F response-format="vtt"
```
Oleg Sidorov [Tue, 28 Nov 2023 13:42:58 +0000 (14:42 +0100)]
server : backport .srt output format (#1565)
This commit adds a support of .srt format to Whisper server. The code is
effectively backported from examples/main. The output mimetype is set to
application/x-subrip as per https://en.wikipedia.org/wiki/SubRip.
Example usage:
```shell
curl 127.0.0.1:8080/inference \
-H "Content-Type: multipart/form-data" \
-F file="@<file-path>" \
-F temperature="0.2" \
-F response-format="srt"
```
Gregor Jasny [Tue, 28 Nov 2023 13:41:49 +0000 (14:41 +0100)]
cmake : install required ggml.h header (#1568)
Kasumi [Tue, 28 Nov 2023 09:55:20 +0000 (17:55 +0800)]
server : set default CORS headers to allow all (#1567)
Hang [Mon, 27 Nov 2023 10:04:08 +0000 (18:04 +0800)]
readme : update help (#1560)
bobqianic [Mon, 27 Nov 2023 10:03:16 +0000 (10:03 +0000)]
CI : Add CUDA 11.8.0 support (#1554)
* try to fix cublas build in CI
* add multiple cuda-toolkit version
* Update build.yml
* Disable CUDA-toolkit 10.2.89
bobqianic [Mon, 27 Nov 2023 09:35:37 +0000 (09:35 +0000)]
CI : Rectify the Clang-Related workflow issues (#1551)
* fix bugs in workflow
* fix missing clang in workflow
* Update build.yml
Ismatulla Mansurov [Mon, 27 Nov 2023 09:28:34 +0000 (02:28 -0700)]
server : automatically convert audio on the server (#1539)
* server : automatically convert audio on the server
* server : remove redundant comments
* server : automatic conversion refactor
* server : update server readme
* server : remove unnecessary comments and tabs
* server : put back remove calling
* server : apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* server : check ffmpeg before the server launch
* server : fix indentation
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* server : fix function typo calling
* server : fix function typo calling
* server : add warning in readme
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 24 Nov 2023 11:13:12 +0000 (13:13 +0200)]
whisper : remove trailing whitespaces
Georgi Gerganov [Fri, 24 Nov 2023 10:41:55 +0000 (12:41 +0200)]
release : v1.5.1
Georgi Gerganov [Fri, 24 Nov 2023 10:37:08 +0000 (12:37 +0200)]
metal : add backend function to check device family support (#1547)
Georgi Gerganov [Fri, 24 Nov 2023 10:36:21 +0000 (12:36 +0200)]
cuda : sync some minor stuff from llama.cpp (#1548)
Georgi Gerganov [Fri, 24 Nov 2023 07:45:10 +0000 (09:45 +0200)]
whisper : fix typo
ecneladis [Fri, 24 Nov 2023 07:35:02 +0000 (08:35 +0100)]
server : add --print-realtime param (#1541)
* server : add --print-realtime param
* Fix duplicate realtime output
bradmit [Fri, 24 Nov 2023 07:33:13 +0000 (18:33 +1100)]
whisper : add whisper_lang_str_full (#1546)
* Update whisper.h
add whisper_lang_fullstr to retrieve the full language name
* Update whisper.cpp
add whisper_lang_fullstr to return the full language name
* fullstr -> str_full
---------
Co-authored-by: Georgi Gerganov <redacted>
Okabintaro [Thu, 23 Nov 2023 18:59:36 +0000 (19:59 +0100)]
fix(server): typo in temperature parameter (#1545)
Also fixed another typo in comments.
sandrohanea [Thu, 23 Nov 2023 18:20:53 +0000 (19:20 +0100)]
metal : fix build (#1544)
Georgi Gerganov [Thu, 23 Nov 2023 15:20:33 +0000 (17:20 +0200)]
readme : add server example
Gleicon Moraes [Wed, 22 Nov 2023 16:08:11 +0000 (13:08 -0300)]
go : fixed Makefile for macOS ARM64 (#1530)
* Fixed Makefile for macOS ARM64 based on https://github.com/ggerganov/whisper.cpp/issues/1344 + proper ggml-metal env var setting
* conditional to fix broken non-macos compilation
* spaces -> tab
* make : fix whitespaces
---------
Co-authored-by: Georgi Gerganov <redacted>
Felix [Wed, 22 Nov 2023 08:23:36 +0000 (09:23 +0100)]
Change temp file name for server application (#1535)
Avoids the issue of removing a file that already exists in the current working directory
Georgi Gerganov [Tue, 21 Nov 2023 20:27:22 +0000 (22:27 +0200)]
bench : pass memcpy threads from cli
Georgi Gerganov [Tue, 21 Nov 2023 20:07:30 +0000 (22:07 +0200)]
bench : multi-thread memcpy (#1534)
Felix [Tue, 21 Nov 2023 19:36:10 +0000 (20:36 +0100)]
Close file after writing in server application (#1533)
Fixes the mistake of leaving the file open while reading it again as WAV
Georgi Gerganov [Tue, 21 Nov 2023 15:30:43 +0000 (17:30 +0200)]
server : add video to readme
Felix [Mon, 20 Nov 2023 19:40:24 +0000 (20:40 +0100)]
server : add a REST Whisper server example with OAI-like API (#1380)
* Add first draft of server
* Added json support and base funcs for server.cpp
* Add more user input via api-request
also some clean up
* Add request params and load post function
Also some general clean up
* Remove unused function
* Add readme
* Add exception handlers
* Update examples/server/server.cpp
* make : add server target
* Add magic curl syntax
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
M. A. Ali [Mon, 20 Nov 2023 18:52:27 +0000 (20:52 +0200)]
whisper : update example in whisper.h (#1529)
update the example in the header; the previous example was deprecated.
Georgi Gerganov [Mon, 20 Nov 2023 11:16:38 +0000 (13:16 +0200)]
sdl : fix audio callback (#1523)
Georgi Gerganov [Mon, 20 Nov 2023 11:16:11 +0000 (13:16 +0200)]
whisper : reuse whisper_decode_with_state (#1521)
Tamotsu Takahashi [Sun, 19 Nov 2023 10:43:22 +0000 (19:43 +0900)]
ci : redistribute CUDA DLLs (#1522)
see https://docs.nvidia.com/cuda/eula/index.html#attachment-a
sandrohanea [Sun, 19 Nov 2023 09:25:30 +0000 (10:25 +0100)]
whisper : fix with_state methods to use the correct state (#1519)
Co-authored-by: Sandro Hanea <redacted>
Georgi Gerganov [Sun, 19 Nov 2023 08:32:32 +0000 (10:32 +0200)]
whisper : fix overriding the audio context
Georgi Gerganov [Sun, 19 Nov 2023 08:32:08 +0000 (10:32 +0200)]
cuda : assert ggml_add sources to be contiguous
Georgi Gerganov [Fri, 17 Nov 2023 08:42:04 +0000 (10:42 +0200)]
ios : sync submodule
Georgi Gerganov [Fri, 17 Nov 2023 08:00:07 +0000 (10:00 +0200)]
sync : ggml (ggml-alloc + linker + gguf fixes) (#1501)
Georgi Gerganov [Thu, 16 Nov 2023 14:18:24 +0000 (16:18 +0200)]
quantize : add support for K-quant types
Georgi Gerganov [Thu, 16 Nov 2023 08:59:32 +0000 (10:59 +0200)]
bench : fix memcpy bench size
Sam Pullara [Thu, 16 Nov 2023 08:34:05 +0000 (00:34 -0800)]
talk-llama : improve quote and backtick handling (#1364)
* ISSUE-1329: replace " with ' so it doesn't try to execute code in backticks.
* Typo
* Update to keep possessives in the output
Closes the ', then puts a ' in quotes, then reopens the ' to escape the ' characters.
Georgi Gerganov [Wed, 15 Nov 2023 19:32:25 +0000 (21:32 +0200)]
talk-llama : enable GPU by default
Georgi Gerganov [Wed, 15 Nov 2023 19:10:13 +0000 (21:10 +0200)]
models : add info about distilled models
Georgi Gerganov [Wed, 15 Nov 2023 19:02:52 +0000 (21:02 +0200)]
release : v1.5.0
Georgi Gerganov [Wed, 15 Nov 2023 18:49:12 +0000 (20:49 +0200)]
bench-all : add distil models
Georgi Gerganov [Wed, 15 Nov 2023 18:10:16 +0000 (20:10 +0200)]
js : latest whisper.js
Georgi Gerganov [Wed, 15 Nov 2023 18:01:15 +0000 (20:01 +0200)]
bench-all : indentations
Georgi Gerganov [Wed, 15 Nov 2023 17:42:25 +0000 (19:42 +0200)]
whisper : make large version explicit + fix data size units (#1493)
Georgi Gerganov [Wed, 15 Nov 2023 15:42:53 +0000 (17:42 +0200)]
java : fix test (#1492)
Georgi Gerganov [Wed, 15 Nov 2023 14:12:52 +0000 (16:12 +0200)]
whisper : add batched decoding (#1486)
* whisper : add whisper_batch
* whisper : move kv_self to whisper_state
* whisper : full batched decoding support
* whisper : fix memory leak in whisper_batch
* whisper : fix mem leak again + remove obsolete function
* whisper : clear kv cache when using whisper_decode API
* whisper : speed-up sampling
* whisper : fix decoders initializer
* bench : add batch size 5 bench
* whisper : add comment about the KV cache size
* whisper : add check for max number of decoders
* whisper : avoid starting sampling threads with bs=1
* whisper : enable beam-search by default
* cuda : sync llama.cpp fixes
Georgi Gerganov [Mon, 13 Nov 2023 14:53:55 +0000 (16:53 +0200)]
java : use tiny.en for tests (#1484)
* java : use tiny.en for tests
* java : try to fix full params struct
Evan Jones [Mon, 13 Nov 2023 08:51:34 +0000 (03:51 -0500)]
whisper : add grammar-based sampling (#1229)
* whisper : add grammar-based sampling
* build : fix after master merge
* command : fix exception when recognizing the command
* whisper : fine-tuning grammar functionality
* command : grammar-related improvements
- option to read grammar from file
- add sample grammars for colors and chess moves
- fine-tune the performance further
* grammars : add assistant + update comments
* command : enable beam-search, add "no_timestamps", add "context", add p
* whisper : remove comment
---------
Co-authored-by: Georgi Gerganov <redacted>
rlapray [Mon, 13 Nov 2023 08:04:16 +0000 (09:04 +0100)]
talk-llama : add n_gpu_layers parameter (#1475)
Tong Li [Sun, 12 Nov 2023 16:31:58 +0000 (06:31 -1000)]
examples : add whisper.android.java for compatibility with older Android versions using Java (#1382)
* save the recorded audio to a file
* Alignment -help
* Save the correct audio
* change to a consistent coding style
* Correct typo
* Update examples/stream/stream.cpp
* Update examples/stream/stream.cpp
* Correct variable misuse
* Update examples/stream/stream.cpp
* Update examples/stream/stream.cpp
* Update examples/stream/stream.cpp
* Update examples/stream/stream.cpp
* add *.bin .cxx/ .gradle/ cmake-build-debug/ to gitignore
* add whisper.android.java
* Added support for older versions of Android using Java
* add examples for android java
* add README.md for android java
* add fullTranscribeWithTime
* add toString() method and tests
* change return type to void
* update to v1.4.1
* add WhisperService
* change to whisper_full_get_segment_t1
* add method transcribeDataWithTime
* modified toString
```
return "[" + start + " --> " + end + "]:" + sentence;
```
* Optimize code logic
* update text view on handle
* set max lines
* change Chinese to English
* Update bindings/java/build.gradle
* Update .gitignore
* add android.java to github action
* change android.java to android_java in build.yml
* remove gradle
* change jdk to temurin in android_java of CI
* change jdk to temurin 11 in android_java of CI
* add x to gradlew
* set api-level for android_java of CI
* Update examples/whisper.android.java/app/src/main/jni/whisper/CMakeLists.txt
* add ndk version in build.gradle
* remove local.properties
* add testFullTranscribeWithTime
---------
Co-authored-by: litongmacos <redacted>
Co-authored-by: bobqianic <redacted>
Georgi Gerganov [Sun, 12 Nov 2023 15:47:37 +0000 (17:47 +0200)]
readme : update comment about source code
Georgi Gerganov [Sun, 12 Nov 2023 14:36:20 +0000 (16:36 +0200)]
ggml : fix some compile warnings
Georgi Gerganov [Sun, 12 Nov 2023 13:40:37 +0000 (15:40 +0200)]
readme : update GPU / CUDA
Georgi Gerganov [Sun, 12 Nov 2023 13:31:08 +0000 (15:31 +0200)]
whisper : add full CUDA and Metal offloading (#1472)
* whisper : migrate to ggml-backend
* whisper : fix logit reading
* whisper : fix tensor allocation during load
* whisper : fix beam-search with CUDA
* whisper : free backends + fix compile warning
* whisper : print when CUDA is enabled
* whisper : fix CoreML
* make : clean-up
* talk : fix compile warning
* whisper : support ggml_conv with CUDA and Metal (#1473)
* ggml : add CUDA support for ggml_conv
* whisper : remove ggml_repeat for conv bias + single backend
* cuda : fix im2col kernel
* metal : add im2col support + mul mat-vec f16 x f16
* bench-all : add q4 models
* whisper : clean-up
* quantize-all : fix
* ggml : im2col opts
* whisper : avoid whisper_model_data wrapper
* whisper : add note that ggml_mul_mat_pad does not work with CUDA
* whisper : factor out graph compute in common function
* whisper : fixes
* whisper : fix UB with measure buffers
* whisper : try to fix the parallel whisper_state functionality (#1479)
* whisper : try to fix the parallel whisper_state functionality
* whisper : fix multi-state Metal
* whisper : free backend instances in whisper_state
Ben Nortier [Fri, 10 Nov 2023 11:51:16 +0000 (13:51 +0200)]
whisper : return with error from whisper_encode_internal and whisper_decode_internal when abort callback is true (#1456)
Co-authored-by: Ben Nortier <redacted>
Jakub Ráček [Thu, 9 Nov 2023 17:21:44 +0000 (18:21 +0100)]
talk-llama : add language auto detect (#1467)
* Add '-l auto' to talk-llama example
* Update examples/talk-llama/talk-llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
bobqianic [Thu, 9 Nov 2023 10:42:39 +0000 (10:42 +0000)]
openvino : update convert-whisper-to-openvino.py to support v3 (#1459)