git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
slaren [Fri, 26 Apr 2024 15:07:42 +0000 (17:07 +0200)]
gguf : fix mismatch between alloc and free functions (llama/6929)
Georgi Gerganov [Fri, 26 Apr 2024 07:41:53 +0000 (10:41 +0300)]
Merge pull request from GHSA-p5mv-gjc5-mwqv
* always use calloc
clamp n_kv on failure to read a kv
* ggml : alternative ctx->header.n_kv update
---------
Co-authored-by: slaren <redacted>
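For illustration, a minimal sketch of the hardening idea in the commit above (zero-initialized allocation plus clamping the kv count on a failed read); this is not the actual gguf loader code, and the names are placeholders:

#include <cstdio>
#include <cstdlib>

struct kv_entry { char * key; /* ... value ... */ };

// placeholder for the real per-entry parser
static bool read_kv(FILE * f, kv_entry * kv) { (void) f; (void) kv; return true; }

static kv_entry * read_all_kv(FILE * f, size_t * n_kv) {
    // calloc zero-initializes, so freeing partially-read entries is safe
    kv_entry * kvs = (kv_entry *) calloc(*n_kv, sizeof(kv_entry));
    if (!kvs) { *n_kv = 0; return nullptr; }
    for (size_t i = 0; i < *n_kv; ++i) {
        if (!read_kv(f, &kvs[i])) {
            *n_kv = i; // clamp: only the first i entries are valid
            break;
        }
    }
    return kvs;
}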
Georgi Gerganov [Thu, 25 Apr 2024 12:48:25 +0000 (15:48 +0300)]
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (llama/6906)
Georgi Gerganov [Thu, 25 Apr 2024 12:12:28 +0000 (15:12 +0300)]
ggml : fix MIN / MAX macros (llama/6904)
ggml-ci
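The commit above does not quote the macro change; the classic fix for such bugs is fully parenthesized arguments, sketched here as an assumption rather than the exact ggml patch:

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// Without the parentheses, an expression argument changes the meaning:
//   MIN(x & 0xff, y)  expands around operator precedence incorrectly.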
Georgi Gerganov [Wed, 24 Apr 2024 09:00:07 +0000 (12:00 +0300)]
ggml : move 32-bit arm compat in ggml-impl.h (llama/6865)
ggml-ci
Justine Tunney [Mon, 22 Apr 2024 19:00:36 +0000 (15:00 -0400)]
llamafile : improve sgemm.cpp (llama/6796)
* llamafile : improve sgemm.cpp
- Re-enable by default
- Fix issue described in #6716
- Make code more abstract, elegant, and maintainable
- Faster handling of weirdly shaped `m` and `n` edge cases
* Address review comments
* Help clang produce fma instructions
* Address review comments
Dave Airlie [Mon, 22 Apr 2024 14:05:06 +0000 (00:05 +1000)]
ggml : fix calloc argument ordering. (llama/6820)
Latest gcc complains here:
/home/airlied/devel/llama.cpp/ggml-alloc.c: In function ‘ggml_gallocr_new_n’:
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
374 | ggml_gallocr_t galloc = (ggml_gallocr_t)calloc(sizeof(struct ggml_gallocr), 1);
| ^~~~~~
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: note: earlier argument should specify number of elements, later size of each element
and a bunch more.
calloc is specified to take nmemb first then size, so realign the code.
In a couple of places the call was written as `... * x, 1`, so I fixed those to use calloc properly.
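A minimal example of the transposition the warning points at; the struct name is a stand-in for the real ggml type:

#include <cstdlib>

struct gallocr_stub { int dummy; }; // stand-in for struct ggml_gallocr

int main() {
    // flagged by -Wcalloc-transposed-args:
    //   void * bad = calloc(sizeof(gallocr_stub), 1);
    // corrected order: number of elements first, element size second
    void * good = calloc(1, sizeof(gallocr_stub));
    free(good);
    return 0;
}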
Georgi Gerganov [Sun, 21 Apr 2024 13:47:57 +0000 (16:47 +0300)]
ggml : fix ggml_backend_cpu_supports_op() for CPY (llama/0)
slaren [Thu, 18 Apr 2024 13:18:48 +0000 (15:18 +0200)]
ggml : group all experts in a single ggml_mul_mat_id (llama/6505)
* ggml : group all experts in a single ggml_mul_mat_id
cuda : improve mmid row copy
* cuda : fix bin bcast with non-cont src0
* test-backend-ops : only run all mul mat tests for base types
* llama : disable moe offloading with SYCL
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 16 Apr 2024 20:50:22 +0000 (23:50 +0300)]
ggml : fix llamafile sgemm wdata offsets (llama/6710)
ggml-ci
Justine Tunney [Tue, 16 Apr 2024 18:55:30 +0000 (14:55 -0400)]
ggml : add llamafile sgemm (llama/6414)
This change upstreams llamafile's cpu matrix multiplication kernels
which improve image and prompt evaluation speed. For starters, Q4_0
and Q8_0 weights should go ~40% faster on CPU. The biggest benefits
are with data types like f16 / f32, which process prompts 2x faster
thus making them faster than quantized data types for prompt evals.
This change also introduces bona fide AVX512 support since tinyBLAS
is able to exploit the larger register file. For example, on my CPU
llama.cpp llava-cli processes an image prompt at 305 tokens/second,
using the Q4_K and Q4_0 types, which have always been faster than if
we used f16 LLaVA weights, which at HEAD run at 188 tokens/second.
With this change, f16 LLaVA performance leapfrogs to 464 tokens/second.
On Intel Core i9-14900K this change improves F16 prompt perf by 5x.
For example, using llama.cpp at HEAD with Mistral 7b f16 to process
a 215 token prompt will go 13 tok/sec. This change has fixes making
it go 52 tok/sec. It's mostly thanks to my vectorized outer product
kernels but also because I added support for correctly counting the
number of cores on Alderlake, so the default thread count discounts
Intel's new efficiency cores. Only Linux right now can count cores.
This work was sponsored by Mozilla, which has given permission to change
the license of this code from Apache 2.0 to MIT. To read more about
what's improved, and how it works, see: https://justine.lol/matmul/
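As a purely illustrative sketch of the outer-product formulation mentioned above (not the tinyBLAS code itself), a matmul can be written as a sum of rank-1 updates over k, which is the shape the vectorized kernels exploit:

// C (m x n) += A (m x k) * B (k x n), all row-major
static void matmul_outer_product(const float * A, const float * B, float * C,
                                 int m, int n, int k) {
    for (int p = 0; p < k; ++p) {
        for (int i = 0; i < m; ++i) {
            const float a = A[i * k + p];
            for (int j = 0; j < n; ++j) {
                C[i * n + j] += a * B[p * n + j]; // rank-1 update, FMA-friendly
            }
        }
    }
}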
Shijie [Tue, 16 Apr 2024 15:40:48 +0000 (23:40 +0800)]
llama : add qwen2moe (llama/6074)
* support qwen2moe
* fix-review
* metal : support unary ops for nelements % 4 != 0
* metal : require contiguousness for float4 unary kernels
* metal : require contiguousness for float4 unary kernels (cont)
* fix-review
* names : for brevity "SHARED_EXP" -> "SHEXP"
* llama : reuse build_moe_ffn()
* llama : add model type name
---------
Co-authored-by: Georgi Gerganov <redacted>
Neo Zhang Jianyu [Mon, 15 Apr 2024 09:12:26 +0000 (17:12 +0800)]
fix mul_mat_id() for new input, make the unit tests pass (llama/6682)
Dave [Sun, 14 Apr 2024 11:14:19 +0000 (07:14 -0400)]
Added support for GGML_OP_CLAMP in Metal (llama/6662)
* Added support for GGML_OP_CLAMP in Metal
* Corrected size
---------
Co-authored-by: dave-fl <redacted>
Neo Zhang Jianyu [Sun, 14 Apr 2024 02:42:29 +0000 (10:42 +0800)]
fix memcpy() crash, add missed cmd in guide, fix softmax (llama/6622)
* disable mmap to fix memcpy crash, add missed cmd in guide, fix softmax
* refactor to disable mmap for SYCL backend
* fix compile error in other os
* refactor the solution, use host buf to fix it, instead of disable mmap
* keep to support mmap()
* use host buff to reduce malloc times
* revert to malloc/free solution, for thread safety
Johannes Gäßler [Sat, 13 Apr 2024 22:21:55 +0000 (00:21 +0200)]
CUDA: fix matrix multiplication logic for tests (llama/6667)
slaren [Fri, 12 Apr 2024 16:13:20 +0000 (18:13 +0200)]
metal : unify mul_mv_id kernels (llama/6556)
jiez [Fri, 12 Apr 2024 10:45:06 +0000 (18:45 +0800)]
llama : add gguf_remove_key + remove split meta during quantize (llama/6591)
* Remove split metadata when quantize model shards
* Find metadata key by enum
* Correct loop range for gguf_remove_key and code format
* Free kv memory
---------
Co-authored-by: z5269887 <redacted>
Justina Cho [Wed, 1 May 2024 21:44:26 +0000 (14:44 -0700)]
feat: implemented sigmoid function (ggml/806)
* added sigmoid function
* implemented metal kernel for sigmoid
* implemented cuda kernel for sigmoid
* added sigmoid unary op and incremented count
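The op itself is the standard logistic function; a minimal CPU reference of the element-wise computation the Metal/CUDA kernels perform (a sketch, not the ggml source):

#include <cmath>
#include <cstddef>

// y[i] = 1 / (1 + exp(-x[i]))
static void sigmoid_f32(const float * x, float * y, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = 1.0f / (1.0f + std::exp(-x[i]));
    }
}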
Borislav Stanimirov [Thu, 25 Apr 2024 14:24:07 +0000 (17:24 +0300)]
build: fix and ignore msvc warnings (ggml/805)
Przemysław Pawełczyk [Wed, 8 May 2024 15:33:43 +0000 (17:33 +0200)]
ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (#2128)
Przemysław Pawełczyk [Wed, 8 May 2024 15:32:43 +0000 (17:32 +0200)]
build : improve disabling AVX-512 (#2129)
* cmake : make WHISPER_NO_AVX512=ON disable all subsets of AVX-512
Previously it happened only for MSVC, but it makes sense to have the
same behavior for other compilers too.
* make : reorder x86 ISA extensions in chronological order
And update compiler flags at the end to ease modifying conditions.
* make : support WHISPER_NO_AVX512=1 for disabling all AVX-512 subsets.
That way you do not have to override each AVX-512 subset setting
individually if it has been turned on during autodetection.
Borislav Stanimirov [Wed, 8 May 2024 08:03:21 +0000 (11:03 +0300)]
minor: add CMakeSettings.json to gitignore (#2094)
Pedro Probst [Thu, 2 May 2024 21:52:55 +0000 (18:52 -0300)]
examples : fix node compilation (#2115)
* node : fix compilation and update examples
* node : fix readme
* Update addon.node test
Przemysław Pawełczyk [Sun, 28 Apr 2024 21:54:21 +0000 (23:54 +0200)]
make : change GNU make default CXX from g++ to c++ (#2100)
goldwaving [Sun, 28 Apr 2024 17:36:12 +0000 (15:06 -0230)]
Remove unnecessary memory reallocation in fft (#2080)
fft_out needs to be twice the frame_size, not the frame_step. It is resized in fft() anyway, but this change prevents an unnecessary reallocation.
n_fft must match the mel filter size, so it is best not to calculate it from the frame_size.
We only need to get the magnitudes for half the spectrum since the other half is a mirror and not used in the mel filter loop later.
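A small sketch of that last point, under the assumption of an interleaved re/im FFT output for a real input of length frame_size (so only about frame_size / 2 + 1 bins carry unique information):

#include <vector>

static std::vector<float> half_spectrum_power(const std::vector<float> & fft_out, int frame_size) {
    const int n_fft = frame_size / 2 + 1; // remaining bins mirror these
    std::vector<float> mag(n_fft);
    for (int i = 0; i < n_fft; ++i) {
        const float re = fft_out[2 * i + 0];
        const float im = fft_out[2 * i + 1];
        mag[i] = re * re + im * im; // power; take sqrt for magnitude if preferred
    }
    return mag;
}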
Georgi Gerganov [Wed, 24 Apr 2024 11:56:30 +0000 (14:56 +0300)]
models : disable old script (#2079)
Georgi Gerganov [Wed, 24 Apr 2024 11:45:27 +0000 (14:45 +0300)]
whisper : more prominent log message for sub-1s audio (#2065)
Georgi Gerganov [Wed, 17 Apr 2024 09:23:47 +0000 (12:23 +0300)]
main : pass nullptr when regex is empty (#2070)
AIWintermuteAI [Tue, 16 Apr 2024 11:15:52 +0000 (19:15 +0800)]
readme : add up-to-date repository for Python bindings (#2063)
README
Georgi Gerganov [Tue, 16 Apr 2024 11:08:31 +0000 (14:08 +0300)]
release : v1.5.5
Emmanuel Schmidbauer [Mon, 15 Apr 2024 19:16:58 +0000 (15:16 -0400)]
server : add dtw (#2044)
* server.cpp: add dtw
* Update examples/server/server.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Didzis Gosko [Mon, 15 Apr 2024 17:23:05 +0000 (20:23 +0300)]
build : fix embedded Metal library generation (#2045)
Pedro Probst [Mon, 15 Apr 2024 17:03:34 +0000 (14:03 -0300)]
node : support no timestamps (#2048)
* fix: node: do not compute timestamps if you do not need them
* feat: add no_timestamps parameter to node addon
Didzis Gosko [Mon, 15 Apr 2024 17:02:09 +0000 (20:02 +0300)]
build : detect AVX512 in Makefile, add AVX512 option in CMake (#2043)
* make : add AVX512 detection to Makefile and CMakeLists.txt
* make : autodetect more AVX512 instruction subsets
* cmake : do not default to AVX512, must be enabled explicitly
* cmake : enable a set of AVX512 subsets, when AVX512 is turned on
* make : consolidate AVX512 subsets, add AVX512 VBMI
* cmake : revert to NO AVX512 setting, add settings for AVX512 VNNI and VBMI
* make : re-introduce AVX512VNNI back
* cmake : remove superfluous comment line
Kendrick Taylor [Mon, 15 Apr 2024 16:41:28 +0000 (09:41 -0700)]
whisper.nvim : fix missing reference to "model" variable (#2049)
Ikko Eltociear Ashimine [Mon, 15 Apr 2024 16:40:27 +0000 (01:40 +0900)]
whisper : update grammar-parser.cpp (#2058)
preceeding -> preceding
Georgi Gerganov [Tue, 9 Apr 2024 17:27:55 +0000 (20:27 +0300)]
sync : ggml
Georgi Gerganov [Tue, 9 Apr 2024 17:27:44 +0000 (20:27 +0300)]
license : update copyright notice + add AUTHORS
Carolinabanana [Tue, 9 Apr 2024 08:16:13 +0000 (09:16 +0100)]
llama : add Command R Plus support (llama/6491)
* Add Command R Plus GGUF
* Add Command R Plus GGUF
* Loading works up to LayerNorm2D
* Export new tensors in 1D so they are not quantized.
* Fix embedding layer based on Noeda's example
* Whitespace
* Add line
* Fix unexpected tokens on MPS. Re-add F16 fix. (Noeda)
* dranger003: Fix block index overflow in CUDA dequantizing.
* Reverted blocked multiplication code as it still has issues and could affect other Llama arches
* export norms as f32
* fix overflow issues during quant and other cleanup
* Type convention
Co-authored-by: Georgi Gerganov <redacted>
* dranger003: Fix more int overflow during quant.
---------
Co-authored-by: S <redacted>
Co-authored-by: S <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Abhilash Majumder [Mon, 8 Apr 2024 08:26:01 +0000 (13:56 +0530)]
remove row=1 cond (llama/6532)
Neo Zhang Jianyu [Sun, 7 Apr 2024 02:55:59 +0000 (10:55 +0800)]
support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M (llama/6521)
Georgi Gerganov [Tue, 9 Apr 2024 17:25:50 +0000 (20:25 +0300)]
scripts : update sync
Georgi Gerganov [Tue, 9 Apr 2024 17:12:17 +0000 (20:12 +0300)]
files : rename ./extra to ./scripts
Brad Murray [Tue, 9 Apr 2024 15:38:19 +0000 (11:38 -0400)]
whisper : fix DTW memory access (#2012)
* Fix DTW memory access
* Memory fix - Apply changes from denersc
ulatekh [Tue, 9 Apr 2024 15:34:34 +0000 (08:34 -0700)]
common : fix file-handle leak in read_wav() (#2026)
Now it cleans up in case of error.
Rotem Dan [Tue, 9 Apr 2024 15:33:32 +0000 (18:33 +0300)]
main : set stdin to binary mode on Windows (#2025)
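On Windows this is typically done with _setmode so CRLF translation and Ctrl-Z handling do not corrupt raw audio piped through stdin; a sketch of the usual pattern, which may differ in detail from the actual patch:

#include <cstdio>
#ifdef _WIN32
#include <io.h>
#include <fcntl.h>
#endif

static void set_stdin_binary() {
#ifdef _WIN32
    _setmode(_fileno(stdin), _O_BINARY);
#endif
}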
slashlib [Tue, 9 Apr 2024 15:32:46 +0000 (18:32 +0300)]
cmake : support for CPU BLAS build via Intel MKL (#2024)
ulatekh [Tue, 9 Apr 2024 15:31:16 +0000 (08:31 -0700)]
main : allow a response-file as the sole parameter (#2019)
* The "main" example now allows a response-file as the sole parameter.
A response-file is a text file with command-line parameters, one per line.
Prefix the name of the response-file with "@" to identify it as such.
It's used under MS Windows to work around command-line length limits.
It may be useful under other platforms to simplify character-escaping.
* minor : style
---------
Co-authored-by: Georgi Gerganov <redacted>
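A hedged sketch of the response-file expansion described above, with illustrative names rather than the example's actual helpers: an "@file" argument is replaced by one parameter per line of the file.

#include <fstream>
#include <string>
#include <vector>

static std::vector<std::string> expand_response_file(const std::string & arg) {
    std::vector<std::string> params;
    if (arg.size() > 1 && arg[0] == '@') {
        std::ifstream f(arg.substr(1));
        std::string line;
        while (std::getline(f, line)) {
            if (!line.empty()) {
                params.push_back(line); // one command-line parameter per line
            }
        }
    } else {
        params.push_back(arg);
    }
    return params;
}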
ulatekh [Tue, 9 Apr 2024 15:27:28 +0000 (08:27 -0700)]
whisper : suppress tokens with a regex (#1997)
* Allow a regular expression to describe tokens to suppress.
Example: --suppress-tokens-re "[,\.]|[ ]?[0-9]+" will suppress commas, periods, and numeric tokens.
Technique inspired by https://github.com/openai/whisper/discussions/1041
Co-authored-by: Georgi Gerganov <redacted>
* Blind change to fix Java test.
---------
Co-authored-by: Georgi Gerganov <redacted>
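A sketch of the suppression idea: before sampling, force the logit of every token whose text matches the user-supplied regex to -inf so it can never be chosen. Function and type names here are illustrative, not whisper.cpp's API.

#include <limits>
#include <regex>
#include <string>
#include <vector>

static void suppress_tokens_re(const std::vector<std::string> & token_texts,
                               std::vector<float> & logits,
                               const std::string & pattern) {
    if (pattern.empty()) return;
    const std::regex re(pattern);
    for (size_t id = 0; id < token_texts.size() && id < logits.size(); ++id) {
        if (std::regex_match(token_texts[id], re)) {
            logits[id] = -std::numeric_limits<float>::infinity();
        }
    }
}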
ulatekh [Tue, 9 Apr 2024 15:23:33 +0000 (08:23 -0700)]
cmake : create solution folders (#2004)
* Create solution folders in the CMake build.
* Fixed non-SDL2 build.
* Fixed emscripten build.
Georgi Gerganov [Sun, 7 Apr 2024 14:04:56 +0000 (17:04 +0300)]
sync : ggml
Georgi Gerganov [Sun, 7 Apr 2024 14:04:22 +0000 (17:04 +0300)]
extra : sync grammar-parser
Georgi Gerganov [Sun, 7 Apr 2024 13:21:08 +0000 (16:21 +0300)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sun, 7 Apr 2024 13:18:11 +0000 (16:18 +0300)]
sync : ggml
Georgi Gerganov [Sat, 6 Apr 2024 14:50:21 +0000 (17:50 +0300)]
sync : llama.cpp (skip)
ggml-ci
Ouadie EL FAROUKI [Fri, 5 Apr 2024 13:35:06 +0000 (14:35 +0100)]
Fixed minor bug when enabling FP16 for non intel targets (llama/6464)
* moved INTEL_MKL guard from gemm_impl to gemm (wrapper)
* Update ggml-sycl.cpp
Co-authored-by: AidanBeltonS <redacted>
---------
Co-authored-by: AidanBeltonS <redacted>
slaren [Wed, 3 Apr 2024 13:07:05 +0000 (15:07 +0200)]
ggml : mul_mat_id use the same tensor for all the experts (llama/6387)
* ggml : update mul_mat_id to use the same tensor for all the experts
* update cuda
* minor
* update metal
* update test-backend-ops
* fix cuda
* Update ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* update convert.py
* update convert-hf-to-gguf.py
* update convert.py for mixtral hf models
* Update convert-hf-to-gguf.py
Co-authored-by: Georgi Gerganov <redacted>
* cuda : support non-pow-2 number of experts
* allow quantize to work for split and merged experts models in the same way
* cleanup + disable mmap automatically with split tensors models
* update imatrix
* test-backend-ops : test qwen argsort
* update grok model loading
* llama : add merged experts tensors to the grok tensor map
* minor
* gguf : bump version
* fix quantizing of merged experts
* convert-hf-to-gguf.py : update grok (untested)
* make linter happy
* cuda/argsort : use shared memory instead of pool memory
* convert : fix grok tensor names
* metal : add support for non-pow-2 argsort
* llama : more loader cleanup, better error checking
* cuda : fix warning
* llama : still use mmap for loading old models, but copy the data to a host buffer
* add review note
* llama : remove ffn tensor counting + add sanity check
ggml-ci
* convert : fix handling of n_experts == None
ggml-ci
* imatrix : fix ncall counters
* llama : produce error if imatrix size does not match
* quantize : terminate on errors + trace logs
ggml-ci
* metal : pad shared memory to 16 bytes
---------
Co-authored-by: Georgi Gerganov <redacted>
Meng, Hengyu [Wed, 3 Apr 2024 02:34:40 +0000 (10:34 +0800)]
Disable iqx on Windows as a workaround (llama/6435)
* disable iqx on windows as WA
* array instead of global_memory
0cc4m [Fri, 29 Mar 2024 16:29:21 +0000 (17:29 +0100)]
Vulkan k-quant mmq and ggml-backend offload functionality (llama/6155)
* Fix Vulkan no kv offload incoherence
* Add k-quant mul mat mat shaders
* Rework working buffer allocation, reduces vram use noticeably
Clean up cpu assist code, replaced with ggml-backend offload function
* Default to all dedicated GPUs
* Add fallback for integrated GPUs if no dedicated GPUs are found
* Add debug info which device is allocating memory
* Fix Intel dequant issue
Fix validation issue
* Fix Vulkan GGML_OP_GET_ROWS implementation
* Clean up merge artifacts
* Remove Vulkan warning
Neo Zhang Jianyu [Thu, 28 Mar 2024 00:55:24 +0000 (08:55 +0800)]
fix set main gpu crash (llama/6339)
slaren [Wed, 27 Mar 2024 14:07:50 +0000 (15:07 +0100)]
ggml : fix bounds checking of zero size views (llama/6347)
Daniel Bevenius [Wed, 3 Apr 2024 20:57:20 +0000 (22:57 +0200)]
backend : fix typo in scheduler documentation (ggml/781)
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Sun, 7 Apr 2024 13:10:44 +0000 (16:10 +0300)]
extra : sync ggml-cuda folder
Slava Primenko [Thu, 4 Apr 2024 12:49:24 +0000 (14:49 +0200)]
ggml: bypass code incompatible with CUDA < 11.1 (#2020)
`cudaHostRegisterReadOnly` parameter was only introduced in CUDA 11.1
See this issue for more details:
https://github.com/ggerganov/whisper.cpp/issues/2007
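A sketch of the guard pattern, assuming the standard CUDART_VERSION macro: only pass cudaHostRegisterReadOnly when building against CUDA >= 11.1, since the flag does not exist in older toolkits.

#include <cuda_runtime.h>

static cudaError_t register_host_buffer(void * ptr, size_t size) {
    unsigned int flags = cudaHostRegisterPortable;
#if CUDART_VERSION >= 11010
    flags |= cudaHostRegisterReadOnly; // only available since CUDA 11.1
#endif
    return cudaHostRegister(ptr, size, flags);
}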
Przemysław Pawełczyk [Sat, 30 Mar 2024 07:20:20 +0000 (08:20 +0100)]
ci : add building in MSYS2 environments (Windows) (#1994)
Przemysław Pawełczyk [Fri, 29 Mar 2024 13:53:26 +0000 (14:53 +0100)]
build : use pkg-config for OpenBLAS (#1778)
* make : use pkg-config for finding CFLAGS & LDFLAGS needed by OpenBLAS
That way building on *nix like environments (including MSYS2 on Windows)
with WHISPER_OPENBLAS=1 works out of the box.
Fix handling of WHISPER_OPENBLAS, so that empty value or 0 won't be
misinterpreted by make as enabled. Mind that it's not intended to
detect CMake false constants (OFF NO FALSE N). make is not CMake.
By default OpenBLAS with 64-bit interface is used, but that can be
changed with `WHISPER_OPENBLAS_INTERFACE64=0` if 32-bit one is desired.
If OpenBLAS headers and library are respectively in include/ and lib/
subdirectories of given path, then you can specify it, e.g.
`OPENBLAS_PATH=/usr/local/openblas`, and this will take precedence over
any pkg-config file.
If there is no pkg-config file (.pc) for OpenBLAS and OPENBLAS_PATH is
empty, then headers are assumed to be in /usr/include/openblas and
library is assumed to be called 'openblas64' (or 'openblas' if
`WHISPER_OPENBLAS_INTERFACE64=0`). If different headers location should
be used, then it can be done, e.g.
`WHISPER_BLAS_CFLAGS=-I/usr/local/include/openblas`.
If different library should be used, it can be specified, e.g.
`WHISPER_BLAS_LIB=openblasp64` (pthreads version as seen on Fedora), or
you can provide LDFLAGS needed to link with OpenBLAS directly:
`WHISPER_BLAS_LDFLAGS="-L/usr/local/lib/openblas -lopenblas64"`.
Current solution is flexible enough to handle most cases out there
without needlessly hardcoding possible OpenBLAS installation details.
* cmake : fix how pkg-config is used for finding include dirs and libraries needed by OpenBLAS
That way building on *nix like environments (including MSYS2 on Windows)
with -DWHISPER_OPENBLAS=ON should work out of the box as long as you
have CMake 3.25 or newer.
Make OPENBLAS_PATH environment variable supported not only on Windows.
It sets OpenBLAS include dir to ${OPENBLAS_PATH}/include and library to
${WHISPER_BLAS_LIB} (name without prefixes and suffixes) in
${OPENBLAS_PATH}/lib and avoids further package finding.
By default OpenBLAS with 64-bit interface is used (equivalent to setting
`-DWHISPER_BLAS_LIB=openblas64`), but that can be changed with
`-DWHISPER_OPENBLAS_INTERFACE64=OFF` (equivalent to setting
`-DWHISPER_BLAS_LIB=openblas`) if 32-bit one is desired.
Turn on BLA_STATIC for FindBLAS only when WHISPER_STATIC is enabled.
BLA_STATIC may not work as expected for pkg-config based operation.
Get rid of supporting BLAS_HOME environment variable. If OPENBLAS_PATH
is insufficient in your case, there is no pkg-config file to rely on,
then you can manually specify include dir, e.g.
`-DBLAS_INCLUDE_DIRS=/usr/local/include/openblas`, and library, e.g.
`-DBLAS_LIBRARIES=/usr/local/lib/libopenblas.so`.
* make / cmake : use OpenBLAS with 32-bit interface by default.
OpenBLAS without INTERFACE64=1 or USE_64BITINT=1 seems to be more common.
* cmake : hardcode "lib" prefix for OpenBLAS lib filename (even on Windows)
* cmake : hardcode OpenBLAS library name when building in MSVC (Windows)
Most *nix like environments (including MSYS2 on Windows) have OpenBLAS
packages that allow coexistence of OpenBLAS builds with 32-bit and
64-bit interface (w/o and w/ OPENBLAS_USE64BITINT defined) and they
differ by not having or having "64" suffix in their library filenames.
That's not the case for OpenBLAS prebuilt libraries for Windows.
ulatekh [Thu, 28 Mar 2024 10:02:10 +0000 (03:02 -0700)]
main : add command-style grammar (#1998)
* Implemented command-style grammar in the main example.
Mostly just copied the relevant parts from the command example.
* main : code style
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Thu, 28 Mar 2024 09:59:48 +0000 (11:59 +0200)]
make : add grammar parser to common objects
Georgi Gerganov [Wed, 27 Mar 2024 16:55:10 +0000 (18:55 +0200)]
sync : ggml (#2001)
* sync : update scripts
* sync : ggml
* talk-llama : sync llama.cpp
* make : WHISPER_CUBLAS -> WHISPER_CUDA
* ci : try to fix sycl build
* talk-llama : fix make build
Georgi Gerganov [Mon, 25 Mar 2024 12:48:19 +0000 (14:48 +0200)]
whisper : improve handling of prompts (#1981)
* whisper : improve handling of prompts
* whisper : add whisper_token_count helper
Sanchit Gandhi [Thu, 21 Mar 2024 16:53:30 +0000 (22:23 +0530)]
whisper : improve support for distil-large-v3 (#1982)
Georgi Gerganov [Thu, 21 Mar 2024 05:40:09 +0000 (07:40 +0200)]
ruby : fix build (#1980)
Tiago Fassoni [Wed, 20 Mar 2024 16:45:15 +0000 (13:45 -0300)]
docker : libcuda.so.1 in PATH (#1966)
Mohammadreza Hendiani [Wed, 20 Mar 2024 16:42:11 +0000 (20:12 +0330)]
readme : add Fedora dependencies (#1970)
* README.md
fix documentation and added Fedora Linux dependencies for stream build
* fix documentation and added Fedora Linux dependencies for command build
* fix documentation and added Fedora Linux dependencies for talk build
* fix documentation and added Fedora Linux dependencies for talk-llama build
* restored mistakenly removed macOS documentation
denersc [Wed, 20 Mar 2024 16:25:26 +0000 (13:25 -0300)]
whisper : token-level timestamps with DTW (#1485)
* whisper.cpp: impl dtw algo
* WIP: producing and placing DTW timestamps on tokens
* Fix compile and assertion errors. Attempt to DTW timestamp with single_segment=false.
* Fix mistake causing incorrect alignment of dtw timestamps
* implement N_TOP_MOST and CUSTOM alignment heads setting
* whisper: fix typo on alignment heads enum
* Fix issues related to changes in whisper.cpp
* Fixed excessive memory use when using DTW timestamps. Other minor fixes to DTW timestamping function
* decoder: save cross QKs only if requested
* Calling median filter with ggml_map_custom1
* Reimpl aheads n_top_most and custom. Sanity checks on chosen aheads
* Copying cross QKs from decoder backend correctly
* dtw: cleanup
* Fix incorrect n_frames passed to dtw when near end of audio
* Fix aheads_masks_init for backend != CPU
* whisper : minor style
* main : add dtw (wip)
* whisper: fix invalid memory access in aheads_masks_init
* main : add dtw (cont)
* whisper : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
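For context, the core of the DTW timestamping above is a classic dynamic program over a cost matrix; a minimal sketch of that accumulation step (the real implementation derives the costs from median-filtered cross-attention weights and then backtracks the path):

#include <algorithm>
#include <limits>
#include <vector>

// cost is n x m (tokens x audio frames), row-major; returns the (n+1) x (m+1)
// accumulated-cost matrix; backtrack from D[n][m] to recover the alignment.
static std::vector<float> dtw_accumulate(const std::vector<float> & cost, int n, int m) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> D((n + 1) * (m + 1), INF);
    D[0] = 0.0f;
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= m; ++j) {
            const float best = std::min({ D[(i - 1) * (m + 1) + (j - 1)],
                                          D[(i - 1) * (m + 1) +  j     ],
                                          D[ i      * (m + 1) + (j - 1)] });
            D[i * (m + 1) + j] = cost[(i - 1) * m + (j - 1)] + best;
        }
    }
    return D;
}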
Jo Liss [Mon, 18 Mar 2024 15:53:33 +0000 (15:53 +0000)]
examples : rename --audio-context to --audio-ctx per help text (#1953)
Georgi Gerganov [Sat, 16 Mar 2024 15:30:55 +0000 (17:30 +0200)]
whisper : set outputs from conv graph (#1959)
slaren [Sat, 16 Mar 2024 14:47:14 +0000 (15:47 +0100)]
alloc : fix allocation data of pre-allocated leafs
Georgi Gerganov [Sat, 16 Mar 2024 15:15:21 +0000 (17:15 +0200)]
cmake : copy ggml-common.h to bin
Georgi Gerganov [Sat, 16 Mar 2024 14:26:35 +0000 (16:26 +0200)]
gitignore : .vimspector.json
Georgi Gerganov [Fri, 15 Mar 2024 12:21:59 +0000 (14:21 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Fri, 15 Mar 2024 12:12:19 +0000 (14:12 +0200)]
sync : ggml
slaren [Thu, 14 Mar 2024 15:45:27 +0000 (16:45 +0100)]
update examples and tests
Georgi Gerganov [Thu, 14 Mar 2024 15:16:45 +0000 (17:16 +0200)]
ggml : add ggml-common.h
Georgi Gerganov [Thu, 14 Mar 2024 10:38:37 +0000 (12:38 +0200)]
ggml : designate enum vals for integer types (llama/6050)
Georgi Gerganov [Thu, 14 Mar 2024 09:55:23 +0000 (11:55 +0200)]
metal : build metallib + fix embed path (llama/6015)
* metal : build metallib + fix embed path
ggml-ci
* metal : fix embed build + update library load logic
ggml-ci
* metal : fix embedded library build
ggml-ci
* ci : fix iOS builds to use embedded library
slaren [Wed, 13 Mar 2024 17:54:21 +0000 (18:54 +0100)]
llama : add pipeline parallelism support (llama/6017)
* llama : add pipeline parallelism support for batch processing with multiple CUDA GPUs
ggml-ci
* server : add -ub, --ubatch-size parameter
* fix server embedding test
* llama : fix Mamba inference for pipeline parallelism
Tested to work correctly with both `main` and `parallel` examples.
* llama : limit max batch size to n_batch
* add LLAMA_SCHED_MAX_COPIES to configure the number of input copies for pipeline parallelism
default increase to 4 (from 2)
changing this value may improve performance for some systems, but increases memory usage
* fix hip build
* fix sycl build (disable cpy_tensor_async)
* fix hip build
* llama : limit n_batch and n_ubatch to n_ctx during context creation
* llama : fix norm backend
* batched-bench : sync after decode
* swiftui : sync after decode
* ggml : allow ggml_get_rows to use multiple threads if they are available
* check n_ubatch >= n_tokens with non-causal attention
* llama : do not limit n_batch to n_ctx with non-causal attn
* server : construct batch with size of llama_n_batch
* ggml_backend_cpu_graph_compute : fix return value when alloc fails
* llama : better n_batch and n_ubatch comment
* fix merge
* small fix
* reduce default n_batch to 2048
---------
Co-authored-by: Francis Couture-Harpin <redacted>
Co-authored-by: Georgi Gerganov <redacted>
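A trivial sketch of what the -ub / --ubatch-size parameter introduced above controls, under the assumption that a logical batch of n_tokens is processed in micro-batches of at most n_ubatch tokens, which is the unit pipelined across GPUs:

#include <algorithm>
#include <cstdio>

static void process_in_ubatches(int n_tokens, int n_ubatch) {
    for (int i = 0; i < n_tokens; i += n_ubatch) {
        const int n = std::min(n_ubatch, n_tokens - i);
        std::printf("ubatch: tokens [%d, %d)\n", i, i + n); // placeholder for the real decode step
    }
}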
AidanBeltonS [Wed, 13 Mar 2024 13:17:54 +0000 (13:17 +0000)]
Update get version (llama/6025)
Georgi Gerganov [Tue, 12 Mar 2024 12:27:20 +0000 (14:27 +0200)]
ggml : reuse quantum structs across backends (llama/5943)
* ggml : reuse quant blocks across backends
ggml-ci
* ggml : define helper constants only for CUDA and SYCL
ggml-ci
* ggml : define helper quantum constants for SYCL
ggml-ci
Georgi Gerganov [Tue, 12 Mar 2024 11:49:55 +0000 (13:49 +0200)]
ggml : fix UB in IQ2_S and IQ3_S (llama/6012)
Georgi Gerganov [Tue, 12 Mar 2024 09:15:05 +0000 (11:15 +0200)]
sycl : update IQ1_S kernels (WIP - not working!) (llama/5995)
* sycl : try to fix after IQ1_S changes
* sycl : iq1s_grid -> iq1s_grid_gpu
* sycl : fix grid type
Kawrakow [Mon, 11 Mar 2024 15:53:15 +0000 (16:53 +0100)]
1.5 bit: we can do even better (llama/5999)
* iq1_s: we can do even better
Spent one of the 4 scale bits on the sign of a 0.125 shift.
I.e., quants are now -1 + delta, delta, 1 + delta, where delta
is +/- 0.125.
CUDA works, same performance as before.
PPL(LLaMA-v2-7B) is now 11.85!
* iq1_s: make scalar and AVX2 work with the new version
* iq1_s: make Neon work with new version.
~10% drop in performance, so will need some more work.
* iq1_s: make Metal work with new version
* iq1_s: very slightly faster dequantize on Metal
* iq1_s: fix dequantize on the CPU
---------
Co-authored-by: Iwan Kawrakow <redacted>
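A short sketch of the value mapping described above (an illustration, not the actual dequantization code): each weight takes one of {-1, 0, +1} plus a shared delta of +/- 0.125 paid for with the repurposed scale bit, i.e. values become {-1+d, d, 1+d}.

static inline float iq1s_value(int q /* -1, 0 or +1 */, bool shift_positive) {
    const float delta = shift_positive ? 0.125f : -0.125f;
    return (float) q + delta; // multiplied afterwards by the block scale
}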
Michael Podvitskiy [Mon, 11 Mar 2024 09:28:51 +0000 (10:28 +0100)]
ggml, ci : Windows ARM runner and build fixes (llama/5979)
* windows arm ci
* fix `error C2078: too many initializers` with ggml_vld1q_u32 macro for MSVC ARM64
* fix `warning C4146: unary minus operator applied to unsigned type, result still unsigned`
* fix `error C2065: '__fp16': undeclared identifier`
Kawrakow [Mon, 11 Mar 2024 06:51:49 +0000 (07:51 +0100)]
Better 1.5 bit quantization (llama/5971)
* Trying blocks of 16 for IQ1_S - seems slightly better
* iq1s_blocks16: Adjust scale fudge factor to 1.125
* iq1s_blocks16: going to blocks of 32
with 2048 lattice points, so same bpw.
This is even better than blocks of 16.
Should I try blocks of 64? But to keep the same
bpw, when I go to 4096 lattice points, I need to
remove blocks altogether and just have superblocks of
256 weights.
* iq1s_blocks16: Use 2*<x^2> as sigma2 in weight adjustment
* iq1s_blocks16: scalar and AVX2 dot products
* iq1s_blocks16: CUDA dot product
* iq1s_blocks16: Metal works, Neon does not
Metal works but TG is dog slow (35 t/s). PP is OKish (493 t/s).
Not seeing the bug in the Neon implementation for now.
* iq1s_blocks16: fixed Neon
* iq1s_blocks16: very slightly faster TG on Metal
Still pathetic at 37 t/s
* iq1s_blocks16: speedup Metal by packing codebook into uint32_t's
* Formatting
* iq1s_blocks16: uint32_t codebook is also better in CUDA
TG-128 is now 204 t/s up from 194 t/s.
PP-512 is 5890 t/s, so significantly better than other quants
* iq1s_blocks16: slightly faster Neon dot product
* iq1s_blocks16: faster AVX2 dot product
* iq1s_blocks16: adjust to ggml-common.h
---------
Co-authored-by: Iwan Kawrakow <redacted>
Abhilash Majumder [Mon, 11 Mar 2024 04:57:56 +0000 (10:27 +0530)]
Add q3_s and q1_s (llama/5886)
* Add q3_s and q1_s
* fix compilation
* fix build
* fix build
* fix build
* enable ops
* rm macro
* increase grid space
Georgi Gerganov [Sun, 10 Mar 2024 21:12:48 +0000 (23:12 +0200)]
metal : move mm_id indices to shared mem (llama/5982)
Georgi Gerganov [Sat, 9 Mar 2024 15:36:20 +0000 (17:36 +0200)]
ggml : fix unnecessary f32 -> f16 -> f32 casts (mmla) (llama/5951)
Georgi Gerganov [Sat, 9 Mar 2024 13:53:59 +0000 (15:53 +0200)]
ggml : remove old quantization functions (llama/5942)
* ggml : remove old quantization functions
ggml-ci
* ggml : simplify ggml_quantize_chunk
ggml-ci
* ggml : restrict correctness
ggml-ci
* ggml : remove hist data from the quantization API
ggml-ci
* tests : remove hist usage in test-backend-ops
ggml-ci
* vulkan : remove hist and fix typo
Georgi Gerganov [Sat, 9 Mar 2024 10:47:57 +0000 (12:47 +0200)]
ggml : add ggml-common.h to deduplicate shared code (llama/5940)
* ggml : add ggml-common.h to shared code
ggml-ci
* scripts : update sync scripts
* sycl : reuse quantum tables
ggml-ci
* ggml : minor
* ggml : minor
* sycl : try to fix build