git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Abhilash Majumder [Fri, 23 Feb 2024 07:22:24 +0000 (12:52 +0530)]
whisper : add SYCL support (#1863)
* add changes from llama upstream
* add sycl abstraction
* add sycl build
* update cmake
* add sycl build config
* fix bug
* fix bug
* refactor build
* fix bug
* update build
* call build
* use sycl header
* add examples
* add target
* fix typecast in quant.c
* readd fp16 and readme
* fix quant typecast
* add sample
* add readme
* remove cxx file check
Georgi Gerganov [Thu, 22 Feb 2024 21:30:53 +0000 (23:30 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Thu, 22 Feb 2024 21:25:38 +0000 (23:25 +0200)]
sync : ggml
Georgi Gerganov [Thu, 22 Feb 2024 21:21:39 +0000 (23:21 +0200)]
ggml : always define ggml_fp16_t as uint16_t (llama/5666)
* ggml : always define ggml_fp16_t as uint16_t
ggml-ci
* ggml : cont
ggml-ci
* ggml : cont
* ggml : cont
ggml-ci
* ggml : cont
ggml-ci
* cuda : no longer ggml headers last
ggml-ci
* ggml : fix q6_K FP16 -> FP32 conversion
ggml-ci
* ggml : more FP16 -> FP32 conversion fixes
ggml-ci
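For reference, a minimal self-contained sketch (not the actual ggml code) of what "always define ggml_fp16_t as uint16_t" means in practice: the half-precision value is stored as raw bits and decoded to FP32 explicitly. The helper name is illustrative.

    #include <stdint.h>
    #include <string.h>

    typedef uint16_t ggml_fp16_t;   // raw IEEE-754 binary16 bits; no arithmetic on this type directly

    static inline float fp16_bits_to_fp32(ggml_fp16_t h) {
        const uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
        const uint32_t exp  = (h >> 10) & 0x1Fu;
        const uint32_t man  =  h        & 0x3FFu;
        uint32_t bits;

        if (exp == 0x1Fu) {                      // Inf / NaN
            bits = sign | 0x7F800000u | (man << 13);
        } else if (exp != 0) {                   // normal number: rebias exponent (127 - 15 = 112)
            bits = sign | ((exp + 112u) << 23) | (man << 13);
        } else if (man != 0) {                   // subnormal: renormalize the mantissa
            uint32_t m = man, s = 0;
            while ((m & 0x400u) == 0) { m <<= 1; s++; }
            bits = sign | ((113u - s) << 23) | ((m & 0x3FFu) << 13);
        } else {                                 // signed zero
            bits = sign;
        }

        float f;
        memcpy(&f, &bits, sizeof(f));            // bit-level reinterpretation without UB
        return f;
    }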
Georgi Gerganov [Thu, 22 Feb 2024 18:20:34 +0000 (20:20 +0200)]
ci : fix whitespace
Georgi Gerganov [Thu, 22 Feb 2024 16:31:40 +0000 (18:31 +0200)]
ggml : 32-bit arm compat (#1891)
* ggml : 32-bit arm compat
* ggml : add ggml_vqtbl1q_s8 impl
* ggml : cont
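A sketch of how a 128-bit table lookup (vqtbl1q_s8, which only exists on AArch64) can be emulated on 32-bit ARM NEON with two vtbl2_s8 lookups; this illustrates the compat idea and is not necessarily the exact implementation added in this commit.

    #include <arm_neon.h>

    static inline int8x16_t ggml_vqtbl1q_s8_compat(int8x16_t table, uint8x16_t idx) {
        int8x8x2_t tab = { { vget_low_s8(table), vget_high_s8(table) } };
        int8x8_t lo = vtbl2_s8(tab, vreinterpret_s8_u8(vget_low_u8 (idx)));
        int8x8_t hi = vtbl2_s8(tab, vreinterpret_s8_u8(vget_high_u8(idx)));
        return vcombine_s8(lo, hi);   // out-of-range indices (>= 16) yield 0, matching vqtbl1q_s8
    }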
Georgi Gerganov [Thu, 22 Feb 2024 13:15:38 +0000 (15:15 +0200)]
sync : ggml
Georgi Gerganov [Wed, 21 Feb 2024 14:19:39 +0000 (16:19 +0200)]
sync : llama.cpp (ggml/0)
ggml-ci
Meng, Hengyu [Wed, 21 Feb 2024 09:52:06 +0000 (17:52 +0800)]
context : add name (llama/5624)
* [SYCL] context : add name
* name should start with SYCL*
AidanBeltonS [Tue, 20 Feb 2024 07:01:25 +0000 (07:01 +0000)]
Update ggml_sycl_op_mul_mat_vec_q (llama/5502)
* Update ggml_sycl_op_mul_mat_vec_q
* Apply suggestions from code review
Co-authored-by: Abhilash Majumder <redacted>
* revert suggestion on macro
* fix bug
* Add quant type GGML_TYPE_IQ1_S to unsupported
* fix format
---------
Co-authored-by: Abhilash Majumder <redacted>
0cc4m [Wed, 14 Feb 2024 19:57:17 +0000 (20:57 +0100)]
Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()
0cc4m [Sat, 10 Feb 2024 21:14:52 +0000 (22:14 +0100)]
Add check for VK_KHR_portability_enumeration for MoltenVK support
Mathijs de Bruin [Tue, 6 Feb 2024 14:39:22 +0000 (14:39 +0000)]
Add preprocessor checks for Apple devices.
Based on work by @rbourgeat in https://github.com/ggerganov/llama.cpp/pull/5322/files
Mathijs de Bruin [Sat, 3 Feb 2024 18:00:11 +0000 (18:00 +0000)]
Resolve ErrorIncompatibleDriver with Vulkan on MacOS.
Refs:
- https://chat.openai.com/share/7020ce72-65fc-45ec-b7be-9d9d798a5f3f
- https://github.com/SaschaWillems/Vulkan/issues/954
- https://github.com/haasn/libplacebo/issues/128
- https://github.com/KhronosGroup/Vulkan-Samples/issues/476
Mathijs de Bruin [Sat, 3 Feb 2024 17:56:46 +0000 (17:56 +0000)]
Allow for Vulkan build with Accelerate.
Closes #5304
slaren [Mon, 19 Feb 2024 22:40:26 +0000 (23:40 +0100)]
cuda : ignore peer access already enabled errors (llama/5597)
* cuda : ignore peer access already enabled errors
* fix hip
Siddharth Ramakrishnan [Wed, 21 Feb 2024 12:34:53 +0000 (04:34 -0800)]
ggml : compute forward no longer pass src tensors (ggml/729)
* refactored compute forward to not pass in the src tensors each time
* fix merge issues with flags
* missed one place in the last commit to fix the is_param / flags issue
* minor spacing fix
* fixed some variable assignments so all tests locally are passing
* new change after merge fix
---------
Co-authored-by: siddharthvader <redacted>
bssrdf [Tue, 20 Feb 2024 19:17:09 +0000 (14:17 -0500)]
ggml : fix conv_2d batch mode (ggml/737)
Co-authored-by: bssrdf <redacted>
st-gr [Thu, 22 Feb 2024 13:11:35 +0000 (05:11 -0800)]
openvino : fix convert-whisper-to-openvino.py (#1890)
Fix issue: Conversion from Whisper to OpenVino failed #1870
convert-whisper-to-openvino.py stopped working with OpenVINO version 2023.0.0-10926-b4452d56304-releases/2023/0.
Error was: TypeError: load(): incompatible function arguments. The following argument types are supported:
1. (self: openvino._pyopenvino.FrontEnd, path: object) -> ov::frontend::InputModel
Tested successfully with a large-v3 conversion.
Co-authored-by: Stefan Grundmann <redacted>
Davidson Francis [Thu, 22 Feb 2024 13:01:08 +0000 (10:01 -0300)]
main : fix file existence check in main.cpp (#1889)
In commit dda4b0e of PR #1872, I've introduced a check for the
existence of files before loading the model. However, I haven't
considered the case where whisper.cpp might read from stdin as well,
and in such cases, the checks should ignore the "-" argument as it
does not represent a regular file.
Additionally, this commit removes the usage of 'stat()' in favor of
the recently introduced function 'is_file_exist()' in common.cpp from
PR #1871.
Apologies for the bug introduced in the previous PR and any
inconvenience it may have caused.
Georgi Gerganov [Tue, 20 Feb 2024 10:09:57 +0000 (12:09 +0200)]
talk-llama : sync llama.cpp
LBlue [Tue, 20 Feb 2024 10:05:38 +0000 (18:05 +0800)]
make : fix CUBLAS link with WSL (#1878)
Georgi Gerganov [Mon, 19 Feb 2024 13:54:25 +0000 (15:54 +0200)]
sync : ggml
Georgi Gerganov [Mon, 19 Feb 2024 13:33:51 +0000 (15:33 +0200)]
ggml : resolve merge conflicts (ggml/0)
ggml-ci
Georgi Gerganov [Mon, 19 Feb 2024 13:27:37 +0000 (15:27 +0200)]
common : add IQ1_S (ggml/0)
ggml-ci
Georgi Gerganov [Mon, 19 Feb 2024 12:45:41 +0000 (14:45 +0200)]
ci : enable -Werror for CUDA builds (llama/5579)
* cmake : pass -Werror through -Xcompiler
ggml-ci
* make, cmake : enable CUDA errors on warnings
ggml-ci
slaren [Mon, 19 Feb 2024 08:04:45 +0000 (09:04 +0100)]
cuda, metal : fix nans in soft_max (llama/5574)
* cuda : fix nans in soft_max
* metal : fix nans in soft_max
---------
Co-authored-by: Georgi Gerganov <redacted>
bmwl [Mon, 19 Feb 2024 07:38:32 +0000 (23:38 -0800)]
ggml : android and old glibc NUMA incompatibility bugfixes (llama/5557)
* #ifdef out some code NUMA blocks for Android due to lack of support
* added some __ANDROID__ #ifdef gates around numa code and forced glibc prior to 2.29 to use a syscall for getcpu instead of the wrapper
* Changed gates on numa platform specific stuff to __gnu_linux__ to skip any platforms without glibc
* harmonizing #if defined blocks for numa code to __gnu_linux__ since that's the only model that's being followed anyways
---------
Co-authored-by: root <redacted>
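A sketch of the glibc fallback described in this entry: on glibc older than 2.29 there is no getcpu() wrapper, so the syscall is issued directly, and on Android / non-glibc platforms the NUMA path is compiled out. The helper name is illustrative, not the actual ggml symbol.

    #if defined(__gnu_linux__) && !defined(__ANDROID__)
    #include <sched.h>          // getcpu() wrapper (glibc >= 2.29)
    #include <unistd.h>         // syscall()
    #include <sys/syscall.h>    // SYS_getcpu
    #endif

    static int current_cpu_and_node(unsigned * cpu, unsigned * node) {
    #if defined(__gnu_linux__) && !defined(__ANDROID__)
    #   if defined(__GLIBC__) && (__GLIBC__ == 2 && __GLIBC_MINOR__ < 29)
        return (int) syscall(SYS_getcpu, cpu, node, NULL);   // no libc wrapper on old glibc
    #   else
        return getcpu(cpu, node);                            // wrapper available since glibc 2.29
    #   endif
    #else
        (void) cpu; (void) node;
        return -1;                                           // NUMA support disabled on this platform
    #endif
    }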
Georgi Gerganov [Sun, 18 Feb 2024 20:58:57 +0000 (22:58 +0200)]
ggml : restore vec dot stride arg names (llama/5453)
Georgi Gerganov [Sun, 18 Feb 2024 20:39:30 +0000 (22:39 +0200)]
ci : fix wikitext url + compile warnings (llama/5569)
ggml-ci
Georgi Gerganov [Sun, 18 Feb 2024 19:39:58 +0000 (21:39 +0200)]
metal : fix unused warnings (llama/0)
Herman Semenov [Sun, 18 Feb 2024 16:20:12 +0000 (16:20 +0000)]
ggml, common, examples, tests : fixed type arguments in printf (llama/5528)
Kawrakow [Sun, 18 Feb 2024 16:16:55 +0000 (18:16 +0200)]
1.5 bit quantization (llama/5453)
* iq1_s: WIP basics
* iq1_s: CUDA is working
* iq1_s: scalar CPU dot product
* iq1_s: WIP AVX2 dot product - something is not right
* Fix tests
* Fix shadow warnings
* Fix after merge with latest master
* iq1_s: AVX2 finally works
* iq1_s: ARM_NEON dot product. Works, but not very fast
* iq1_s: better grid
* iq1_s: use IQ2_XXS for attn_output
At a cost of 0.04 extra bpw this gives a big improvement in PPL.
* iq1_s: Metal basics
Dequantize works, but not dot product
* iq1_s: Metal works, but quite slow
As usual, Apple Silicon does not like the code I write.
* iq1_s: Tests
* iq1_s: slightly faster dot product
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Mon, 19 Feb 2024 13:18:09 +0000 (15:18 +0200)]
ggml : add ALiBi support for ggml_soft_max_ext (llama/5488)
Ananta Bastola [Sat, 17 Feb 2024 21:03:14 +0000 (16:03 -0500)]
ci : add an option to fail on compile warning (llama/3952)
* feat(ci): add an option to fail on compile warning
* Update CMakeLists.txt
* minor : fix compile warnings
ggml-ci
* ggml : fix unreachable code warnings
ggml-ci
* ci : disable fatal warnings for windows, ios and tvos
* ggml : fix strncpy warning
* ci : disable fatal warnings for MPI build
* ci : add fatal warnings to ggml-ci
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 16 Feb 2024 17:05:56 +0000 (19:05 +0200)]
cmake : fix VULKAN and ROCm builds (llama/5525)
* cmake : fix VULKAN and ROCm builds
* cmake : fix (cont)
* vulkan : fix compile warnings
ggml-ci
* cmake : fix
ggml-ci
* cmake : minor
ggml-ci
bmwl [Fri, 16 Feb 2024 09:31:07 +0000 (01:31 -0800)]
ggml : add numa options (llama/5377)
* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h
* Reverted Makefile
* Fixed include
* Removed sched.h from ggml.h, moved ggml_get_numa_affinity into ggml.c, removed trailing whitespace and fixed up a few inconsistent variables
* removed trailing whitespace
* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h
* Reverting Makefile
* Fixed a number of issues with the move from BOOL to ggml_numa_strategies. Added a note about mirror mode not being implemented yet
* Removing MIRROR_MODE code for this PR
* Removing last bit of MIRROR_MODE code for this PR
* Removing unneeded branch in server.cpp example and moving get_numa_affinity and making it static
* Fixed lingering init_llama_backend() bool calls in tests and examples
* Removed enum llama_numa_strategies
* Revert bad merge with dynatemp flags
* add missing enum ggml_numa_strategies declaration and revert sync problem with master
* add missing enum ggml_numa_strategies declaration
* fixed ggml_init_numa variable
* Update ggml.h
Co-authored-by: Jared Van Bortel <redacted>
* Update READMEs with info about numa flags, change INTERLEAVE strategy name to DISTRIBUTE everywhere, implement the improved distribution strategy from @rankaiyx, fix a spelling mistake and un-merge some bad merges
* split numa init out from llama_backend_init and created llama_numa_init. Updated all code paths and samples
* Fix up some boolean vs enum comparisons
* Added #ifdefs for non-Linux OS that don't have cpu_set_t datatype
* Update ggml.h
Align enum values
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
Remove whitespace
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
align parameters
Co-authored-by: Georgi Gerganov <redacted>
* Update examples/server/server.cpp
remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* Update common/common.cpp
Remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* unified ggml_numa_strategy enum and fixed text alignment in server.cpp example
* Update ggml.c
simplified return for platforms without NUMA support
Co-authored-by: Jared Van Bortel <redacted>
* removed redundant else from cli argument processing of --numa
* whitespace
---------
Co-authored-by: root <redacted>
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jared Van Bortel <redacted>
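A minimal sketch of the split initialization this entry describes, assuming the function and enum names mentioned in the bullets above (llama_numa_init taking a ggml_numa_strategy value); exact signatures may differ between versions.

    #include "llama.h"

    void init_backend_with_numa(bool want_numa) {
        llama_backend_init();                                      // no longer takes a bool numa flag
        llama_numa_init(want_numa ? GGML_NUMA_STRATEGY_DISTRIBUTE  // spread work across NUMA nodes
                                  : GGML_NUMA_STRATEGY_DISABLED);
    }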
slaren [Thu, 15 Feb 2024 15:49:01 +0000 (16:49 +0100)]
cuda : print message when initialization fails (llama/5512)
* cuda : print message when initialization fails
* use CUDA_NAME both times
Neuman Vong [Thu, 15 Feb 2024 06:11:15 +0000 (17:11 +1100)]
vulkan: Find optimal memory type but with fallback (llama/5381)
* @0cc4m feedback
* More feedback @0cc4m
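A generic sketch of the "optimal memory type with fallback" pattern named in the subject: first look for a memory type with all preferred flags, then fall back to the required flags only. This is the standard Vulkan idiom, not the actual ggml-vulkan code.

    #include <vulkan/vulkan.h>
    #include <stdint.h>

    static int32_t find_memory_type(const VkPhysicalDeviceMemoryProperties * props,
                                    uint32_t type_bits,
                                    VkMemoryPropertyFlags required,
                                    VkMemoryPropertyFlags preferred) {
        // first pass: required + preferred flags
        for (uint32_t i = 0; i < props->memoryTypeCount; ++i) {
            if ((type_bits & (1u << i)) &&
                (props->memoryTypes[i].propertyFlags & (required | preferred)) == (required | preferred)) {
                return (int32_t) i;
            }
        }
        // fallback: required flags only
        for (uint32_t i = 0; i < props->memoryTypeCount; ++i) {
            if ((type_bits & (1u << i)) &&
                (props->memoryTypes[i].propertyFlags & required) == required) {
                return (int32_t) i;
            }
        }
        return -1;   // no suitable memory type found
    }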
AT [Tue, 13 Feb 2024 21:44:25 +0000 (15:44 -0600)]
Early return for zero size calls to get_tensor. (llama/5482)
* Early return for zero size calls to get_tensor.
Signed-off-by: Adam Treat <redacted>
* Update ggml-kompute.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml-kompute.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Add an early return to the get/set tensor when the size is null.
Signed-off-by: Adam Treat <redacted>
* Early return after the assertions.
Signed-off-by: Adam Treat <redacted>
* Since we do the early return in the generic backend now no reason to do so here as well.
Signed-off-by: Adam Treat <redacted>
---------
Signed-off-by: Adam Treat <redacted>
Co-authored-by: Georgi Gerganov <redacted>
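A minimal illustration of the early-return pattern discussed above; the function name and the memcpy standing in for the device copy are placeholders, not the real backend code.

    #include <assert.h>
    #include <string.h>

    static void backend_get_tensor(const void * device_buf, size_t buf_size,
                                   void * dst, size_t offset, size_t size) {
        assert(offset + size <= buf_size);   // run the assertions first ...
        if (size == 0) {
            return;                          // ... then return early: nothing to copy, and the
                                             // device API may not accept zero-size transfers
        }
        memcpy(dst, (const char *) device_buf + offset, size);   // placeholder for the device->host copy
    }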
Kawrakow [Tue, 13 Feb 2024 07:07:57 +0000 (09:07 +0200)]
ggml-quants : fix compiler warnings (shadow variable) (llama/5472)
Co-authored-by: Iwan Kawrakow <redacted>
Abhilash Majumder [Mon, 12 Feb 2024 14:52:05 +0000 (20:22 +0530)]
ggml-sycl: Replace 3d ops with macro (llama/5458)
* use macro
* use macro
* fix format
Georgi Gerganov [Mon, 19 Feb 2024 12:44:46 +0000 (14:44 +0200)]
build : update CBLAS flags + fix unused var warning (#0)
Davidson Francis [Mon, 19 Feb 2024 08:51:26 +0000 (05:51 -0300)]
main : check if input files exist before proceeding (#1872)
Until the most recent commit (3d42463), the main.cpp sample file does
not check whether the input files exist or not. Consequently, the
model is loaded first before reporting whether there was a failure or
not when processing a file. In environments with HDD, this can take
about 50 seconds or more, depending on the loaded model.
This commit addresses this issue by checking in advance whether the
input files exist or not.
Felix [Mon, 19 Feb 2024 08:50:15 +0000 (09:50 +0100)]
examples : clean up common code (#1871)
move some utility functions into common.h
Jumper775 [Mon, 19 Feb 2024 02:19:47 +0000 (21:19 -0500)]
models : fix openvino setup info (#1874)
Georgi Gerganov [Tue, 13 Feb 2024 09:51:32 +0000 (11:51 +0200)]
models : add update py requirements
Georgi Gerganov [Mon, 12 Feb 2024 17:54:11 +0000 (19:54 +0200)]
swift : package no longer use ggml dependency (#1861)
* Revert "swift : update Package.swift to use ggml as package dependency (#1701)"
This reverts commit 993acb5d410cd8eaebaa3fc54d4b153e04bbefce.
* spm : add ggml.h
Georgi Gerganov [Mon, 12 Feb 2024 17:53:51 +0000 (19:53 +0200)]
whisper : fix external encoder (#1860)
Georgi Gerganov [Mon, 12 Feb 2024 17:07:56 +0000 (19:07 +0200)]
sync : ggml
slaren [Mon, 12 Feb 2024 17:07:14 +0000 (18:07 +0100)]
ggml-alloc : allocate all leafs as if they were inputs (ggml/731)
* ggml-alloc : allocate all leafs as if they were inputs
* ensure static leafs are allocated
* gpt-2-backend : remove unnecessary ggml_new_tensor
* update other gpt-2 examples to remove ggml_new_tensor calls in the graph
Georgi Gerganov [Mon, 12 Feb 2024 08:39:58 +0000 (10:39 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Mon, 12 Feb 2024 07:32:15 +0000 (09:32 +0200)]
sync : ggml
Georgi Gerganov [Mon, 12 Feb 2024 07:27:57 +0000 (09:27 +0200)]
ggml-backend : sync remnant
Johannes Gäßler [Sun, 11 Feb 2024 18:08:39 +0000 (19:08 +0100)]
CUDA: mul_mat_vec_q tiling, refactor mul mat logic (llama/5434)
* CUDA: mul_mat_vec_q tiling, refactor mul mat logic
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Sergio López [Sun, 11 Feb 2024 14:12:00 +0000 (15:12 +0100)]
vulkan: only use M-sized matmul on Apple GPUs (llama/5412)
* vulkan: refactor guess_matmul_pipeline for vendor
Refactor ggml_vk_guess_matmul_pipeline to simplify adding per-vendor
conditionals.
Signed-off-by: Sergio Lopez <redacted>
* vulkan: only use M-sized matmul on Apple GPUs
L-sized and S-sized matmuls are broken on Apple GPUs, force using
M-size with this vendor.
Signed-off-by: Sergio Lopez <redacted>
---------
Signed-off-by: Sergio Lopez <redacted>
Georgi Gerganov [Sun, 11 Feb 2024 13:33:01 +0000 (15:33 +0200)]
ggml : fix compile warnings (unused vars) (llama/4966)
snadampal [Sun, 11 Feb 2024 13:22:33 +0000 (07:22 -0600)]
ggml : add mmla kernels for quantized GEMM (llama/4966)
* ggml: aarch64: implement smmla kernel for q8_0_q8_0 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q8_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: aarch64: implement smmla kernel for q4_0_q8_0 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q4_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: aarch64: implement smmla kernel for q4_1_q8_1 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q4_1_q8_1 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: update unit tests for the new vec_dot interface
* llama.cpp: add MATMUL_INT8 capability to system_info
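A simplified sketch of the MMLA path this entry describes, guarded by the __ARM_FEATURE_MATMUL_INT8 macro mentioned above. It computes a 2x2 int32 tile from two int8 rows of A and two of B via the SMMLA instruction (vmmlaq_s32); the real q8_0/q4_0 kernels additionally handle block scales and data interleaving, which are omitted here.

    #include <arm_neon.h>
    #include <stdint.h>

    #if defined(__ARM_FEATURE_MATMUL_INT8)
    // C (2x2, row-major in c[4]) = A(2xK) * B(2xK)^T, K a multiple of 8
    static void gemm_2x2_i8mm(const int8_t * a0, const int8_t * a1,
                              const int8_t * b0, const int8_t * b1,
                              int k, int32_t * c) {
        int32x4_t acc = vdupq_n_s32(0);
        for (int i = 0; i < k; i += 8) {
            // pack two 8-element rows of A and of B into 128-bit registers
            int8x16_t va = vcombine_s8(vld1_s8(a0 + i), vld1_s8(a1 + i));
            int8x16_t vb = vcombine_s8(vld1_s8(b0 + i), vld1_s8(b1 + i));
            acc = vmmlaq_s32(acc, va, vb);   // 2x8 by 8x2 matrix multiply-accumulate
        }
        vst1q_s32(c, acc);                   // c = {c00, c01, c10, c11}
    }
    #endif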
Ian Bull [Sat, 10 Feb 2024 10:53:28 +0000 (02:53 -0800)]
metal : use autoreleasepool to avoid memory leaks (llama/5437)
There appears to be a known memory leak when using the
`MTLCommandBuffer`. It is suggested to use `@autoreleasepool` in
[1,2]
[1] https://developer.apple.com/forums/thread/662721
[2] https://forums.developer.apple.com/forums/thread/120931
This change-set wraps the `ggml_metal_graph_compute` in a
`@autoreleasepool`.
This commit addresses https://github.com/ggerganov/llama.cpp/issues/5436
slaren [Sun, 11 Feb 2024 12:37:58 +0000 (13:37 +0100)]
ggml-alloc : v3 (ggml/727)
* ggml-alloc v3
ggml-ci
* fix ci
ggml-ci
* whisper : check for backend buffer allocation failures
* whisper : avoid leaks when initialization fails
* cleanup
ggml-ci
* style fixes
ggml-ci
dscripka [Mon, 12 Feb 2024 07:19:07 +0000 (02:19 -0500)]
examples : added audio_ctx argument to main and server (#1857)
* added audio_ctx argument to main and server examples
* Better default value
Co-authored-by: Georgi Gerganov <redacted>
* better default value (again)
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
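A small sketch of what the new argument maps to on the API side, assuming the whisper_full_params field is named audio_ctx as it appears in whisper.h; the wrapper function here is illustrative.

    #include "whisper.h"

    struct whisper_full_params make_params(int audio_ctx) {
        struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
        wparams.audio_ctx = audio_ctx;   // 0 = use the model's full audio context;
                                         // smaller values trade accuracy for speed
        return wparams;
    }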
Didzis Gosko [Sun, 11 Feb 2024 14:41:41 +0000 (16:41 +0200)]
metal : option to embed MSL source into compiled binary (#1842)
* ggml : embed Metal library source (ggml-metal.metal) into binary
enable by setting WHISPER_EMBED_METAL_LIBRARY
* rename the build option
* rename the preprocessor directive
* generate Metal library embedding assembly on-fly during build process
Georgi Gerganov [Sun, 11 Feb 2024 14:39:12 +0000 (16:39 +0200)]
examples : initialize context params properly (#1852)
Georgi Gerganov [Sat, 10 Feb 2024 08:10:59 +0000 (10:10 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sat, 10 Feb 2024 07:56:47 +0000 (09:56 +0200)]
sync : ggml
Georgi Gerganov [Sat, 10 Feb 2024 07:50:24 +0000 (09:50 +0200)]
src : relocate new backend sources
Michael Podvitskiy [Fri, 9 Feb 2024 09:56:43 +0000 (10:56 +0100)]
ggml : fix `error C2078: too many initializers` for MSVC ARM64 (llama/5404)
Johannes Gäßler [Thu, 8 Feb 2024 20:56:40 +0000 (21:56 +0100)]
CUDA: more warps for mmvq on NVIDIA (llama/5394)
Johannes Gäßler [Wed, 7 Feb 2024 11:40:26 +0000 (12:40 +0100)]
CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (llama/5386)
0cc4m [Wed, 7 Feb 2024 06:54:50 +0000 (07:54 +0100)]
Basic Vulkan Multi-GPU implementation (llama/5321)
* Initial Vulkan multi-gpu implementation
Move most global variables into backend context
* Add names to backend device functions
* Add further missing cleanup code
* Reduce code duplication in tensor split layer assignment
* generalize LLAMA_SPLIT_LAYER for all backends, do not expose device count and memory in llama.h
* Only do device info print in the beginning and initialize one backend for cpu assist
Add missing cleanup code
* Rework backend memory management to make sure devices and buffers get properly allocated and freed
* Rename cpu assist free function
---------
Co-authored-by: slaren <redacted>
Johannes Gäßler [Tue, 6 Feb 2024 17:43:06 +0000 (18:43 +0100)]
CUDA: mul_mat_vec_q max. batch size 8 -> 4 (llama/5370)
Kawrakow [Tue, 6 Feb 2024 15:28:02 +0000 (17:28 +0200)]
Slight quantization improvement for Q4_K and Q5_K (llama/5361)
* Q4_K: slightly better quantization
* Q5_K: slightly better quantization
---------
Co-authored-by: Iwan Kawrakow <redacted>
Johannes Gäßler [Tue, 6 Feb 2024 13:44:06 +0000 (14:44 +0100)]
CUDA: mul_mat_vec_q for batch sizes > 1 (llama/5351)
Kawrakow [Mon, 5 Feb 2024 12:09:47 +0000 (14:09 +0200)]
ggml : make use of ggml-quants.h possible in C++ code (llama/5338)
* Make use of ggml-quants.h possible in C++ code
* One cannot possibly be defining static_assert in a C++ compilation
---------
Co-authored-by: Iwan Kawrakow <redacted>
Dr. Tom Murphy VII Ph.D [Mon, 5 Feb 2024 11:13:57 +0000 (06:13 -0500)]
ggml : avoid duplicating function calls using MIN/MAX macros (llama/5325)
* Avoid duplicating function calls when using MIN/MAX macros.
Since these copy "a" and "b" they ask the compiler to evaluate one of them twice. The compiler doesn't have a problem with removing the duplication in something like MAX(0, x + 2), but in some cases we're calling functions, and those calls just happen twice.
By explicitly evaluating at the expression we get smaller and faster code without duplicate calls. See ggml_rope_yarn_corr_dims in Compiler Explorer:
https://godbolt.org/z/Ee4KMrvKh
Code behaves exactly the same.
* Update ggml.c
---------
Co-authored-by: Georgi Gerganov <redacted>
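A self-contained illustration of the double-evaluation problem this commit addresses: with a textbook MAX macro, the function argument is evaluated twice, so evaluating it into a local first avoids the duplicate call.

    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    static int calls = 0;
    static int expensive(void) { calls++; return 42; }

    int main(void) {
        int m = MAX(0, expensive());          // expands to ((0) > (expensive()) ? (0) : (expensive()))
        printf("m=%d, expensive() called %d times\n", m, calls);   // 2 calls

        calls = 0;
        int v = expensive();                  // evaluate once into a local ...
        m = MAX(0, v);                        // ... so the macro only duplicates the variable
        printf("m=%d, expensive() called %d times\n", m, calls);   // 1 call
        return 0;
    }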
Kawrakow [Mon, 5 Feb 2024 08:46:06 +0000 (10:46 +0200)]
iq2_xxs: tune quantization (llama/5320)
We get slightly better PPL, and we cut quantization time nearly in half.
The trick is to first quantize without forcing points onto the E8-lattice.
We can then use a narrower search range around the block scale that we
got that way.
Co-authored-by: Iwan Kawrakow <redacted>
slaren [Thu, 1 Feb 2024 17:30:17 +0000 (18:30 +0100)]
cuda : fix LLAMA_CUDA_F16 (llama/5262)
Georgi Gerganov [Wed, 31 Jan 2024 13:35:41 +0000 (15:35 +0200)]
metal : add im2col F32 dst support (llama/5132)
JidongZhang-THU [Wed, 31 Jan 2024 13:10:15 +0000 (21:10 +0800)]
llava : add MobileVLM support (llama/5132)
* New Feature:
1. Sum_Rows:
fix cuda kernel overflow
fix block shape error when nrows too big
2. Im2Col:
Support Batch in cuda
Support f32 to f32 both in cpu && cuda
3. DepthWiseConv:
Support by Im2Col && MulMat
4. Pool_2d:
Support avg pooling in cuda
5. HardSigmoid:
Imp in cuda
6. HardSwish:
Imp in cuda
* fix tabs instead of spaces
* code clean
* CUDA POOL2D
* ADD POOL2D test case in test-backend-ops.cpp
* code clean
* fix pool2d_kernel
nits
* fix bug in pool2d kernel
* fix avg pooling, count_include_pad
nits
* test-backend-ops : add more pool_2d tests
* cuda : fix warnings and formatting
* ggml : check types in release builds too in pool_2d
* test-backend-ops : remove f16 pool_2d tests
* cuda : more style fixes
* Add assert in ggml_cuda_op_pool2d
* pool2d float padding fallback
* test-backend-ops : add dst_type to im2col
---------
Co-authored-by: slaren <redacted>
slaren [Wed, 31 Jan 2024 12:43:03 +0000 (13:43 +0100)]
ggml : limit n_threads to the max n_tasks (llama/5238)
Jared Van Bortel [Wed, 31 Jan 2024 00:04:37 +0000 (19:04 -0500)]
kompute : llama-bench support and ggml_cpu_has_kompute() (llama/5226)
Michael Podvitskiy [Fri, 9 Feb 2024 09:42:27 +0000 (10:42 +0100)]
ggml : add abort_callback for cpu backend (ggml/725)
* a way to use abort_callback with the cpu backend
* whisper update
Georgi Gerganov [Sat, 10 Feb 2024 07:55:19 +0000 (09:55 +0200)]
extra : update sync scripts
Valentin Gosu [Fri, 9 Feb 2024 15:42:41 +0000 (16:42 +0100)]
server : allow CORS request with authorization headers (#1850)
Whisper plugin in Obsidian requires an API key which is
then sent as an authorization header.
However, the presence of an authorization header requires
a CORS Preflight, so both the OPTIONS method and
the Access-Control-Allow-Headers: authorization must be
handled.
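A minimal sketch of handling this preflight, assuming a cpp-httplib based server like the whisper.cpp server example; the route and header values are illustrative, not the exact code in the change.

    #include "httplib.h"

    int main() {
        httplib::Server svr;

        // answer CORS preflight requests for any route
        svr.Options(R"(.*)", [](const httplib::Request &, httplib::Response & res) {
            res.set_header("Access-Control-Allow-Origin",  "*");
            res.set_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
            res.set_header("Access-Control-Allow-Headers", "authorization, content-type");
        });

        svr.Post("/inference", [](const httplib::Request &, httplib::Response & res) {
            res.set_header("Access-Control-Allow-Origin", "*");   // also needed on the real response
            res.set_content("{}", "application/json");
        });

        svr.listen("127.0.0.1", 8080);
        return 0;
    }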
Neuman Vong [Fri, 9 Feb 2024 15:39:05 +0000 (02:39 +1100)]
whisper.android : how to build with CLBlast (#1809)
* FetchContent
* OpenCL
* Documentation and make optional
* Specify GGML build options in build.gradle
* Use gradle properties
* @ggerganov
Co-authored-by: Georgi Gerganov <redacted>
* @gpokat
---------
Co-authored-by: Georgi Gerganov <redacted>
Didzis Gosko [Fri, 9 Feb 2024 15:27:47 +0000 (17:27 +0200)]
whisper : expose CUDA device setting in public API (#1840)
* Makefile : allow to override CUDA_ARCH_FLAG
* whisper : allow to select GPU (CUDA) device from public API
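A small sketch of the new public-API knob, assuming the context-params field added by this change is named gpu_device as in whisper.h; the wrapper function is illustrative.

    #include "whisper.h"

    struct whisper_context * init_on_device(const char * model_path, int device) {
        struct whisper_context_params cparams = whisper_context_default_params();
        cparams.use_gpu    = true;
        cparams.gpu_device = device;   // select which CUDA device to run on
        return whisper_init_from_file_with_params(model_path, cparams);
    }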
Didzis Gosko [Fri, 9 Feb 2024 15:26:29 +0000 (17:26 +0200)]
make : add macOS deployment target option (#1839)
Georgi Gerganov [Tue, 6 Feb 2024 17:56:12 +0000 (19:56 +0200)]
talk-llama : stream response (#1121)
Georgi Gerganov [Tue, 30 Jan 2024 19:30:26 +0000 (21:30 +0200)]
sync : ggml (#0)
Kawrakow [Tue, 30 Jan 2024 17:15:28 +0000 (19:15 +0200)]
ggml : fix IQ3_XXS on Metal (llama/5219)
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Tue, 30 Jan 2024 14:21:57 +0000 (16:21 +0200)]
sync : ggml (llama/0)
Kawrakow [Tue, 30 Jan 2024 13:15:07 +0000 (15:15 +0200)]
Faster AVX2 dot product for IQ2_XS (llama/5187)
* iq2xs: faster AVX2 dot product
* iq2xs: small AVX2 improvement
* Speed up computing sign bits in AVX2 iq2_xs dot product
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Peter Reid <redacted>
Kawrakow [Tue, 30 Jan 2024 13:14:12 +0000 (15:14 +0200)]
SOTA 3-bit quants (llama/5196)
* iq3_xxs: quantize/dequantize
RMSE seems a bit high-ish at about half-way between q2_K and
q3_K, so need to check more.
* iq3_xxs: CUDA dequantize works
* iq2_xxs: tuning quantization
* iq3_xxs: starting to look better
PPL on wiki.test.raw
LLaMA-v1-7B: 6.4218
LLaMA-v2-7B: 6.3560
Mistral-7B : 6.0717
This is better than Q3_K_XS, with a 5% reduction in quantized model
size.
* iq3_xxs: CUDA dot product
We have
PP-512: 5891 t/s
TG-128: 143.9 t/s
* iq3_xxs: scalar and AVX2 dot products
* iq3_xxs: ARM_NEON and Metal
Metal performance is decent, ARM_NEON is pathetic
* iq3_xxs: slightly better grid points
* Faster iq3_xxs and iq2_xs dot products on CUDA
* iq3_xxs: add some quant mix
* iq3_xxs: fix failing quantization test
Dot product still fails. Is this real?
* iq3_xxs: hopefully fix ROCm
* iq3_xxs: failing tests
This time the dot product accuracy did find an actual bug
in the AVX2 implementation.
* Add IQ3_XXS to test-backend-ops
---------
Co-authored-by: Iwan Kawrakow <redacted>
Paul Tsochantaris [Mon, 29 Jan 2024 22:19:29 +0000 (22:19 +0000)]
ggml alloc: Fix for null dereference on alloc failure (llama/5200)
* Fix for a null pointer dereference if a metal GGML buffer fails to be allocated
* Freeing the allocated buffers rather than the pointer in ggml-alloc.c
* Fixed the fix of the fix
Jared Van Bortel [Mon, 29 Jan 2024 20:50:50 +0000 (15:50 -0500)]
Nomic Vulkan backend (llama/4456)
Signed-off-by: Jared Van Bortel <redacted>
Co-authored-by: niansa <redacted>
Co-authored-by: Adam Treat <redacted>
Co-authored-by: Aaron Miller <redacted>
Co-authored-by: ToKiNoBug <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: slaren <redacted>
slaren [Mon, 29 Jan 2024 08:05:13 +0000 (09:05 +0100)]
ggml : add max buffer sizes to opencl and metal backends (llama/5181)
Paul Tsochantaris [Sun, 28 Jan 2024 19:50:16 +0000 (19:50 +0000)]
metal : free metal objects (llama/5161)
* Releasing MTLFunction references after Metal pipeline construction
* Keeping the `ggml_metal_kernel` structure
* Spacing fix
* Whitespace fix
Georgi Gerganov [Mon, 29 Jan 2024 19:08:18 +0000 (21:08 +0200)]
gguf : fix comparison (ggml/715)
ggml-ci
John Balis [Mon, 29 Jan 2024 12:37:33 +0000 (06:37 -0600)]
`ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686)
* added cuda float16->float32 upcasting to ggml_cuda_cpy
* added ability to copy 4d tensors with the cuda backend
* added tests for float16->float32 upcast and 4d tensor cuda copies
* added 4d copy test for float32->float16 copy
* applied patch suggested by @iamlemec
* simplify cpy tests
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Mon, 29 Jan 2024 12:00:10 +0000 (14:00 +0200)]
gguf : add input validation, prevent integer overflows (ggml/709)
* gguf : add input validation, prevent integer overflows
ggml-ci
* gguf : fix switch default case
* gguf : sanitize info->n_dims and info->type
ggml-ci
* gguf : assert GGUF_TYPE_SIZE access
ggml-ci
* ggml : assert mallocs are successful
ggml-ci
* gguf : prevent integer overflow
* gguf : sanitize tensor info
ggml-ci
* gguf : stricter limit on the number of items
ggml-ci
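A generic sketch of the kind of overflow-checked size computation this change introduces before allocating memory for GGUF data; the helper name is illustrative. Rejecting a wrapped-around n * size up front prevents a tiny allocation followed by out-of-bounds writes when tensor info is malicious.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // returns false instead of silently wrapping around when n * size overflows
    static bool checked_mul_size(size_t n, size_t size, size_t * out) {
        if (size != 0 && n > SIZE_MAX / size) {
            return false;
        }
        *out = n * size;
        return true;
    }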