git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Johannes Gäßler [Tue, 23 Jan 2024 12:31:56 +0000 (13:31 +0100)]
CUDA: more info when no device code (llama/5088)
Georgi Gerganov [Tue, 23 Jan 2024 12:12:57 +0000 (14:12 +0200)]
minor : clean-up some warnings and style (llama/5094)
* minor : clean-up some warnings and style
ggml-ci
* ggml : add comment
Reinforce-II [Mon, 22 Jan 2024 13:15:08 +0000 (21:15 +0800)]
ggml : parallelize FP32 conversion when using BLAS (llama/5045)
* allow the GGML_TASK_INIT phase to run multithreaded
* multithreaded dequantize in mul_mat when using a BLAS library (see the sketch below)
* minor fixes
* update outdated comment
* fix coding style
* simplify code
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
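The core idea is easy to sketch: during GGML_TASK_INIT each thread converts its own slice of rows to FP32, instead of one thread dequantizing the whole matrix before the single-threaded BLAS sgemm. A minimal sketch with hypothetical names, not the actual ggml internals:

    #include <stddef.h>
    #include <stdint.h>

    // Row-to-float conversion, matching the shape of ggml's dequantization
    // routines (the exact typedef in ggml may differ).
    typedef void (*to_float_t)(const void * x, float * y, int k);

    // Hypothetical helper: thread ith of nth converts its own slice of rows
    // during GGML_TASK_INIT, so the BLAS sgemm that follows starts sooner.
    static void dequantize_rows_mt(int ith, int nth,
                                   int64_t nrows, size_t row_size, int ne_per_row,
                                   const char * src, float * dst, to_float_t to_float) {
        const int64_t dr  = (nrows + nth - 1) / nth;              // rows per thread
        const int64_t ir0 = dr * ith;                             // first row of this thread
        const int64_t ir1 = ir0 + dr < nrows ? ir0 + dr : nrows;  // one past its last row

        for (int64_t ir = ir0; ir < ir1; ++ir) {
            to_float(src + ir * (int64_t) row_size, dst + ir * ne_per_row, ne_per_row);
        }
    }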
XiaotaoChen [Mon, 22 Jan 2024 13:09:35 +0000 (21:09 +0800)]
llava : MobileVLM support (llama/4954)
* MobileVLM native implementation
* delete the depthwise_conv_2d and permute_cpy related code, replace the two with existing functions, optimize the ldp definition, and support the LLAMA_PERF option for CMake
* move android script to example/llava directory
* Fix the editor config checks
---------
Co-authored-by: Chenxiaotao03 <redacted>
slaren [Sat, 20 Jan 2024 15:05:49 +0000 (16:05 +0100)]
llama : run all KQV ops on the CPU with no KV offload (llama/5049)
ggml-ci
Kylin [Sat, 20 Jan 2024 07:01:46 +0000 (15:01 +0800)]
cuda : fix compile error in jetson platform (llama/4975)
* cuda: fix compile error in jetson platform
* cuda: update comment in ggml-cuda.cu
* cuda: update ggml-cuda.cu comment
Judd [Fri, 26 Jan 2024 13:04:01 +0000 (21:04 +0800)]
ggml : check ggml_add src1 type (ggml/708)
Co-authored-by: Judd <redacted>
Michael Rienstra [Fri, 26 Jan 2024 15:39:54 +0000 (07:39 -0800)]
docs : make model options / model install methods clearer (#1806)
* Make models more "discoverable"
* Clean up code block language identifiers
* make 3 options clearer
* undo Prettier formatter change
* docs: `$` shell prompt, consistently
* docs: minor changes
trixirt [Mon, 22 Jan 2024 13:02:35 +0000 (05:02 -0800)]
cmake : make libwhisper.so position independent (#1792)
This is similar to how libllama.so is built.
Signed-off-by: Tom Rix <redacted>
Georgi Gerganov [Mon, 22 Jan 2024 12:51:42 +0000 (14:51 +0200)]
cmake : temporary remove VLA check (#1795)
Neuman Vong [Fri, 19 Jan 2024 14:17:38 +0000 (01:17 +1100)]
whisper.android : return output from benchmarks (#1785)
Benchmarks are failing because JNI expects a jstring and the benchmarks
are missing a return statement (i.e., returning null). The functions
actually build a jstring but don't return it, so this seems to have been
an oversight.
This patch returns the jstring and now the benchmarks run successfully.
Fixes #1783.
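The shape of the fix, as a simplified sketch (the JNI symbol name is illustrative; whisper_bench_memcpy_str is the bench helper from whisper.h):

    #include <jni.h>
    #include "whisper.h"

    // Simplified sketch; the real symbol in the whisper.android bindings is longer.
    // The function built a jstring but fell off the end, so JNI handed back null.
    JNIEXPORT jstring JNICALL
    Java_com_example_WhisperLib_benchMemcpy(JNIEnv * env, jclass clazz, jint n_threads) {
        const char * bench = whisper_bench_memcpy_str((int) n_threads);
        jstring result = (*env)->NewStringUTF(env, bench);
        return result; // the previously missing return statement
    }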
Ryan Hitchman [Thu, 18 Jan 2024 20:58:42 +0000 (13:58 -0700)]
server : implement "verbose_json" format with token details (#1781)
* examples/server: implement "verbose_json" format with token details.
This is intended to mirror the format of openai's Python
whisper.transcribe() return values.
* server: don't write WAV to a temporary file if not converting
* server: use std::lock_guard instead of manual lock/unlock
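The lock_guard change is the usual RAII idiom; a minimal sketch (the mutex name matches the server example, the handler is illustrative):

    #include <mutex>

    std::mutex whisper_mutex;

    void handle_inference_request() {
        // Before: whisper_mutex.lock(); ... whisper_mutex.unlock();
        // Any early return or exception in between leaked the lock.
        // The guard releases the mutex on every exit path.
        std::lock_guard<std::mutex> lock(whisper_mutex);
        // ... run transcription on the shared whisper context ...
    }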
Georgi Gerganov [Thu, 18 Jan 2024 09:03:13 +0000 (11:03 +0200)]
ggml : sync ggml-metal.m
Georgi Gerganov [Wed, 17 Jan 2024 19:23:33 +0000 (21:23 +0200)]
sync : llama.cpp
Georgi Gerganov [Wed, 17 Jan 2024 19:22:38 +0000 (21:22 +0200)]
sync : ggml
Georgi Gerganov [Wed, 17 Jan 2024 16:54:56 +0000 (18:54 +0200)]
ggml : add IQ2 to test-backend-ops + refactoring (llama/4990)
* ggml : add IQ2 to test-backend-ops + refactoring
ggml-ci
* cuda : update supports_op for IQ2
ggml-ci
* ci : enable LLAMA_CUBLAS=1 for CUDA nodes
ggml-ci
* cuda : fix out-of-bounds-access in `mul_mat_vec_q`
ggml-ci
* tests : avoid creating RNGs for each Q tensor
ggml-ci
* tests : avoid creating RNGs for each tensor
ggml-ci
Georgi Gerganov [Wed, 17 Jan 2024 16:46:30 +0000 (18:46 +0200)]
imatrix : offload to GPU support (llama/4957)
* backend : add eval callback
ggml-ci
* backend : group nodes in a single compute when the user doesn't need them
* backend : clean-up the implementation
ggml-ci
* simple : do not perform tensor data copy if not needed
* simple : fix
* imatrix : offload to GPU support
* imatrix : fix ggml_mul_mat_id handling
ggml-ci
* ci : add imatrix test
ggml-ci
* ci : rearrange output
ggml-ci
Georgi Gerganov [Wed, 17 Jan 2024 16:39:41 +0000 (18:39 +0200)]
backend : add eval callback (llama/4935)
* backend : add eval callback (see the interface sketch after this list)
ggml-ci
* backend : group nodes in a single compute when the user doesn't need them
* backend : clean-up the implementation
ggml-ci
* simple : do not perform tensor data copy if not needed
* simple : fix
* simple : no need for ggml_is_contiguous + fix bool parse
* llama : fix callback placement in llama_context_params
* backend : avoid double-ask callback calls
* simple : restore examples, imatrix will serve as a demo
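The callback protocol is two-phase: the scheduler first asks whether the user wants to observe a node, then calls back again once it has been computed. A sketch of the interface as introduced here (the exact declarations may have evolved since):

    // Called by ggml_backend_sched during graph evaluation.
    // ask == true : "do you want to observe this node?" -> return true/false
    // ask == false: the node was computed and t->data can be inspected
    typedef bool (*ggml_backend_sched_eval_callback)(struct ggml_tensor * t, bool ask, void * user_data);

    // Install the callback on the scheduler; nodes the user does not ask for
    // are grouped into a single compute so the graph is not split needlessly.
    void ggml_backend_sched_set_eval_callback(ggml_backend_sched_t sched,
                                              ggml_backend_sched_eval_callback callback,
                                              void * user_data);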
Georgi Gerganov [Wed, 17 Jan 2024 16:38:39 +0000 (18:38 +0200)]
metal : create autorelease pool during library build (llama/4970)
* metal : create autorelease pool during library build
ggml-ci
* test : simplify
ggml-ci
Kawrakow [Tue, 16 Jan 2024 17:51:26 +0000 (19:51 +0200)]
ggml : importance matrix support for legacy quants (llama/4969)
* imatrix: adding support for legacy quants
* imatrix: guard Q4_0/Q5_0 against ffn_down craziness
---------
Co-authored-by: Iwan Kawrakow <redacted>
Alex Azarov [Tue, 16 Jan 2024 13:33:02 +0000 (14:33 +0100)]
metal : log `recommendedMaxWorkingSetSize` on iOS 16+ (llama/4936)
* metal: Log `recommendedMaxWorkingSetSize` on iOS 16+
* Only log on iOS and macOS, ignoring tvOS and other platforms
* Check for Xcode version before using recommendedMaxWorkingSetSize
---------
Co-authored-by: Georgi Gerganov <redacted>
Justine Tunney [Tue, 16 Jan 2024 11:16:33 +0000 (03:16 -0800)]
ggml : introduce GGML_CALL function annotation (llama/4850)
This change makes it possible to build ggml-cuda.cu and ggml-metal.m as
independent dynamic shared objects that may be conditionally linked at
runtime in a multiplatform binary. It introduces a GGML_CALL annotation
that documents which functions have a cyclic call relationship between
the application code and the GPU modules.
This change does nothing unless the build defines -DGGML_MULTIPLATFORM,
which causes back-references and function pointers to conform to the MS
ABI, which is supported by NVCC, ROCm, XCode, GCC, and Clang across
platforms.
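The annotation itself is a no-op unless the opt-in define is set; roughly:

    #ifdef GGML_MULTIPLATFORM
    #    if defined(_WIN32)
    #        define GGML_CALL                               // MSVC already uses the MS ABI
    #    else
    #        define GGML_CALL __attribute__((__ms_abi__))   // force the MS ABI elsewhere
    #    endif
    #else
    #    define GGML_CALL                                   // default build: no-op
    #endif

Functions crossing the application/GPU-module boundary are then declared with the annotation in front of the backend interface.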
Georgi Gerganov [Mon, 15 Jan 2024 11:27:00 +0000 (13:27 +0200)]
cuda : fix dequantize kernel names (llama/4938)
Kawrakow [Mon, 15 Jan 2024 05:48:06 +0000 (07:48 +0200)]
CUDA: faster dequantize kernels for Q4_0 and Q4_1 (llama/4938)
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Sun, 14 Jan 2024 14:21:12 +0000 (16:21 +0200)]
Add ability to use importance matrix for all k-quants (llama/4930)
Co-authored-by: Iwan Kawrakow <redacted>
Benjamin Heiniger [Tue, 16 Jan 2024 13:52:01 +0000 (14:52 +0100)]
talk-llama : optional wake-up command and audio confirmation (#1765)
* talk-llama: add optional wake-word detection from command
* talk-llama: add optional audio confirmation before generating answer
* talk-llama: fix small formatting issue in output
* talk-llama.cpp: fix Windows build
Przemysław Pawełczyk [Mon, 15 Jan 2024 13:48:13 +0000 (14:48 +0100)]
server : fix building and simplify lib deps on Windows (#1772)
* make : fix server example building on MSYS2 environments (Windows)
It had not been working since commit
eff3570f78742dfd56024328ed93d4f442434280,
when the server example was introduced.
* cmake : simplify server example lib deps on Windows
The server uses httplib::Server, not httplib::SSLServer, so there is no
need to mention cryptographic libraries in target_link_libraries;
Winsock (ws2_32) suffices here.
Also use plain library names, as we do in other places.
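A sketch of the resulting CMake link line (target name assumed):

    # httplib::Server has no TLS, so on Windows plain Winsock is enough;
    # no OpenSSL/Crypt32 mention is needed.
    if (WIN32)
        target_link_libraries(server PRIVATE ws2_32)
    endif()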
Georgi Gerganov [Sun, 14 Jan 2024 16:08:20 +0000 (18:08 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sun, 14 Jan 2024 09:06:28 +0000 (11:06 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sun, 14 Jan 2024 08:55:18 +0000 (10:55 +0200)]
sync : ggml
Alex Azarov [Sun, 14 Jan 2024 08:44:39 +0000 (09:44 +0100)]
metal : correctly set SIMD support flags on iOS (llama/4923)
* Correctly set support_simdgroup_reduction and support_simdgroup_mm on iPhone/iPad
* log a little bit more info on iOS
Kawrakow [Sun, 14 Jan 2024 07:45:56 +0000 (09:45 +0200)]
2-bit quantizations (llama/4897)
* imatrix: load
* imatrix: WIP
* imatrix: Add Q2_K quantization
* imatrix: also guard against Q2_K_S quantization without importance matrix
* imatrix: guard even more against low-bit quantization misuse
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Sun, 14 Jan 2024 08:53:19 +0000 (10:53 +0200)]
scripts : sync-ggml-am.sh add option to skip commits
Georgi Gerganov [Sat, 13 Jan 2024 22:13:17 +0000 (00:13 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Sat, 13 Jan 2024 22:12:17 +0000 (00:12 +0200)]
sync : ggml
Georgi Gerganov [Sat, 13 Jan 2024 22:09:26 +0000 (00:09 +0200)]
examples : adapt to metal API
Johannes Gäßler [Sat, 13 Jan 2024 20:41:37 +0000 (21:41 +0100)]
ggml: cache sin/cos for RoPE (llama/4908)
Georgi Gerganov [Sat, 13 Jan 2024 18:45:45 +0000 (20:45 +0200)]
metal : remove old API (llama/4919)
ggml-ci
Georgi Gerganov [Sat, 13 Jan 2024 16:46:37 +0000 (18:46 +0200)]
metal : disable log for loaded kernels (llama/4794)
texmex76 [Sat, 13 Jan 2024 16:06:20 +0000 (17:06 +0100)]
gguf : fix potential infinite for-loop (llama/4600)
Co-authored-by: Bernhard Gstrein <redacted>
Georgi Gerganov [Sat, 13 Jan 2024 16:03:45 +0000 (18:03 +0200)]
metal : refactor kernel loading code (llama/4794)
* metal : detect more GPU families
* metal : refactor kernel loading
* metal : set kernel family requirements
* metal : fix kernel init + fix compile options
* metal : take into account simdgroup reduction support
* metal : print only skipped kernels
* metal : fix check for simdgroup reduction support
* metal : check for Metal 3
* metal : free allocations
* metal : normalize encoder:setComputePipelineStatus calls
ggml-ci
* metal : fix Metal3 family check
ggml-ci
* metal : check for simdgroup matrix mul. feature
ggml-ci
Johannes Gäßler [Fri, 12 Jan 2024 19:38:54 +0000 (20:38 +0100)]
CUDA: faster q8_0 -> f16 dequantization (llama/4895)
RhinoDevel [Sat, 13 Jan 2024 18:51:35 +0000 (19:51 +0100)]
talk-llama : add optional CLI arg to set the bot name (#1764)
james wolf [Sat, 13 Jan 2024 17:37:18 +0000 (12:37 -0500)]
examples : add python example for transcription (#1744)
* rebase and add simple python interface
* moved python files to examples/python
Georgi Gerganov [Sat, 13 Jan 2024 15:47:40 +0000 (17:47 +0200)]
whisper : load the model into multiple buffers of max size 1GB (#1763)
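A minimal sketch of the strategy, not the actual loader code: cap each buffer at 1 GiB and keep allocating until the weights fit, which sidesteps per-allocation limits on some backends.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative only: split one large weight allocation into <= 1 GiB chunks.
    static std::vector<std::vector<uint8_t>> alloc_model_buffers(size_t total_size) {
        constexpr size_t MAX_BUF = 1024ull * 1024 * 1024; // 1 GiB cap per buffer
        std::vector<std::vector<uint8_t>> bufs;
        while (total_size > 0) {
            const size_t sz = std::min(total_size, MAX_BUF);
            bufs.emplace_back(sz); // one buffer of at most 1 GiB
            total_size -= sz;
        }
        return bufs;
    }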
Georgi Gerganov [Fri, 12 Jan 2024 20:04:51 +0000 (22:04 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Fri, 12 Jan 2024 19:56:50 +0000 (21:56 +0200)]
sync : ggml
slaren [Fri, 12 Jan 2024 19:38:34 +0000 (20:38 +0100)]
backend_sched : fix assignments
ggml-ci
slaren [Fri, 12 Jan 2024 19:07:38 +0000 (20:07 +0100)]
llama : ggml-backend integration (llama/4766)
* llama : ggml-backend integration
* ggml-backend : add names to buffers
* fix unmap after loading
* batched-bench : add tensor_split param
* llama : check for null tensor_split
* ggml-backend : increase GGML_MAX_BACKENDS
* improve graph splitting, partial fix for --no-kv-offload
* cuda : add ggml-backend split buffer support
* cuda : do not create buffer types for devices that don't exist (fixes usage without CUDA devices available)
* ggml : fix null backend dereference (llama/4807)
* ggml : fix null backend dereference
* ggml : also check ggml_backend_is_cpu
* test-backend-ops : check buffer allocation failures
* llama : add cparam (split_mode) and command-line argument (--split-mode, -sm) to configure the split mode (none, layer, or row); see the enum sketch below
* ggml : fix mul_mat_id work size
* llama : rewrite session kv load/set without graphs
* minor
* llama : only initialize used backends, free backends on context free
* llama : abort ctx if cuda backend init fails
* llama : rewrite lora with ggml-backend and compute on CPU
ggml-ci
* llama : only map to a backend buffer the region of the file mapping containing the tensors used in the buffer
* opencl : add ggml-backend buffer type
* cuda : only use batched_cublas with batched mat muls (fixes fp16 tg perf)
* llama : on Metal, by default offload the full model
ggml-ci
* metal : page align the data ptr (llama/4854)
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <redacted>
* cuda : fix split buffer free
* address review comments
* llama-bench : add split-mode parameter
* fix whitespace
* opencl : fix double initialization
* server : add --split-mode parameter
* use async copy and compute to improve multi-gpu performance
ggml-ci
* use async memcpys to copy the graph outputs to the CPU
* fix opencl
* use a host buffer for the cpu compute buffer for faster copies to the gpu
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Johannes Gäßler <redacted>
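The split-mode parameter referenced above maps a CLI flag to an enum; a sketch of the shape it took (the names may have been renamed since):

    // How to distribute the model across multiple GPUs (sketch of llama/4766).
    enum llama_split_mode {
        LLAMA_SPLIT_NONE  = 0, // single GPU
        LLAMA_SPLIT_LAYER = 1, // assign whole layers (and KV) to different GPUs
        LLAMA_SPLIT_ROW   = 2, // split individual tensors by rows across GPUs
    };

Per the bullets above, row mode relies on the new CUDA split buffer type, while layer mode remains the ordinary per-device placement.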
Johannes Gäßler [Fri, 12 Jan 2024 11:30:41 +0000 (12:30 +0100)]
CUDA: fix softmax compile for old CUDA versions (llama/4862)
Kawrakow [Fri, 12 Jan 2024 05:59:57 +0000 (06:59 +0100)]
Importance Matrix calculation (llama/4861)
* imatrix: 1st version
* imatrix: WIP
* Cleanup
* Update examples/imatrix/imatrix.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Sơn Phan Trung [Fri, 12 Jan 2024 12:11:04 +0000 (19:11 +0700)]
models : make all scripts POSIX-compliant (#1725)
* download-coreml-model: make it POSIX-compliant
* download-ggml-model: posix compliant (2nd)
* minor edit
* forgot to add newline
* generate-coreml-interface: far more straightforward
* generate-coreml-model: done with the posix thingy
* typo
* Update download-ggml-model.sh
* fix
* fix typo
* another fix
* Update download-coreml-model.sh
* Update download-ggml-model.sh
* Update download-coreml-model.sh
Georgi Gerganov [Fri, 12 Jan 2024 12:02:30 +0000 (14:02 +0200)]
ggml : fix 32-bit ARM compat for IQ2_XS (#1758)
* ggml : fix 32-bit ARM compat
* ggml : fix fix
* ggml : fix fix fix
Boris Bliznioukov [Fri, 12 Jan 2024 11:44:50 +0000 (14:44 +0300)]
go : add SetInitialPrompt method to bindings (#1753)
George Hindle [Fri, 12 Jan 2024 11:42:52 +0000 (11:42 +0000)]
server : add more parameters to server api (#1754)
* feat(server): add more parameters to server api
* fix(server): reset params to original parsed values for each request
Georgi Gerganov [Fri, 12 Jan 2024 11:37:38 +0000 (13:37 +0200)]
whisper : fix segment length with params.no_timestamps == true
George Hindle [Fri, 12 Jan 2024 11:24:38 +0000 (11:24 +0000)]
params : don't compute timestamps when not printing them (#1755)
Georgi Gerganov [Thu, 11 Jan 2024 20:10:10 +0000 (22:10 +0200)]
talk-llama : sync llama.cpp
Georgi Gerganov [Thu, 11 Jan 2024 20:00:12 +0000 (22:00 +0200)]
swift : remove local ggml.h reference
Georgi Gerganov [Thu, 11 Jan 2024 19:57:40 +0000 (21:57 +0200)]
swift : track ggml release branch
Georgi Gerganov [Thu, 11 Jan 2024 19:54:17 +0000 (21:54 +0200)]
sync : ggml
Georgi Gerganov [Thu, 11 Jan 2024 19:49:13 +0000 (21:49 +0200)]
sync : llama.cpp
Kawrakow [Thu, 11 Jan 2024 19:39:39 +0000 (20:39 +0100)]
ggml : SOTA 2-bit quants (add IQ2_XS) (llama/4856)
* iq2_xs: basics
* iq2_xs: this should have been in the basics
* iq2_xs: CUDA and scalar CPU works
* iq2_xs: WIP Metal
* iq2_xs: Metal now works
* iq2_xs: working, but dog slow, ARM_NEON dot product
* iq2_xs: better ARM_NEON dot product
We are now at 19.5 t/s for TG-128 and 61 t/s for PP-512 when
running on the CPU.
* iq2_xs: AVX2 dot product - 19.5 t/s
* iq2_xs: faster AVX2 dot product
21.4 t/s for TG-128, 59.2 t/s for PP-512.
The latter is 2x faster than the previous version.
* iq2_xs: had forgotten to delete iq2-data.h
* Add llama enum for IQ2_XS
---------
Co-authored-by: Iwan Kawrakow <redacted>
Paul Tsochantaris [Thu, 11 Jan 2024 14:31:52 +0000 (14:31 +0000)]
metal : put encoder debug group behind a define (llama/4873)
Georgi Gerganov [Tue, 9 Jan 2024 17:37:08 +0000 (19:37 +0200)]
metal : improve dequantize precision to match CPU (llama/4836)
ggml-ci
Georgi Gerganov [Tue, 9 Jan 2024 08:42:06 +0000 (10:42 +0200)]
ggml : fix vld1q_s8_x4 32-bit compat (llama/4828)
* ggml : fix vld1q_s8_x4 32-bit compat
ggml-ci
* ggml : fix 32-bit ARM compat (cont)
ggml-ci
Johannes Gäßler [Tue, 9 Jan 2024 07:58:55 +0000 (08:58 +0100)]
CUDA: faster softmax via shared memory + fp16 math (llama/4742)
Georgi Gerganov [Thu, 11 Jan 2024 07:34:59 +0000 (09:34 +0200)]
metal : fix deprecation warning (ggml/690)
Timothy Cronin [Thu, 11 Jan 2024 07:27:48 +0000 (02:27 -0500)]
ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693)
Jack Mousseau [Wed, 10 Jan 2024 14:19:19 +0000 (06:19 -0800)]
metal : wrap each operation in debug group (ggml/690)
leejet [Wed, 10 Jan 2024 13:13:42 +0000 (21:13 +0800)]
ggml : change GGML_MAX_NAME at compile time (ggml/682)
* change GGML_MAX_NAME to 128
* allow controlling the value of GGML_MAX_NAME through external macro definitions
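The second bullet is the usual guarded-define pattern, so the limit can be overridden from the build system without editing the header:

    // in ggml.h: only set the default when the build did not define it already
    #ifndef GGML_MAX_NAME
    #define GGML_MAX_NAME 128
    #endif

Compiling with, e.g., -DGGML_MAX_NAME=256 then takes precedence.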
Halalaluyafail3 [Tue, 9 Jan 2024 16:16:37 +0000 (11:16 -0500)]
Fix execlp call (ggml/689)
NULL can be an integer constant expression with the value zero; in that case the behavior would be undefined, because an incorrect type is passed to the variadic arguments.
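Concretely (the command and arguments are illustrative; the cast is the fix):

    #include <unistd.h>

    int run_speak(const char * text) {
        // execlp is variadic: the terminating sentinel must really be a char *.
        // If NULL expands to a plain integer 0, the callee reads a pointer where
        // an int was passed -- undefined behavior where their sizes differ.
        return execlp("speak", "speak", text, (char *) NULL);
    }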
Kawrakow [Mon, 8 Jan 2024 15:02:32 +0000 (16:02 +0100)]
SOTA 2-bit quants (llama/4773)
* iq2_xxs: basics
* iq2_xxs: scalar and AVX2 dot products
Needed to change Q8_K to have quants in the -127...127 range,
else the IQ2_XXS AVX implementation becomes very awkward.
The alternative would have been to use Q8_0 instead. Perhaps
I'll change it later; for now this is what we have.
* iq2_xxs: ARM_NEON dot product
Somehow strangely slow (112 ms/token).
* iq2_xxs: WIP Metal
Dequantize works, something is still wrong with the
dot product.
* iq2_xxs: Metal dot product now works
We have
PP-512 = 475 t/s
TG-128 = 47.3 t/s
Not the greatest performance, but not complete garbage either.
* iq2_xxs: slightly faster dot product
TG-128 is now 48.4 t/s
* iq2_xxs: slightly faster dot product
TG-128 is now 50.9 t/s
* iq2_xxs: even faster Metal dot product
TG-128 is now 54.1 t/s.
Strangely enough, putting the signs lookup table
into shared memory has a bigger impact than the
grid values being in shared memory.
* iq2_xxs: dequantize CUDA kernel - fix conflict with master
* iq2_xxs: quantized CUDA dot product (MMVQ)
We get TG-128 = 153.1 t/s
* iq2_xxs: slightly faster CUDA dot product
TG-128 is now at 155.1 t/s.
* iq2_xxs: add to llama ftype enum
* iq2_xxs: fix MoE on Metal
* Fix missing MMQ ops when on hipBLAS
I had put the ggml_supports_mmq call in the wrong place.
* Fix bug in quantize_row_iq2_xxs
The 0.25f factor was missing.
Great detective work by @ggerganov!
* Fixing tests
* PR suggestion
---------
Co-authored-by: Iwan Kawrakow <redacted>
Johannes Gäßler [Sun, 7 Jan 2024 16:24:08 +0000 (17:24 +0100)]
CUDA: fixed redundant value dequantization (llama/4809)
Konstantin Zhuravlyov [Sun, 7 Jan 2024 06:52:42 +0000 (01:52 -0500)]
ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (llama/4787)
Georgi Gerganov [Fri, 5 Jan 2024 13:18:21 +0000 (15:18 +0200)]
ggml : do not sched_yield when calling BLAS (llama/4761)
* ggml : do not sched_yield when calling BLAS
ggml-ci
* ggml : fix do_yield logic
ggml-ci
* ggml : simplify do_yield logic
ggml-ci
Georgi Gerganov [Thu, 4 Jan 2024 08:12:26 +0000 (10:12 +0200)]
ggml : include stdlib.h before intrin.h (llama/4736)
Alexandru Mariuti [Wed, 10 Jan 2024 16:12:06 +0000 (17:12 +0100)]
swift : checkout ggml commit instead of branch (#1750)
RhinoDevel [Wed, 10 Jan 2024 14:15:28 +0000 (15:15 +0100)]
talk-llama : add optional Piper TTS support (#1749)
Add a commented-out command to optionally use Piper (https://github.com/rhasspy/piper) as the text-to-speech solution for the talk-llama example. Piper voices sound almost like real people, which is a big improvement over something like espeak.
Emmanuel Schmidbauer [Mon, 8 Jan 2024 22:39:51 +0000 (17:39 -0500)]
server : add request path option (#1741)
Georgi Gerganov [Mon, 8 Jan 2024 14:41:28 +0000 (16:41 +0200)]
main : add cli option to disable system prints (#1740)
Georgi Gerganov [Sun, 7 Jan 2024 11:35:14 +0000 (13:35 +0200)]
server : fix server temperature + add temperature_inc (#1729)
* server : fix server temperature + add temperature_inc
* server : change dashes to underscores in parameter names
Georgi Gerganov [Sat, 6 Jan 2024 15:22:57 +0000 (17:22 +0200)]
talk-llama : sync latest llama.cpp
Georgi Gerganov [Fri, 5 Jan 2024 15:11:27 +0000 (17:11 +0200)]
release : v1.5.4
Erik Scholz [Fri, 5 Jan 2024 15:00:00 +0000 (16:00 +0100)]
fix : cuda order of synchronization when setting a buffer (ggml/679)
* fix : cuda order of synchronization when setting a buffer
* also sync before memcpy
---------
Co-authored-by: slaren <redacted>
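A simplified sketch of the ordering after the fix (the function shape is assumed, not the actual ggml-cuda code):

    #include <cuda_runtime.h>

    void cuda_buffer_set(void * dst, const void * data, size_t size) {
        cudaDeviceSynchronize();                             // the fix: drain pending work first
        cudaMemcpy(dst, data, size, cudaMemcpyHostToDevice); // then overwrite the buffer
        cudaDeviceSynchronize();                             // make the data visible to callers
    }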
Georgi Gerganov [Fri, 5 Jan 2024 14:30:52 +0000 (16:30 +0200)]
metal : switch back to default.metallib (ggml/681)
ggml-ci
Georgi Gerganov [Fri, 5 Jan 2024 13:36:04 +0000 (15:36 +0200)]
ggml : fix q2_k bpw in comments (ggml/680)
Yajing Tang [Thu, 4 Jan 2024 14:28:30 +0000 (06:28 -0800)]
coreml : fix ANE optimized encoder (#1716)
Georgi Gerganov [Thu, 4 Jan 2024 12:47:42 +0000 (14:47 +0200)]
whisper.swiftui : add .gitignore
Georgi Gerganov [Thu, 4 Jan 2024 11:37:25 +0000 (13:37 +0200)]
whisper : reset the "batched" timings (#1721)
Georgi Gerganov [Wed, 3 Jan 2024 17:36:33 +0000 (19:36 +0200)]
release : v1.5.3
Ashraful Islam [Wed, 3 Jan 2024 17:30:26 +0000 (11:30 -0600)]
swift : update Package.swift to use ggml as package dependency (#1701)
* updates Package.swift to use ggml as dependency
* cleans up the Package.swift file by removing redundant source files
* updates ggml url src to ggerganov
Finn Voorhees [Wed, 3 Jan 2024 13:39:43 +0000 (08:39 -0500)]
ggml : add error handling to graph_compute (#1714)
Georgi Gerganov [Wed, 3 Jan 2024 12:18:46 +0000 (14:18 +0200)]
cuda : simplify expression
Co-authored-by: slaren <redacted>
Georgi Gerganov [Wed, 3 Jan 2024 11:01:44 +0000 (13:01 +0200)]
cuda : mark I16 and I32 ops as unsupported
ggml-ci
Georgi Gerganov [Wed, 3 Jan 2024 09:35:46 +0000 (11:35 +0200)]
metal : add kernel_get_rows_i32
ggml-ci
Georgi Gerganov [Tue, 2 Jan 2024 19:07:47 +0000 (21:07 +0200)]
metal : optimize ggml_mul_mat_id (faster Mixtral PP) (llama/4725)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
* metal : optimizing ggml_mul_mat_id (wip)
* metal : minor fix
* metal : opt mul_mm_id
Georgi Gerganov [Tue, 2 Jan 2024 08:57:44 +0000 (10:57 +0200)]
metal : enable shader debugging (cmake option) (llama/4705)
* ggml : disable fast-math for Metal (cmake build only)
ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (llama/4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
ggml-ci
Georgi Gerganov [Sun, 31 Dec 2023 09:43:31 +0000 (11:43 +0200)]
ggml : add ggml_vdotq_s32 alias (llama/4715)
ggml-ci
Johannes Gäßler [Sat, 30 Dec 2023 12:52:01 +0000 (13:52 +0100)]
CUDA: fixed tensor cores not being used on RDNA3 (llama/4697)