git.djapps.eu Git - pkg/ggml/sources/ggml/log
Neo Zhang Jianyu [Sat, 2 Mar 2024 11:49:30 +0000 (19:49 +0800)]
Support multiple GPUs (split mode) on SYCL backend (llama/5806)
* support multiple cards: split-mode - layer|row
* rm warning
* rebase with master, support two new OPs, disable the feature for -sm=row, fix for unit test
* update news
* fix merge error
* update according to review comments
ddpasa [Fri, 1 Mar 2024 17:00:00 +0000 (18:00 +0100)]
ggml-vulkan: fix VULKAN_CHECK_RESULTS flag, which was previously broken (llama/5813)
AidanBeltonS [Fri, 1 Mar 2024 07:36:47 +0000 (07:36 +0000)]
Use batched mul_mat pathway (llama/5591)
* Use batched mul_mat pathway
* rm extra line
* Explicitly state scaled data type
---------
Co-authored-by: Abhilash Majumder <redacted>
Eve [Wed, 28 Feb 2024 19:33:37 +0000 (19:33 +0000)]
make portability_enumeration_ext apple only (llama/5757)
leejet [Sun, 3 Mar 2024 12:23:52 +0000 (20:23 +0800)]
add some new ops, fix some operators and add batch operations to certain operators. (#747)
* cuda: fix group_norm
* cuda: add batch inference support for ggml_pad/ggml_upscale
* add ggml_arange
* add ggml_timestep_embedding
* update ggml_arange/ggml_timestep_embedding tests
* cuda: fix im2col
* add ggml_arange/ggml_timestep_embedding support for metal backend
* fix some bugs
* fix some bugs
* Update include/ggml/ggml.h
Co-authored-by: Georgi Gerganov <redacted>
* Update src/ggml-cuda.cu
Co-authored-by: Georgi Gerganov <redacted>
* Update src/ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* Update src/ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* Update src/ggml-metal.metal
Co-authored-by: Georgi Gerganov <redacted>
* modify according to the review comments
* ggml : fix compile warnings + code style
* ggml : normalize compute_forward calls + fix seg fault in debug
* minor
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: slaren <redacted>
Steward Garcia [Wed, 28 Feb 2024 16:40:12 +0000 (11:40 -0500)]
ggml : add simple example (#713)
* add simple example to explain memory management and basic operations of ggml
Georgi Gerganov [Wed, 28 Feb 2024 11:05:21 +0000 (13:05 +0200)]
sync : whisper.cpp
Tamotsu Takahashi [Sat, 24 Feb 2024 07:24:47 +0000 (16:24 +0900)]
talk, talk-llama : pass text_to_speak as a file (whisper/1865)
* talk-llama: pass file instead of arg
it is too hard to quote text in a portable way
* talk-llama: pass heard_ok as a file
* talk-llama: let eleven-labs.py accept options
Options: -v voice, -s savefile, -p (--play)
* talk-llama: check installed commands in "speak"
Pass "-q" to eleven-labs.py to skip checking whether elevenlabs is installed
* talk-llama: pass voice_id again
in order to sync talk with talk-llama
* talk: sync with talk-llama
Passing text_to_speak as a file is safer and more portable
cf. https://stackoverflow.com/a/59036879/45375
* talk and talk-llama: get all installed voices in speak.ps1
* talk and talk-llama: get voices from api
* talk and talk-llama: add more options to eleven-labs.py
and remove DEFAULT_VOICE because it is deprecated (https://www.reddit.com/r/ElevenLabs/comments/1830abt/what_happened_to_bella/)
```
usage: eleven-labs.py [-q] [-l] [-h] [-n NAME | -v NUMBER] [-f KEY=VAL] [-s FILE | -p] [TEXTFILE]

options:
  -q, --quick           skip checking the required library

action:
  TEXTFILE              read the text file (default: stdin)
  -l, --list            show the list of voices and exit
  -h, --help            show this help and exit

voice selection:
  -n NAME, --name NAME  get a voice object by name (default: Arnold)
  -v NUMBER, --voice NUMBER
                        get a voice object by number (see --list)
  -f KEY=VAL, --filter KEY=VAL
                        filter voices by labels (default: "use case=narration")
                        this option can be used multiple times
                        filtering will be disabled if the first -f has no "=" (e.g. -f "any")

output:
  -s FILE, --save FILE  save the TTS to a file (default: audio.mp3)
  -p, --play            play the TTS with ffplay
```
* examples: add speak_with_file()
as suggested in the review
* talk and talk-llama: ignore to_speak.txt
Abhilash Majumder [Fri, 23 Feb 2024 07:22:24 +0000 (12:52 +0530)]
whisper : add SYCL support (whisper/1863)
* add changes from llama upstream
* add sycl abstraction
* add sycl build
* update cmake
* add sycl build config
* fix bug
* fix bug
* refactor build
* fix bug
* update build
* call build
* use sycl header
* add examples
* add target
* fix typecast in quant.c
* readd fp16 and readme
* fix quant typecast
* add sample
* add readme
* remove cxx file check
Georgi Gerganov [Wed, 28 Feb 2024 10:59:11 +0000 (12:59 +0200)]
sync : llama.cpp (#0)
Kawrakow [Wed, 28 Feb 2024 08:37:02 +0000 (10:37 +0200)]
ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (llama/5760)
* WIP: make i-quants work for QK_K = 64
* iq2_xs: attempt to fix AVX dot product for QK_K = 64
Tests pass, but I get gibberish.
* QK_K = 64 tests pass on ARM_NEON and Metal
Sadly, that does not mean it actually works.
* Make CUDA compile with QK_K = 64
Tests don't pass, plus we get misaligned access
* Q2_K: fixed bug in imatrix quantization for QK_K = 64
* iq1_s: turn off SIMD implementation for QK_K = 64 (it does not work)
---------
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Tue, 27 Feb 2024 17:16:49 +0000 (19:16 +0200)]
Attempt to fix android build (llama/5752)
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Tue, 27 Feb 2024 14:34:24 +0000 (16:34 +0200)]
IQ4_XS: a 4.25 bpw quantization (llama/5747)
* Try IQ4_NL with blocks of 64 - does not look good
* iq4_xs: go to super-blocks of 256 and 6-bit scales for blocks of 32
* iq4_xs: CUDA works - 133.2 t/s
* iq4_xs: AVX2 dot product
* iq4_xs: ARM_NEON dot product
* iq4_nl: Metal implementation
As usual, Metal / Apple Silicon don't like my quants.
* iq3_xs: minor fix
* iq4_xs: shrink by using IQ3_S for attn_k and attn_q
* iq4_xs: revert using IQ3_S for attn_k and attn_v
PPL vs size is good, but CPU performance suffers: on M2 Max
TG-128 drops to 21.7 t/s from 28.8, and on a Ryzen-7950X
to 14.5 t/s from 15.8 t/s. On CUDA we have 135 t/s when
using IQ3_S vs 133 t/s with pure IQ4_XS.
* Fix CI
* iq4_xs: Added forgotten check for 256 divisibility
---------
Co-authored-by: Iwan Kawrakow <redacted>
Engininja2 [Tue, 27 Feb 2024 13:22:45 +0000 (07:22 -0600)]
cuda : replace remaining shfl_xor with calls to warp_reduce functions (llama/5744)
Engininja2 [Tue, 27 Feb 2024 12:50:18 +0000 (06:50 -0600)]
ggml-quants : fix avx2 iq1_s vec_dot when compiled with gcc (llama/5742)
Kawrakow [Mon, 26 Feb 2024 16:28:38 +0000 (18:28 +0200)]
Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (llama/5721)
* Adding IQ2_S and IQ2_M as a single cumulative commit
* Update examples/quantize/quantize.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Mon, 26 Feb 2024 14:36:38 +0000 (15:36 +0100)]
CUDA: fix DEBUG_CUDA_MALLOC (llama/5729)
AidanBeltonS [Mon, 26 Feb 2024 14:02:11 +0000 (14:02 +0000)]
Add support for soft_max ALiBi (llama/5639)
* Add support for bias
* Update pre-processor
* rm commented code
* fix format
* fix CI
---------
Co-authored-by: Abhilash Majumder <redacted>
Radosław Gryta [Sun, 25 Feb 2024 18:43:00 +0000 (19:43 +0100)]
ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (llama/5711)
* [ggml-quants] Provide ggml_vqtbl1q_u8 for 64bit compatibility
vqtbl1q_u8 is not part of the ARMv7 NEON intrinsics
* [android-example] Remove abi filter after arm v7a fix
* [github-workflows] Do not skip Android armeabi-v7a build
slaren [Sun, 25 Feb 2024 19:41:35 +0000 (20:41 +0100)]
add google magika inference example (#748)
* add magika inference example
* ggml : fix unaligned accesses in custom ops
* ggml : fix FP32 GELU for values that exceed the FP16 range
* use ggml_pool_1d
* add README
* Update README.md
* pad inputs if the files are too small
* cleanup
ggml-ci
Georgi Gerganov [Sun, 25 Feb 2024 17:58:06 +0000 (19:58 +0200)]
sync : llama.cpp (#0)
Georgi Gerganov [Sun, 25 Feb 2024 10:09:09 +0000 (12:09 +0200)]
code : normalize enum names (llama/5697)
* code : normalize enum names
ggml-ci
* code : cont
* code : cont
Kawrakow [Sat, 24 Feb 2024 14:23:52 +0000 (16:23 +0200)]
IQ3_S: a much better alternative to Q3_K (llama/5676)
* iq4_nl: squash commits for easier rebase
* Basics (quantize, dequantize)
* CUDA dequantize and dot product
* Slightly faster CUDA dot product (120 t/s)
* Switch to 6-bit scales
* Scalar dot product
* AVX2 dot product
* ARM_NEON dot product
* Works on metal, but still slow
* Slightly better Metal dot product
* Another small Metal improvement
* Metal dot product is getting there
* Faster CUDA dot product
* Add 1/8 ffn_down layers as Q5_K when no imatrix has been provided
* Report the actual bpw
* Add _xs mix that is 4.05 bpw for non-MoE models
* Remove IQ4_XS for now, slightly adjust kvalues_iq4nl
* AVX2 dot product uses Q8_0 instead of Q8_K
* Add to test-backend-ops
* Minor fix
* Also use Q5_K for attn_output in MoE models
* Fixes after merging latest master
* Switching to blocks of 32
* AVX2 for blocks of 32
* Scalar dot product for blocks of 32
* ARM_NEON dot product for blocks of 32
* Metal kernels for blocks of 32
* Slightly faster Metal kernels
* Resurrecting iq3_xs
After all the experimentation, nothing was better than this.
* Minor PPL improvement via a block scale fudge factor
* Minor improvement via 3 neighbours
* iq3_xs: working scalar and AVX2 dot products
* iq3_xs: ARM_NEON dot product - works but extremely slow (10 t/s)
* iq3_xs: working Metal implementation
* Adding IQ3_M - IQ3_XS mix with mostly Q4_K
* iq3_xs: a 3.4375 bpw variant
* iq3_xs: make CUDA work for new version
* iq3_xs: make scalar and AVX2 work for new version
* iq3_s: make ARM_NEON work with new version
* iq3_xs: make new version work on metal
Performance is very similar to Q3_K_S
* iq3_xs: tiny Metal speed improvement
* iq3_xs: tiny Metal speed improvement
* Fix stupid warning
* Q3_K_XS now uses a mix of IQ3_XS and IQ3_XXS
* iq3_xs: rename to iq3_s
* iq3_s: make tests pass
* Move Q3_K_XS mix to 3.25 bpw
* Attempt to fix failing tests
* Another attempt to fix the Windows builds
* Attempt to fix ROCm
* ROCm again
* iq3_s: partial fix for QK_K = 64
* iq3_s: make it work on metal for QK_K = 64
Pleasant surprise: the coding was super-block size independent,
so all it took was to delete some QK_K == 256 guards.
* Will this fix ROCm?
---------
Co-authored-by: Iwan Kawrakow <redacted>
Ondřej Čertík [Sun, 25 Feb 2024 13:57:04 +0000 (06:57 -0700)]
mnist : fix the emcc command to compile correctly (#746)
Otherwise we get linking errors:
wasm-ld: error: /var/folders/8w/5mqfkhqn1nj0tw97g3hl1nyh0000gn/T/emscripten_temp_hj657tq4/ggml_0.o: undefined symbol: dequantize_row_q4_0
wasm-ld: error: /var/folders/8w/5mqfkhqn1nj0tw97g3hl1nyh0000gn/T/emscripten_temp_hj657tq4/ggml_0.o: undefined symbol: quantize_row_q4_0
...
UEXTM.com [Sat, 24 Feb 2024 16:27:36 +0000 (11:27 -0500)]
Introduce backend GUIDs (#743)
* Introduce backend GUIDs
Initial proposed implementation of backend GUIDs
(Discussed in https://github.com/ggerganov/ggml/pull/741)
Hardcoded CPU backend GUID (for now)
Change ggml_backend_is_cpu logic to use GUID
* Remove redundant functions
Remove redundant functions `ggml_backend_i::get_name` and `ggml_backend_guid` which are not desired for future expansion
* Add spaces to match style
Co-authored-by: slaren <redacted>
* Fix brace style to match
Co-authored-by: slaren <redacted>
* Add void to () in function signature
Co-authored-by: slaren <redacted>
* Add back ggml_backend_guid and make CPU_GUID a local static in ggml_backend_cpu_guid
* add guids to all backends
ggml-ci
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Thu, 22 Feb 2024 21:25:19 +0000 (23:25 +0200)]
sync : llama.cpp
Georgi Gerganov [Thu, 22 Feb 2024 21:21:39 +0000 (23:21 +0200)]
ggml : always define ggml_fp16_t as uint16_t (llama/5666)
* ggml : always define ggml_fp16_t as uint16_t
ggml-ci
* ggml : cont
ggml-ci
* ggml : cont
* ggml : cont
ggml-ci
* ggml : cont
ggml-ci
* cuda : no longer ggml headers last
ggml-ci
* ggml : fix q6_K FP16 -> FP32 conversion
ggml-ci
* ggml : more FP16 -> FP32 conversion fixes
ggml-ci
Georgi Gerganov [Thu, 22 Feb 2024 18:25:15 +0000 (20:25 +0200)]
sync : whisper.cpp
Georgi Gerganov [Thu, 22 Feb 2024 16:31:40 +0000 (18:31 +0200)]
ggml : 32-bit arm compat (whisper/1891)
* ggml : 32-bit arm compat
* ggml : add ggml_vqtbl1q_s8 impl
* ggml : cont
Davidson Francis [Thu, 22 Feb 2024 13:01:08 +0000 (10:01 -0300)]
main : fix file existence check in main.cpp (whisper/1889)
In commit dda4b0e of PR #1872, I've introduced a check for the
existence of files before loading the model. However, I haven't
considered the case where whisper.cpp might read from stdin as well,
and in such cases, the checks should ignore the "-" argument as it
does not represent a regular file.
Additionally, this commit removes the usage of 'stat()' in favor of
the recently introduced function 'is_file_exist()' in common.cpp from
PR #1871.
Apologies for the bug introduced in the previous PR and any
inconvenience it may have caused.
Georgi Gerganov [Wed, 21 Feb 2024 14:19:39 +0000 (16:19 +0200)]
sync : llama.cpp (#0)
ggml-ci
Meng, Hengyu [Wed, 21 Feb 2024 09:52:06 +0000 (17:52 +0800)]
context: add name (llama/5624)
* [SYCL] context: add name
* name should start with SYCL*
AidanBeltonS [Tue, 20 Feb 2024 07:01:25 +0000 (07:01 +0000)]
Update ggml_sycl_op_mul_mat_vec_q (llama/5502)
* Update ggml_sycl_op_mul_mat_vec_q
* Apply suggestions from code review
Co-authored-by: Abhilash Majumder <redacted>
* revert suggestion on macro
* fix bug
* Add quant type GGML_TYPE_IQ1_S to unsupported
* fix format
---------
Co-authored-by: Abhilash Majumder <redacted>
0cc4m [Wed, 14 Feb 2024 19:57:17 +0000 (20:57 +0100)]
Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()
0cc4m [Sat, 10 Feb 2024 21:14:52 +0000 (22:14 +0100)]
Add check for VK_KHR_portability_enumeration for MoltenVK support
Mathijs de Bruin [Tue, 6 Feb 2024 14:39:22 +0000 (14:39 +0000)]
Add preprocessor checks for Apple devices.
Based on work by @rbourgeat in https://github.com/ggerganov/llama.cpp/pull/5322/files
Mathijs de Bruin [Sat, 3 Feb 2024 18:00:11 +0000 (18:00 +0000)]
Resolve ErrorIncompatibleDriver with Vulkan on MacOS.
Refs:
- https://chat.openai.com/share/7020ce72-65fc-45ec-b7be-9d9d798a5f3f
- https://github.com/SaschaWillems/Vulkan/issues/954
- https://github.com/haasn/libplacebo/issues/128
- https://github.com/KhronosGroup/Vulkan-Samples/issues/476
Mathijs de Bruin [Sat, 3 Feb 2024 17:56:46 +0000 (17:56 +0000)]
Allow for Vulkan build with Accelerate.
Closes #5304
slaren [Mon, 19 Feb 2024 22:40:26 +0000 (23:40 +0100)]
cuda : ignore peer access already enabled errors (llama/5597)
* cuda : ignore peer access already enabled errors
* fix hip
Siddharth Ramakrishnan [Wed, 21 Feb 2024 12:34:53 +0000 (04:34 -0800)]
ggml : compute forward no longer pass src tensors (#729)
* refactored compute forward to not pass in the src tensors each time
* fix merge issues with flags
* missed one place in the last commit to fix the is_param / flags issue
* minor spacing fix
* fixed some variable assignments so all tests locally are passing
* new change after merge fix
---------
Co-authored-by: siddharthvader <redacted>
bssrdf [Tue, 20 Feb 2024 19:17:09 +0000 (14:17 -0500)]
ggml : fix conv_2d batch mode (#737)
Co-authored-by: bssrdf <redacted>
Georgi Gerganov [Mon, 19 Feb 2024 13:56:03 +0000 (15:56 +0200)]
sync : whisper.cpp
Georgi Gerganov [Mon, 19 Feb 2024 12:44:46 +0000 (14:44 +0200)]
build : update CBLAS flags + fix unused var warning (whisper/0)
Davidson Francis [Mon, 19 Feb 2024 08:51:26 +0000 (05:51 -0300)]
main : check if input files exist before proceeding (whisper/1872)
Until the most recent commit (3d42463), the main.cpp sample file does
not check whether the input files exist or not. Consequently, the
model is loaded first before reporting whether there was a failure or
not when processing a file. In environments with HDD, this can take
about 50 seconds or more, depending on the loaded model.
This commit addresses this issue by checking in advance whether the
input files exist or not.
Felix [Mon, 19 Feb 2024 08:50:15 +0000 (09:50 +0100)]
examples : clean up common code (whisper/1871)
move some utility functions into common.h
Georgi Gerganov [Mon, 12 Feb 2024 17:53:51 +0000 (19:53 +0200)]
whisper : fix external encoder (whisper/1860)
Georgi Gerganov [Mon, 19 Feb 2024 13:33:51 +0000 (15:33 +0200)]
ggml : resolve merge conflicts (#0)
ggml-ci
Georgi Gerganov [Mon, 19 Feb 2024 13:27:37 +0000 (15:27 +0200)]
common : add IQ1_S (#0)
ggml-ci
Georgi Gerganov [Mon, 19 Feb 2024 13:19:26 +0000 (15:19 +0200)]
sync : llama.cpp
Georgi Gerganov [Mon, 19 Feb 2024 12:45:41 +0000 (14:45 +0200)]
ci : enable -Werror for CUDA builds (llama/5579)
* cmake : pass -Werror through -Xcompiler
ggml-ci
* make, cmake : enable CUDA errors on warnings
ggml-ci
slaren [Mon, 19 Feb 2024 08:04:45 +0000 (09:04 +0100)]
cuda, metal : fix nans in soft_max (llama/5574)
* cuda : fix nans in soft_max
* metal : fix nans in soft_max
---------
Co-authored-by: Georgi Gerganov <redacted>
bmwl [Mon, 19 Feb 2024 07:38:32 +0000 (23:38 -0800)]
ggml : android and old glibc NUMA incompatibility bugfixes (llama/5557)
* #ifdef out some code NUMA blocks for Android due to lack of support
* added in some __ANDROID__ #ifdef gates around numa code and forced glibc prior to 2.29 to use a syscall for getcpu instead of the wrapper
* Changed gates on numa platform specific stuff to __gnu_linux__ to skip any platforms without glibc
* harmonizing #if defined blocks for numa code to __gnu_linux__ since that's the only model that's being followed anyways
---------
Co-authored-by: root <redacted>
Georgi Gerganov [Sun, 18 Feb 2024 20:58:57 +0000 (22:58 +0200)]
ggml : restore vec dot stride arg names (llama/5453)
Georgi Gerganov [Sun, 18 Feb 2024 20:39:30 +0000 (22:39 +0200)]
ci : fix wikitext url + compile warnings (llama/5569)
ggml-ci
Georgi Gerganov [Sun, 18 Feb 2024 19:39:58 +0000 (21:39 +0200)]
metal : fix unused warnings (llama/0)
Herman Semenov [Sun, 18 Feb 2024 16:20:12 +0000 (16:20 +0000)]
ggml, common, examples, tests : fixed type arguments in printf (llama/5528)
Kawrakow [Sun, 18 Feb 2024 16:16:55 +0000 (18:16 +0200)]
1.5 bit quantization (llama/5453)
* iq1_s: WIP basics
* iq1_s: CUDA is working
* iq1_s: scalar CPU dot product
* iq1_s: WIP AVX2 dot product - something is not right
* Fix tests
* Fix shadow warnings
* Fix after merge with latest master
* iq1_s: AVX2 finally works
* iq1_s: ARM_NEON dot product. Works, but not very fast
* iq1_s: better grid
* iq1_s: use IQ2_XXS for attn_output
At a cost of 0.04 extra bpw this gives a big improvement in PPL.
* iq1_s: Metal basics
Dequantize works, but not dot product
* iq1_s: Metal works, but quite slow
As usual, Apple Silicon does not like the code I write.
* iq1_s: Tests
* iq1_s: slightly faster dot product
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Mon, 19 Feb 2024 13:18:09 +0000 (15:18 +0200)]
ggml : add ALiBi support for ggml_soft_max_ext (llama/5488)
Georgi Gerganov [Mon, 19 Feb 2024 13:14:20 +0000 (15:14 +0200)]
sync : llama.cpp
Ananta Bastola [Sat, 17 Feb 2024 21:03:14 +0000 (16:03 -0500)]
ci : add an option to fail on compile warning (llama/3952)
* feat(ci): add an option to fail on compile warning
* Update CMakeLists.txt
* minor : fix compile warnings
ggml-ci
* ggml : fix unreachable code warnings
ggml-ci
* ci : disable fatal warnings for windows, ios and tvos
* ggml : fix strncpy warning
* ci : disable fatal warnings for MPI build
* ci : add fatal warnings to ggml-ci
ggml-ci
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 16 Feb 2024 17:05:56 +0000 (19:05 +0200)]
cmake : fix VULKAN and ROCm builds (llama/5525)
* cmake : fix VULKAN and ROCm builds
* cmake : fix (cont)
* vulkan : fix compile warnings
ggml-ci
* cmake : fix
ggml-ci
* cmake : minor
ggml-ci
bmwl [Fri, 16 Feb 2024 09:31:07 +0000 (01:31 -0800)]
ggml : add numa options (llama/5377)
* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h
* Reverted Makefile
* Fixed include
* Removed sched.h from ggml.h, moved ggml_get_numa_affinity into ggml.c, removed trailing whitespace and fixed up a few inconsistent variables
* removed trailing whitespace
* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h
* Reverting Makefile
* Fixed a number of issues with the move from BOOL to ggml_numa_strategies. Added a note about mirror mode not being implemented yet
* Removing MIRROR_MODE code for this PR
* Removing last bit of MIRROR_MODE code for this PR
* Removing unneeded branch in server.cpp example and moving get_numa_affinity and making it static
* Fixed lingering init_llama_backend() bool calls in tests and examples
* Removed enum llama_numa_strategies
* Revert bad merge with dynatemp flags
* add missing enum ggml_numa_strategies declaration and revert sync problem with master
* add missing enum ggml_numa_strategies declaration
* fixed ggml_init_numa variable
* Update ggml.h
Co-authored-by: Jared Van Bortel <redacted>
* Update READMEs with info about numa flags, change INTERLEAVE strategy name to DISTRIBUTE everywhere, implement the improved distribution strategy from @rankaiyx, fix a spelling mistake and un-merge some bad merges
* split numa init out from llama_backend_init and created llama_numa_init. Updated all code paths and samples
* Fix up some boolean vs enum comparisons
* Added #ifdefs for non-Linux OS that don't have cpu_set_t datatype
* Update ggml.h
Align enum values
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
Remove whitespace
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
align parameters
Co-authored-by: Georgi Gerganov <redacted>
* Update examples/server/server.cpp
remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* Update common/common.cpp
Remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* unified ggml_numa_strategy enum and fixed text alignment in server.cpp example
* Update ggml.c
simplified return for platforms without NUMA support
Co-authored-by: Jared Van Bortel <redacted>
* removed redundant else from cli argument processing of --numa
* whitespace
---------
Co-authored-by: root <redacted>
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jared Van Bortel <redacted>
slaren [Thu, 15 Feb 2024 15:49:01 +0000 (16:49 +0100)]
cuda : print message when initialization fails (llama/5512)
* cuda : print message when initialization fails
* use CUDA_NAME both times
Neuman Vong [Thu, 15 Feb 2024 06:11:15 +0000 (17:11 +1100)]
vulkan: Find optimal memory type but with fallback (llama/5381)
* @0cc4m feedback
* More feedback @0cc4m
AT [Tue, 13 Feb 2024 21:44:25 +0000 (15:44 -0600)]
Early return for zero size calls to get_tensor. (llama/5482)
* Early return for zero size calls to get_tensor.
Signed-off-by: Adam Treat <redacted>
* Update ggml-kompute.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml-kompute.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Add an early return to the get/set tensor when the size is null.
Signed-off-by: Adam Treat <redacted>
* Early return after the assertions.
Signed-off-by: Adam Treat <redacted>
* Since we do the early return in the generic backend now no reason to do so here as well.
Signed-off-by: Adam Treat <redacted>
---------
Signed-off-by: Adam Treat <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 13 Feb 2024 09:20:24 +0000 (11:20 +0200)]
tests : disable moe test (llama/5473)
Kawrakow [Tue, 13 Feb 2024 07:07:57 +0000 (09:07 +0200)]
ggml-quants : fix compiler warnings (shadow variable) (llama/5472)
Co-authored-by: Iwan Kawrakow <redacted>
Abhilash Majumder [Mon, 12 Feb 2024 14:52:05 +0000 (20:22 +0530)]
ggml-sycl: Replace 3d ops with macro (llama/5458)
* use macro
* use macro
* fix format
Georgi Gerganov [Mon, 19 Feb 2024 12:41:31 +0000 (14:41 +0200)]
cmake : update CBLAS build flags (#0)
slaren [Mon, 12 Feb 2024 17:07:14 +0000 (18:07 +0100)]
ggml-alloc : allocate all leafs as if they were inputs (#731)
* ggml-alloc : allocate all leafs as if they were inputs
* ensure static leafs are allocated
* gpt-2-backend : remove unnecessary ggml_new_tensor
* update other gpt-2 examples to remove ggml_new_tensor calls in the graph
Georgi Gerganov [Mon, 12 Feb 2024 07:32:58 +0000 (09:32 +0200)]
sync : whisper.cpp
dscripka [Mon, 12 Feb 2024 07:19:07 +0000 (02:19 -0500)]
examples : added audio_ctx argument to main and server (whisper/1857)
* added audio_ctx argument to main and server examples
* Better default value
Co-authored-by: Georgi Gerganov <redacted>
* better default value (again)
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Didzis Gosko [Sun, 11 Feb 2024 14:41:41 +0000 (16:41 +0200)]
metal : option to embed MSL source into compiled binary (whisper/1842)
* ggml : embed Metal library source (ggml-metal.metal) into binary
enable by setting WHISPER_EMBED_METAL_LIBRARY
* rename the build option
* rename the preprocessor directive
* generate Metal library embedding assembly on-fly during build process
Georgi Gerganov [Sun, 11 Feb 2024 14:39:12 +0000 (16:39 +0200)]
examples : initialize context params properly (whisper/1852)
Georgi Gerganov [Mon, 12 Feb 2024 07:30:12 +0000 (09:30 +0200)]
sync : llama.cpp
Georgi Gerganov [Mon, 12 Feb 2024 07:27:57 +0000 (09:27 +0200)]
ggml-backend : sync remnant
Johannes Gäßler [Sun, 11 Feb 2024 18:08:39 +0000 (19:08 +0100)]
CUDA: mul_mat_vec_q tiling, refactor mul mat logic (llama/5434)
* CUDA: mul_mat_vec_q tiling, refactor mul mat logic
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Sergio López [Sun, 11 Feb 2024 14:12:00 +0000 (15:12 +0100)]
vulkan: only use M-sized matmul on Apple GPUs (llama/5412)
* vulkan: refactor guess_matmul_pipeline for vendor
Refactor ggml_vk_guess_matmul_pipeline to simplify adding per-vendor
conditionals.
Signed-off-by: Sergio Lopez <redacted>
* vulkan: only use M-sized matmul on Apple GPUs
L-sized and S-sized matmuls are broken on Apple GPUs, force using
M-size with this vendor.
Signed-off-by: Sergio Lopez <redacted>
---------
Signed-off-by: Sergio Lopez <redacted>
Georgi Gerganov [Sun, 11 Feb 2024 13:33:01 +0000 (15:33 +0200)]
ggml : fix compile warnings (unused vars) (llama/4966)
snadampal [Sun, 11 Feb 2024 13:22:33 +0000 (07:22 -0600)]
ggml : add mmla kernels for quantized GEMM (llama/4966)
* ggml: aarch64: implement smmla kernel for q8_0_q8_0 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q8_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: aarch64: implement smmla kernel for q4_0_q8_0 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q4_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: aarch64: implement smmla kernel for q4_1_q8_1 quantized gemm
armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. this commit adds mmla kernel for
q4_1_q8_1 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8"
On AWS Graviton3 processors this kernel resulted up to 1.5x
improvement for prompt evaluation throughput compared to the
default sdot kernel.
* ggml: update unit tests for the new vec_dot interface
* llama.cpp: add MATMUL_INT8 capability to system_info
Ian Bull [Sat, 10 Feb 2024 10:53:28 +0000 (02:53 -0800)]
metal : use autoreleasepool to avoid memory leaks (llama/5437)
There appears to be a known memory leak when using the
`MTLCommandBuffer`. It is suggested to use `@autoreleasepool` in
[1,2]
[1] https://developer.apple.com/forums/thread/662721
[2] https://forums.developer.apple.com/forums/thread/120931
This change-set wraps the `ggml_metal_graph_compute` in a
`@autoreleasepool`.
This commit addresses https://github.com/ggerganov/llama.cpp/issues/5436
slaren [Sun, 11 Feb 2024 12:37:58 +0000 (13:37 +0100)]
ggml-alloc : v3 (#727)
* ggml-alloc v3
ggml-ci
* fix ci
ggml-ci
* whisper : check for backend buffer allocation failures
* whisper : avoid leaks when initialization fails
* cleanup
ggml-ci
* style fixes
ggml-ci
Georgi Gerganov [Sat, 10 Feb 2024 14:04:18 +0000 (16:04 +0200)]
examples : remove old stuff (#728)
* examples : remove old stuff
ggml-ci
* readme : remove examples links
Georgi Gerganov [Sat, 10 Feb 2024 08:09:09 +0000 (10:09 +0200)]
sync : whisper.cpp
Didzis Gosko [Fri, 9 Feb 2024 15:27:47 +0000 (17:27 +0200)]
whisper : expose CUDA device setting in public API (whisper/1840)
* Makefile : allow to override CUDA_ARCH_FLAG
* whisper : allow to select GPU (CUDA) device from public API
Georgi Gerganov [Tue, 30 Jan 2024 19:30:26 +0000 (21:30 +0200)]
sync : ggml (whisper/0)
Georgi Gerganov [Sat, 10 Feb 2024 07:50:24 +0000 (09:50 +0200)]
src : relocate new backend sources
Georgi Gerganov [Sat, 10 Feb 2024 07:46:12 +0000 (09:46 +0200)]
sync : llama.cpp
Georgi Gerganov [Sat, 10 Feb 2024 07:46:00 +0000 (09:46 +0200)]
ci : fix mpt test
Georgi Gerganov [Sat, 10 Feb 2024 07:45:40 +0000 (09:45 +0200)]
tests : fix im2col usage
Michael Podvitskiy [Fri, 9 Feb 2024 09:56:43 +0000 (10:56 +0100)]
ggml : fix `error C2078: too many initializers` for MSVC ARM64 (llama/5404)
0cc4m [Fri, 9 Feb 2024 05:52:33 +0000 (06:52 +0100)]
Fix Vulkan crash on APUs with very little device memory (llama/5424)
* Fix Vulkan crash on APUs with very little device memory
* Fix debug output function names
Johannes Gäßler [Thu, 8 Feb 2024 20:56:40 +0000 (21:56 +0100)]
CUDA: more warps for mmvq on NVIDIA (llama/5394)
Abhilash Majumder [Thu, 8 Feb 2024 17:09:10 +0000 (22:39 +0530)]
Fix f16_sycl cpy call from Arc (llama/5411)
* fix f16_sycl cpy call
* rm old logic
* add fp16 build CI
* use macro
* format fix
Johannes Gäßler [Wed, 7 Feb 2024 11:40:26 +0000 (12:40 +0100)]
CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (llama/5386)
0cc4m [Wed, 7 Feb 2024 06:54:50 +0000 (07:54 +0100)]
Basic Vulkan Multi-GPU implementation (llama/5321)
* Initial Vulkan multi-gpu implementation
Move most global variables into backend context
* Add names to backend device functions
* Add further missing cleanup code
* Reduce code duplication in tensor split layer assignment
* generalize LLAMA_SPLIT_LAYER for all backends, do not expose device count and memory in llama.h
* Only do device info print in the beginning and initialize one backend for cpu assist
Add missing cleanup code
* Rework backend memory management to make sure devices and buffers get properly allocated and freed
* Rename cpu assist free function
---------
Co-authored-by: slaren <redacted>
Johannes Gäßler [Tue, 6 Feb 2024 17:43:06 +0000 (18:43 +0100)]
CUDA: mul_mat_vec_q max. batch size 8 -> 4 (llama/5370)
Kawrakow [Tue, 6 Feb 2024 15:28:02 +0000 (17:28 +0200)]
Slight quantization improvement for Q4_K and Q5_K (llama/5361)
* Q4_K: slightly better quantization
* Q5_K: slightly better quantization
---------
Co-authored-by: Iwan Kawrakow <redacted>
Johannes Gäßler [Tue, 6 Feb 2024 13:44:06 +0000 (14:44 +0100)]
CUDA: mul_mat_vec_q for batch sizes > 1 (llama/5351)
Kawrakow [Mon, 5 Feb 2024 12:09:47 +0000 (14:09 +0200)]
ggml : make use of ggml-quants.h possible in C++ code (llama/5338)
* Make use of ggml-quants.h possible in C++ code
* One cannot possibly be defining static_assert in a C++ compilation
---------
Co-authored-by: Iwan Kawrakow <redacted>