Georgi Gerganov [Sat, 29 Apr 2023 06:51:06 +0000 (09:51 +0300)]
common : change default parameters to pre-#1126 (#1223)
Ivan Stepanov [Sat, 29 Apr 2023 05:34:41 +0000 (08:34 +0300)]
llama : new sampling algorithms (#1126)
* Sample interface, new samplers.
New samplers:
- locally typical sampling
- tail free sampling
- frequency and presence penalty
- mirostat
Ignore EOS fix: -inf should be used.
* mirostat
* Added --logit-bias and --no-penalize-nl, removed std::span
* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
* Save and load example adjust
* Tests
* Windows build fix
* Windows test fix
slaren [Sat, 29 Apr 2023 00:04:18 +0000 (02:04 +0200)]
cuBLAS: use host pinned memory and dequantize while copying (#1207)
* cuBLAS: dequantize simultaneously while copying memory
* cuBLAS: use host pinned memory
* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory
* cuBLAS: also pin kv cache
* fix rebase
Henri Vasserman [Fri, 28 Apr 2023 23:31:56 +0000 (02:31 +0300)]
cuBLAS: non-contiguous tensor support (#1215)
* Cuda: non-contiguous tensor support
* remove extra stuff
* rename
* fix error
* more fixes, now OpenBLAS and CLBlast build too
* now then?
Stephan Walter [Fri, 28 Apr 2023 23:10:43 +0000 (23:10 +0000)]
Remove Q4_3 which is no better than Q5 (#1218)
Georgi Gerganov [Fri, 28 Apr 2023 18:32:52 +0000 (21:32 +0300)]
readme : update hot topics
Georgi Gerganov [Fri, 28 Apr 2023 17:37:43 +0000 (20:37 +0300)]
ggml : sync ggml (ggml_alibi)
CRD716 [Fri, 28 Apr 2023 16:13:33 +0000 (11:13 -0500)]
examples : add Jeopardy example (#1168)
* Basic Setup
* Prevent Results.txt from coming up
* Prefixes, Line separators, etc
* editorcheck
* introduction to give more consistent results
* Basic graph thing
* Grading, ready for testing!
* Y'all ready to get funky?
* fix column removal stuff
* missed a few
Evan Jones [Fri, 28 Apr 2023 15:59:37 +0000 (11:59 -0400)]
llama : add session file format and saved sessions in main (#1169)
Georgi Gerganov [Fri, 28 Apr 2023 14:58:44 +0000 (17:58 +0300)]
ggml : add helper debug printf in soft_max
0cc4m [Fri, 28 Apr 2023 14:57:16 +0000 (16:57 +0200)]
ggml : add CLBlast support (#1164)
* Allow use of OpenCL GPU-based BLAS using ClBlast instead of OpenBLAS for context processing
* Improve ClBlast implementation, avoid recreating buffers, remove redundant transfers
* Finish merge of ClBlast support
* Move CLBlast implementation to separate file
Add buffer reuse code (adapted from slaren's cuda implementation)
* Add q4_2 and q4_3 CLBlast support, improve code
* Double CLBlast speed by disabling OpenBLAS thread workaround
Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
* Fix device selection env variable names
* Fix cast in opencl kernels
* Add CLBlast to CMakeLists.txt
* Replace buffer pool with static buffers a, b, qb, c
Fix compile warnings
* Fix typos, use GGML_TYPE defines, improve code
* Improve btype dequant kernel selection code, add error if type is unsupported
* Improve code quality
* Move internal stuff out of header
* Use internal enums instead of CLBlast enums
* Remove leftover C++ includes and defines
* Make event use easier to read
Co-authored-by: Henri Vasserman <redacted>
* Use c compiler for opencl files
* Simplify code, fix include
* First check error, then release event
* Make globals static, fix indentation
* Rename dequant kernels file to conform with other file names
* Fix import cl file name
---------
Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Henri Vasserman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Folko-Ven [Fri, 28 Apr 2023 14:22:48 +0000 (19:22 +0500)]
Correcting link to w64devkit (#1214)
Correcting link to w64devkit (change seeto to skeeto).
Johannes Gäßler [Fri, 28 Apr 2023 13:40:32 +0000 (15:40 +0200)]
Add Manjaro CUDA include and lib dirs to Makefile (#1212)
Yann Follet [Fri, 28 Apr 2023 11:59:48 +0000 (19:59 +0800)]
add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)
Stephan Walter [Wed, 26 Apr 2023 20:26:42 +0000 (20:26 +0000)]
ggml : slightly faster AVX2 implementation for Q5 (#1197)
Georgi Gerganov [Wed, 26 Apr 2023 20:24:42 +0000 (23:24 +0300)]
readme : add quantization info
Georgi Gerganov [Wed, 26 Apr 2023 20:14:13 +0000 (23:14 +0300)]
ggml : add Q5_0 and Q5_1 quantization (#1187)
* ggml : add Q5_0 quantization (cuBLAS only)
* ggml : fix Q5_0 qh -> uint32_t
* ggml : fix q5_0 histogram stats
* ggml : q5_0 scalar dot product
* ggml : q5_0 ARM NEON dot
* ggml : q5_0 more efficient ARM NEON using uint64_t masks
* ggml : rename Q5_0 -> Q5_1
* ggml : adding Q5_0 mode
* quantize : add Q5_0 and Q5_1 to map
* ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)
---------
Co-authored-by: Stephan Walter <redacted>
Ásgeir Bjarni Ingvarsson [Wed, 26 Apr 2023 20:08:43 +0000 (20:08 +0000)]
Allow setting the rng seed after initialization. (#1184)
The llama_set_state_data function restores the rng state to what it
was at the time llama_copy_state_data was called. But users may want
to restore the state and proceed with a different seed.
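A minimal usage sketch, assuming the C API of this era (llama_set_state_data plus the llama_set_rng_seed entry point this change enables); exact signatures may differ:
```
#include <cstdint>
#include <vector>
#include "llama.h"

// Sketch: restore a saved state, then override the restored rng so
// sampling diverges from the original run. Hedged: signatures follow
// the llama.cpp API of this period and may differ.
void restore_with_new_seed(llama_context * ctx,
                           std::vector<uint8_t> & saved_state,
                           int new_seed) {
    llama_set_state_data(ctx, saved_state.data()); // restores the rng too
    llama_set_rng_seed(ctx, new_seed);             // then pick a new seed
}
```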
DaniAndTheWeb [Wed, 26 Apr 2023 20:03:03 +0000 (22:03 +0200)]
Updating build instructions to include BLAS support (#1183)
* Updated build information
First update to the build instructions to include BLAS.
* Update README.md
* Update information about BLAS
* Better BLAS explanation
Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.
* Better BLAS explanation
* BLAS for Mac
Specifying that BLAS is already supported on Macs using the Accelerate Framework.
* Clarify the effect of BLAS
* Windows Make instructions
Added the instructions to build with Make on Windows
* Fixing typo
* Fix trailing whitespace
Pavol Rusnak [Wed, 26 Apr 2023 16:43:27 +0000 (18:43 +0200)]
quantize : use `map` to assign quantization type from `string` (#1191)
instead of `int` (while the `int` option is still supported)
This allows the following usage:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`
instead of:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
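The idea can be sketched as follows; the table contents are illustrative (the real mapping and enum values live in the quantize example and llama.h):
```
#include <map>
#include <string>

// Hypothetical name -> ftype table; real values come from llama.h.
static const std::map<std::string, int> k_quant_types = {
    {"q4_0", 2}, {"q4_1", 3}, {"q4_2", 5}, {"q4_3", 6},
};

// Accept either a name ("q4_0") or the legacy integer ("2").
int parse_quant_type(const std::string & arg) {
    const auto it = k_quant_types.find(arg);
    return it != k_quant_types.end() ? it->second : std::stoi(arg);
}
```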
Stephan Walter [Tue, 25 Apr 2023 21:41:56 +0000 (21:41 +0000)]
Update SHA256SUMS after quantization change (#1181)
Co-authored-by: Pavol Rusnak <redacted>
ostix360 [Tue, 25 Apr 2023 21:33:08 +0000 (23:33 +0200)]
py : cast lora_alpha to int in convert-lora-to-ggml (#1170)
Co-authored-by: Pavol Rusnak <redacted>
Pavol Rusnak [Tue, 25 Apr 2023 21:19:57 +0000 (23:19 +0200)]
nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)
Georgi Gerganov [Tue, 25 Apr 2023 20:40:51 +0000 (23:40 +0300)]
ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)
* ggml : add Q8_0 quantization format (rename the old one to Q8_1)
* tests : fix test-quantize-fns
* ggml : finalize Q8_0 implementation
* ggml : use q4_0_q8_0 and q4_2_q8_0
* ggml : fix Q8_0 dot product bug (ARM)
* ggml : Q8_0 unroll x2
* ggml : fix bug - using wrong block type
* ggml : extend quantize_fns_t with "vec_dot_type"
* ggml : fix Q8_0 to use 255 values out of 256
* ggml : fix assert using wrong QK4_2 instead of QK4_3
unbounded [Tue, 25 Apr 2023 17:20:46 +0000 (19:20 +0200)]
ggml : use full range for Q4_0 and Q4_2 quantization (#729)
* Use full range for q4_0 quantization
By keeping the sign of the highest magnitude, we can make sure the
highest value maps to -8, which is currently unused.
This is a bit of a freebie since it is fully backwards compatible with
the current format. (A sketch of the idea follows this entry.)
* Update quantize_row_q4_0 for AVX/AVX2
* Update quantize_row_q4_0 for WASM
Untested
* Update quantize_row_q4_0 for Arm NEON
* Update quantize_row_q4_0 for PowerPC
Untested
* Use full range for q4_2 quantization
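A scalar sketch of the full-range scheme for one block, with a hypothetical helper name (the real kernels are the quantize_row_q4_0 variants listed above):
```
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch: keep the sign of the largest-magnitude element so that value
// maps exactly to -8, the previously unused end of the [-8, 7] range.
void quantize_block_full_range(const float * x, int qk, float * d_out, int8_t * q) {
    float max = 0.0f; // signed element with the largest magnitude
    for (int i = 0; i < qk; ++i) {
        if (std::fabs(x[i]) > std::fabs(max)) max = x[i];
    }
    const float d  = max / -8.0f;             // 'max' itself quantizes to -8
    const float id = d != 0.0f ? 1.0f/d : 0.0f;
    for (int i = 0; i < qk; ++i) {
        const int v = (int)std::lround(x[i] * id);
        q[i] = (int8_t)std::clamp(v, -8, 7);  // stored biased by +8 in the real format
    }
    *d_out = d;
}
```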
xaedes [Mon, 24 Apr 2023 21:02:02 +0000 (23:02 +0200)]
ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)
The sum over all rows is now computed instead of just the last row
Georgi Gerganov [Mon, 24 Apr 2023 19:18:25 +0000 (22:18 +0300)]
ggml : export symbols (#1155)
xaedes [Mon, 24 Apr 2023 16:23:31 +0000 (18:23 +0200)]
examples : add save_load_state example (#1150)
* add save_load_state example
* use <cstdio> instead of <iostream> and fprintf / printf instead of cout
* renamed save-load-state example files replacing underscores by dashes
Georgi Gerganov [Mon, 24 Apr 2023 15:47:03 +0000 (18:47 +0300)]
llama : increase scratch buffer size for 65B (ref #1152)
Temporary solution
mgroeber9110 [Mon, 24 Apr 2023 15:45:32 +0000 (17:45 +0200)]
examples/main README improvements and some light refactoring (#1131)
Stephan Walter [Mon, 24 Apr 2023 15:38:26 +0000 (15:38 +0000)]
Fix build for gcc 8 and test in CI (#1154)
slaren [Mon, 24 Apr 2023 15:29:58 +0000 (17:29 +0200)]
Fix cuda compilation (#1128)
* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag
---------
Co-authored-by: B1gM8c <redacted>
Georgi Gerganov [Mon, 24 Apr 2023 04:40:02 +0000 (07:40 +0300)]
llama : refactor get / set state + remove redundant kv cache API (#1143)
slaren [Sun, 23 Apr 2023 21:03:44 +0000 (23:03 +0200)]
Fix LoRA acronym (#1145)
Georgi Gerganov [Sun, 23 Apr 2023 16:57:09 +0000 (19:57 +0300)]
scripts : add helper scripts to synch ggml repo
DannyDaemonic [Sun, 23 Apr 2023 15:37:02 +0000 (08:37 -0700)]
Added README.md for main with examples and explanations (#1139)
Georgi Gerganov [Sun, 23 Apr 2023 15:32:52 +0000 (18:32 +0300)]
ggml : do not print perf ops that have not been used at all
Georgi Gerganov [Sun, 23 Apr 2023 15:15:39 +0000 (18:15 +0300)]
ggml : better PERF prints + support "LLAMA_PERF=1 make"
Stephan Walter [Sun, 23 Apr 2023 11:01:03 +0000 (11:01 +0000)]
Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)
Pavol Rusnak [Sun, 23 Apr 2023 08:21:26 +0000 (10:21 +0200)]
readme : update gpt4all instructions (#980)
Yishuo Wang [Sun, 23 Apr 2023 07:57:05 +0000 (15:57 +0800)]
A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119)
Georgi Gerganov [Sat, 22 Apr 2023 13:31:56 +0000 (16:31 +0300)]
ggml : fix Q4_3 cuBLAS
Stephan Walter [Sat, 22 Apr 2023 13:12:29 +0000 (13:12 +0000)]
ci : trigger CI for drafts, but not most PR actions (#1125)
Stephan Walter [Sat, 22 Apr 2023 10:54:13 +0000 (10:54 +0000)]
Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)
unbounded [Sat, 22 Apr 2023 09:10:39 +0000 (11:10 +0200)]
ggml : unit test for quantization functions (#953)
* Unit test for quantization functions
Use the ggml_internal_get_quantize_fn function to loop through all
quantization formats and run a sanity check on the result.
Also add a microbenchmark that times these functions directly without
running the rest of the GGML graph. (A sketch of the sanity check
follows this entry.)
* test-quantize-fns: CI fixes
Fix issues uncovered in CI
- need to use sizes divisible by 32*8 for loop unrolling
- use intrinsic header that should work on Mac
* test-quantize: remove
Per PR comment, subsumed by test-quantize-fns
* test-quantize: fix for q8_0 intermediates
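A sketch of the sanity check, assuming the era's ggml_internal_get_quantize_fn and quantize_fns_t layout (quantize_row_q / dequantize_row_q function pointers); field names may differ:
```
#include <cmath>
#include <cstdint>
#include <vector>
#include "ggml.h"

// Sketch: quantize, dequantize, and require the round-trip RMSE to stay
// under a per-format threshold. n must be divisible by 32*8 (see above).
bool roundtrip_ok(int type, const float * src, int n, float max_rmse) {
    const quantize_fns_t fns = ggml_internal_get_quantize_fn(type);
    std::vector<uint8_t> quantized(4 * (size_t) n); // generous upper bound
    std::vector<float>   restored(n);
    fns.quantize_row_q(src, quantized.data(), n);
    fns.dequantize_row_q(quantized.data(), restored.data(), n);
    double err2 = 0.0;
    for (int i = 0; i < n; ++i) {
        const double e = (double) restored[i] - (double) src[i];
        err2 += e * e;
    }
    return std::sqrt(err2 / n) <= max_rmse;
}
```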
wbpxre150 [Sat, 22 Apr 2023 08:56:35 +0000 (16:56 +0800)]
llama : print timings on ctrl+c exit (#1021)
* print timings on ctrl+c exit
* remove redundant free memory call.
* add global pointer to ctx.
eiery [Sat, 22 Apr 2023 08:27:05 +0000 (04:27 -0400)]
llama : have n_batch default to 512 (#1091)
* set default n_batch to 512 when using BLAS
* spacing
* alternate implementation of setting different n_batch for BLAS
* set n_batch to 512 for all cases
Howard Su [Sat, 22 Apr 2023 08:18:20 +0000 (16:18 +0800)]
cmake : fix build under Windows when enable BUILD_SHARED_LIBS (#1100)
* Fix build under Windows when enable BUILD_SHARED_LIBS
* Make AVX512 test on Windows to build the shared libs
Georgi Gerganov [Sat, 22 Apr 2023 08:08:12 +0000 (11:08 +0300)]
ggml : fix AVX build + update to new Q8_0 format
Georgi Gerganov [Sat, 22 Apr 2023 07:55:35 +0000 (10:55 +0300)]
ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)
* ggml : prefer vzip to vuzp
This way we always use the same type of instruction across all quantizations
* ggml : alternative Q4_3 implementation using modified Q8_0
* ggml : fix Q4_3 scalar implementation
* ggml : slight improvement of Q4_3 - no need for loop unrolling
* ggml : fix AVX paths for Q8_0 quantization
Stephan Walter [Sat, 22 Apr 2023 07:37:05 +0000 (07:37 +0000)]
ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)
* AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring
* finish AVX vectorization of quantize_row_q8_0
* Rename hsum_int_8 to hsum_i32_8
Clint Herron [Sat, 22 Apr 2023 06:54:33 +0000 (02:54 -0400)]
examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)
* Moving parameters to separate lines for readability.
* Increasing repeat_penalty to 1.1 to make alpaca more usable by default.
* Adding trailing newline.
xaedes [Sat, 22 Apr 2023 06:21:32 +0000 (08:21 +0200)]
llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105)
* reserve correct size for logits
* add functions to get and set the whole llama state:
including rng, logits, embedding and kv_cache
* remove unused variables
* remove trailing whitespace
* fix comment
slaren [Fri, 21 Apr 2023 19:59:17 +0000 (21:59 +0200)]
Improve cuBLAS performance by using a memory pool (#1094)
* Improve cuBLAS performance by using a memory pool
* Move cuda specific definitions to ggml-cuda.h/cu
* Add CXX flags to nvcc
* Change memory pool synchronization mechanism to a spin lock
General code cleanup
apaz [Fri, 21 Apr 2023 18:48:06 +0000 (13:48 -0500)]
llama : fixed rlimit error message (#888)
源文雨 [Fri, 21 Apr 2023 18:27:06 +0000 (02:27 +0800)]
cmake : link threads publicly to ggml (#1042)
* fix: ld link test-tokenizer-0 error
```
cmake3 --build . --config Release
[ 5%] Built target ggml
[ 16%] Built target llama
[ 22%] Linking CXX executable ../bin/test-tokenizer-0
../libllama.a(ggml.c.o): in function 'ggml_graph_compute':
ggml.c:(.text+0xf2db): undefined reference to 'pthread_create'
ggml.c:(.text+0xf9d4): undefined reference to 'pthread_join'
collect2: error: ld returned 1 exit status
gmake[2]: *** [bin/test-tokenizer-0] Error 1
gmake[1]: *** [tests/CMakeFiles/test-tokenizer-0.dir/all] Error 2
gmake: *** [all] Error 2
```
* Update CMakeLists.txt
* Update CMakeLists.txt
* Update CMakeLists.txt
Alex Klinkhamer [Fri, 21 Apr 2023 18:18:09 +0000 (11:18 -0700)]
main : evaluate tokens in batches after swapping context (#1014)
* examples : evaluate tokens in batches after swapping context
* Update examples/main/main.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
xaedes [Fri, 21 Apr 2023 15:25:21 +0000 (17:25 +0200)]
llama : remember and restore kv cache data pointers (#1104)
because their values are stored in buf and would be overwritten by memcpy
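A sketch of the fix with hypothetical names (the real code sits in the set-state path):
```
#include <cstdint>
#include <cstring>
#include "ggml.h"

// Sketch: the kv tensors' metadata lives inside the buffer being
// restored, so a plain memcpy clobbers their data pointers. Remember
// the pointers first, restore them afterwards.
void set_kv_bytes(uint8_t * kv_buf, size_t kv_size, const uint8_t * src,
                  struct ggml_tensor * k, struct ggml_tensor * v) {
    void * k_data = k->data;
    void * v_data = v->data;
    memcpy(kv_buf, src, kv_size);
    k->data = k_data;
    v->data = v_data;
}
```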
Kawrakow [Fri, 21 Apr 2023 15:18:26 +0000 (17:18 +0200)]
ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)
* A faster version for Q4_1 x Q8_0 dot products
The idea behind this is that Q8_0 quantized
values get used many times in the matrix multiplications
where they are involved. In the current implementations,
when we are evaluating the dot products, we need to compute
the sum of the quants in the Q8_0 vector, so the same
operation is repeated many times. Here we pre-compute
the sum during Q8_0 quantization, store it in the
now modified block_q8_0 struct, and then reuse this
result in the subsequent dot products.
In a synthetic benchmark (just compute a bunch of dot
products), this change speeds up the Q4_1 * Q8_0 dot
product by 80%, making the performance identical to
Q4_0 * Q8_0.
In practical application, I see a ~15% gain in speed for
token prediction on M2, and ~5% gain on Ryzen 7950X.
The speed gain in the prompt evaluation is much bigger
(around 50%).
I have only done the change for the scalar version,
ARM_NEON, and AVX2, so we still need an AVX implementation.
(The arithmetic is sketched after this entry.)
* Cleaning up
---------
Co-authored-by: Iwan Kawrakow <redacted>
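The arithmetic behind the change, as a scalar sketch with hypothetical struct names (ggml's real blocks differ in field names and nibble packing):
```
#include <cstdint>

constexpr int QK = 32;

// Hypothetical layouts. The new field 's' caches d * sum(qs), computed
// once during Q8_0 quantization instead of in every dot product.
struct blk_q8_0 { float d; float s;  int8_t  qs[QK];   };
struct blk_q4_1 { float d; float m; uint8_t qs[QK/2]; };

// dot = sum_i (d*q_i + m) * y_i
//     = d * y.d * sum_i q_i*p_i  +  m * (y.d * sum_i p_i)
// The parenthesized factor is exactly the precomputed y.s.
float dot_one_block(const blk_q4_1 & x, const blk_q8_0 & y) {
    int sumi = 0;
    for (int i = 0; i < QK/2; ++i) { // nibble order interleaved for clarity
        sumi += (x.qs[i] & 0x0F) * y.qs[2*i + 0]
              + (x.qs[i] >>   4) * y.qs[2*i + 1];
    }
    return x.d * y.d * sumi + x.m * y.s;
}
```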
slaren [Fri, 21 Apr 2023 12:57:57 +0000 (14:57 +0200)]
Show perplexity ETA in hours and minutes (#1096)
Georgi Gerganov [Fri, 21 Apr 2023 07:23:36 +0000 (10:23 +0300)]
llama : fix comment for "output.weight" tensor
Stephan Walter [Thu, 20 Apr 2023 21:56:44 +0000 (21:56 +0000)]
Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)
* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B
---------
Co-authored-by: Pavol Rusnak <redacted>
Georgi Gerganov [Thu, 20 Apr 2023 20:32:59 +0000 (23:32 +0300)]
ggml : sync ggml (add GPT-NeoX RoPE implementation)
Georgi Gerganov [Thu, 20 Apr 2023 18:58:05 +0000 (21:58 +0300)]
ggml : fix bug in ggml_compute_forward_dup_f32()
slaren [Thu, 20 Apr 2023 18:49:53 +0000 (20:49 +0200)]
Add Q4_3 support to cuBLAS (#1086)
Georgi Gerganov [Thu, 20 Apr 2023 18:43:50 +0000 (21:43 +0300)]
ggml : do not break cuBLAS build (Q4_3 is not yet implemented)
Georgi Gerganov [Thu, 20 Apr 2023 17:44:05 +0000 (20:44 +0300)]
ggml : fix Q4_3 quantization
Broke it during conflict resolution in last PR
Kawrakow [Thu, 20 Apr 2023 17:42:27 +0000 (19:42 +0200)]
llama : multi-threaded quantization (#1075)
* Multi-threading quantization.
Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.
* Multi-threading for quantize-stats
It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.
* Reviewer comments
* Avoiding compiler confusion
After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started to warn me that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So, I made
it a constexpr to make every compiler happy. (A sketch of the
issue follows this entry.)
* Still fighting with lambda captures in MSVC
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
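A sketch of the capture issue, under the assumption that chunk_size drives per-thread work:
```
#include <thread>
#include <vector>

// Sketch: a const int must be captured for MSVC, but clang/gcc then warn
// the capture is unnecessary; a constexpr is usable inside the lambda on
// all three compilers without appearing in the capture list.
void process_chunks(int n_threads) {
    constexpr int chunk_size = 32 * 512; // compile-time constant
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t) {
        workers.emplace_back([t]() {
            const int first = t * chunk_size; // no capture of chunk_size needed
            (void)first; // ... process rows [first, first + chunk_size) ...
        });
    }
    for (auto & w : workers) {
        w.join();
    }
}
```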
Georgi Gerganov [Thu, 20 Apr 2023 17:35:53 +0000 (20:35 +0300)]
ggml : add Q4_3 quantization (#1082)
Ivan Komarov [Thu, 20 Apr 2023 15:15:18 +0000 (17:15 +0200)]
ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074)
[Accelerate](https://developer.apple.com/documentation/accelerate) is an Apple framework which can only be used on macOS, and the CMake build [ignores](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt#L102) the `LLAMA_ACCELERATE` variable when run on non-Apple platforms. This implies setting `LLAMA_ACCELERATE` is a no-op on Ubuntu and can be removed.
This will reduce visual noise in CI check results (in addition to reducing the number of checks we have to run for every PR). Right now every sanitizer build is run twice for no good reason (e.g., we have `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, ON)` and `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, OFF)`).
源文雨 [Thu, 20 Apr 2023 13:28:43 +0000 (21:28 +0800)]
fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)
Stephan Walter [Thu, 20 Apr 2023 06:45:41 +0000 (06:45 +0000)]
AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)
slaren [Thu, 20 Apr 2023 01:14:14 +0000 (03:14 +0200)]
Improve cuBLAS performance by dequantizing on the GPU (#1065)
CRD716 [Wed, 19 Apr 2023 19:52:14 +0000 (14:52 -0500)]
Minor: Readme fixed grammar, spelling, and misc updates (#1071)
Kawrakow [Wed, 19 Apr 2023 18:20:14 +0000 (20:20 +0200)]
Q4_2 quantization with rmse-optimized scale and quants (#1062)
* Q4_2 quantization with rmse-optimized scale and quants
For quantize-stats we get
q4_2: rmse 0.00159301, maxerr 0.17480469, 95pct<0.0030, median<0.0012
For 7B perplexity with BLAS enabled we get 6.2038 after 655 chunks.
Quantization is slow (~90 seconds on my Mac for 7B) as it is not
multi-threaded as in PR #896.
* ggml : satisfy the sanitizer builds
Not sure why this makes them fail
* Better follow ggml conventions for function names
* Fixed type as per reviewer comment
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 19 Apr 2023 17:10:08 +0000 (20:10 +0300)]
ggml : use 8-bit precision for Q4_1 intermediate results (#1047)
* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)
* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmalq_n_f32
56 ms/token with Q4_1 !
* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)
* gitignore : ignore ppl-*.txt files
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Wed, 19 Apr 2023 16:07:54 +0000 (19:07 +0300)]
readme : add warning about Q4_2 and Q4_3
Stephan Walter [Wed, 19 Apr 2023 16:06:37 +0000 (16:06 +0000)]
ggml : Q4 cleanup - remove 4-bit dot product code (#1061)
* Q4 cleanup
* Remove unused AVX512 Q4_0 code
slaren [Wed, 19 Apr 2023 09:22:45 +0000 (11:22 +0200)]
Add NVIDIA cuBLAS support (#1044)
slaren [Tue, 18 Apr 2023 22:53:24 +0000 (00:53 +0200)]
Multi-threaded ggml_cpy (#1035)
* Multi-threaded ggml_cpy
* Update ggml.c
Co-authored-by: Georgi Gerganov <redacted>
* Also fix wdata offset in ggml_compute_forward_add_q_f32
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 18 Apr 2023 20:54:57 +0000 (23:54 +0300)]
ggml : add new Q4_2 quantization (ARM only) (#1046)
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
- 4 threads: ~100ms -> ~90ms
- 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
Georgi Gerganov [Tue, 18 Apr 2023 20:11:23 +0000 (23:11 +0300)]
ggml : scratch that - vmlaq_n_f32 is always better
Had a background process that was messing with the timings
Georgi Gerganov [Tue, 18 Apr 2023 20:00:08 +0000 (23:00 +0300)]
gitignore : vdot
Georgi Gerganov [Tue, 18 Apr 2023 19:59:17 +0000 (22:59 +0300)]
ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators
Kawrakow [Tue, 18 Apr 2023 19:00:14 +0000 (21:00 +0200)]
Adding a simple program to measure speed of dot products (#1041)
On my Mac, the direct Q4_1 product is marginally slower
(~69 vs ~55 us for Q4_0). The SIMD-ified ggml version
is now almost 2X slower (~121 us).
On a Ryzen 7950X CPU, the direct product for Q4_1 quantization
is faster than the AVX2 implementation (~60 vs ~62 us).
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Tue, 18 Apr 2023 17:10:26 +0000 (20:10 +0300)]
readme : update hot topics about new LoRA functionality
Georgi Gerganov [Mon, 17 Apr 2023 15:00:10 +0000 (18:00 +0300)]
ci : do not run on drafts
Ivan Komarov [Tue, 18 Apr 2023 01:15:50 +0000 (03:15 +0200)]
Do not close file after mmap (Windows version) (#1034)
Atsushi Tatsuma [Mon, 17 Apr 2023 19:34:35 +0000 (04:34 +0900)]
readme : add Ruby bindings (#1029)
Cameron [Mon, 17 Apr 2023 18:26:23 +0000 (11:26 -0700)]
add 4_0 to default outfile namestr dict (#1031)
This came up when trying to convert the gpt4all-lora-unfiltered-quantized.bin file.
slaren [Mon, 17 Apr 2023 15:28:55 +0000 (17:28 +0200)]
Add LoRA support (#820)
Arik Poznanski [Mon, 17 Apr 2023 14:41:53 +0000 (17:41 +0300)]
llama : well-defined static initialization of complex objects (#927)
* Replaced static initialization of complex objects with initialization on first use. This prevents undefined behavior at program startup; for example, a crash in Release builds that works in Debug builds. (The pattern is sketched after this entry.)
* replaced use of auto with exact type to avoid using -std=c++14
* Made the accessor functions for static maps static const
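A sketch of the initialization-on-first-use pattern (contents illustrative):
```
#include <map>
#include <string>

// Sketch: a function-local static is constructed on first call
// (thread-safe since C++11), sidestepping the undefined global
// initialization order that caused the Release-build crash.
static const std::map<int, std::string> & special_tokens() {
    static const std::map<int, std::string> tokens = {
        {1, "<s>"}, {2, "</s>"}, // illustrative contents only
    };
    return tokens;
}
```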
Georgi Gerganov [Mon, 17 Apr 2023 14:31:06 +0000 (17:31 +0300)]
quantize-stats : fix bug in --type argument
Georgi Gerganov [Mon, 17 Apr 2023 13:16:23 +0000 (16:16 +0300)]
ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c
Ivan Komarov [Mon, 17 Apr 2023 13:10:57 +0000 (15:10 +0200)]
Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)
slaren [Sun, 16 Apr 2023 19:27:38 +0000 (21:27 +0200)]
Fix: do not close file on mmap (#1017)
Georgi Gerganov [Sun, 16 Apr 2023 10:58:48 +0000 (13:58 +0300)]
stdout : vertical align outputs for better readability
Pavol Rusnak [Sun, 16 Apr 2023 10:13:00 +0000 (12:13 +0200)]
examples: add missing <ctime> include for time() (#1011)
nanahi [Sun, 16 Apr 2023 09:13:42 +0000 (17:13 +0800)]
Fix msys2 build error and warnings (#1009)
comex [Sat, 15 Apr 2023 21:53:21 +0000 (14:53 -0700)]
convert.py: Fix loading safetensors and ggml format on Windows (#991)
Calling `mmap.mmap` on Windows apparently resets the file offset of the
raw file object (and makes the BufferedReader return a *negative* file
offset). For safetensors, avoid using the file offset after calling
mmap. For GGML format, explicitly save and restore the offset.
Fixes #966.