git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
kuvaus [Wed, 3 May 2023 00:43:43 +0000 (03:43 +0300)]
fix build-info.h for git submodules (#1289)

* make git build info work with submodules

---------

Co-authored-by: Green Sky <redacted>
slaren [Tue, 2 May 2023 23:36:45 +0000 (01:36 +0200)]
fix missing parameters in `llama_init_from_gpt_params` (#1293)

Ron Evans [Tue, 2 May 2023 20:39:51 +0000 (22:39 +0200)]
examples : add llama_init_from_gpt_params() common function (#1290)

Signed-off-by: deadprogram <redacted>
Georgi Gerganov [Tue, 2 May 2023 20:09:08 +0000 (23:09 +0300)]
llama : fix compile warnings

Georgi Gerganov [Tue, 2 May 2023 19:14:50 +0000 (22:14 +0300)]
ggml : fix 32-bit ARM

Ron Evans [Tue, 2 May 2023 17:53:52 +0000 (19:53 +0200)]
examples : improve vertical alignment of a few variables (#1286)

Signed-off-by: deadprogram <redacted>
Marvin Gießing [Tue, 2 May 2023 16:42:16 +0000 (18:42 +0200)]
ggml : fix ppc64le build error and make cmake detect Power processors (#1284)

* Fix ppc64le build issue

* Added support to detect ppc64* processors

Robert Brisita [Tue, 2 May 2023 16:23:44 +0000 (12:23 -0400)]
llama : allow 0 as a seed number. (#1275)
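
For context, the usual pattern behind this fix, as an illustrative sketch (the exact check is my assumption, not the verbatim diff): treating only negative seeds as "randomize" makes 0 a usable seed value.

```cpp
#include <ctime>

// Sketch: only a negative seed requests a random one, so 0 becomes valid.
int resolve_seed(int seed) {
    if (seed < 0) {                  // previously behaved like: seed <= 0
        seed = (int) time(nullptr);
    }
    return seed;
}
```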

Ron Evans [Tue, 2 May 2023 16:13:26 +0000 (18:13 +0200)]
main : switch input_noecho to input_echo to remove negation (#979)

Signed-off-by: deadprogram <redacted>
slaren [Tue, 2 May 2023 14:03:00 +0000 (16:03 +0200)]
ggml: add names to tensors (#1268)

* ggml: add names to tensors

* minor improvements to dot file formatting
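
For context, the API added here is used roughly like this; a minimal sketch (the tensor expression is illustrative, `ggml_set_name` is the entry point this PR introduces):

```cpp
// Sketch: name an intermediate tensor so it is identifiable in debug
// output and in exported dot graphs.
struct ggml_tensor * cur = ggml_mul_mat(ctx0, model.wk, inp);
ggml_set_name(cur, "Kcur");
```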

DannyDaemonic [Mon, 1 May 2023 16:23:47 +0000 (09:23 -0700)]
Add git-based build information for better issue tracking (#1232)

* Add git-based build information for better issue tracking

* macOS fix

* "build (hash)" and "CMAKE_SOURCE_DIR" changes

* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages

* Fix conditional dependency on missing target

* Broke out build-info.cmake, added find_package fallback, added build info to all examples, and added dependencies to the Makefile

* 4 space indenting for cmake, attempt to clean up my mess in Makefile

* Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
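
The generated header ends up looking roughly like this; a sketch with placeholder values, and macro names assumed from the build-info scheme:

```cpp
// build-info.h (illustrative sketch of the generated file)
#ifndef BUILD_INFO_H
#define BUILD_INFO_H

#define BUILD_NUMBER 529           // commit count on the current branch
#define BUILD_COMMIT "abc1234"     // short git hash

#endif
```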

slaren [Mon, 1 May 2023 16:11:07 +0000 (18:11 +0200)]
cuBLAS: refactor and optimize f16 mat mul performance (#1259)

* cuBLAS: refactor, convert fp16 to fp32 on device

* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16

* fix build

* cuBLAS: update block_q5_1

xloem [Mon, 1 May 2023 12:58:51 +0000 (08:58 -0400)]
llama : update stubs for systems without mmap and mlock (#1266)

Co-authored-by: John Doe <redacted>
Kerfuffle [Mon, 1 May 2023 11:56:07 +0000 (05:56 -0600)]
ggml : fix ggml_used_mem() (#1264)

Georgi Gerganov [Mon, 1 May 2023 11:54:59 +0000 (14:54 +0300)]
llama : fix session load / save (#1263)

slaren [Mon, 1 May 2023 11:32:22 +0000 (13:32 +0200)]
cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)

* cuBLAS: fall back to pageable memory if pinned alloc fails

* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
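
A hedged sketch of the fallback logic described above (the helper name is mine; `cudaMallocHost` and the `GGML_CUDA_NO_PINNED` variable come from the commit text):

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// Sketch: prefer pinned (page-locked) host memory, degrade gracefully.
static void * host_alloc(size_t size, bool * is_pinned) {
    void * ptr = nullptr;
    *is_pinned = false;
    if (std::getenv("GGML_CUDA_NO_PINNED") == nullptr &&
        cudaMallocHost(&ptr, size) == cudaSuccess) {
        *is_pinned = true;
        return ptr;
    }
    return std::malloc(size);  // pageable fallback instead of failing
}
```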

Alex Klinkhamer [Mon, 1 May 2023 07:24:20 +0000 (00:24 -0700)]
llama : let context be const when accessing const data (#1261)

Georgi Gerganov [Sun, 30 Apr 2023 19:28:51 +0000 (22:28 +0300)]
ggml : fix UB (int << 31)
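
For reference, the class of bug being fixed: left-shifting into the sign bit of a signed `int` is undefined behavior. A minimal illustration (not the actual diff):

```cpp
#include <cstdint>

uint32_t good = 1u << 31;  // well-defined: unsigned shift
// int bad = 1 << 31;      // UB: shifts into the sign bit of a signed int
```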

Pavol Rusnak [Sun, 30 Apr 2023 18:48:38 +0000 (20:48 +0200)]
build: add armv{6,7,8} support to cmake (#1251)

- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality

jon-chuang [Sun, 30 Apr 2023 18:41:35 +0000 (14:41 -0400)]
common : better default number of threads (#934)

* commit

* fix

* try-catch

* apply code review

* improve

* improve

* add macos headers

* done

* remove color

* fix windows

* minor

* fix

* Apply suggestions from code review

Co-authored-by: DannyDaemonic <redacted>
* remove

* minor

* minor

---------

Co-authored-by: jon-chuang <redacted>
Co-authored-by: DannyDaemonic <redacted>
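
A hedged sketch of the idea (the PR itself queries the OS for physical core counts, e.g. via sysfs/sysctl; this standard-library-only version is an approximation):

```cpp
#include <cstdint>
#include <thread>

// Sketch: default to an estimate of physical cores rather than all
// hardware threads; SMT siblings rarely help compute-bound ggml kernels.
int32_t default_n_threads() {
    unsigned n = std::thread::hardware_concurrency();  // logical CPUs
    if (n == 0) return 4;       // unknown: fall back to a safe default
    return n > 1 ? n / 2 : 1;   // crude physical-core estimate (SMT2)
}
```
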
0cc4m [Sun, 30 Apr 2023 18:34:52 +0000 (20:34 +0200)]
ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)

* Implement q5_0, q5_1 and q8_0

* Work around q5_0 OpenCL issue

* Fix q8_0 dequant kernel

* Move cl kernels into ggml-opencl.c

* Use two memcpy calls for q5_0 buffer transfer

Georgi Gerganov [Sun, 30 Apr 2023 16:07:00 +0000 (19:07 +0300)]
ggml : add Q5 WASM SIMD + GGML_FTYPE

Stephan Walter [Sun, 30 Apr 2023 12:32:37 +0000 (12:32 +0000)]
Various fixes to mat_mul benchmark (#1253)

Georgi Gerganov [Sun, 30 Apr 2023 07:25:46 +0000 (10:25 +0300)]
ggml : fix labels for GGML_OP_ALIBI

Georgi Gerganov [Sat, 29 Apr 2023 18:34:23 +0000 (21:34 +0300)]
ggml : fix 32-bit ARM NEON

Georgi Gerganov [Sat, 29 Apr 2023 18:12:56 +0000 (21:12 +0300)]
ggml : use vzip instead of vuzp for consistency

Georgi Gerganov [Sat, 29 Apr 2023 16:28:36 +0000 (19:28 +0300)]
ggml : fix visibility and unused warnings

Georgi Gerganov [Sat, 29 Apr 2023 15:43:42 +0000 (18:43 +0300)]
ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)

Georgi Gerganov [Sat, 29 Apr 2023 15:43:28 +0000 (18:43 +0300)]
ggml : adjust mul_mat_f16 work memory (#1226)

* llama : minor - remove explicit int64_t cast

* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS

* ggml : add asserts to guard for incorrect wsize

Georgi Gerganov [Sat, 29 Apr 2023 10:53:12 +0000 (13:53 +0300)]
build : fix reference to old llama_util.h

Georgi Gerganov [Sat, 29 Apr 2023 10:48:11 +0000 (13:48 +0300)]
examples : fix save-load-state + rename llama-util.h

Georgi Gerganov [Sat, 29 Apr 2023 06:51:06 +0000 (09:51 +0300)]
common : change default parameters to pre-#1126 (#1223)

Ivan Stepanov [Sat, 29 Apr 2023 05:34:41 +0000 (08:34 +0300)]
llama : new sampling algorithms (#1126)

* Sample interface, new samplers.

New samplers:
- locally typical sampling
- tail free sampling
- frequency and presence penalty
- mirostat

Ignore EOS fix: -inf should be used.

* mirostat

* Added --logit-bias and --no-penalize-nl, removed std::span

* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)

* Save and load example adjust

* Tests

* Windows build fix

* Windows test fix
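
As a usage sketch of the sampler interface introduced here (function names per this PR; exact signatures approximate), a typical chain filters the candidate list in sequence and then draws one token:

```cpp
// Sketch: candidates are narrowed by each sampler, then one token is drawn.
llama_token_data_array candidates = { data.data(), data.size(), false };

llama_sample_repetition_penalty(ctx, &candidates,
                                last_tokens.data(), last_tokens.size(), 1.1f);
llama_sample_top_k      (ctx, &candidates, 40, 1);
llama_sample_tail_free  (ctx, &candidates, 0.95f, 1);
llama_sample_typical    (ctx, &candidates, 0.95f, 1);
llama_sample_top_p      (ctx, &candidates, 0.95f, 1);
llama_sample_temperature(ctx, &candidates, 0.80f);
llama_token id = llama_sample_token(ctx, &candidates);
```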

slaren [Sat, 29 Apr 2023 00:04:18 +0000 (02:04 +0200)]
cuBLAS: use host pinned memory and dequantize while copying (#1207)

* cuBLAS: dequantize simultaneously while copying memory

* cuBLAS: use host pinned memory

* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory

* cuBLAS: also pin kv cache

* fix rebase

Henri Vasserman [Fri, 28 Apr 2023 23:31:56 +0000 (02:31 +0300)]
cuBLAS: non-contiguous tensor support (#1215)

* Cuda: non-contiguous tensor support

* remove extra stuff

* rename

* fix error

* more fixes, now OpenBLAS and CLBlast build too

* now then?

Stephan Walter [Fri, 28 Apr 2023 23:10:43 +0000 (23:10 +0000)]
Remove Q4_3 which is no better than Q5 (#1218)

Georgi Gerganov [Fri, 28 Apr 2023 18:32:52 +0000 (21:32 +0300)]
readme : update hot topics

Georgi Gerganov [Fri, 28 Apr 2023 17:37:43 +0000 (20:37 +0300)]
ggml : sync ggml (ggml_alibi)

CRD716 [Fri, 28 Apr 2023 16:13:33 +0000 (11:13 -0500)]
examples : add Jeopardy example (#1168)

* Basic Setup

* Prevent Results.txt from coming up

* Prefixes, Line separators, etc

* editorcheck

* introduction to give more consistent results

* Basic graph thing

* Grading, ready for testing!

* Y'all ready to get funky?

* fix column removal stuff

* missed a few

Evan Jones [Fri, 28 Apr 2023 15:59:37 +0000 (11:59 -0400)]
llama : add session file format and saved sessions in main (#1169)

Georgi Gerganov [Fri, 28 Apr 2023 14:58:44 +0000 (17:58 +0300)]
ggml : add helper debug printf in soft_max

0cc4m [Fri, 28 Apr 2023 14:57:16 +0000 (16:57 +0200)]
ggml : add CLBlast support (#1164)

* Allow use of OpenCL GPU-based BLAS using ClBlast instead of OpenBLAS for context processing

* Improve ClBlast implementation, avoid recreating buffers, remove redundant transfers

* Finish merge of ClBlast support

* Move CLBlast implementation to separate file

Add buffer reuse code (adapted from slaren's cuda implementation)

* Add q4_2 and q4_3 CLBlast support, improve code

* Double CLBlast speed by disabling OpenBLAS thread workaround

Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
* Fix device selection env variable names

* Fix cast in opencl kernels

* Add CLBlast to CMakeLists.txt

* Replace buffer pool with static buffers a, b, qb, c

Fix compile warnings

* Fix typos, use GGML_TYPE defines, improve code

* Improve btype dequant kernel selection code, add error if type is unsupported

* Improve code quality

* Move internal stuff out of header
* Use internal enums instead of CLBlast enums
* Remove leftover C++ includes and defines
* Make event use easier to read

Co-authored-by: Henri Vasserman <redacted>
* Use c compiler for opencl files

* Simplify code, fix include

* First check error, then release event

* Make globals static, fix indentation

* Rename dequant kernels file to conform with other file names

* Fix import cl file name

---------

Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Henri Vasserman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Folko-Ven [Fri, 28 Apr 2023 14:22:48 +0000 (19:22 +0500)]
Correcting link to w64devkit (#1214)

Correcting link to w64devkit (change seeto to skeeto).

Johannes Gäßler [Fri, 28 Apr 2023 13:40:32 +0000 (15:40 +0200)]
Add Manjaro CUDA include and lib dirs to Makefile (#1212)

Yann Follet [Fri, 28 Apr 2023 11:59:48 +0000 (19:59 +0800)]
add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)
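
For context, the standard AVX2 trick for an int8 x int8 dot product, as a sketch of the technique (not necessarily the PR's exact code): `_mm256_maddubs_epi16` requires an unsigned first operand, so one vector is made non-negative and its sign is transferred to the other.

```cpp
#include <immintrin.h>

// Sketch: multiply-accumulate 32 signed int8 pairs into 16-bit sums.
static inline __m256i mul_i8_pairs(__m256i x, __m256i y) {
    const __m256i ax = _mm256_sign_epi8(x, x); // |x|, now a valid u8 operand
    const __m256i sy = _mm256_sign_epi8(y, x); // y with x's sign folded in
    return _mm256_maddubs_epi16(ax, sy);       // pairwise u8*s8 -> i16
}
```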

Stephan Walter [Wed, 26 Apr 2023 20:26:42 +0000 (20:26 +0000)]
ggml : slightly faster AVX2 implementation for Q5 (#1197)

Georgi Gerganov [Wed, 26 Apr 2023 20:24:42 +0000 (23:24 +0300)]
readme : add quantization info

Georgi Gerganov [Wed, 26 Apr 2023 20:14:13 +0000 (23:14 +0300)]
ggml : add Q5_0 and Q5_1 quantization (#1187)

* ggml : add Q5_0 quantization (cuBLAS only)

* ggml : fix Q5_0 qh -> uint32_t

* ggml : fix q5_0 histogram stats

* ggml : q5_0 scalar dot product

* ggml : q5_0 ARM NEON dot

* ggml : q5_0 more efficient ARM NEON using uint64_t masks

* ggml : rename Q5_0 -> Q5_1

* ggml : adding Q5_0 mode

* quantize : add Q5_0 and Q5_1 to map

* ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)

---------

Co-authored-by: Stephan Walter <redacted>
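
For orientation, the 5-bit block layout is approximately as follows; a sketch inferred from the commit notes (the `qh -> uint32_t` bullet above refers to the field holding the fifth bits):

```cpp
#include <cstdint>

#define QK5_0 32

// Sketch: 4 low bits per weight in qs, the 5th (high) bit of each of the
// 32 weights packed into qh. ggml_fp16_t comes from ggml.h.
typedef struct {
    ggml_fp16_t d;             // scale (delta)
    uint32_t    qh;            // one high bit per quantized value
    uint8_t     qs[QK5_0 / 2]; // two 4-bit values per byte
} block_q5_0;                  // Q5_1 additionally carries a min, m
```
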
Ásgeir Bjarni Ingvarsson [Wed, 26 Apr 2023 20:08:43 +0000 (20:08 +0000)]
Allow setting the rng seed after initialization. (#1184)

The llama_set_state_data function restores the rng state to what it
was at the time llama_copy_state_data was called. But users may want
to restore the state and proceed with a different seed.
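
A short usage sketch of what this enables, assuming the `llama_set_rng_seed` entry point this PR introduces:

```cpp
// Sketch: restore a saved session, then diverge with a fresh seed.
void restore_with_new_seed(llama_context * ctx, uint8_t * state) {
    llama_set_state_data(ctx, state);  // also restores the old rng state
    llama_set_rng_seed(ctx, 12345);    // override it before sampling again
}
```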

DaniAndTheWeb [Wed, 26 Apr 2023 20:03:03 +0000 (22:03 +0200)]
Updating build instructions to include BLAS support (#1183)

* Updated build information

First update to the build instructions to include BLAS.

* Update README.md

* Update information about BLAS

* Better BLAS explanation

Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.

* Better BLAS explanation

* BLAS for Mac

Specifying that BLAS is already supported on Macs using the Accelerate Framework.

* Clarify the effect of BLAS

* Windows Make instructions

Added the instructions to build with Make on Windows

* Fixing typo

* Fix trailing whitespace

Pavol Rusnak [Wed, 26 Apr 2023 16:43:27 +0000 (18:43 +0200)]
quantize : use `map` to assign quantization type from `string` (#1191)

instead of `int` (while the `int` option is still supported)

This allows the following usage:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`

instead of:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`

Stephan Walter [Tue, 25 Apr 2023 21:41:56 +0000 (21:41 +0000)]
Update SHA256SUMS after quantization change (#1181)

Co-authored-by: Pavol Rusnak <redacted>
ostix360 [Tue, 25 Apr 2023 21:33:08 +0000 (23:33 +0200)]
py : cast lora_alpha to int in convert-lora-to-ggml (#1170)

Co-authored-by: Pavol Rusnak <redacted>
Pavol Rusnak [Tue, 25 Apr 2023 21:19:57 +0000 (23:19 +0200)]
nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)

Georgi Gerganov [Tue, 25 Apr 2023 20:40:51 +0000 (23:40 +0300)]
ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)

* ggml : add Q8_0 quantization format (rename the old one to Q8_1)

* tests : fix test-quantize-fns

* ggml : finalize Q8_0 implementation

* ggml : use q4_0_q8_0 and q4_2_q8_0

* ggml : fix Q8_0 dot product bug (ARM)

* ggml : Q8_0 unroll x2

* ggml : fix bug - using wrong block type

* ggml : extend quantize_fns_t with "vec_dot_type"

* ggml : fix Q8_0 to use 255 values out of 256

* ggml : fix assert using wrong QK4_2 instead of QK4_3
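
To summarize the split, a sketch with approximate field layout: the old format, now Q8_1, carries precomputed quant sums for Q4_1-style dot products, while the new Q8_0 is the minimal scale-plus-quants block.

```cpp
#include <cstdint>

#define QK8_0 32

// Sketch (layout approximate) of the two 8-bit blocks after the rename:
typedef struct {
    float  d;          // scale
    int8_t qs[QK8_0];  // quantized values
} block_q8_0;

typedef struct {
    float  d;          // scale
    float  s;          // d * sum(qs), precomputed for q4_1-style dots
    int8_t qs[QK8_0];
} block_q8_1;
```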

unbounded [Tue, 25 Apr 2023 17:20:46 +0000 (19:20 +0200)]
ggml : use full range for Q4_0 and Q4_2 quantization (#729)

* Use full range for q4_0 quantization

By keeping the sign of the highest magnitude, we can make sure the
highest value maps to -8, which is currently unused.
This is a bit of a freebie since it is fully backwards compatible with
the current format.

* Update quantize_row_q4_0 for AVX/AVX2

* Update quantize_row_q4_0 for WASM

Untested

* Update quantize_row_q4_0 for Arm NEON

* Update quantize_row_q4_0 for PowerPC

Untested

* Use full range for q4_2 quantization
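
A scalar sketch of the full-range idea described above (illustrative, not the PR's code): pick the signed value with the largest magnitude and divide by -8, so that extreme value lands exactly on the otherwise unused -8 level.

```cpp
#include <cmath>
#include <cstdint>

// Sketch: full-range Q4_0-style quantization of one block of n floats.
void quantize_block_sketch(const float * x, int n, int8_t * q, float * d) {
    float amax = 0.0f, max = 0.0f;         // abs max and its signed value
    for (int i = 0; i < n; i++) {
        if (fabsf(x[i]) > amax) { amax = fabsf(x[i]); max = x[i]; }
    }
    *d = max / -8.0f;                      // keeps the sign: -8 is reachable
    const float id = (*d != 0.0f) ? 1.0f / *d : 0.0f;
    for (int i = 0; i < n; i++) {
        int v = (int) roundf(x[i] * id);
        q[i] = (int8_t) (v < -8 ? -8 : (v > 7 ? 7 : v));  // clamp to [-8, 7]
    }
}
```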

xaedes [Mon, 24 Apr 2023 21:02:02 +0000 (23:02 +0200)]
ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)

The sum over all rows is now computed instead of just the last row

Georgi Gerganov [Mon, 24 Apr 2023 19:18:25 +0000 (22:18 +0300)]
ggml : export symbols (#1155)

xaedes [Mon, 24 Apr 2023 16:23:31 +0000 (18:23 +0200)]
examples : add save_load_state example (#1150)

* add save_load_state example

* use <cstdio> instead of <iostream> and fprintf / printf instead of cout

* renamed save-load-state example files replacing underscores by dashes

Georgi Gerganov [Mon, 24 Apr 2023 15:47:03 +0000 (18:47 +0300)]
llama : increase scratch buffer size for 65B (ref #1152)

Temporary solution

mgroeber9110 [Mon, 24 Apr 2023 15:45:32 +0000 (17:45 +0200)]
examples/main README improvements and some light refactoring (#1131)

Stephan Walter [Mon, 24 Apr 2023 15:38:26 +0000 (15:38 +0000)]
Fix build for gcc 8 and test in CI (#1154)

slaren [Mon, 24 Apr 2023 15:29:58 +0000 (17:29 +0200)]
Fix cuda compilation (#1128)

* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag

---------

Co-authored-by: B1gM8c <redacted>
Georgi Gerganov [Mon, 24 Apr 2023 04:40:02 +0000 (07:40 +0300)]
llama : refactor get / set state + remove redundant kv cache API (#1143)

slaren [Sun, 23 Apr 2023 21:03:44 +0000 (23:03 +0200)]
Fix LoRA acronym (#1145)

Georgi Gerganov [Sun, 23 Apr 2023 16:57:09 +0000 (19:57 +0300)]
scripts : add helper scripts to synch ggml repo

DannyDaemonic [Sun, 23 Apr 2023 15:37:02 +0000 (08:37 -0700)]
Added README.md for main with examples and explanations (#1139)

Georgi Gerganov [Sun, 23 Apr 2023 15:32:52 +0000 (18:32 +0300)]
ggml : do not print perf ops that have not been used at all

Georgi Gerganov [Sun, 23 Apr 2023 15:15:39 +0000 (18:15 +0300)]
ggml : better PERF prints + support "LLAMA_PERF=1 make"

Stephan Walter [Sun, 23 Apr 2023 11:01:03 +0000 (11:01 +0000)]
Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)

Pavol Rusnak [Sun, 23 Apr 2023 08:21:26 +0000 (10:21 +0200)]
readme : update gpt4all instructions (#980)

Yishuo Wang [Sun, 23 Apr 2023 07:57:05 +0000 (15:57 +0800)]
A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119)

Georgi Gerganov [Sat, 22 Apr 2023 13:31:56 +0000 (16:31 +0300)]
ggml : fix Q4_3 cuBLAS

Stephan Walter [Sat, 22 Apr 2023 13:12:29 +0000 (13:12 +0000)]
ci : trigger CI for drafts, but not most PR actions (#1125)

Stephan Walter [Sat, 22 Apr 2023 10:54:13 +0000 (10:54 +0000)]
Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)

unbounded [Sat, 22 Apr 2023 09:10:39 +0000 (11:10 +0200)]
ggml : unit test for quantization functions (#953)

* Unit test for quantization functions

Use the ggml_internal_get_quantize_fn function to loop through all
quantization formats and run a sanity check on the result.

Also add a microbenchmark that times these functions directly without
running the rest of the GGML graph.

* test-quantize-fns: CI fixes

Fix issues uncovered in CI
 - need to use sizes divisible by 32*8 for loop unrolling
 - use intrinsic header that should work on Mac

* test-quantize: remove

Per PR comment, subsumed by test-quantize-fns

* test-quantize: fix for q8_0 intermediates
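
The shape of such a sanity check, as a sketch (the function-pointer parameters are hypothetical stand-ins for the entries returned by `ggml_internal_get_quantize_fn`):

```cpp
#include <cmath>
#include <vector>

// Sketch: quantize, dequantize, and require a small round-trip RMS error.
bool round_trip_ok(void (*quantize)(const float *, void *, int),
                   void (*dequantize)(const void *, float *, int),
                   const std::vector<float> & src, void * tmp, float tol) {
    std::vector<float> out(src.size());
    quantize(src.data(), tmp, (int) src.size());
    dequantize(tmp, out.data(), (int) src.size());
    double err = 0.0;
    for (size_t i = 0; i < src.size(); i++) {
        err += (double) (out[i] - src[i]) * (out[i] - src[i]);
    }
    return std::sqrt(err / src.size()) < tol;  // RMS within tolerance
}
```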

wbpxre150 [Sat, 22 Apr 2023 08:56:35 +0000 (16:56 +0800)]
llama : print timings on ctrl+c exit (#1021)

* print timings on ctrl+c exit

* remove redundant free memory call.

* add global pointer to ctx.
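
A minimal sketch of the mechanism (the global pointer mirrors the commit's approach; `llama_print_timings` is the existing API):

```cpp
#include <csignal>
#include <cstdlib>

static llama_context * g_ctx = nullptr;  // set once after context creation

static void sigint_handler(int /*signo*/) {
    if (g_ctx) {
        llama_print_timings(g_ctx);      // report timings before exiting
    }
    _Exit(130);                          // conventional SIGINT exit status
}

// in main(): g_ctx = ctx; std::signal(SIGINT, sigint_handler);
```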

eiery [Sat, 22 Apr 2023 08:27:05 +0000 (04:27 -0400)]
llama : have n_batch default to 512 (#1091)

* set default n_batch to 512 when using BLAS

* spacing

* alternate implementation of setting different n_batch for BLAS

* set n_batch to 512 for all cases

Howard Su [Sat, 22 Apr 2023 08:18:20 +0000 (16:18 +0800)]
cmake : fix build under Windows when BUILD_SHARED_LIBS is enabled (#1100)

* Fix build under Windows when BUILD_SHARED_LIBS is enabled

* Make the AVX512 test on Windows build the shared libs

Georgi Gerganov [Sat, 22 Apr 2023 08:08:12 +0000 (11:08 +0300)]
ggml : fix AVX build + update to new Q8_0 format

Georgi Gerganov [Sat, 22 Apr 2023 07:55:35 +0000 (10:55 +0300)]
ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)

* ggml : prefer vzip to vuzp

This way we always use the same type of instruction across all quantizations

* ggml : alternative Q4_3 implementation using modified Q8_0

* ggml : fix Q4_3 scalar implementation

* ggml : slight improvement of Q4_3 - no need for loop unrolling

* ggml : fix AVX paths for Q8_0 quantization

Stephan Walter [Sat, 22 Apr 2023 07:37:05 +0000 (07:37 +0000)]
ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)

* AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring

* finish AVX vectorization of quantize_row_q8_0

* Rename hsum_int_8 to hsum_i32_8

Clint Herron [Sat, 22 Apr 2023 06:54:33 +0000 (02:54 -0400)]
examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)

* Moving parameters to separate lines for readability.

* Increasing repeat_penalty to 1.1 to make alpaca more usable by default.

* Adding trailing newline.

xaedes [Sat, 22 Apr 2023 06:21:32 +0000 (08:21 +0200)]
llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105)

* reserve correct size for logits

* add functions to get and set the whole llama state:

including rng, logits, embedding and kv_cache

* remove unused variables

* remove trailing whitespace

* fix comment
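
A usage sketch of the API added here (entry points per this PR):

```cpp
#include <cstdint>
#include <vector>

// Sketch: snapshot the full context state (rng, logits, embedding,
// kv cache) and rewind to it later on the same model.
void snapshot_and_restore(llama_context * ctx) {
    std::vector<uint8_t> buf(llama_get_state_size(ctx));
    llama_copy_state_data(ctx, buf.data());  // save
    // ... evaluate some tokens, try a sampling branch ...
    llama_set_state_data(ctx, buf.data());   // rewind to the snapshot
}
```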

slaren [Fri, 21 Apr 2023 19:59:17 +0000 (21:59 +0200)]
Improve cuBLAS performance by using a memory pool (#1094)

* Improve cuBLAS performance by using a memory pool

* Move cuda specific definitions to ggml-cuda.h/cu

* Add CXX flags to nvcc

* Change memory pool synchronization mechanism to a spin lock
General code cleanup
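
A sketch of the pattern (simplified, names are mine): a small table of reusable device buffers guarded by a spin lock, so repeated matmuls stop paying cudaMalloc/cudaFree on every call.

```cpp
#include <atomic>
#include <cuda_runtime.h>

// Sketch: tiny best-effort device buffer pool with a spin lock.
struct pool_buf { void * ptr = nullptr; size_t size = 0; };
static pool_buf g_pool[16];
static std::atomic_flag g_lock = ATOMIC_FLAG_INIT;

static void * pool_malloc(size_t size) {
    while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
    for (auto & b : g_pool) {
        if (b.ptr != nullptr && b.size >= size) {  // reuse if big enough
            void * p = b.ptr;
            b.ptr = nullptr;
            g_lock.clear(std::memory_order_release);
            return p;
        }
    }
    g_lock.clear(std::memory_order_release);
    void * p = nullptr;
    cudaMalloc(&p, size);  // pool miss: allocate fresh (returned on free)
    return p;
}
```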

apaz [Fri, 21 Apr 2023 18:48:06 +0000 (13:48 -0500)]
llama : fixed rlimit error message (#888)

源文雨 [Fri, 21 Apr 2023 18:27:06 +0000 (02:27 +0800)]
cmake : link threads publicly to ggml (#1042)

* fix: ld link test-tokenizer-0 error

```
cmake3 --build . --config Release
[  5%] Built target ggml
[ 16%] Built target llama
[ 22%] Linking CXX executable ../bin/test-tokenizer-0
../libllama.a(ggml.c.o): in function `ggml_graph_compute':
ggml.c:(.text+0xf2db): undefined reference to `pthread_create'
ggml.c:(.text+0xf9d4): undefined reference to `pthread_join'
collect2: error: ld returned 1 exit status
gmake[2]: *** [bin/test-tokenizer-0] Error 1
gmake[1]: *** [tests/CMakeFiles/test-tokenizer-0.dir/all] Error 2
gmake: *** [all] Error 2
```

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update CMakeLists.txt

Alex Klinkhamer [Fri, 21 Apr 2023 18:18:09 +0000 (11:18 -0700)]
main : evaluate tokens in batches after swapping context (#1014)

* examples : evaluate tokens in batches after swapping context

* Update examples/main/main.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
xaedes [Fri, 21 Apr 2023 15:25:21 +0000 (17:25 +0200)]
llama : remember and restore kv cache data pointers (#1104)

because their value is stored in buf and overwritten by memcpy
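
In other words, a sketch with illustrative names (not the PR's code): the cache's `data` pointers live inside the region being overwritten, so they are saved first and written back afterwards.

```cpp
// Sketch: restoring state overwrites the kv cache struct wholesale,
// including live buffer pointers, so preserve them across the memcpy.
void * k_data = cache.k->data;
void * v_data = cache.v->data;
memcpy(&cache, src, sizeof(cache));  // clobbers the pointers...
cache.k->data = k_data;              // ...so put them back
cache.v->data = v_data;
```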

Kawrakow [Fri, 21 Apr 2023 15:18:26 +0000 (17:18 +0200)]
ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)

* A faster version for Q4_1 x Q8_0 dot products

The idea behind this change is that Q8_0 quantized
values get used many times in the matrix multiplications
where they are involved. In the current implementations,
when we are evaluating the dot products, we need to compute
the sum of the quants in the Q8_0 vector, so the same
operation is repeated many times. Here we pre-compute
the sum during Q8_0 quantization, store it in the
now modified block_q8_0 struct, and then reuse this
result in the subsequent dot products.

In a synthetic benchmark (just compute a bunch of dot
products), this change speeds up the Q4_1 * Q8_0 dot
product by 80%, making the performance identical to
Q4_0 * Q8_0.

In practical application, I see a ~15% gain in speed for
token prediction on M2, and ~5% gain on Ryzen 7950X.
The speed gain in the prompt evaluation is much bigger
(around 50%).

I have only done the change for the scalar version,
ARM_NEON, and AVX2, so we still need an AVX implementation.

* Cleaning up

---------

Co-authored-by: Iwan Kawrakow <redacted>
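
The algebra behind the trick, as a sketch: a Q4_1 block stores x = d4*q4 + m4, so a block dot product against Q8_0 (y = d8*q8) expands to d4*d8*sum(q4*q8) + m4*d8*sum(q8), and sum(q8) depends only on the Q8_0 block, so it can be computed once at quantization time.

```cpp
#include <cstdint>

// Scalar sketch: Q4_1 x Q8_0 block dot with a precomputed quant sum.
// s8 = sum of the 32 int8 quants, stored at quantization time.
float dot_q4_1_q8_0(float d4, float m4, const uint8_t * q4,
                    float d8, float s8, const int8_t * q8) {
    int sumi = 0;
    for (int i = 0; i < 16; i++) {           // 32 weights, 2 nibbles/byte
        sumi += (q4[i] & 0x0F) * q8[2*i + 0];
        sumi += (q4[i] >>   4) * q8[2*i + 1];
    }
    return d4 * d8 * sumi + m4 * d8 * s8;    // sum(q8) reused, not recomputed
}
```
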
slaren [Fri, 21 Apr 2023 12:57:57 +0000 (14:57 +0200)]
Show perplexity ETA in hours and minutes (#1096)

Georgi Gerganov [Fri, 21 Apr 2023 07:23:36 +0000 (10:23 +0300)]
llama : fix comment for "output.weight" tensor

Stephan Walter [Thu, 20 Apr 2023 21:56:44 +0000 (21:56 +0000)]
Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)

* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B

---------

Co-authored-by: Pavol Rusnak <redacted>
Georgi Gerganov [Thu, 20 Apr 2023 20:32:59 +0000 (23:32 +0300)]
ggml : sync ggml (add GPT-NeoX RoPE implementation)

Georgi Gerganov [Thu, 20 Apr 2023 18:58:05 +0000 (21:58 +0300)]
ggml : fix bug in ggml_compute_forward_dup_f32()

slaren [Thu, 20 Apr 2023 18:49:53 +0000 (20:49 +0200)]
Add Q4_3 support to cuBLAS (#1086)

Georgi Gerganov [Thu, 20 Apr 2023 18:43:50 +0000 (21:43 +0300)]
ggml : do not break cuBLAS build (Q4_3 is not yet implemented)

Georgi Gerganov [Thu, 20 Apr 2023 17:44:05 +0000 (20:44 +0300)]
ggml : fix Q4_3 quantization

Broke it during conflict resolution in last PR

Kawrakow [Thu, 20 Apr 2023 17:42:27 +0000 (19:42 +0200)]
llama : multi-threaded quantization (#1075)

* Multi-threading quantization.

Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.

* Multi-threading for quantize-stats

It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.

* Reviewer comments

* Avoiding compiler confusion

After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started to warn me that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So,
making it a constexpr to make every compiler happy.

* Still fighting with lambda captures in MSVC

---------

Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
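
The portability wrinkle described above, reduced to a minimal example (illustrative, not the PR's code): a `constexpr` local can be read inside a lambda without being captured, which satisfies clang/GCC's unused-capture warning and MSVC at the same time.

```cpp
#include <cstdio>

int main() {
    constexpr int chunk_size = 32 * 512;
    auto worker = []() {                          // no capture needed
        std::printf("chunk: %d\n", chunk_size);   // ok: constant expression
    };
    worker();
    return 0;
}
```
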
Georgi Gerganov [Thu, 20 Apr 2023 17:35:53 +0000 (20:35 +0300)]
ggml : add Q4_3 quantization (#1082)