git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Erik Scholz [Fri, 5 May 2023 20:56:09 +0000 (22:56 +0200)]
ci : add cublas to windows release (#1271)
Pavol Rusnak [Fri, 5 May 2023 14:43:36 +0000 (16:43 +0200)]
readme: add missing info (#1324)
Ionoclast Laboratories [Fri, 5 May 2023 12:18:21 +0000 (08:18 -0400)]
Fix for OpenCL / CLBlast builds on macOS. (#1329)
Benjamin Lecaillon [Fri, 5 May 2023 00:17:07 +0000 (02:17 +0200)]
Convert.py @staticmethod (#1327)
* Line 698 has a commented-out @staticmethod (#staticmethod) that should not be;
otherwise unpickle.load() throws an error because the method is not callable
* Update convert.py
---------
Co-authored-by: Ivan Stepanov <redacted>
slaren [Thu, 4 May 2023 22:58:56 +0000 (00:58 +0200)]
quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301)
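Example usage enabled by this change (illustrative filenames): `./quantize ggml-model-f16.bin q4_0` now writes `ggml-model-q4_0.bin` alongside the input.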
Ivan Stepanov [Thu, 4 May 2023 16:56:27 +0000 (19:56 +0300)]
Wrap exceptions in std::exception for verbose output on exceptions. (#1316)
Ivan Stepanov [Thu, 4 May 2023 16:54:37 +0000 (19:54 +0300)]
convert: support DT_BF16 tensors (#1309)
Co-authored-by: Pavol Rusnak <redacted>
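Supporting DT_BF16 is cheap because bfloat16 is the top half of an IEEE-754 float32; a sketch of the widening (convert.py does the equivalent in numpy):

```cpp
#include <stdint.h>
#include <string.h>

static float bf16_to_f32(uint16_t h) {
    const uint32_t bits = (uint32_t) h << 16; // bf16 bits become the high half
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}
```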
44670 [Thu, 4 May 2023 16:33:31 +0000 (00:33 +0800)]
readme : add OpenBuddy link (#1321)
44670 [Thu, 4 May 2023 15:41:12 +0000 (23:41 +0800)]
main : add --in-suffix option (#1318)
* adding --in-suffix option
* print input suffix before generation
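Illustrative usage (flag combination assumed, not from the PR): `./main -i -r "User:" --in-suffix "Assistant:"` prints the suffix after each user input, before generation.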
Ron Jailall [Thu, 4 May 2023 15:05:59 +0000 (11:05 -0400)]
ggml : change immintrin.h to intrin.h for compatibility (#1307)
* change immintrin.h to intrin.h for compatibility
Building on Windows 11 ARM throws an error on this line. It seems intrin.h covers both x86 and ARM.
* conditional def of intrin.h
* fix typo in ggml.c
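A minimal sketch of the conditional include described above; the exact guards in ggml.c may differ:

```cpp
#if defined(_MSC_VER)
#include <intrin.h>      // MSVC: single header covering x86 and ARM intrinsics
#else
#include <immintrin.h>   // GCC/Clang on x86
#endif
```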
DannyDaemonic [Thu, 4 May 2023 12:08:25 +0000 (05:08 -0700)]
Only escape prompts when used with `-e` (#1311)
DannyDaemonic [Thu, 4 May 2023 10:02:59 +0000 (03:02 -0700)]
Update main's README.md with new features (#1296)
Tomas [Thu, 4 May 2023 10:02:30 +0000 (17:02 +0700)]
fix #1224: reverse prompt and multi-line (#1297)
* fix reverse prompt and multi line
* Code Formatting
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 3 May 2023 20:24:20 +0000 (23:24 +0300)]
ggml : vectorize Q8_0 quantization
https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531
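For reference, a scalar sketch of the Q8_0 quantization being vectorized (layout per the float-scale Q8_0 of this vintage; names are illustrative):

```cpp
#include <math.h>
#include <stdint.h>

#define QK8_0 32
typedef struct { float d; int8_t qs[QK8_0]; } block_q8_0;

static void quantize_row_q8_0_ref(const float * x, block_q8_0 * y) {
    float amax = 0.0f;                         // absolute max of the block
    for (int i = 0; i < QK8_0; i++) {
        amax = fmaxf(amax, fabsf(x[i]));
    }
    const float d  = amax / 127.0f;            // per-block scale
    const float id = d != 0.0f ? 1.0f/d : 0.0f;
    y->d = d;
    for (int i = 0; i < QK8_0; i++) {
        y->qs[i] = (int8_t) roundf(x[i]*id);   // quantize to [-127, 127]
    }
}
```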
khimaros [Wed, 3 May 2023 17:58:11 +0000 (10:58 -0700)]
examples : read chat prompts from a template file (#1196)
Georgi Gerganov [Wed, 3 May 2023 17:09:42 +0000 (20:09 +0300)]
minor : fix whitespaces (#1302)
Georgi Gerganov [Wed, 3 May 2023 15:43:23 +0000 (18:43 +0300)]
minor : fix trailing whitespaces
KASR [Wed, 3 May 2023 15:31:28 +0000 (17:31 +0200)]
scripts : platform independent script to verify sha256 checksums (#1203)
* python script to verify the checksum of the llama models
Added Python script for verifying SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.
* Update README.md
update to the readme for improved readability and to explain the usage of the python checksum verification script
* update the verification script
I've extended the script based on suggestions by @prusnak
The script now checks the available RAM; if there is enough to read the file at once, it will do so, otherwise the file is read in chunks.
* minor improvement
small change so that the available RAM is checked and not the total RAM
* remove the part of the code that reads the file at once if enough RAM is available
based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.
* Update verify-checksum-models.py
quick fix to pass the git check
CRD716 [Wed, 3 May 2023 15:26:47 +0000 (10:26 -0500)]
examples : various prompt and example fixes (#1298)
* fix dan.txt
* miku prompt improvements
* use common characters
Evan Jones [Wed, 3 May 2023 02:26:13 +0000 (22:26 -0400)]
llama : only copy used KV cache in get / set state (#1272)
* llama : only copy used KV cache in get / set state
* switch to ggml for copying k, v
* avoid designated initializers
DannyDaemonic [Wed, 3 May 2023 01:46:20 +0000 (18:46 -0700)]
Process escape sequences given in prompts (#1173)
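A sketch of the kind of in-place escape processing this adds (illustrative, not the exact implementation):

```cpp
#include <stddef.h>

static void process_escapes(char * s) {
    size_t w = 0;
    for (size_t r = 0; s[r] != '\0'; r++) {
        if (s[r] == '\\' && s[r + 1] != '\0') {
            switch (s[++r]) {
                case 'n':  s[w++] = '\n'; break;
                case 't':  s[w++] = '\t'; break;
                case '\\': s[w++] = '\\'; break;
                default:   s[w++] = s[r]; break; // unknown escape: keep the char
            }
        } else {
            s[w++] = s[r];
        }
    }
    s[w] = '\0';
}
```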
DannyDaemonic [Wed, 3 May 2023 01:01:57 +0000 (18:01 -0700)]
Handle signals properly on Windows (#1123)
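A hedged sketch of the Windows side (the actual PR does more, e.g. cooperating with the generation loop):

```cpp
#include <windows.h>
#include <stdio.h>

static BOOL WINAPI console_ctrl_handler(DWORD type) {
    if (type == CTRL_C_EVENT) {
        fprintf(stderr, "\ninterrupted\n");
        ExitProcess(130);
    }
    return FALSE; // defer other events to the default handler
}

// registration, early in main():
//     SetConsoleCtrlHandler(console_ctrl_handler, TRUE);
```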
DannyDaemonic [Wed, 3 May 2023 00:52:35 +0000 (17:52 -0700)]
Call sh on build-info.sh (#1294)
kuvaus [Wed, 3 May 2023 00:43:43 +0000 (03:43 +0300)]
fix build-info.h for git submodules (#1289)
* make git build info work with submodules
---------
Co-authored-by: Green Sky <redacted>
slaren [Tue, 2 May 2023 23:36:45 +0000 (01:36 +0200)]
fix missing parameters in `llama_init_from_gpt_params` (#1293)
Ron Evans [Tue, 2 May 2023 20:39:51 +0000 (22:39 +0200)]
examples : add llama_init_from_gpt_params() common function (#1290)
Signed-off-by: deadprogram <redacted>
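The helper centralizes init boilerplate the examples shared; a minimal sketch of its role, assuming the examples/common.h of this vintage:

```cpp
#include "common.h"
#include "llama.h"

static llama_context * init_from_args(int argc, char ** argv) {
    gpt_params params;
    if (!gpt_params_parse(argc, argv, params)) {
        return nullptr;
    }
    return llama_init_from_gpt_params(params); // model load + context setup
}
```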
Georgi Gerganov [Tue, 2 May 2023 20:09:08 +0000 (23:09 +0300)]
llama : fix compile warnings
Georgi Gerganov [Tue, 2 May 2023 19:14:50 +0000 (22:14 +0300)]
ggml : fix 32-bit ARM
Ron Evans [Tue, 2 May 2023 17:53:52 +0000 (19:53 +0200)]
examples : improve vertical alignment of a few variables (#1286)
Signed-off-by: deadprogram <redacted>
Marvin Gießing [Tue, 2 May 2023 16:42:16 +0000 (18:42 +0200)]
ggml : fix ppc64le build error and make cmake detect Power processors (#1284)
* Fix ppc64le build issue
* Added support to detect ppc64* processors
Robert Brisita [Tue, 2 May 2023 16:23:44 +0000 (12:23 -0400)]
llama : allow 0 as a seed number. (#1275)
Ron Evans [Tue, 2 May 2023 16:13:26 +0000 (18:13 +0200)]
main : switch input_noecho to input_echo to remove negation (#979)
Signed-off-by: deadprogram <redacted>
slaren [Tue, 2 May 2023 14:03:00 +0000 (16:03 +0200)]
ggml: add names to tensors (#1268)
* ggml: add names to tensors
* minor improvements to dot file formatting
DannyDaemonic [Mon, 1 May 2023 16:23:47 +0000 (09:23 -0700)]
Add git-based build information for better issue tracking (#1232)
* Add git-based build information for better issue tracking
* macOS fix
* "build (hash)" and "CMAKE_SOURCE_DIR" changes
* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
* Fix conditional dependency on missing target
* Broke out build-info.cmake, added find_package fallback, added build info to all examples, and added dependencies to Makefile
* 4 space indenting for cmake, attempt to clean up my mess in Makefile
* Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
slaren [Mon, 1 May 2023 16:11:07 +0000 (18:11 +0200)]
cuBLAS: refactor and optimize f16 mat mul performance (#1259)
* cuBLAS: refactor, convert fp16 to fp32 on device
* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16
* fix build
* cuBLAS: update block_q5_1
xloem [Mon, 1 May 2023 12:58:51 +0000 (08:58 -0400)]
llama : update stubs for systems without mmap and mlock (#1266)
Co-authored-by: John Doe <redacted>
Kerfuffle [Mon, 1 May 2023 11:56:07 +0000 (05:56 -0600)]
ggml : fix ggml_used_mem() (#1264)
Georgi Gerganov [Mon, 1 May 2023 11:54:59 +0000 (14:54 +0300)]
llama : fix session load / save (#1263)
slaren [Mon, 1 May 2023 11:32:22 +0000 (13:32 +0200)]
cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)
* cuBLAS: fall back to pageable memory if pinned alloc fails
* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
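A sketch of the fallback logic described above (the helper name is illustrative; the matching free must track which allocator was used):

```cpp
#include <cuda_runtime.h>
#include <stdlib.h>

static void * host_alloc(size_t size) {
    if (getenv("GGML_CUDA_NO_PINNED") == NULL) {
        void * ptr = NULL;
        if (cudaMallocHost(&ptr, size) == cudaSuccess) {
            return ptr; // pinned (page-locked) memory, fastest for H2D copies
        }
        // pinned allocation failed -> fall through to pageable memory
    }
    return malloc(size);
}
```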
Alex Klinkhamer [Mon, 1 May 2023 07:24:20 +0000 (00:24 -0700)]
llama : let context be const when accessing const data (#1261)
Georgi Gerganov [Sun, 30 Apr 2023 19:28:51 +0000 (22:28 +0300)]
ggml : fix UB (int << 31)
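The class of bug being fixed, for reference (shifting into the sign bit of a signed int is undefined):

```cpp
#include <stdint.h>

// uint32_t mask = 1  << 31; // UB: the literal 1 is a signed int
uint32_t mask    = 1u << 31; // well-defined: unsigned shift
```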
Pavol Rusnak [Sun, 30 Apr 2023 18:48:38 +0000 (20:48 +0200)]
build: add armv{6,7,8} support to cmake (#1251)
- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality
jon-chuang [Sun, 30 Apr 2023 18:41:35 +0000 (14:41 -0400)]
common : better default number of threads (#934)
* commit
* fix
* try-catch
* apply code review
* improve
* improve
* add macos headers
* done
* remove color
* fix windows
* minor
* fix
* Apply suggestions from code review
Co-authored-by: DannyDaemonic <redacted>
* remove
* minor
* minor
---------
Co-authored-by: jon-chuang <redacted>
Co-authored-by: DannyDaemonic <redacted>
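A portable approximation of the idea (the PR itself queries physical/performance cores per platform, which this sketch does not):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>

static int32_t default_n_threads() {
    const unsigned n = std::thread::hardware_concurrency(); // may return 0
    return (int32_t) std::max(1u, n);
}
```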
0cc4m [Sun, 30 Apr 2023 18:34:52 +0000 (20:34 +0200)]
ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)
* Implement q5_0, q5_1 and q8_0
* Work around q5_0 OpenCL issue
* Fix q8_0 dequant kernel
* Move cl kernels into ggml-opencl.c
* Use two memcpy calls for q5_0 buffer transfer
Georgi Gerganov [Sun, 30 Apr 2023 16:07:00 +0000 (19:07 +0300)]
ggml : add Q5 WASM SIMD + GGML_FTYPE
Stephan Walter [Sun, 30 Apr 2023 12:32:37 +0000 (12:32 +0000)]
Various fixes to mat_mul benchmark (#1253)
Georgi Gerganov [Sun, 30 Apr 2023 07:25:46 +0000 (10:25 +0300)]
ggml : fix labels for GGML_OP_ALIBI
Georgi Gerganov [Sat, 29 Apr 2023 18:34:23 +0000 (21:34 +0300)]
ggml : fix 32-bit ARM NEON
Georgi Gerganov [Sat, 29 Apr 2023 18:12:56 +0000 (21:12 +0300)]
ggml : use vzip instead of vuzp for consistency
Georgi Gerganov [Sat, 29 Apr 2023 16:28:36 +0000 (19:28 +0300)]
ggml : fix visibility and unused warnings
Georgi Gerganov [Sat, 29 Apr 2023 15:43:42 +0000 (18:43 +0300)]
ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)
Georgi Gerganov [Sat, 29 Apr 2023 15:43:28 +0000 (18:43 +0300)]
ggml : adjust mul_mat_f16 work memory (#1226)
* llama : minor - remove explicit int64_t cast
* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS
* ggml : add asserts to guard for incorrect wsize
Georgi Gerganov [Sat, 29 Apr 2023 10:53:12 +0000 (13:53 +0300)]
build : fix reference to old llama_util.h
Georgi Gerganov [Sat, 29 Apr 2023 10:48:11 +0000 (13:48 +0300)]
examples : fix save-load-state + rename llama-util.h
Georgi Gerganov [Sat, 29 Apr 2023 06:51:06 +0000 (09:51 +0300)]
common : change default parameters to pre-#1126 (#1223)
Ivan Stepanov [Sat, 29 Apr 2023 05:34:41 +0000 (08:34 +0300)]
llama : new sampling algorithms (#1126)
* Sample interface, new samplers.
New samplers:
- locally typical sampling
- tail free sampling
- frequency and presence penalty
- mirostat
Ignore EOS fix: -inf should be used.
* mirostat
* Added --logit-bias and --no-penalize-nl, removed std::span
* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
* Save and load example adjust
* Tests
* Windows build fix
* Windows test fix
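Of the samplers listed above, the frequency/presence penalty has a compact standard form; a sketch (identifiers are illustrative, not the llama.h API):

```cpp
#include <unordered_map>
#include <vector>

static void apply_freq_presence_penalty(
        std::vector<float> & logits,
        const std::unordered_map<int, int> & counts, // token id -> occurrences
        float alpha_freq, float alpha_presence) {
    for (const auto & kv : counts) {
        logits[kv.first] -= kv.second * alpha_freq
                          + (kv.second > 0 ? alpha_presence : 0.0f);
    }
}
```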
slaren [Sat, 29 Apr 2023 00:04:18 +0000 (02:04 +0200)]
cuBLAS: use host pinned memory and dequantize while copying (#1207)
* cuBLAS: dequantize simultaneously while copying memory
* cuBLAS: use host pinned memory
* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory
* cuBLAS: also pin kv cache
* fix rebase
Henri Vasserman [Fri, 28 Apr 2023 23:31:56 +0000 (02:31 +0300)]
cuBLAS: non-contiguous tensor support (#1215)
* Cuda: non-contiguous tensor support
* remove extra stuff
* rename
* fix error
* more fixes, now OpenBLAS and CLBlast build too
* now then?
Stephan Walter [Fri, 28 Apr 2023 23:10:43 +0000 (23:10 +0000)]
Remove Q4_3 which is no better than Q5 (#1218)
Georgi Gerganov [Fri, 28 Apr 2023 18:32:52 +0000 (21:32 +0300)]
readme : update hot topics
Georgi Gerganov [Fri, 28 Apr 2023 17:37:43 +0000 (20:37 +0300)]
ggml : sync ggml (ggml_alibi)
CRD716 [Fri, 28 Apr 2023 16:13:33 +0000 (11:13 -0500)]
examples : add Jeopardy example (#1168)
* Basic Setup
* Prevent Results.txt from coming up
* Prefixes, Line separators, etc
* editorcheck
* introduction to give more consistent results
* Basic graph thing
* Grading, ready for testing!
* Y'all ready to get funky?
* fix column removal stuff
* missed a few
Evan Jones [Fri, 28 Apr 2023 15:59:37 +0000 (11:59 -0400)]
llama : add session file format and saved sessions in main (#1169)
Georgi Gerganov [Fri, 28 Apr 2023 14:58:44 +0000 (17:58 +0300)]
ggml : add helper debug printf in soft_max
0cc4m [Fri, 28 Apr 2023 14:57:16 +0000 (16:57 +0200)]
ggml : add CLBlast support (#1164)
* Allow use of OpenCL GPU-based BLAS using ClBlast instead of OpenBLAS for context processing
* Improve ClBlast implementation, avoid recreating buffers, remove redundant transfers
* Finish merge of ClBlast support
* Move CLBlast implementation to separate file
Add buffer reuse code (adapted from slaren's cuda implementation)
* Add q4_2 and q4_3 CLBlast support, improve code
* Double CLBlast speed by disabling OpenBLAS thread workaround
Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
* Fix device selection env variable names
* Fix cast in opencl kernels
* Add CLBlast to CMakeLists.txt
* Replace buffer pool with static buffers a, b, qb, c
Fix compile warnings
* Fix typos, use GGML_TYPE defines, improve code
* Improve btype dequant kernel selection code, add error if type is unsupported
* Improve code quality
* Move internal stuff out of header
* Use internal enums instead of CLBlast enums
* Remove leftover C++ includes and defines
* Make event use easier to read
Co-authored-by: Henri Vasserman <redacted>
* Use c compiler for opencl files
* Simplify code, fix include
* First check error, then release event
* Make globals static, fix indentation
* Rename dequant kernels file to conform with other file names
* Fix import cl file name
---------
Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Henri Vasserman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Folko-Ven [Fri, 28 Apr 2023 14:22:48 +0000 (19:22 +0500)]
Correcting link to w64devkit (#1214)
Correcting link to w64devkit (change seeto to skeeto).
Johannes Gäßler [Fri, 28 Apr 2023 13:40:32 +0000 (15:40 +0200)]
Add Manjaro CUDA include and lib dirs to Makefile (#1212)
Yann Follet [Fri, 28 Apr 2023 11:59:48 +0000 (19:59 +0800)]
add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)
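The scalar baseline this roughly doubles, for reference (block layout as in the Q8_0 sketch earlier in this log):

```cpp
#include <stdint.h>

#define QK8_0 32
typedef struct { float d; int8_t qs[QK8_0]; } block_q8_0;

static float vec_dot_q8_0_q8_0_scalar(int nb, const block_q8_0 * x, const block_q8_0 * y) {
    float sumf = 0.0f;
    for (int i = 0; i < nb; i++) {
        int sumi = 0;
        for (int j = 0; j < QK8_0; j++) {
            sumi += (int) x[i].qs[j] * (int) y[i].qs[j];
        }
        sumf += x[i].d * y[i].d * (float) sumi; // rescale the integer dot product
    }
    return sumf;
}
```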
Stephan Walter [Wed, 26 Apr 2023 20:26:42 +0000 (20:26 +0000)]
ggml : slightly faster AVX2 implementation for Q5 (#1197)
Georgi Gerganov [Wed, 26 Apr 2023 20:24:42 +0000 (23:24 +0300)]
readme : add quantization info
Georgi Gerganov [Wed, 26 Apr 2023 20:14:13 +0000 (23:14 +0300)]
ggml : add Q5_0 and Q5_1 quantization (#1187)
* ggml : add Q5_0 quantization (cuBLAS only)
* ggml : fix Q5_0 qh -> uint32_t
* ggml : fix q5_0 histogram stats
* ggml : q5_0 scalar dot product
* ggml : q5_0 ARM NEON dot
* ggml : q5_0 more efficient ARM NEON using uint64_t masks
* ggml : rename Q5_0 -> Q5_1
* ggml : adding Q5_0 mode
* quantize : add Q5_0 and Q5_1 to map
* ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)
---------
Co-authored-by: Stephan Walter <redacted>
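For orientation, a sketch of the Q5_0 block layout these commits build up (the real struct stores the scale as fp16; uint16_t stands in for its bits here):

```cpp
#include <stdint.h>

#define QK5_0 32
typedef struct {
    uint16_t d;            // per-block scale (fp16 bits)
    uint8_t  qh[4];        // 5th (high) bit of each weight, 32 bits total
    uint8_t  qs[QK5_0/2];  // low 4 bits, two weights per byte
} block_q5_0;
```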
Ásgeir Bjarni Ingvarsson [Wed, 26 Apr 2023 20:08:43 +0000 (20:08 +0000)]
Allow setting the rng seed after initialization. (#1184)
The llama_set_state_data function restores the rng state to what it
was at the time llama_copy_state_data was called. But users may want
to restore the state and proceed with a different seed.
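The usage this enables, as a sketch (llama_set_rng_seed is the call added here; signature per the llama.h of this vintage):

```cpp
#include <stdint.h>
#include "llama.h"

static void restore_with_new_seed(llama_context * ctx, uint8_t * state, int seed) {
    llama_set_state_data(ctx, state); // also restores the saved rng state
    llama_set_rng_seed(ctx, seed);    // then proceed with a different seed
}
```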
DaniAndTheWeb [Wed, 26 Apr 2023 20:03:03 +0000 (22:03 +0200)]
Updating build instructions to include BLAS support (#1183)
* Updated build information
First update to the build instructions to include BLAS.
* Update README.md
* Update information about BLAS
* Better BLAS explanation
Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.
* Better BLAS explanation
* BLAS for Mac
Specifying that BLAS is already supported on Macs using the Accelerate Framework.
* Clarify the effect of BLAS
* Windows Make instructions
Added the instructions to build with Make on Windows
* Fixing typo
* Fix trailing whitespace
Pavol Rusnak [Wed, 26 Apr 2023 16:43:27 +0000 (18:43 +0200)]
quantize : use `map` to assign quantization type from `string` (#1191)
instead of `int` (while the `int` option is still supported)
This allows the following usage:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`
instead of:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
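A sketch of the mapping involved (values mirror the llama_ftype enum of this vintage, e.g. 2 == LLAMA_FTYPE_MOSTLY_Q4_0; the real table lives in examples/quantize):

```cpp
#include <map>
#include <string>

static const std::map<std::string, int> LLAMA_FTYPE_MAP = {
    {"q4_0", 2}, {"q4_1", 3}, {"q4_2", 5}, {"q4_3", 6}, {"q8_0", 7},
};
```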
Stephan Walter [Tue, 25 Apr 2023 21:41:56 +0000 (21:41 +0000)]
Update SHA256SUMS after quantization change (#1181)
Co-authored-by: Pavol Rusnak <redacted>
ostix360 [Tue, 25 Apr 2023 21:33:08 +0000 (23:33 +0200)]
py : cast lora_alpha to int in convert-lora-to-ggml (#1170)
Co-authored-by: Pavol Rusnak <redacted>
Pavol Rusnak [Tue, 25 Apr 2023 21:19:57 +0000 (23:19 +0200)]
nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)
Georgi Gerganov [Tue, 25 Apr 2023 20:40:51 +0000 (23:40 +0300)]
ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)
* ggml : add Q8_0 quantization format (rename the old one to Q8_1)
* tests : fix test-quantize-fns
* ggml : finalize Q8_0 implementation
* ggml : use q4_0_q8_0 and q4_2_q8_0
* ggml : fix Q8_0 dot product bug (ARM)
* ggml : Q8_0 unroll x2
* ggml : fix bug - using wrong block type
* ggml : extend quantize_fns_t with "vec_dot_type"
* ggml : fix Q8_0 to use 255 values out of 256
* ggml : fix assert using wrong QK4_2 instead of QK4_3
unbounded [Tue, 25 Apr 2023 17:20:46 +0000 (19:20 +0200)]
ggml : use full range for Q4_0 and Q4_2 quantization (#729)
* Use full range for q4_0 quantization
By keeping the sign of the highest magnitude, we can make sure the
highest value maps to -8, which is currently unused.
This is a bit of a freebie since it is fully backwards compatible with
the current format.
* Update quantize_row_q4_0 for AVX/AVX2
* Update quantize_row_q4_0 for WASM
Untested
* Update quantize_row_q4_0 for Arm NEON
* Update quantize_row_q4_0 for PowerPC
Untested
* Use full range for q4_2 quantization
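A scalar sketch of the full-range scheme described above: the scale is taken from the signed value of largest magnitude so that it lands exactly on -8.

```cpp
#include <math.h>
#include <stdint.h>

#define QK4_0 32

static void quantize_block_q4_0_full(const float * x, float * d_out, int8_t * q) {
    float amax = 0.0f, max = 0.0f; // magnitude and signed value of the extremum
    for (int i = 0; i < QK4_0; i++) {
        if (fabsf(x[i]) > amax) { amax = fabsf(x[i]); max = x[i]; }
    }
    const float d  = max / -8.0f;  // sign chosen so the extremum maps to -8
    const float id = d != 0.0f ? 1.0f/d : 0.0f;
    *d_out = d;
    for (int i = 0; i < QK4_0; i++) {
        const int v = (int) (x[i]*id + 8.5f); // bias into [0, 16]
        q[i] = (int8_t) (v < 15 ? v : 15);    // clamp to the 4-bit range
    }
}
```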
xaedes [Mon, 24 Apr 2023 21:02:02 +0000 (23:02 +0200)]
ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)
The sum over all rows is now computed instead of just the last row
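The corrected reduction, as a standalone sketch: accumulate across every row instead of overwriting the total with the last row's sum.

```cpp
#include <stdint.h>

static float sum_rows_f32(const float * data, int64_t nrows, int64_t ncols) {
    float sum = 0.0f;
    for (int64_t i = 0; i < nrows; i++) {   // previously only the last row survived
        for (int64_t j = 0; j < ncols; j++) {
            sum += data[i*ncols + j];
        }
    }
    return sum;
}
```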
Georgi Gerganov [Mon, 24 Apr 2023 19:18:25 +0000 (22:18 +0300)]
ggml : export symbols (#1155)
xaedes [Mon, 24 Apr 2023 16:23:31 +0000 (18:23 +0200)]
examples : add save_load_state example (#1150)
* add save_load_state example
* use <cstdio> instead of <iostream> and fprintf / printf instead of cout
* renamed save-load-state example files replacing underscores by dashes
Georgi Gerganov [Mon, 24 Apr 2023 15:47:03 +0000 (18:47 +0300)]
llama : increase scratch buffer size for 65B (ref #1152)
Temporary solution
mgroeber9110 [Mon, 24 Apr 2023 15:45:32 +0000 (17:45 +0200)]
examples/main README improvements and some light refactoring (#1131)
Stephan Walter [Mon, 24 Apr 2023 15:38:26 +0000 (15:38 +0000)]
Fix build for gcc 8 and test in CI (#1154)
slaren [Mon, 24 Apr 2023 15:29:58 +0000 (17:29 +0200)]
Fix cuda compilation (#1128)
* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag
---------
Co-authored-by: B1gM8c <redacted>
Georgi Gerganov [Mon, 24 Apr 2023 04:40:02 +0000 (07:40 +0300)]
llama : refactor get / set state + remove redundant kv cache API (#1143)
slaren [Sun, 23 Apr 2023 21:03:44 +0000 (23:03 +0200)]
Fix LoRA acronym (#1145)
Georgi Gerganov [Sun, 23 Apr 2023 16:57:09 +0000 (19:57 +0300)]
scripts : add helper scripts to synch ggml repo
DannyDaemonic [Sun, 23 Apr 2023 15:37:02 +0000 (08:37 -0700)]
Added README.md for main with examples and explanations (#1139)
Georgi Gerganov [Sun, 23 Apr 2023 15:32:52 +0000 (18:32 +0300)]
ggml : do not print perf ops that have not been used at all
Georgi Gerganov [Sun, 23 Apr 2023 15:15:39 +0000 (18:15 +0300)]
ggml : better PERF prints + support "LLAMA_PERF=1 make"
Stephan Walter [Sun, 23 Apr 2023 11:01:03 +0000 (11:01 +0000)]
Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)
Pavol Rusnak [Sun, 23 Apr 2023 08:21:26 +0000 (10:21 +0200)]
readme : update gpt4all instructions (#980)
Yishuo Wang [Sun, 23 Apr 2023 07:57:05 +0000 (15:57 +0800)]
A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119)
Georgi Gerganov [Sat, 22 Apr 2023 13:31:56 +0000 (16:31 +0300)]
ggml : fix Q4_3 cuBLAS
Stephan Walter [Sat, 22 Apr 2023 13:12:29 +0000 (13:12 +0000)]
ci : trigger CI for drafts, but not most PR actions (#1125)
Stephan Walter [Sat, 22 Apr 2023 10:54:13 +0000 (10:54 +0000)]
Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)
unbounded [Sat, 22 Apr 2023 09:10:39 +0000 (11:10 +0200)]
ggml : unit test for quantization functions (#953)
* Unit test for quantization functions
Use the ggml_internal_get_quantize_fn function to loop through all
quantization formats and run a sanity check on the result.
Also add a microbenchmark that times these functions directly without
running the rest of the GGML graph.
* test-quantize-fns: CI fixes
Fix issues uncovered in CI
- need to use sizes divisible by 32*8 for loop unrolling
- use intrinsic header that should work on Mac
* test-quantize: remove
Per PR comment, subsumed by test-quantize-fns
* test-quantize: fix for q8_0 intermediates
wbpxre150 [Sat, 22 Apr 2023 08:56:35 +0000 (16:56 +0800)]
llama : print timings on ctrl+c exit (#1021)
* print timings on ctrl+c exit
* remove redundant free memory call.
* add global pointer to ctx.
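A sketch of the approach those bullets describe, with a global context pointer feeding the SIGINT handler (identifiers illustrative):

```cpp
#include <signal.h>
#include <stdlib.h>
#include "llama.h"

static llama_context * g_ctx = NULL;

static void sigint_handler(int /*signo*/) {
    if (g_ctx != NULL) {
        llama_print_timings(g_ctx); // report timings before exiting
    }
    _Exit(130);
}

// registration, once the context exists:
//     g_ctx = ctx;
//     signal(SIGINT, sigint_handler);
```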