llama.cpp commit log (git.djapps.eu, pkg/ggml/sources/llama.cpp)
Ivan Stepanov [Wed, 5 Apr 2023 14:38:37 +0000 (17:38 +0300)]
make : missing host optimizations in CXXFLAGS (#763)
Adithya Balaji [Wed, 5 Apr 2023 14:36:12 +0000 (16:36 +0200)]
readme : update with CMake and windows example (#748)
* README: Update with CMake and windows example
* README: update with code-review for cmake build
at8u [Wed, 5 Apr 2023 14:32:42 +0000 (14:32 +0000)]
examples : add Miku.sh (#724)
* Add Miku.sh to examples
* Add missing line to prompt in Miku.sh
* Add --keep param to Miku.sh
* Remove '[end_of_conversation]' line from Miku.sh
It is no longer necessary.
Andrew Duffy [Wed, 5 Apr 2023 10:44:24 +0000 (11:44 +0100)]
Add Accelerate/BLAS when using Swift (#765)
mgroeber9110 [Mon, 3 Apr 2023 16:00:55 +0000 (18:00 +0200)]
Windows: reactivate sigint handler after each Ctrl-C (#736)
SebastianApel [Mon, 3 Apr 2023 07:52:28 +0000 (09:52 +0200)]
10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)
* Performance improvement of AVX2 code
* Fixed problem with MSVC compiler
* Reviewer comments: removed double semicolon, deleted empty line 1962
Ivan Stepanov [Mon, 3 Apr 2023 00:19:04 +0000 (03:19 +0300)]
Define non-positive temperature behavior (#720)
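A minimal sketch of the convention this commit appears to adopt: a non-positive temperature disables sampling and greedily picks the highest-logit token. The function below is illustrative only, not the actual llama.cpp sampling API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative token picker: temp <= 0 means greedy argmax decoding;
// a positive temp scales logits by 1/temp before (omitted) sampling.
int pick_token(const std::vector<float> &logits, float temp) {
    if (temp <= 0.0f) {
        // greedy: index of the highest logit
        return (int) (std::max_element(logits.begin(), logits.end()) - logits.begin());
    }
    // scale logits; real code would softmax-sample here, we just argmax
    std::vector<float> scaled(logits.size());
    for (size_t i = 0; i < logits.size(); ++i) scaled[i] = logits[i] / temp;
    return (int) (std::max_element(scaled.begin(), scaled.end()) - scaled.begin());
}
```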
bsilvereagle [Sun, 2 Apr 2023 22:13:03 +0000 (15:13 -0700)]
Remove torch GPU dependencies from the Docker.full image (#665)
By using `pip install torch --index-url https://download.pytorch.org/whl/cpu`
instead of `pip install torch` we can specify we want to install a CPU-only version
of PyTorch without any GPU dependencies. This reduces the size of the Docker image
from 7.32 GB to 1.62 GB
Thatcher Chamberlin [Sun, 2 Apr 2023 10:48:57 +0000 (06:48 -0400)]
Add a missing step to the gpt4all instructions (#690)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
Christian Falch [Sun, 2 Apr 2023 10:23:04 +0000 (12:23 +0200)]
Added api for getting/setting the kv_cache (#685)
The api provides access methods for retrieving the current memory buffer for the kv_cache and its token number.
It also contains a method for setting the kv_cache from a memory buffer.
This makes it possible to load/save history - maybe support a --cache-prompt parameter as well?
Co-authored-by: Pavol Rusnak <redacted>
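The save/restore pattern this API enables can be sketched as follows. The struct and function names are hypothetical stand-ins, not the exact symbols added in #685; the real API exposes the kv_cache as a raw memory buffer plus a token count.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical view of a kv_cache: a raw byte buffer plus the number
// of tokens it currently represents (names are illustrative).
struct kv_cache_view {
    std::vector<uint8_t> buf;
    int n_tokens = 0;
};

// save: snapshot the current cache contents
kv_cache_view save_cache(const kv_cache_view &ctx_cache) {
    return ctx_cache; // copies buffer + token count
}

// restore: overwrite the context's cache from a saved snapshot,
// e.g. to roll a conversation back without re-evaluating the prompt
void restore_cache(kv_cache_view &ctx_cache, const kv_cache_view &saved) {
    ctx_cache = saved;
}
```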
Marian Cepok [Sun, 2 Apr 2023 10:21:31 +0000 (12:21 +0200)]
ggml : change ne to int64_t (#626)
Leonardo Neumann [Sun, 2 Apr 2023 07:56:20 +0000 (04:56 -0300)]
examples : add gpt4all script (#658)
Stephan Walter [Sun, 2 Apr 2023 07:18:53 +0000 (07:18 +0000)]
llama : do not allocate KV cache for "vocab_only == true" (#682)
Fixes sanitizer CI
Fabian [Sun, 2 Apr 2023 07:17:05 +0000 (09:17 +0200)]
make : use -march=native -mtune=native on x86 (#609)
Murilo Santana [Sun, 2 Apr 2023 02:41:12 +0000 (23:41 -0300)]
fix default params for examples/main (#697)
Ikko Eltociear Ashimine [Sat, 1 Apr 2023 16:38:18 +0000 (01:38 +0900)]
py: huggingface -> Hugging Face (#686)
rimoliga [Sat, 1 Apr 2023 14:57:30 +0000 (11:57 -0300)]
readme: replace termux links with homepage, play store is deprecated (#680)
Slaren [Fri, 31 Mar 2023 18:03:48 +0000 (20:03 +0200)]
Show error message when -f fails
Stephan Walter [Fri, 31 Mar 2023 19:19:16 +0000 (19:19 +0000)]
Enable -std= for cmake builds, fix warnings (#598)
slaren [Fri, 31 Mar 2023 15:55:52 +0000 (17:55 +0200)]
Optimize AVX2 ggml_vec_dot_q4_0 (#642)
perserk [Fri, 31 Mar 2023 11:55:44 +0000 (16:55 +0500)]
Add AVX acceleration (#617)
* ggml : add AVX quantize_row_q4_0()
* ggml : add AVX ggml_vec_dot_q4_0()
* ggml : refactor AVX part of ggml_vec_dot_q4_0()
https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
Pavol Rusnak [Wed, 29 Mar 2023 19:31:24 +0000 (21:31 +0200)]
py : cleanup the code
- use f-strings where possible
- drop first param of encode/decode functions since "utf-8" is the default
Pavol Rusnak [Thu, 30 Mar 2023 22:52:06 +0000 (00:52 +0200)]
drop quantize.py (now that models are using a single file)
Georgi Gerganov [Thu, 30 Mar 2023 19:31:54 +0000 (22:31 +0300)]
readme : update supported models
Justine Tunney [Thu, 30 Mar 2023 12:42:56 +0000 (05:42 -0700)]
Introduce GGML migration tool for new file format
If you deleted your old Meta LLaMA .pth files, then the
migrate-ggml-2023-03-30-pr613.py script will allow you to convert your
old ggml files into the new mmap()'able format.
See #613
Justine Tunney [Thu, 30 Mar 2023 08:53:36 +0000 (01:53 -0700)]
Ensure --mlock works properly with mmap() support
Justine Tunney [Wed, 29 Mar 2023 20:51:37 +0000 (13:51 -0700)]
Make loading weights 10-100x faster
This is a breaking change that's going to give you three benefits:
1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes
This was accomplished by changing the file format so that we can mmap()
weights directly into memory without having to read() or copy them.
This ensures the kernel can make its file cache pages directly
accessible to our inference processes, and that those cache pages are
much less likely to be evicted (which would force loads to hit disk),
because they no longer compete with memory pages that were needlessly
created by gigabytes of standard I/O.
The new file format supports single-file models like LLaMA 7b, and
it also supports multi-file models like LLaMA 13B. Our Python tool
now merges the foo.1, foo.2, etc. files back into a single file so
that the C++ code which maps it doesn't need to reshape data every
time. That's made llama.cpp so much simpler. Much of its load code
has now been deleted.
Furthermore, this change ensures that tensors are aligned properly
on a 32-byte boundary. That opens the door to seeing if we can get
additional performance gains on some microprocessors, by using ops
that require memory alignment.
Lastly, note that both POSIX and Windows are supported
Fixes #91
Slaren [Wed, 29 Mar 2023 20:22:36 +0000 (22:22 +0200)]
Initial windows support (untested)
Slaren [Wed, 29 Mar 2023 06:53:14 +0000 (08:53 +0200)]
Always initialize mm_addr and mm_length in llama_model
Slaren [Wed, 29 Mar 2023 06:31:26 +0000 (08:31 +0200)]
Unmap the file in llama_free
Slaren [Wed, 29 Mar 2023 04:18:18 +0000 (06:18 +0200)]
Make mmap_file static
Slaren [Wed, 29 Mar 2023 03:38:57 +0000 (05:38 +0200)]
Fix ggml_init_params in quantize
Slaren [Wed, 29 Mar 2023 00:03:43 +0000 (02:03 +0200)]
Add mmap support for model files
Stephan Walter [Thu, 30 Mar 2023 17:56:59 +0000 (17:56 +0000)]
cmake : properly invoke CTest (#629)
Casey Primozic [Thu, 30 Mar 2023 17:53:35 +0000 (10:53 -0700)]
Remove unused variable (#607)
* It seems some new warnings were added recently that exposed this. I wrote the code that originally included this unused variable, and it is indeed not needed.
david raistrick [Thu, 30 Mar 2023 17:34:45 +0000 (13:34 -0400)]
make : fix darwin f16c flags check (#615)
...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches)
Georgi Gerganov [Thu, 30 Mar 2023 17:27:32 +0000 (20:27 +0300)]
ggml : fix NEON signs (close #620, #622)
slaren [Thu, 30 Mar 2023 09:16:30 +0000 (11:16 +0200)]
Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)
anzz1 [Wed, 29 Mar 2023 20:44:39 +0000 (23:44 +0300)]
ci : re-enable AVX512 testing (Windows-MSVC) (#584)
* CI: Re-enable AVX512 testing (Windows-MSVC)
Now with 100% less base64 encoding
* plain __cpuid is enough here
Georgi Gerganov [Wed, 29 Mar 2023 19:15:34 +0000 (22:15 +0300)]
ggml : init time on first ggml_init() call
Georgi Gerganov [Wed, 29 Mar 2023 19:13:12 +0000 (22:13 +0300)]
llama : fix compile warnings when reading the vocab
Georgi Gerganov [Wed, 29 Mar 2023 19:10:01 +0000 (22:10 +0300)]
ggml : add ARM_NEON dequantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 19:03:02 +0000 (22:03 +0300)]
ggml : add ARM_NEON quantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 18:47:33 +0000 (21:47 +0300)]
ggml : add ARM_NEON ggml_vec_dot_q4_1()
Pavol Rusnak [Wed, 29 Mar 2023 18:09:25 +0000 (20:09 +0200)]
rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)
to match filenames of other converters
Thérence [Wed, 29 Mar 2023 17:21:09 +0000 (19:21 +0200)]
Create chat-13B.bat (#592)
* Create chat-13B.bat
Same script as chat-13B.sh, but for Windows users.
Tested and working on Windows 10/11 v22H2
* Apply suggestions from code review
---------
Co-authored-by: anzz1 <redacted>
Georgi Gerganov [Wed, 29 Mar 2023 16:38:31 +0000 (19:38 +0300)]
readme : fix typos
Georgi Gerganov [Wed, 29 Mar 2023 16:37:20 +0000 (19:37 +0300)]
readme : add GPT4All instructions (close #588)
Georgi Gerganov [Wed, 29 Mar 2023 16:29:26 +0000 (19:29 +0300)]
py : add GPT4All conversion script
For now: copy-paste
Too much time for me to deduplicate the python code
Maël Kerbiriou [Wed, 29 Mar 2023 16:10:07 +0000 (18:10 +0200)]
llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)
Tobias Lütke [Wed, 29 Mar 2023 15:10:24 +0000 (17:10 +0200)]
add example of re-act pattern (#583)
* add example of re-act pattern
* spelling...
* fixed whitespace in reverse prompt issue
anzz1 [Wed, 29 Mar 2023 13:20:07 +0000 (16:20 +0300)]
Fix GCC warning about binary literal (#595)
0b10101010 -> 0xAA /* 0b10101010 */
anzz1 [Wed, 29 Mar 2023 13:19:29 +0000 (16:19 +0300)]
Fix typo in llama.h (#593)
anzz1 [Tue, 28 Mar 2023 19:44:29 +0000 (22:44 +0300)]
Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
* Enable Fused-Multiply-Add (FMA) instructions on MSVC
__FMA__ macro does not exist in MSVC
* Enable F16C/CVT16 vector extensions on MSVC
__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512
* MSVC cvt intrinsics
* Add __SSE3__ macro for MSVC too because why not
even though it's not currently used for anything when AVX is defined
anzz1 [Tue, 28 Mar 2023 19:43:25 +0000 (22:43 +0300)]
CI: fix subdirectory path globbing (#546)
- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled
anzz1 [Tue, 28 Mar 2023 18:23:09 +0000 (21:23 +0300)]
llama : fix linkage with mingw (#551)
* Revert
7e53955 (#542)
Still needs to be fixed properly
* Fix linking on mingw32
slaren [Tue, 28 Mar 2023 18:06:03 +0000 (20:06 +0200)]
ggml : add AVX2 implementation of quantize_row_q4_1 (#515)
* Add AVX2 implementation of quantize_row_q4_1
* Actually use AVX2
* Make quantize_row_q4_1 static
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
thement [Tue, 28 Mar 2023 17:55:42 +0000 (19:55 +0200)]
py : add temporary script to convert old ggml files to newer version (#539)
Co-authored-by: Jakub Horak <redacted>
Tai Duc Nguyen [Tue, 28 Mar 2023 17:51:29 +0000 (13:51 -0400)]
py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)
Stephan Walter [Tue, 28 Mar 2023 17:13:01 +0000 (17:13 +0000)]
ggml : refactor quantized processing functions (#509)
* Refactor quantized processing functions
* ggml : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
DooWoong Lee (David) [Tue, 28 Mar 2023 17:02:34 +0000 (02:02 +0900)]
py : removed unused `model` variable and verified that the code works correctly with the `vocab_only` setting, including after the reduced memory usage from deleting the no-longer-needed variable. (#547)
Georgi Gerganov [Tue, 28 Mar 2023 17:01:09 +0000 (20:01 +0300)]
ci : make ctest verbose, hopefully we see what is wrong with the sanitizer
Georgi Gerganov [Tue, 28 Mar 2023 16:51:55 +0000 (19:51 +0300)]
tests : free llama context at the end of the test
Stephan Walter [Tue, 28 Mar 2023 16:48:20 +0000 (16:48 +0000)]
all : be more strict about converting float to double (#458)
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <redacted>
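The equivalence test mentioned above can be illustrated as follows: compute SiLU (x * sigmoid(x)) once in float and once in double and check that the single-precision result stays within a small tolerance. This mirrors the idea of the commit's test, not its exact code.

```cpp
#include <cmath>

// SiLU in single and double precision (illustrative, not the ggml code)
float  silu_f32(float x)  { return x / (1.0f + std::exp(-x)); }
double silu_f64(double x) { return x / (1.0  + std::exp(-x)); }

// true if the float implementation agrees with the double one within tol
bool silu_close(float x, double tol) {
    return std::fabs((double) silu_f32(x) - silu_f64((double) x)) < tol;
}
```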
Jed Fox [Tue, 28 Mar 2023 16:39:01 +0000 (11:39 -0500)]
deploy : add a Package.swift for SwiftPM support (#393)
* Add a Package.swift for SwiftPM support
* Swap from exclusions to allowlist
Stephan Walter [Tue, 28 Mar 2023 15:56:03 +0000 (15:56 +0000)]
ggml : introduce structs for the q4 data blocks (#356)
* Introduce structs for the q4 data blocks
* ggml : rename quant struct variables + fix ARM_NEON
---------
Co-authored-by: Georgi Gerganov <redacted>
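The shape of such a q4_0 block can be sketched as below: one fp32 scale followed by 32 4-bit quants packed two per byte. Field names are representative of the change, not guaranteed to match the source.

```cpp
#include <cstdint>

#define QK 32  // number of quantized values per block (illustrative)

// Illustrative q4_0 block: a scale plus packed 4-bit quants.
typedef struct {
    float   d;          // scale (delta)
    uint8_t qs[QK / 2]; // two 4-bit quantized values per byte
} block_q4_0;

// the struct is expected to pack without padding
static_assert(sizeof(block_q4_0) == sizeof(float) + QK / 2,
              "q4_0 block should have no padding");
```

Grouping the scale with its quants in one struct (instead of parallel arrays) keeps each block's data adjacent in memory, which helps the SIMD dot-product kernels.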
Georgi Gerganov [Tue, 28 Mar 2023 15:34:35 +0000 (18:34 +0300)]
gitignore : add "embedding"
dotpy314 [Tue, 28 Mar 2023 15:06:28 +0000 (23:06 +0800)]
Check the existence of f16_model_path_base in quantize.py (#574)
Co-authored-by: Jincheng Miao <redacted>
slaren [Tue, 28 Mar 2023 14:26:55 +0000 (16:26 +0200)]
Fix usage of F16C intrinsics in AVX code (#563)
* Fix usage of F16C intrinsics in AVX code when F16C is not defined
anzz1 [Tue, 28 Mar 2023 14:09:55 +0000 (17:09 +0300)]
main.cpp fixes, refactoring (#571)
- main: entering an empty line passes control back without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main->common
RJ Adriaansen [Tue, 28 Mar 2023 06:11:09 +0000 (08:11 +0200)]
Add embedding example to Makefile (#540)
Marco Matthies [Mon, 27 Mar 2023 04:55:26 +0000 (06:55 +0200)]
Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)
Erik Scholz [Sun, 26 Mar 2023 15:48:40 +0000 (17:48 +0200)]
ci: add debug build to sanitizer build matrix (#527)
Stephan Walter [Sun, 26 Mar 2023 15:34:02 +0000 (15:34 +0000)]
Fix undefined variables in debug build, remove unused variables (#531)
Juan Calderon-Perez [Sun, 26 Mar 2023 14:48:42 +0000 (10:48 -0400)]
Add support for linux/arm64 platform during Docker Builds (#514)
* Add support for linux/arm64 platform
* Add platform to versioned builds
Stephan Walter [Sun, 26 Mar 2023 13:14:01 +0000 (13:14 +0000)]
Update README and comments for standalone perplexity tool (#525)
anzz1 [Sun, 26 Mar 2023 13:06:10 +0000 (16:06 +0300)]
[main] fix infinite generation (-n == -1) (#523)
Georgi Gerganov [Sun, 26 Mar 2023 07:20:49 +0000 (10:20 +0300)]
Add logo to README.md
Harald Fernengel [Sun, 26 Mar 2023 05:25:46 +0000 (07:25 +0200)]
Exit from interactive mode if input stream is bad (#491)
Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z
on Windows.
anzz1 [Sat, 25 Mar 2023 22:13:28 +0000 (00:13 +0200)]
CI: Run other sanitizer builds even if one fails (#511)
applies only to sanitizer builds so they won't be cancelled
jp-x-g [Sat, 25 Mar 2023 21:53:55 +0000 (14:53 -0700)]
Clarify console output in convert-pth-to-ggml.py (#512)
"Processing part 1 of 3" instead of "Processing part 0"
anzz1 [Sat, 25 Mar 2023 21:38:11 +0000 (23:38 +0200)]
CMake / CI additions (#497)
* CMake: Add AVX512 option
* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it, building works anyway)
* CMake: Fix sanitizer linkage ( merged #468 )
* CI: Add sanitizer builds (Ubuntu)
* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR Added commitish as input zendesk/action-create-release#32 is merged)
anzz1 [Sat, 25 Mar 2023 20:29:22 +0000 (22:29 +0200)]
(Windows) Set console to UTF-8 on init (#420)
Sets the console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.
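The change boils down to two Win32 calls at startup. A portable sketch (the guard and function name are illustrative; SetConsoleCP/SetConsoleOutputCP are the real Win32 APIs):

```cpp
#ifdef _WIN32
#include <windows.h>
#endif

// Set both console codepages to UTF-8 (65001) on Windows;
// other platforms are assumed to use UTF-8 terminals already.
bool init_console_utf8() {
#ifdef _WIN32
    return SetConsoleCP(CP_UTF8) && SetConsoleOutputCP(CP_UTF8);
#else
    return true; // nothing to do outside Windows
#endif
}
```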
Georgi Gerganov [Sat, 25 Mar 2023 19:53:39 +0000 (21:53 +0200)]
Fix colors enabling on WIN32
Georgi Gerganov [Sat, 25 Mar 2023 19:51:41 +0000 (21:51 +0200)]
If n_predict == -1, generate forever
Georgi Gerganov [Sat, 25 Mar 2023 19:36:22 +0000 (21:36 +0200)]
Infinite generation via context swapping (#71)
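The idea behind the swap can be sketched as: when the context window fills up, keep the first n_keep tokens (e.g. the prompt) plus the most recent half of the remainder, then re-evaluate from there so generation can continue indefinitely. Names and the exact split are illustrative, not the main.cpp code.

```cpp
#include <vector>

// Illustrative context swap: keep the prompt prefix and the recent
// half of everything after it once the window is full.
std::vector<int> swap_context(const std::vector<int> &tokens, int n_ctx, int n_keep) {
    if ((int) tokens.size() < n_ctx) return tokens; // still fits
    const int n_left = (int) tokens.size() - n_keep;
    std::vector<int> out(tokens.begin(), tokens.begin() + n_keep);
    // append the second half of the non-kept tokens
    out.insert(out.end(), tokens.end() - n_left / 2, tokens.end());
    return out;
}
```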
Georgi Gerganov [Sat, 25 Mar 2023 18:51:14 +0000 (20:51 +0200)]
Cleanup STL headers + fix embedding examples + minor stuff
Georgi Gerganov [Sat, 25 Mar 2023 18:36:52 +0000 (20:36 +0200)]
Move chat scripts into "./examples"
slaren [Sat, 25 Mar 2023 18:31:48 +0000 (19:31 +0100)]
Add AVX2 implementation of dequantize_row_q4_1 (#505)
Georgi Gerganov [Sat, 25 Mar 2023 18:26:40 +0000 (20:26 +0200)]
Overhaul the examples structure
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]
Retire the ggml_mul_mat() branch for transposed src0 (#500)
* Retire the ggml_mul_mat() for transposed src0
- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]
Disable prompt verbosity by default and add option to enable (#480)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]
Add AVX2 implementation of dequantize_row_q4_0 (#467)
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]
Don't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]
Add longer DAN prompt for testing big batch numbers
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]
Add timings for the prompt evaluation (#478)
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]
Remove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]
Remove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]
Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]
bounds checking for input prefix (#492)