git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Pavol Rusnak [Thu, 30 Mar 2023 22:52:06 +0000 (00:52 +0200)]
drop quantize.py (now that models are using a single file)
Georgi Gerganov [Thu, 30 Mar 2023 19:31:54 +0000 (22:31 +0300)]
readme : update supported models
Justine Tunney [Thu, 30 Mar 2023 12:42:56 +0000 (05:42 -0700)]
Introduce GGML migration tool for new file format
If you deleted your old Meta LLaMA .pth files, then the
migrate-ggml-2023-03-30-pr613.py script will allow you to convert your
old ggml files into the new mmap()'able format.
See #613
Justine Tunney [Thu, 30 Mar 2023 08:53:36 +0000 (01:53 -0700)]
Ensure --mlock works properly with mmap() support
Justine Tunney [Wed, 29 Mar 2023 20:51:37 +0000 (13:51 -0700)]
Make loading weights 10-100x faster
This is a breaking change that's going to give you three benefits:
1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes
This was accomplished by changing the file format so we can mmap()
weights directly into memory without having to read() or copy them.
This ensures the kernel can make its file cache pages directly
accessible to our inference processes, and it makes those cache
pages much less likely to get evicted (which would force loads to
hit disk), because they're no longer competing with memory pages
that were needlessly created by gigabytes of standard I/O.
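The mmap()-based loading described above can be sketched roughly as follows (a minimal POSIX sketch, not the actual llama.cpp loader; the helper name and error handling are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a whole file read-only; the kernel serves pages straight from its
// file cache, so no user-space copy of the weights is ever made.
static void *map_file(const char *path, size_t *size_out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    void *addr = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd); // the mapping stays valid after close()
    if (addr == MAP_FAILED) return nullptr;
    *size_out = (size_t) st.st_size;
    return addr;
}
```

Because the mapping is shared and read-only, many concurrent inference processes can reuse the same physical pages, which is what enables benefits 2 and 3 above.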
The new file format supports single-file models like LLaMA 7b, and
it also supports multi-file models like LLaMA 13B. Our Python tool
now merges the foo.1, foo.2, etc. files back into a single file so
that the C++ code which maps it doesn't need to reshape data every
time. That's made llama.cpp so much simpler. Much of its load code
has now been deleted.
Furthermore, this change ensures that tensors are aligned properly
on a 32-byte boundary. That opens the door to seeing if we can get
additional performance gains on some microprocessors, by using ops
that require memory alignment.
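The 32-byte alignment guarantee can be checked and produced with two small helpers (an illustrative sketch; the constant and function names are not from the codebase):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

constexpr std::size_t TENSOR_ALIGN = 32; // the new file format's boundary

// True if a pointer is suitably aligned for 32-byte (e.g. AVX) loads.
inline bool is_aligned32(const void *p) {
    return reinterpret_cast<std::uintptr_t>(p) % TENSOR_ALIGN == 0;
}

// Round an offset up to the next 32-byte boundary, as a file writer would
// when padding between tensors.
inline std::size_t align_up32(std::size_t offset) {
    return (offset + TENSOR_ALIGN - 1) & ~(TENSOR_ALIGN - 1);
}
```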
Lastly, note that both POSIX and Windows are supported.
Fixes #91
Slaren [Wed, 29 Mar 2023 20:22:36 +0000 (22:22 +0200)]
Initial windows support (untested)
Slaren [Wed, 29 Mar 2023 06:53:14 +0000 (08:53 +0200)]
Always initialize mm_addr and mm_length in llama_model
Slaren [Wed, 29 Mar 2023 06:31:26 +0000 (08:31 +0200)]
Unmap the file in llama_free
Slaren [Wed, 29 Mar 2023 04:18:18 +0000 (06:18 +0200)]
Make mmap_file static
Slaren [Wed, 29 Mar 2023 03:38:57 +0000 (05:38 +0200)]
Fix ggml_init_params in quantize
Slaren [Wed, 29 Mar 2023 00:03:43 +0000 (02:03 +0200)]
Add mmap support for model files
Stephan Walter [Thu, 30 Mar 2023 17:56:59 +0000 (17:56 +0000)]
cmake : properly invoke CTest (#629)
Casey Primozic [Thu, 30 Mar 2023 17:53:35 +0000 (10:53 -0700)]
Remove unused variable (#607)
* It seems some new warnings were added recently that exposed this. I wrote the code that originally included this unused variable, and it is indeed not needed.
david raistrick [Thu, 30 Mar 2023 17:34:45 +0000 (13:34 -0400)]
make : fix darwin f16c flags check (#615)
...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches)
Georgi Gerganov [Thu, 30 Mar 2023 17:27:32 +0000 (20:27 +0300)]
ggml : fix NEON signs (close #620, #622)
slaren [Thu, 30 Mar 2023 09:16:30 +0000 (11:16 +0200)]
Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)
anzz1 [Wed, 29 Mar 2023 20:44:39 +0000 (23:44 +0300)]
ci : re-enable AVX512 testing (Windows-MSVC) (#584)
* CI: Re-enable AVX512 testing (Windows-MSVC)
Now with 100% less base64 encoding
* plain __cpuid is enough here
Georgi Gerganov [Wed, 29 Mar 2023 19:15:34 +0000 (22:15 +0300)]
ggml : init time on first ggml_init() call
Georgi Gerganov [Wed, 29 Mar 2023 19:13:12 +0000 (22:13 +0300)]
llama : fix compile warnings when reading the vocab
Georgi Gerganov [Wed, 29 Mar 2023 19:10:01 +0000 (22:10 +0300)]
ggml : add ARM_NEON dequantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 19:03:02 +0000 (22:03 +0300)]
ggml : add ARM_NEON quantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 18:47:33 +0000 (21:47 +0300)]
ggml : add ARM_NEON ggml_vec_dot_q4_1()
Pavol Rusnak [Wed, 29 Mar 2023 18:09:25 +0000 (20:09 +0200)]
rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)
to match filenames of other converters
Thérence [Wed, 29 Mar 2023 17:21:09 +0000 (19:21 +0200)]
Create chat-13B.bat (#592)
* Create chat-13B.bat
Same script as chat-13B.sh, but for Windows users.
Tested and working on Windows 10/11 22H2
* Apply suggestions from code review
---------
Co-authored-by: anzz1 <redacted>
Georgi Gerganov [Wed, 29 Mar 2023 16:38:31 +0000 (19:38 +0300)]
readme : fix typos
Georgi Gerganov [Wed, 29 Mar 2023 16:37:20 +0000 (19:37 +0300)]
readme : add GPT4All instructions (close #588)
Georgi Gerganov [Wed, 29 Mar 2023 16:29:26 +0000 (19:29 +0300)]
py : add GPT4All conversion script
For now: copy-paste
Too much time for me to deduplicate the python code
Maël Kerbiriou [Wed, 29 Mar 2023 16:10:07 +0000 (18:10 +0200)]
llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)
Tobias Lütke [Wed, 29 Mar 2023 15:10:24 +0000 (17:10 +0200)]
add example of re-act pattern (#583)
* add example of re-act pattern
* spelling...
* fixed whitespace in reverse prompt issue
anzz1 [Wed, 29 Mar 2023 13:20:07 +0000 (16:20 +0300)]
Fix GCC warning about binary literal (#595)
0b10101010 -> 0xAA /* 0b10101010 */
anzz1 [Wed, 29 Mar 2023 13:19:29 +0000 (16:19 +0300)]
Fix typo in llama.h (#593)
anzz1 [Tue, 28 Mar 2023 19:44:29 +0000 (22:44 +0300)]
Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
* Enable Fused-Multiply-Add (FMA) instructions on MSVC
__FMA__ macro does not exist in MSVC
* Enable F16C/CVT16 vector extensions on MSVC
__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512
* MSVC cvt intrinsics
* Add __SSE3__ macro for MSVC too because why not
even though it's not currently used for anything when AVX is defined
anzz1 [Tue, 28 Mar 2023 19:43:25 +0000 (22:43 +0300)]
CI: fix subdirectory path globbing (#546)
- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled
anzz1 [Tue, 28 Mar 2023 18:23:09 +0000 (21:23 +0300)]
llama : fix linkage with mingw (#551)
* Revert
7e53955 (#542)
Still needs to be fixed properly
* Fix linking on mingw32
slaren [Tue, 28 Mar 2023 18:06:03 +0000 (20:06 +0200)]
ggml : add AVX2 implementation of quantize_row_q4_1 (#515)
* Add AVX2 implementation of quantize_row_q4_1
* Actually use AVX2
* Make quantize_row_q4_1 static
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
thement [Tue, 28 Mar 2023 17:55:42 +0000 (19:55 +0200)]
py : add temporary script to convert old ggml files to newer version (#539)
Co-authored-by: Jakub Horak <redacted>
Tai Duc Nguyen [Tue, 28 Mar 2023 17:51:29 +0000 (13:51 -0400)]
py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)
Stephan Walter [Tue, 28 Mar 2023 17:13:01 +0000 (17:13 +0000)]
ggml : refactor quantized processing functions (#509)
* Refactor quantized processing functions
* ggml : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
DooWoong Lee (David) [Tue, 28 Mar 2023 17:02:34 +0000 (02:02 +0900)]
py : remove unused `model` variable and verify that the code still functions correctly, with reduced memory usage, when run with the `vocab_only` setting (#547)
Georgi Gerganov [Tue, 28 Mar 2023 17:01:09 +0000 (20:01 +0300)]
ci : make ctest verbose, hopefully we see what is wrong with the sanitizer
Georgi Gerganov [Tue, 28 Mar 2023 16:51:55 +0000 (19:51 +0300)]
tests : free llama context at the end of the test
Stephan Walter [Tue, 28 Mar 2023 16:48:20 +0000 (16:48 +0000)]
all : be more strict about converting float to double (#458)
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <redacted>
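The implicit float-to-double promotion this change guards against can be seen in a two-line example (illustrative; not code from the repo):

```cpp
#include <cassert>
#include <type_traits>

// Multiplying a float by the double literal 0.1 silently promotes the whole
// expression to double; the 0.1f suffix keeps the arithmetic in float.
inline auto promoted(float x)    { return x * 0.1; }  // result is double
inline auto stays_float(float x) { return x * 0.1f; } // result is float

static_assert(std::is_same_v<decltype(promoted(1.0f)), double>);
static_assert(std::is_same_v<decltype(stays_float(1.0f)), float>);
```

On SIMD paths, accidental double math can halve throughput, which is why the stricter conversion rules matter.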
Jed Fox [Tue, 28 Mar 2023 16:39:01 +0000 (11:39 -0500)]
deploy : add a Package.swift for SwiftPM support (#393)
* Add a Package.swift for SwiftPM support
* Swap from exclusions to allowlist
Stephan Walter [Tue, 28 Mar 2023 15:56:03 +0000 (15:56 +0000)]
ggml : introduce structs for the q4 data blocks (#356)
* Introduce structs for the q4 data blocks
* ggml : rename quant struct variables + fix ARM_NEON
---------
Co-authored-by: Georgi Gerganov <redacted>
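The q4 block structs have roughly this shape (a sketch based on the PR description; `d` for scale, `m` for min, `qs` for the packed 4-bit quants, with QK = 32 weights per block — verify the exact layout against the source):

```cpp
#include <cassert>
#include <cstdint>

constexpr int QK = 32; // number of weights per quantization block

// q4_0: one fp32 scale + 32 weights packed two-per-byte.
struct block_q4_0 {
    float   d;          // scale
    uint8_t qs[QK / 2]; // nibbles: 32 x 4-bit quants
};

// q4_1: scale plus a minimum, trading 4 extra bytes per block for accuracy.
struct block_q4_1 {
    float   d;          // scale
    float   m;          // min
    uint8_t qs[QK / 2];
};

static_assert(sizeof(block_q4_0) == 20, "4 + 16 bytes");
static_assert(sizeof(block_q4_1) == 24, "8 + 16 bytes");
```

That works out to 20·8/32 = 5 bits per weight for q4_0 and 6 bits per weight for q4_1.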
Georgi Gerganov [Tue, 28 Mar 2023 15:34:35 +0000 (18:34 +0300)]
gitignore : add "embedding"
dotpy314 [Tue, 28 Mar 2023 15:06:28 +0000 (23:06 +0800)]
Check the existence of f16_model_path_base in quantize.py (#574)
Co-authored-by: Jincheng Miao <redacted>
slaren [Tue, 28 Mar 2023 14:26:55 +0000 (16:26 +0200)]
Fix usage of F16C intrinsics in AVX code (#563)
* Fix usage of F16C intrinsics in AVX code when F16C is not defined
anzz1 [Tue, 28 Mar 2023 14:09:55 +0000 (17:09 +0300)]
main.cpp fixes, refactoring (#571)
- main: entering an empty line passes back control without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main->common
RJ Adriaansen [Tue, 28 Mar 2023 06:11:09 +0000 (08:11 +0200)]
Add embedding example to Makefile (#540)
Marco Matthies [Mon, 27 Mar 2023 04:55:26 +0000 (06:55 +0200)]
Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)
Erik Scholz [Sun, 26 Mar 2023 15:48:40 +0000 (17:48 +0200)]
ci: add debug build to sanitizer build matrix (#527)
Stephan Walter [Sun, 26 Mar 2023 15:34:02 +0000 (15:34 +0000)]
Fix undefined variables in debug build, remove unused variables (#531)
Juan Calderon-Perez [Sun, 26 Mar 2023 14:48:42 +0000 (10:48 -0400)]
Add support for linux/arm64 platform during Docker Builds (#514)
* Add support for linux/arm64 platform
* Add platform to versioned builds
Stephan Walter [Sun, 26 Mar 2023 13:14:01 +0000 (13:14 +0000)]
Update README and comments for standalone perplexity tool (#525)
anzz1 [Sun, 26 Mar 2023 13:06:10 +0000 (16:06 +0300)]
[main] fix infinite generation (-n == -1) (#523)
Georgi Gerganov [Sun, 26 Mar 2023 07:20:49 +0000 (10:20 +0300)]
Add logo to README.md
Harald Fernengel [Sun, 26 Mar 2023 05:25:46 +0000 (07:25 +0200)]
Exit from interactive mode if input stream is bad (#491)
Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z
on Windows.
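The check amounts to treating a failed read as a quit request (a sketch; the function name is illustrative):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Read one line of user input; returns false when the stream has gone bad
// or hit end-of-file (CTRL-D on Unix, CTRL-Z + Enter on Windows), which the
// caller should treat the same as an explicit quit.
inline bool read_user_line(std::istream &in, std::string &line) {
    if (!std::getline(in, line)) {
        return false; // EOF or stream error: leave interactive mode
    }
    return true;
}
```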
anzz1 [Sat, 25 Mar 2023 22:13:28 +0000 (00:13 +0200)]
CI: Run other sanitizer builds even if one fails (#511)
applies only to sanitizer builds so they won't be cancelled
jp-x-g [Sat, 25 Mar 2023 21:53:55 +0000 (14:53 -0700)]
Clarify console output in convert-pth-to-ggml.py (#512)
"Processing part 1 of 3" instead of "Processing part 0"
anzz1 [Sat, 25 Mar 2023 21:38:11 +0000 (23:38 +0200)]
CMake / CI additions (#497)
* CMake: Add AVX512 option
* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it, building works anyway)
* CMake: Fix sanitizer linkage ( merged #468 )
* CI: Add sanitizer builds (Ubuntu)
* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR Added commitish as input zendesk/action-create-release#32 is merged)
anzz1 [Sat, 25 Mar 2023 20:29:22 +0000 (22:29 +0200)]
(Windows) Set console to UTF-8 on init (#420)
Sets console codepage to 65001 (CP_UTF8) on start for both input and output, should fix problems with UTF-8 characters.
Georgi Gerganov [Sat, 25 Mar 2023 19:53:39 +0000 (21:53 +0200)]
Fix colors enabling on WIN32
Georgi Gerganov [Sat, 25 Mar 2023 19:51:41 +0000 (21:51 +0200)]
If n_predict == -1, generate forever
Georgi Gerganov [Sat, 25 Mar 2023 19:36:22 +0000 (21:36 +0200)]
Infinite generation via context swapping (#71)
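Context swapping for endless generation works roughly like this: when the context fills up, keep the first n_keep tokens (the prompt) plus the most recent half of the rest, and continue generating (a simplified sketch of the idea, not the actual implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// When the context is full, drop the oldest generated tokens: keep the
// first n_keep tokens and the last (n_ctx - n_keep) / 2 tokens, freeing
// room to keep generating indefinitely.
inline std::vector<int> swap_context(const std::vector<int> &tokens,
                                     std::size_t n_ctx, std::size_t n_keep) {
    if (tokens.size() < n_ctx) return tokens; // not full yet, nothing to do
    std::size_t n_tail = (n_ctx - n_keep) / 2;
    std::vector<int> out(tokens.begin(), tokens.begin() + n_keep);
    out.insert(out.end(), tokens.end() - n_tail, tokens.end());
    return out;
}
```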
Georgi Gerganov [Sat, 25 Mar 2023 18:51:14 +0000 (20:51 +0200)]
Cleanup STL headers + fix embedding examples + minor stuff
Georgi Gerganov [Sat, 25 Mar 2023 18:36:52 +0000 (20:36 +0200)]
Move chat scripts into "./examples"
slaren [Sat, 25 Mar 2023 18:31:48 +0000 (19:31 +0100)]
Add AVX2 implementation of dequantize_row_q4_1 (#505)
Georgi Gerganov [Sat, 25 Mar 2023 18:26:40 +0000 (20:26 +0200)]
Overhaul the examples structure
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]
Retire the ggml_mul_mat() branch for transposed src0 (#500)
* Retire the ggml_mul_mat() for transposed src0
- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]
Disable prompt verbosity by default and add option to enable (#480)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]
Add AVX2 implementation of dequantize_row_q4_0 (#467)
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]
Don't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]
Add longer DAN prompt for testing big batch numbers
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]
Add timings for the prompt evaluation (#478)
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]
Remove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]
Remove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]
Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]
bounds checking for input prefix (#492)
anzz1 [Sat, 25 Mar 2023 12:03:19 +0000 (14:03 +0200)]
feat: '--in-prefix STRING' option (#426)
Prefix user inputs with a string
Jed Fox [Sat, 25 Mar 2023 05:26:28 +0000 (01:26 -0400)]
Add support for file load progress reporting callbacks (#434)
* File load progress reporting
* Move llama_progress_handler into llama_context_params
* Renames
* Use seekg to find file size instead
* More correct load progress
* Call progress callback more frequently
* Fix typo
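The callback mechanism is along these lines: the loader reports a fraction in [0, 1] as it processes each tensor (a sketch; the real `llama_progress_callback` lives in llama.h and its exact signature should be checked there):

```cpp
#include <cassert>
#include <cstddef>

// Shape of a progress callback: fraction done plus an opaque user pointer.
typedef void (*progress_callback)(float progress, void *user_data);

// Walk n_tensors "load" steps, reporting progress after each one.
inline void load_with_progress(std::size_t n_tensors,
                               progress_callback cb, void *user_data) {
    for (std::size_t i = 0; i < n_tensors; ++i) {
        // ... map / read tensor i here ...
        if (cb) cb(float(i + 1) / float(n_tensors), user_data);
    }
}
```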
Doomsdayrs [Sat, 25 Mar 2023 05:21:24 +0000 (01:21 -0400)]
Add missing struct annotation (#483)
`llama_sample_top_p_top_k` was missing the struct annotation on line 126.
This causes a compiler issue when being parsed by the Kotlin C interop generator.
This commit fixes the above issue by adding the struct annotation.
Chris Kuehl [Sat, 25 Mar 2023 04:38:14 +0000 (23:38 -0500)]
Fix crash for 65B model with pre-allocated memory (#485)
Georgi Gerganov [Fri, 24 Mar 2023 21:47:06 +0000 (23:47 +0200)]
Disable BLAS altogether - the bug is not just for quantized mat mul
Georgi Gerganov [Fri, 24 Mar 2023 21:39:17 +0000 (23:39 +0200)]
Disable BLAS branch in mul_mat - seems there is a bug
Georgi Gerganov [Fri, 24 Mar 2023 21:17:58 +0000 (23:17 +0200)]
Immediately start processing the prompt before user input has been provided (#476)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)]
Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts
* Simpler scratch buffer usage
* Reenable BLAS for quantized mul_mat
* Fix number of layers in 30B and 65B
* Fix KV cache size for F32
Georgi Gerganov [Fri, 24 Mar 2023 16:23:56 +0000 (18:23 +0200)]
Temporarily bump the memory buffer size - hopefully fix issues from
483bab2e
Gary Mulder [Fri, 24 Mar 2023 15:23:09 +0000 (15:23 +0000)]
Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to the models from Facebook and never through this repo.
rabidcopy [Fri, 24 Mar 2023 15:22:39 +0000 (10:22 -0500)]
fix instruct mode (#445)
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
Georgi Gerganov [Fri, 24 Mar 2023 15:21:01 +0000 (17:21 +0200)]
Properly free llama_context on failure
Cameron Kaiser [Fri, 24 Mar 2023 15:19:26 +0000 (08:19 -0700)]
additional optimizations for POWER9 (#454)
comex [Fri, 24 Mar 2023 15:19:05 +0000 (08:19 -0700)]
Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS
This is enabled by a new --mlock command line option.
Using mlock() disables swapping and memory compression for the model
data. Doing so can be useful on systems where the model takes up a
large fraction of system RAM. In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.
Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.
In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
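The mlock() call described above amounts to a thin wrapper (a POSIX sketch; locking can legitimately fail when RLIMIT_MEMLOCK is low, so failure is reported to the caller rather than treated as fatal):

```cpp
#include <cassert>
#include <cstddef>
#include <sys/mman.h>

// Try to pin a mapped region in RAM so it cannot be swapped out or
// compressed. Returns false on failure (e.g. RLIMIT_MEMLOCK too low);
// callers should warn and continue, since locking is only an optimization.
inline bool try_mlock(const void *addr, std::size_t len) {
    return mlock(addr, len) == 0;
}

inline void unlock(const void *addr, std::size_t len) {
    munlock(addr, len);
}
```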
Luciano [Fri, 24 Mar 2023 15:05:13 +0000 (08:05 -0700)]
Add embedding mode with arg flag. Currently working (#282)
* working but ugly
* add arg flag, not working on embedding mode
* typo
* Working! Thanks to @nullhook
* make params argument instead of hardcoded boolean. remove useless time check
* start doing the instructions but not finished. This probably doesn't compile
* Embeddings extraction support
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 24 Mar 2023 07:13:35 +0000 (09:13 +0200)]
Add link to Roadmap discussion
Georgi Gerganov [Fri, 24 Mar 2023 04:22:28 +0000 (06:22 +0200)]
Revert "Fix memory allocation issues and seg faults"
This reverts commit
4870e455b3653f7d7769fa5772b2c90ffad088df .
Will provide the correct fix later
Georgi Gerganov [Thu, 23 Mar 2023 22:11:53 +0000 (00:11 +0200)]
Fix memory allocation issues and seg faults
Georgi Gerganov [Thu, 23 Mar 2023 21:22:01 +0000 (23:22 +0200)]
Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different number of threads and batch sizes
Jed Fox [Thu, 23 Mar 2023 20:42:52 +0000 (16:42 -0400)]
Fix quantize script not finding models in parent directory (#428)
Georgi Gerganov [Thu, 23 Mar 2023 20:39:44 +0000 (22:39 +0200)]
Remove obsolete command from Docker script
Georgi Gerganov [Thu, 23 Mar 2023 20:32:02 +0000 (22:32 +0200)]
Obsolete