git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
py: huggingface -> Hugging Face (#686)
Ikko Eltociear Ashimine [Sat, 1 Apr 2023 16:38:18 +0000 (01:38 +0900)]

readme: replace termux links with homepage, play store is deprecated (#680)
rimoliga [Sat, 1 Apr 2023 14:57:30 +0000 (11:57 -0300)]

Show error message when -f fails
Slaren [Fri, 31 Mar 2023 18:03:48 +0000 (20:03 +0200)]

Enable -std= for cmake builds, fix warnings (#598)
Stephan Walter [Fri, 31 Mar 2023 19:19:16 +0000 (19:19 +0000)]

Optimize AVX2 ggml_vec_dot_q4_0 (#642)
slaren [Fri, 31 Mar 2023 15:55:52 +0000 (17:55 +0200)]

Add AVX acceleration (#617)
perserk [Fri, 31 Mar 2023 11:55:44 +0000 (16:55 +0500)]

* ggml : add AVX quantize_row_q4_0()

* ggml : add AVX ggml_vec_dot_q4_0()

* ggml : refactor AVX part of ggml_vec_dot_q4_0()

https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645

py : cleanup the code
Pavol Rusnak [Wed, 29 Mar 2023 19:31:24 +0000 (21:31 +0200)]

- use f-strings where possible
- drop first param of encode/decode functions since "utf-8" is the default

drop quantize.py (now that models are using a single file)
Pavol Rusnak [Thu, 30 Mar 2023 22:52:06 +0000 (00:52 +0200)]

readme : update supported models
Georgi Gerganov [Thu, 30 Mar 2023 19:31:54 +0000 (22:31 +0300)]

Introduce GGML migration tool for new file format
Justine Tunney [Thu, 30 Mar 2023 12:42:56 +0000 (05:42 -0700)]

If you deleted your old Meta LLaMA .pth files, then the
migrate-ggml-2023-03-30-pr613.py script will allow you to convert your
old ggml files into the new mmap()'able format.

See #613

Ensure --mlock works properly with mmap() support
Justine Tunney [Thu, 30 Mar 2023 08:53:36 +0000 (01:53 -0700)]

Make loading weights 10-100x faster
Justine Tunney [Wed, 29 Mar 2023 20:51:37 +0000 (13:51 -0700)]

This is a breaking change that's going to give you three benefits:

1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes

This was accomplished by changing the file format so we can mmap()
weights directly into memory without having to read() or copy them
thereby ensuring the kernel can make its file cache pages directly
accessible to our inference processes; and secondly, that the file
cache pages are much less likely to get evicted (which would force
loads to hit disk) because they're no longer competing with memory
pages that were needlessly created by gigabytes of standard i/o.

The new file format supports single-file models like LLaMA 7b, and
it also supports multi-file models like LLaMA 13B. Our Python tool
now merges the foo.1, foo.2, etc. files back into a single file so
that the C++ code which maps it doesn't need to reshape data every
time. That's made llama.cpp so much simpler. Much of its load code
has now been deleted.

Furthermore, this change ensures that tensors are aligned properly
on a 32-byte boundary. That opens the door to seeing if we can get
additional performance gains on some microprocessors, by using ops
that require memory alignment.

Lastly, note that both POSIX and the Windows platform are supported.

Fixes #91

Initial windows support (untested)
Slaren [Wed, 29 Mar 2023 20:22:36 +0000 (22:22 +0200)]

Always initialize mm_addr and mm_length in llama_model
Slaren [Wed, 29 Mar 2023 06:53:14 +0000 (08:53 +0200)]

Unmap the file in llama_free
Slaren [Wed, 29 Mar 2023 06:31:26 +0000 (08:31 +0200)]

Make mmap_file static
Slaren [Wed, 29 Mar 2023 04:18:18 +0000 (06:18 +0200)]

Fix ggml_init_params in quantize
Slaren [Wed, 29 Mar 2023 03:38:57 +0000 (05:38 +0200)]

Add mmap support for model files
Slaren [Wed, 29 Mar 2023 00:03:43 +0000 (02:03 +0200)]

cmake : properly invoke CTest (#629)
Stephan Walter [Thu, 30 Mar 2023 17:56:59 +0000 (17:56 +0000)]

Remove unused variable (#607)
Casey Primozic [Thu, 30 Mar 2023 17:53:35 +0000 (10:53 -0700)]

* It seems some new warnings were added recently that exposed this. I wrote the code that originally included this unused variable, and it is indeed not needed.

make : fix darwin f16c flags check (#615)
david raistrick [Thu, 30 Mar 2023 17:34:45 +0000 (13:34 -0400)]

...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches)

ggml : fix NEON signs (close #620, #622)
Georgi Gerganov [Thu, 30 Mar 2023 17:27:32 +0000 (20:27 +0300)]

Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)
slaren [Thu, 30 Mar 2023 09:16:30 +0000 (11:16 +0200)]

ci : re-enable AVX512 testing (Windows-MSVC) (#584)
anzz1 [Wed, 29 Mar 2023 20:44:39 +0000 (23:44 +0300)]

* CI: Re-enable AVX512 testing (Windows-MSVC)

Now with 100% less base64 encoding

* plain __cpuid is enough here

ggml : init time on first ggml_init() call
Georgi Gerganov [Wed, 29 Mar 2023 19:15:34 +0000 (22:15 +0300)]

llama : fix compile warnings when reading the vocab
Georgi Gerganov [Wed, 29 Mar 2023 19:13:12 +0000 (22:13 +0300)]

ggml : add ARM_NEON dequantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 19:10:01 +0000 (22:10 +0300)]

ggml : add ARM_NEON quantize_row_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 19:03:02 +0000 (22:03 +0300)]

ggml : add ARM_NEON ggml_vec_dot_q4_1()
Georgi Gerganov [Wed, 29 Mar 2023 18:47:33 +0000 (21:47 +0300)]

rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)
Pavol Rusnak [Wed, 29 Mar 2023 18:09:25 +0000 (20:09 +0200)]

to match filenames of other converters

Create chat-13B.bat (#592)
Thérence [Wed, 29 Mar 2023 17:21:09 +0000 (19:21 +0200)]

* Create chat-13B.bat

Same script as chat-13B.sh, but for Windows users.
Tested and working on Windows 10/11 22H2

* Apply suggestions from code review

---------

Co-authored-by: anzz1 <redacted>
readme : fix typos
Georgi Gerganov [Wed, 29 Mar 2023 16:38:31 +0000 (19:38 +0300)]

readme : add GPT4All instructions (close #588)
Georgi Gerganov [Wed, 29 Mar 2023 16:37:20 +0000 (19:37 +0300)]

py : add GPT4All conversion script
Georgi Gerganov [Wed, 29 Mar 2023 16:29:26 +0000 (19:29 +0300)]

For now: copy-paste
It would take too much time for me to deduplicate the Python code

llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)
Maël Kerbiriou [Wed, 29 Mar 2023 16:10:07 +0000 (18:10 +0200)]

add example of re-act pattern (#583)
Tobias Lütke [Wed, 29 Mar 2023 15:10:24 +0000 (17:10 +0200)]

* add example of re-act pattern

* spelling...

* fixed whitespace in reverse prompt issue

Fix GCC warning about binary literal (#595)
anzz1 [Wed, 29 Mar 2023 13:20:07 +0000 (16:20 +0300)]

0b10101010 -> 0xAA /* 0b10101010 */

Fix typo in llama.h (#593)
anzz1 [Wed, 29 Mar 2023 13:19:29 +0000 (16:19 +0300)]

Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
anzz1 [Tue, 28 Mar 2023 19:44:29 +0000 (22:44 +0300)]

* Enable Fused-Multiply-Add (FMA) instructions on MSVC

__FMA__ macro does not exist in MSVC

* Enable F16C/CVT16 vector extensions on MSVC

__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512

* MSVC cvt intrinsics

* Add __SSE3__ macro for MSVC too because why not

even though it's not currently used for anything when AVX is defined

CI: fix subdirectory path globbing (#546)
anzz1 [Tue, 28 Mar 2023 19:43:25 +0000 (22:43 +0300)]

- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled

llama : fix linkage with mingw (#551)
anzz1 [Tue, 28 Mar 2023 18:23:09 +0000 (21:23 +0300)]

* Revert 7e53955 (#542)

Still needs to be fixed properly

* Fix linking on mingw32

ggml : add AVX2 implementation of quantize_row_q4_1 (#515)
slaren [Tue, 28 Mar 2023 18:06:03 +0000 (20:06 +0200)]

* Add AVX2 implementation of quantize_row_q4_1

* Actually use AVX2

* Make quantize_row_q4_1 static

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
py : add temporary script to convert old ggml files to newer version (#539)
thement [Tue, 28 Mar 2023 17:55:42 +0000 (19:55 +0200)]

Co-authored-by: Jakub Horak <redacted>
py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)
Tai Duc Nguyen [Tue, 28 Mar 2023 17:51:29 +0000 (13:51 -0400)]

ggml : refactor quantized processing functions (#509)
Stephan Walter [Tue, 28 Mar 2023 17:13:01 +0000 (17:13 +0000)]

* Refactor quantized processing functions

* ggml : minor

---------

Co-authored-by: Georgi Gerganov <redacted>
py : removed unused `model` variable; verified that the code functions correctly with the `vocab_only` setting, and with the reduced memory usage from deleting the no-longer-needed variable (#547)
DooWoong Lee (David) [Tue, 28 Mar 2023 17:02:34 +0000 (02:02 +0900)]

ci : make ctest verbose, hopefully we see what is wrong with the sanitizer
Georgi Gerganov [Tue, 28 Mar 2023 17:01:09 +0000 (20:01 +0300)]

tests : free llama context at the end of the test
Georgi Gerganov [Tue, 28 Mar 2023 16:51:55 +0000 (19:51 +0300)]

all : be more strict about converting float to double (#458)
Stephan Walter [Tue, 28 Mar 2023 16:48:20 +0000 (16:48 +0000)]

* Be more strict about converting float to double

* Test equivalence of round, SILU implementations

Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.

* Fix softmax in perplexity.cpp

* all : prefer float over double where appropriate

* perplexity : add <cmath>

---------

Co-authored-by: Georgi Gerganov <redacted>
deploy : add a Package.swift for SwiftPM support (#393)
Jed Fox [Tue, 28 Mar 2023 16:39:01 +0000 (11:39 -0500)]

* Add a Package.swift for SwiftPM support

* Swap from exclusions to allowlist

ggml : introduce structs for the q4 data blocks (#356)
Stephan Walter [Tue, 28 Mar 2023 15:56:03 +0000 (15:56 +0000)]

* Introduce structs for the q4 data blocks

* ggml : rename quant struct variables + fix ARM_NEON

---------

Co-authored-by: Georgi Gerganov <redacted>
gitignore : add "embedding"
Georgi Gerganov [Tue, 28 Mar 2023 15:34:35 +0000 (18:34 +0300)]

Check the existence of f16_model_path_base in quantize.py (#574)
dotpy314 [Tue, 28 Mar 2023 15:06:28 +0000 (23:06 +0800)]

Co-authored-by: Jincheng Miao <redacted>
Fix usage of F16C intrinsics in AVX code (#563)
slaren [Tue, 28 Mar 2023 14:26:55 +0000 (16:26 +0200)]

* Fix usage of F16C intrinsics in AVX code when F16C is not defined

main.cpp fixes, refactoring (#571)
anzz1 [Tue, 28 Mar 2023 14:09:55 +0000 (17:09 +0300)]

- main: entering empty line passes back control without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main->common

Add embedding example to Makefile (#540)
RJ Adriaansen [Tue, 28 Mar 2023 06:11:09 +0000 (08:11 +0200)]

Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)
Marco Matthies [Mon, 27 Mar 2023 04:55:26 +0000 (06:55 +0200)]

ci: add debug build to sanitizer build matrix (#527)
Erik Scholz [Sun, 26 Mar 2023 15:48:40 +0000 (17:48 +0200)]

Fix undefined variables in debug build, remove unused variables (#531)
Stephan Walter [Sun, 26 Mar 2023 15:34:02 +0000 (15:34 +0000)]

Add support for linux/arm64 platform during Docker Builds (#514)
Juan Calderon-Perez [Sun, 26 Mar 2023 14:48:42 +0000 (10:48 -0400)]

* Add support for linux/arm64 platform

* Add platform to versioned builds

Update README and comments for standalone perplexity tool (#525)
Stephan Walter [Sun, 26 Mar 2023 13:14:01 +0000 (13:14 +0000)]

[main] fix infinite generation (-n == -1) (#523)
anzz1 [Sun, 26 Mar 2023 13:06:10 +0000 (16:06 +0300)]

Add logo to README.md
Georgi Gerganov [Sun, 26 Mar 2023 07:20:49 +0000 (10:20 +0300)]

Exit from interactive mode if input stream is bad (#491)
Harald Fernengel [Sun, 26 Mar 2023 05:25:46 +0000 (07:25 +0200)]

Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z
on Windows.

CI: Run other sanitizer builds even if one fails (#511)
anzz1 [Sat, 25 Mar 2023 22:13:28 +0000 (00:13 +0200)]

Applies only to sanitizer builds so they won't be cancelled

Clarify console output in convert-pth-to-ggml.py (#512)
jp-x-g [Sat, 25 Mar 2023 21:53:55 +0000 (14:53 -0700)]

"Processing part 1 of 3" instead of "Processing part 0"

CMake / CI additions (#497)
anzz1 [Sat, 25 Mar 2023 21:38:11 +0000 (23:38 +0200)]

* CMake: Add AVX512 option

* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it, building works anyway)

* CMake: Fix sanitizer linkage ( merged #468 )

* CI: Add sanitizer builds (Ubuntu)

* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR Added commitish as input zendesk/action-create-release#32 is merged)

(Windows) Set console to UTF-8 on init (#420)
anzz1 [Sat, 25 Mar 2023 20:29:22 +0000 (22:29 +0200)]

Sets console codepage to 65001 (CP_UTF8) on start for both input and output, should fix problems with UTF-8 characters.

Fix colors enabling on WIN32
Georgi Gerganov [Sat, 25 Mar 2023 19:53:39 +0000 (21:53 +0200)]

If n_predict == -1, generate forever
Georgi Gerganov [Sat, 25 Mar 2023 19:51:41 +0000 (21:51 +0200)]

Infinite generation via context swapping (#71)
Georgi Gerganov [Sat, 25 Mar 2023 19:36:22 +0000 (21:36 +0200)]

Cleanup STL headers + fix embedding examples + minor stuff
Georgi Gerganov [Sat, 25 Mar 2023 18:51:14 +0000 (20:51 +0200)]

Move chat scripts into "./examples"
Georgi Gerganov [Sat, 25 Mar 2023 18:36:52 +0000 (20:36 +0200)]

Add AVX2 implementation of dequantize_row_q4_1 (#505)
slaren [Sat, 25 Mar 2023 18:31:48 +0000 (19:31 +0100)]

Overhaul the examples structure
Georgi Gerganov [Sat, 25 Mar 2023 18:26:40 +0000 (20:26 +0200)]

- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something!

Retire the ggml_mul_mat() branch for transposed src0 (#500)
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]

* Retire the ggml_mul_mat() for transposed src0

- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads

* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)

* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON

* Fix dequantization - forgot to interleave the quants

Disable prompt verbosity by default and add option to enable (#480)
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]

Add AVX2 implementation of dequantize_row_q4_0 (#467)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]

Don't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]

Add longer DAN prompt for testing big batch numbers
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]

Add timings for the prompt evaluation (#478)
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]

Remove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]

Remove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]

Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]

bounds checking for input prefix (#492)
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]

feat: '--in-prefix STRING' option (#426)
anzz1 [Sat, 25 Mar 2023 12:03:19 +0000 (14:03 +0200)]

Prefix user inputs with a string

Add support for file load progress reporting callbacks (#434)
Jed Fox [Sat, 25 Mar 2023 05:26:28 +0000 (01:26 -0400)]

* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo

Add missing struct annotation (#483)
Doomsdayrs [Sat, 25 Mar 2023 05:21:24 +0000 (01:21 -0400)]

`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compiler issue when being parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.

Fix crash for 65B model with pre-allocated memory (#485)
Chris Kuehl [Sat, 25 Mar 2023 04:38:14 +0000 (23:38 -0500)]

Disable BLAS altogether - the bug is not just for quantized mat mul
Georgi Gerganov [Fri, 24 Mar 2023 21:47:06 +0000 (23:47 +0200)]

Disable BLAS branch in mul_mat - seems there is a bug
Georgi Gerganov [Fri, 24 Mar 2023 21:39:17 +0000 (23:39 +0200)]

Immediately start processing the prompt before user input has been provided (#476)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:58 +0000 (23:17 +0200)]

Reduce memory usage and allocate enough memory for largest context (#473)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)]

* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32

Temporarily bump the memory buffer size - hopefully fix issues from 483bab2e
Georgi Gerganov [Fri, 24 Mar 2023 16:23:56 +0000 (18:23 +0200)]

Update README.md (#444)
Gary Mulder [Fri, 24 Mar 2023 15:23:09 +0000 (15:23 +0000)]

Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.

fix instruct mode (#445)
rabidcopy [Fri, 24 Mar 2023 15:22:39 +0000 (10:22 -0500)]

Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.

Properly free llama_context on failure
Georgi Gerganov [Fri, 24 Mar 2023 15:21:01 +0000 (17:21 +0200)]

additional optimizations for POWER9 (#454)
Cameron Kaiser [Fri, 24 Mar 2023 15:19:26 +0000 (08:19 -0700)]

Support calling mlock() on loaded model data on Linux and macOS (#453)
comex [Fri, 24 Mar 2023 15:19:05 +0000 (08:19 -0700)]

* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
Add embedding mode with arg flag. Currently working (#282)
Luciano [Fri, 24 Mar 2023 15:05:13 +0000 (08:05 -0700)]

* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <redacted>