git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 years agoEnable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
anzz1 [Tue, 28 Mar 2023 19:44:29 +0000 (22:44 +0300)]
Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)

* Enable Fused-Multiply-Add (FMA) instructions on MSVC

__FMA__ macro does not exist in MSVC

* Enable F16C/CVT16 vector extensions on MSVC

__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512

* MSVC cvt intrinsics

* Add __SSE3__ macro for MSVC too because why not

even though it's not currently used for anything when AVX is defined

2 years agoCI: fix subdirectory path globbing (#546)
anzz1 [Tue, 28 Mar 2023 19:43:25 +0000 (22:43 +0300)]
CI: fix subdirectory path globbing (#546)

- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled

2 years agollama : fix linkage with mingw (#551)
anzz1 [Tue, 28 Mar 2023 18:23:09 +0000 (21:23 +0300)]
llama : fix linkage with mingw (#551)

* Revert 7e53955 (#542)

Still needs to be fixed properly

* Fix linking on mingw32

2 years agoggml : add AVX2 implementation of quantize_row_q4_1 (#515)
slaren [Tue, 28 Mar 2023 18:06:03 +0000 (20:06 +0200)]
ggml : add AVX2 implementation of quantize_row_q4_1 (#515)

* Add AVX2 implementation of quantize_row_q4_1

* Actually use AVX2

* Make quantize_row_q4_1 static

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agopy : add temporary script to convert old ggml files to newer version (#539)
thement [Tue, 28 Mar 2023 17:55:42 +0000 (19:55 +0200)]
py : add temporary script to convert old ggml files to newer version (#539)

Co-authored-by: Jakub Horak <redacted>
2 years agopy : add capability to convert from ggml back to torch or hf format for further consum...
Tai Duc Nguyen [Tue, 28 Mar 2023 17:51:29 +0000 (13:51 -0400)]
py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)

2 years agoggml : refactor quantized processing functions (#509)
Stephan Walter [Tue, 28 Mar 2023 17:13:01 +0000 (17:13 +0000)]
ggml : refactor quantized processing functions (#509)

* Refactor quantized processing functions

* ggml : minor

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agopy : removed unused `model` variable and verified that the code functions correctly...
DooWoong Lee (David) [Tue, 28 Mar 2023 17:02:34 +0000 (02:02 +0900)]
py : removed unused `model` variable and verified that the code functions correctly with `vocab_only` setting. Also confirmed that the code works as expected after running with reduced memory usage due to deletion of no-longer-needed variable. (#547)

2 years agoci : make ctest verbose, hopefully we see what is wrong with the sanitizer
Georgi Gerganov [Tue, 28 Mar 2023 17:01:09 +0000 (20:01 +0300)]
ci : make ctest verbose, hopefully we see what is wrong with the sanitizer

2 years agotests : free llama context at the end of the test
Georgi Gerganov [Tue, 28 Mar 2023 16:51:55 +0000 (19:51 +0300)]
tests : free llama context at the end of the test

2 years agoall : be more strict about converting float to double (#458)
Stephan Walter [Tue, 28 Mar 2023 16:48:20 +0000 (16:48 +0000)]
all : be more strict about converting float to double (#458)

* Be more strict about converting float to double

* Test equivalence of round, SILU implementations

Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.

* Fix softmax in perplexity.cpp

* all : prefer float over double where appropriate

* perplexity : add <cmath>

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agodeploy : add a Package.swift for SwiftPM support (#393)
Jed Fox [Tue, 28 Mar 2023 16:39:01 +0000 (11:39 -0500)]
deploy : add a Package.swift for SwiftPM support (#393)

* Add a Package.swift for SwiftPM support

* Swap from exclusions to allowlist

2 years agoggml : introduce structs for the q4 data blocks (#356)
Stephan Walter [Tue, 28 Mar 2023 15:56:03 +0000 (15:56 +0000)]
ggml : introduce structs for the q4 data blocks (#356)

* Introduce structs for the q4 data blocks

* ggml : rename quant struct variables + fix ARM_NEON

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agogitignore : add "embedding"
Georgi Gerganov [Tue, 28 Mar 2023 15:34:35 +0000 (18:34 +0300)]
gitignore : add "embedding"

2 years agoCheck the existence of f16_model_path_base in quantize.py (#574)
dotpy314 [Tue, 28 Mar 2023 15:06:28 +0000 (23:06 +0800)]
Check the existence of f16_model_path_base in quantize.py (#574)

Co-authored-by: Jincheng Miao <redacted>
2 years agoFix usage of F16C intrinsics in AVX code (#563)
slaren [Tue, 28 Mar 2023 14:26:55 +0000 (16:26 +0200)]
Fix usage of F16C intrinsics in AVX code (#563)

* Fix usage of F16C intrinsics in AVX code when F16C is not defined

2 years agomain.cpp fixes, refactoring (#571)
anzz1 [Tue, 28 Mar 2023 14:09:55 +0000 (17:09 +0300)]
main.cpp fixes, refactoring (#571)

- main: entering empty line passes back control without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main->common

2 years agoAdd embedding example to Makefile (#540)
RJ Adriaansen [Tue, 28 Mar 2023 06:11:09 +0000 (08:11 +0200)]
Add embedding example to Makefile (#540)

2 years agoFix missing ggml link in cmake for examples/* on w64-mingw32 (#542)
Marco Matthies [Mon, 27 Mar 2023 04:55:26 +0000 (06:55 +0200)]
Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)

2 years agoci: add debug build to sanitizer build matrix (#527)
Erik Scholz [Sun, 26 Mar 2023 15:48:40 +0000 (17:48 +0200)]
ci: add debug build to sanitizer build matrix (#527)

2 years agoFix undefined variables in debug build, remove unused variables (#531)
Stephan Walter [Sun, 26 Mar 2023 15:34:02 +0000 (15:34 +0000)]
Fix undefined variables in debug build, remove unused variables (#531)

2 years agoAdd support for linux/arm64 platform during Docker Builds (#514)
Juan Calderon-Perez [Sun, 26 Mar 2023 14:48:42 +0000 (10:48 -0400)]
Add support for linux/arm64 platform during Docker Builds (#514)

* Add support for linux/arm64 platform

* Add platform to versioned builds

2 years agoUpdate README and comments for standalone perplexity tool (#525)
Stephan Walter [Sun, 26 Mar 2023 13:14:01 +0000 (13:14 +0000)]
Update README and comments for standalone perplexity tool (#525)

2 years ago[main] fix infinite generation (-n == -1) (#523)
anzz1 [Sun, 26 Mar 2023 13:06:10 +0000 (16:06 +0300)]
[main] fix infinite generation (-n == -1) (#523)

2 years agoAdd logo to README.md
Georgi Gerganov [Sun, 26 Mar 2023 07:20:49 +0000 (10:20 +0300)]
Add logo to README.md

2 years agoExit from interactive mode if input stream is bad (#491)
Harald Fernengel [Sun, 26 Mar 2023 05:25:46 +0000 (07:25 +0200)]
Exit from interactive mode if input stream is bad (#491)

Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z
on Windows.

2 years agoCI: Run other sanitizer builds even if one fails (#511)
anzz1 [Sat, 25 Mar 2023 22:13:28 +0000 (00:13 +0200)]
CI: Run other sanitizer builds even if one fails (#511)

applies only to sanitizer builds so they won't be cancelled

2 years agoClarify console output in convert-pth-to-ggml.py (#512)
jp-x-g [Sat, 25 Mar 2023 21:53:55 +0000 (14:53 -0700)]
Clarify console output in convert-pth-to-ggml.py (#512)

"Processing part 1 of 3" instead of "Processing part 0"

2 years agoCMake / CI additions (#497)
anzz1 [Sat, 25 Mar 2023 21:38:11 +0000 (23:38 +0200)]
CMake / CI additions (#497)

* CMake: Add AVX512 option

* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it; building works regardless)

* CMake: Fix sanitizer linkage ( merged #468 )

* CI: Add sanitizer builds (Ubuntu)

* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR Added commitish as input zendesk/action-create-release#32 is merged)

2 years ago(Windows) Set console to UTF-8 on init (#420)
anzz1 [Sat, 25 Mar 2023 20:29:22 +0000 (22:29 +0200)]
(Windows) Set console to UTF-8 on init (#420)

Sets console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.

2 years agoFix colors enabling on WIN32
Georgi Gerganov [Sat, 25 Mar 2023 19:53:39 +0000 (21:53 +0200)]
Fix colors enabling on WIN32

2 years agoIf n_predict == -1, generate forever
Georgi Gerganov [Sat, 25 Mar 2023 19:51:41 +0000 (21:51 +0200)]
If n_predict == -1, generate forever

2 years agoInfinite generation via context swapping (#71)
Georgi Gerganov [Sat, 25 Mar 2023 19:36:22 +0000 (21:36 +0200)]
Infinite generation via context swapping (#71)

2 years agoCleanup STL headers + fix embedding examples + minor stuff
Georgi Gerganov [Sat, 25 Mar 2023 18:51:14 +0000 (20:51 +0200)]
Cleanup STL headers + fix embedding examples + minor stuff

2 years agoMove chat scripts into "./examples"
Georgi Gerganov [Sat, 25 Mar 2023 18:36:52 +0000 (20:36 +0200)]
Move chat scripts into "./examples"

2 years agoAdd AVX2 implementation of dequantize_row_q4_1 (#505)
slaren [Sat, 25 Mar 2023 18:31:48 +0000 (19:31 +0100)]
Add AVX2 implementation of dequantize_row_q4_1 (#505)

2 years agoOverhaul the examples structure
Georgi Gerganov [Sat, 25 Mar 2023 18:26:40 +0000 (20:26 +0200)]
Overhaul the examples structure

- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something !

2 years agoRetire the ggml_mul_mat() branch for transposed src0 (#500)
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]
Retire the ggml_mul_mat() branch for transposed src0 (#500)

* Retire the ggml_mul_mat() for transposed src0

- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads

* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)

* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON

* Fix dequantization - forgot to interleave the quants

2 years agoDisable prompt verbosity by default and add option to enable (#480)
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]
Disable prompt verbosity by default and add option to enable (#480)

2 years agoAdd AVX2 implementation of dequantize_row_q4_0 (#467)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]
Add AVX2 implementation of dequantize_row_q4_0 (#467)

2 years agoDon't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]
Don't interfere with BLAS for large prompts by running only 1 thread

2 years agoAdd longer DAN prompt for testing big batch numbers
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]
Add longer DAN prompt for testing big batch numbers

2 years agoAdd timings for the prompt evaluation (#478)
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]
Add timings for the prompt evaluation (#478)

2 years agoRemove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]
Remove obsolete information from README

2 years agoRemove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]
Remove obsolete assert and fix compiler warning

2 years agoFix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]
Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS

2 years agobounds checking for input prefix (#492)
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]
bounds checking for input prefix (#492)

2 years agofeat: '--in-prefix STRING' option (#426)
anzz1 [Sat, 25 Mar 2023 12:03:19 +0000 (14:03 +0200)]
feat: '--in-prefix STRING' option (#426)

Prefix user inputs with a string

2 years agoAdd support for file load progress reporting callbacks (#434)
Jed Fox [Sat, 25 Mar 2023 05:26:28 +0000 (01:26 -0400)]
Add support for file load progress reporting callbacks (#434)

* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo

2 years agoAdd missing struct annotation (#483)
Doomsdayrs [Sat, 25 Mar 2023 05:21:24 +0000 (01:21 -0400)]
Add missing struct annotation (#483)

`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compiler issue when being parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.

2 years agoFix crash for 65B model with pre-allocated memory (#485)
Chris Kuehl [Sat, 25 Mar 2023 04:38:14 +0000 (23:38 -0500)]
Fix crash for 65B model with pre-allocated memory (#485)

2 years agoDisable BLAS altogether - the bug is not just for quantized mat mul
Georgi Gerganov [Fri, 24 Mar 2023 21:47:06 +0000 (23:47 +0200)]
Disable BLAS altogether - the bug is not just for quantized mat mul

2 years agoDisable BLAS branch in mul_mat - seems there is a bug
Georgi Gerganov [Fri, 24 Mar 2023 21:39:17 +0000 (23:39 +0200)]
Disable BLAS branch in mul_mat - seems there is a bug

2 years agoImmediately start processing the prompt before user input has been provided (#476)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:58 +0000 (23:17 +0200)]
Immediately start processing the prompt before user input has been provided (#476)

2 years agoReduce memory usage and allocate enough memory for largest context (#473)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)]
Reduce memory usage and allocate enough memory for largest context (#473)

* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32

2 years agoTemporarily bump the memory buffer size - hopefully fix issues from 483bab2e
Georgi Gerganov [Fri, 24 Mar 2023 16:23:56 +0000 (18:23 +0200)]
Temporarily bump the memory buffer size - hopefully fix issues from 483bab2e

2 years agoUpdate README.md (#444)
Gary Mulder [Fri, 24 Mar 2023 15:23:09 +0000 (15:23 +0000)]
Update README.md (#444)

Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.

2 years agofix instruct mode (#445)
rabidcopy [Fri, 24 Mar 2023 15:22:39 +0000 (10:22 -0500)]
fix instruct mode (#445)

changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.

2 years agoProperly free llama_context on failure
Georgi Gerganov [Fri, 24 Mar 2023 15:21:01 +0000 (17:21 +0200)]
Properly free llama_context on failure

2 years agoadditional optimizations for POWER9 (#454)
Cameron Kaiser [Fri, 24 Mar 2023 15:19:26 +0000 (08:19 -0700)]
additional optimizations for POWER9 (#454)

2 years agoSupport calling mlock() on loaded model data on Linux and macOS (#453)
comex [Fri, 24 Mar 2023 15:19:05 +0000 (08:19 -0700)]
Support calling mlock() on loaded model data on Linux and macOS (#453)

* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoAdd embedding mode with arg flag. Currently working (#282)
Luciano [Fri, 24 Mar 2023 15:05:13 +0000 (08:05 -0700)]
Add embedding mode with arg flag. Currently working (#282)

* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoAdd link to Roadmap discussion
Georgi Gerganov [Fri, 24 Mar 2023 07:13:35 +0000 (09:13 +0200)]
Add link to Roadmap discussion

2 years agoRevert "Fix memory allocation issues and seg faults"
Georgi Gerganov [Fri, 24 Mar 2023 04:22:28 +0000 (06:22 +0200)]
Revert "Fix memory allocation issues and seg faults"

This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.

Will provide the correct fix later

2 years agoFix memory allocation issues and seg faults
Georgi Gerganov [Thu, 23 Mar 2023 22:11:53 +0000 (00:11 +0200)]
Fix memory allocation issues and seg faults

2 years agoAvoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Georgi Gerganov [Thu, 23 Mar 2023 21:22:01 +0000 (23:22 +0200)]
Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)

Should make results reproducible for different number of threads and batch sizes

2 years agoFix quantize script not finding models in parent directory (#428)
Jed Fox [Thu, 23 Mar 2023 20:42:52 +0000 (16:42 -0400)]
Fix quantize script not finding models in parent directory (#428)

2 years agoRemove obsolete command from Docker script
Georgi Gerganov [Thu, 23 Mar 2023 20:39:44 +0000 (22:39 +0200)]
Remove obsolete command from Docker script

2 years agoObsolete
Georgi Gerganov [Thu, 23 Mar 2023 20:32:02 +0000 (22:32 +0200)]
Obsolete

2 years agoReplace EOS with newline to prevent context/memory being flushed by EOS in interactiv...
rabidcopy [Thu, 23 Mar 2023 20:22:47 +0000 (15:22 -0500)]
Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)

* Improve interactive mode's coherence after EOS

Aims to improve coherence and ability to resume the interactive session when the user is given input back after an end of text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoFix GPTQ converter (#423)
Timmy Knight [Thu, 23 Mar 2023 20:18:13 +0000 (10:18 -1000)]
Fix GPTQ converter (#423)

* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoGenerate library with CMake (#430)
nusu-github [Thu, 23 Mar 2023 20:16:48 +0000 (05:16 +0900)]
Generate library with CMake (#430)

* Generate library with CMake

Add a BUILD_SHARED_LIBS option to allow the llama library to be generated.

* Turn ON PIC when BUILD_SHARED_LIBS is ON

2 years agoCommand line args bounds checking (#424)
anzz1 [Thu, 23 Mar 2023 17:54:28 +0000 (19:54 +0200)]
Command line args bounds checking (#424)

* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1

2 years agoFix Nix build
Ben Siraphob [Wed, 22 Mar 2023 05:37:02 +0000 (00:37 -0500)]
Fix Nix build

2 years agoRevert "Delete SHA256SUMS for now" (#429)
Stephan Walter [Thu, 23 Mar 2023 14:15:48 +0000 (14:15 +0000)]
Revert "Delete SHA256SUMS for now" (#429)

* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.

* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <redacted>
2 years agoFix Makefile echo escape codes (by removing them). (#418)
Kerfuffle [Thu, 23 Mar 2023 11:41:32 +0000 (05:41 -0600)]
Fix Makefile echo escape codes (by removing them). (#418)

2 years agoMove model section from issue template to README.md (#421)
Gary Mulder [Thu, 23 Mar 2023 11:30:40 +0000 (11:30 +0000)]
Move model section from issue template to README.md (#421)

* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from the issue template about obtaining models from FB, and links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway

* Updated the perplexity section to point at Perplexity scores #406 discussion

2 years agoDelete SHA256SUMS for now (#416)
anzz1 [Thu, 23 Mar 2023 10:26:19 +0000 (12:26 +0200)]
Delete SHA256SUMS for now (#416)

Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved

2 years agoAdjust repetition penalty ..
Georgi Gerganov [Thu, 23 Mar 2023 08:46:58 +0000 (10:46 +0200)]
Adjust repetition penalty ..

2 years agoAdd link to recent podcast about whisper.cpp and llama.cpp
Georgi Gerganov [Thu, 23 Mar 2023 07:48:51 +0000 (09:48 +0200)]
Add link to recent podcast about whisper.cpp and llama.cpp

2 years agoCI: CMake: Separate build and test steps (#376)
anzz1 [Thu, 23 Mar 2023 02:20:34 +0000 (04:20 +0200)]
CI: CMake: Separate build and test steps (#376)

* CI: Separate Build and Test steps (CMake)

* CI: Make sure build passes before running tests (CMake)

* CI: Standardise step id names

2 years agoFix instruct mode broken by PR #354 (#409)
tjohnman [Thu, 23 Mar 2023 00:30:23 +0000 (01:30 +0100)]
Fix instruct mode broken by PR #354 (#409)

Co-authored-by: Johnman <redacted>
2 years agoUpdate issue template so people will use it (#404)
Gary Mulder [Wed, 22 Mar 2023 19:06:18 +0000 (19:06 +0000)]
Update issue template so people will use it (#404)

2 years agoDeduplicate q4 quantization functions (#383)
Stephan Walter [Wed, 22 Mar 2023 17:29:06 +0000 (17:29 +0000)]
Deduplicate q4 quantization functions (#383)

* Deduplicate q4 quantization functions

* Use const; add basic test

* Re-enable quantization test

* Disable AVX2 flags in CI

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agofix: add POSIX functionality for Linux compilation (#51)
Valentyn Bezshapkin [Wed, 22 Mar 2023 17:20:25 +0000 (18:20 +0100)]
fix: add POSIX functionality for Linux compilation (#51)

* fix: add POSIX functionality for Linux compilation

* fix: older standard for compatibility

2 years agoDon't force immediate interactive without `-i` (#354)
tjohnman [Wed, 22 Mar 2023 17:16:35 +0000 (18:16 +0100)]
Don't force immediate interactive without `-i` (#354)

* Don't force immediate interactive without -i

Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.

This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.

The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).

* Update help output.

---------

Co-authored-by: Johnman <redacted>
2 years agocmake: make llama an actual library (#392)
Erik Scholz [Wed, 22 Mar 2023 16:37:10 +0000 (17:37 +0100)]
cmake: make llama an actual library (#392)

2 years agofix perplexity after c-api refactor (#390)
Erik Scholz [Wed, 22 Mar 2023 16:09:38 +0000 (17:09 +0100)]
fix perplexity after c-api refactor (#390)

* preallocate a buffer of fitting size for tokenization (utils.cpp)

* don't create a new std::string (especially here, where it's usually large)

2 years agoAdd details on perplexity to README.md (#395)
Gary Linscott [Wed, 22 Mar 2023 15:53:54 +0000 (08:53 -0700)]
Add details on perplexity to README.md (#395)

2 years agoAdd missing header for memcpy (#386)
Yusuf Kağan Hanoğlu [Wed, 22 Mar 2023 08:55:45 +0000 (11:55 +0300)]
Add missing header for memcpy (#386)

fixed: memcpy is not defined

2 years agoWhen seed <= 0 - use the clock to generate one
Georgi Gerganov [Wed, 22 Mar 2023 05:47:15 +0000 (07:47 +0200)]
When seed <= 0 - use the clock to generate one

2 years agoInit llama_context_params properly from CLI (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:45:00 +0000 (07:45 +0200)]
Init llama_context_params properly from CLI (#370)

2 years agoRemove temporary notice and update hot topics
Georgi Gerganov [Wed, 22 Mar 2023 05:34:02 +0000 (07:34 +0200)]
Remove temporary notice and update hot topics

2 years agoIntroduce C-style API (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:32:36 +0000 (07:32 +0200)]
Introduce C-style API (#370)

* Major refactoring - introduce C-style API

* Clean up

* Add <cassert>

* Add <iterator>

* Add <algorithm> ....

* Fix timing reporting and accumulation

* Measure eval time only for single-token calls

* Change llama_tokenize return meaning

2 years agoAdd SHA256SUMS file and instructions to README how to obtain and verify the downloads
Gary Mulder [Mon, 20 Mar 2023 20:14:06 +0000 (20:14 +0000)]
Add SHA256SUMS file and instructions to README how to obtain and verify the downloads

Hashes created using:

sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS

2 years agoFix bin dir for win ci
anzz1 [Tue, 21 Mar 2023 21:49:24 +0000 (23:49 +0200)]
Fix bin dir for win ci

2 years agospecify build type for ctest on windows (#371)
Erik Scholz [Tue, 21 Mar 2023 21:34:25 +0000 (22:34 +0100)]
specify build type for ctest on windows (#371)

2 years agoAdd notice about pending change
Georgi Gerganov [Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)]
Add notice about pending change

2 years agofix typo in chatLLaMa (#368)
Mathieu Nayrolles [Tue, 21 Mar 2023 20:52:27 +0000 (16:52 -0400)]
fix typo in chatLLaMa (#368)

The prompt contains a typo where 'alound' is used instead of 'aloud'.

2 years agoUpdate issue templates
Georgi Gerganov [Tue, 21 Mar 2023 17:47:27 +0000 (19:47 +0200)]
Update issue templates