git.djapps.eu — pkg/ggml/sources/llama.cpp — commit log
2 years agoAdd link to Roadmap discussion
Georgi Gerganov [Fri, 24 Mar 2023 07:13:35 +0000 (09:13 +0200)]
Add link to Roadmap discussion

2 years agoRevert "Fix memory allocation issues and seg faults"
Georgi Gerganov [Fri, 24 Mar 2023 04:22:28 +0000 (06:22 +0200)]
Revert "Fix memory allocation issues and seg faults"

This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.

Will provide the correct fix later

2 years agoFix memory allocation issues and seg faults
Georgi Gerganov [Thu, 23 Mar 2023 22:11:53 +0000 (00:11 +0200)]
Fix memory allocation issues and seg faults

2 years agoAvoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Georgi Gerganov [Thu, 23 Mar 2023 21:22:01 +0000 (23:22 +0200)]
Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)

Should make results reproducible for different number of threads and batch sizes

2 years agoFix quantize script not finding models in parent directory (#428)
Jed Fox [Thu, 23 Mar 2023 20:42:52 +0000 (16:42 -0400)]
Fix quantize script not finding models in parent directory (#428)

2 years agoRemove obsolete command from Docker script
Georgi Gerganov [Thu, 23 Mar 2023 20:39:44 +0000 (22:39 +0200)]
Remove obsolete command from Docker script

2 years agoObsolete
Georgi Gerganov [Thu, 23 Mar 2023 20:32:02 +0000 (22:32 +0200)]
Obsolete

2 years agoReplace EOS with newline to prevent context/memory being flushed by EOS in interactiv...
rabidcopy [Thu, 23 Mar 2023 20:22:47 +0000 (15:22 -0500)]
Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)

* Improve interactive mode's coherence after EOS

Aims to improve coherence and the ability to resume the interactive session when the user is given control back after an end-of-text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
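
The core of the substitution above can be sketched like this (the token ids are assumptions for illustration only — the PR ultimately determines the newline token dynamically rather than hardcoding it):

```cpp
// Sketch: in interactive mode, when the sampler emits end-of-text, substitute
// the newline token instead of stopping, so the session keeps its context.
// Ids below are assumed for illustration (the "token 13" mentioned in the log
// is the newline token in the LLaMA vocabulary).
const int TOKEN_EOS     = 2;   // assumed id for </s>
const int TOKEN_NEWLINE = 13;  // assumed id for "\n"

int replace_eos_with_newline(int token) {
    return token == TOKEN_EOS ? TOKEN_NEWLINE : token;
}
```
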
2 years agoFix GPTQ converter (#423)
Timmy Knight [Thu, 23 Mar 2023 20:18:13 +0000 (10:18 -1000)]
Fix GPTQ converter (#423)

* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoGenerate library with CMake (#430)
nusu-github [Thu, 23 Mar 2023 20:16:48 +0000 (05:16 +0900)]
Generate library with CMake (#430)

* Generate library with CMake

Add the BUILD_SHARED_LIBS option to allow the llama library to be generated as a shared library.

* Turn ON PIC when BUILD_SHARED_LIBS is ON

2 years agoCommand line args bounds checking (#424)
anzz1 [Thu, 23 Mar 2023 17:54:28 +0000 (19:54 +0200)]
Command line args bounds checking (#424)

* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1
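
A minimal sketch of the kind of bounds checking this change adds (illustrative, not the actual llama.cpp argument parser): a flag that expects a value must actually be followed by one, and unknown flags fail with a non-zero exit code.

```cpp
#include <cstdio>
#include <string>

// Sketch: reject a flag whose required value is missing, and treat unknown
// flags as errors (exit code 0 -> 1 in the commit above).
bool parse_args(int argc, char ** argv, std::string & model) {
    for (int i = 1; i < argc; i++) {
        std::string arg = argv[i];
        if (arg == "-m" || arg == "--model") {
            if (++i >= argc) {  // bounds check: the value must exist
                fprintf(stderr, "error: missing argument for %s\n", arg.c_str());
                return false;
            }
            model = argv[i];
        } else {
            fprintf(stderr, "error: unknown argument: %s\n", arg.c_str());
            return false;
        }
    }
    return true;
}
```
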

2 years agoFix Nix build
Ben Siraphob [Wed, 22 Mar 2023 05:37:02 +0000 (00:37 -0500)]
Fix Nix build

2 years agoRevert "Delete SHA256SUMS for now" (#429)
Stephan Walter [Thu, 23 Mar 2023 14:15:48 +0000 (14:15 +0000)]
Revert "Delete SHA256SUMS for now" (#429)

* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.

* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <redacted>
2 years agoFix Makefile echo escape codes (by removing them). (#418)
Kerfuffle [Thu, 23 Mar 2023 11:41:32 +0000 (05:41 -0600)]
Fix Makefile echo escape codes (by removing them). (#418)

2 years agoMove model section from issue template to README.md (#421)
Gary Mulder [Thu, 23 Mar 2023 11:30:40 +0000 (11:30 +0000)]
Move model section from issue template to README.md (#421)

* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from issue template about obtaining models from FB and links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway

* Updated the perplexity section to point at Perplexity scores #406 discussion

2 years agoDelete SHA256SUMS for now (#416)
anzz1 [Thu, 23 Mar 2023 10:26:19 +0000 (12:26 +0200)]
Delete SHA256SUMS for now (#416)

Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved

2 years agoAdjust repetition penalty ..
Georgi Gerganov [Thu, 23 Mar 2023 08:46:58 +0000 (10:46 +0200)]
Adjust repetition penalty ..

2 years agoAdd link to recent podcast about whisper.cpp and llama.cpp
Georgi Gerganov [Thu, 23 Mar 2023 07:48:51 +0000 (09:48 +0200)]
Add link to recent podcast about whisper.cpp and llama.cpp

2 years agoCI: CMake: Separate build and test steps (#376)
anzz1 [Thu, 23 Mar 2023 02:20:34 +0000 (04:20 +0200)]
CI: CMake: Separate build and test steps (#376)

* CI: Separate Build and Test steps (CMake)

* CI: Make sure build passes before running tests (CMake)

* CI: Standardise step id names

2 years agoFix instruct mode broken by PR #354 (#409)
tjohnman [Thu, 23 Mar 2023 00:30:23 +0000 (01:30 +0100)]
Fix instruct mode broken by PR #354 (#409)

Co-authored-by: Johnman <redacted>
2 years agoUpdate issue template so people will use it (#404)
Gary Mulder [Wed, 22 Mar 2023 19:06:18 +0000 (19:06 +0000)]
Update issue template so people will use it (#404)

2 years agoDeduplicate q4 quantization functions (#383)
Stephan Walter [Wed, 22 Mar 2023 17:29:06 +0000 (17:29 +0000)]
Deduplicate q4 quantization functions (#383)

* Deduplicate q4 quantization functions

* Use const; add basic test

* Re-enable quantization test

* Disable AVX2 flags in CI

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agofix: add POSIX functionality for Linux compilation (#51)
Valentyn Bezshapkin [Wed, 22 Mar 2023 17:20:25 +0000 (18:20 +0100)]
fix: add POSIX functionality for Linux compilation (#51)

* fix: add POSIX functionality for Linux compilation

* fix: older standard for compatibility

2 years agoDon't force immediate interactive without `-i` (#354)
tjohnman [Wed, 22 Mar 2023 17:16:35 +0000 (18:16 +0100)]
Don't force immediate interactive without `-i` (#354)

* Don't force immediate interactive without -i

Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.

This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.

The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).

* Update help output.

---------

Co-authored-by: Johnman <redacted>
2 years agocmake: make llama an actual library (#392)
Erik Scholz [Wed, 22 Mar 2023 16:37:10 +0000 (17:37 +0100)]
cmake: make llama an actual library (#392)

2 years agofix perplexity after c-api refactor (#390)
Erik Scholz [Wed, 22 Mar 2023 16:09:38 +0000 (17:09 +0100)]
fix perplexity after c-api refactor (#390)

* preallocate a buffer of fitting size for tokenization (utils.cpp)

* don't create a new std::string (especially here, where it's usually large)

2 years agoAdd details on perplexity to README.md (#395)
Gary Linscott [Wed, 22 Mar 2023 15:53:54 +0000 (08:53 -0700)]
Add details on perplexity to README.md (#395)

2 years agoAdd missing header for memcpy (#386)
Yusuf Kağan Hanoğlu [Wed, 22 Mar 2023 08:55:45 +0000 (11:55 +0300)]
Add missing header for memcpy (#386)

fixed: memcpy is not defined

2 years agoWhen seed <= 0 - use the clock to generate one
Georgi Gerganov [Wed, 22 Mar 2023 05:47:15 +0000 (07:47 +0200)]
When seed <= 0 - use the clock to generate one
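
The idea can be sketched in a few lines (function name and type are illustrative, not llama.cpp's exact API): a non-positive seed means "pick one for me", using the wall clock.

```cpp
#include <cstdint>
#include <ctime>

// Sketch: treat seed <= 0 as a request for a clock-derived seed.
int32_t resolve_seed(int32_t seed) {
    if (seed <= 0) {
        return (int32_t) time(NULL);
    }
    return seed;
}
```
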

2 years agoInit llama_context_params properly from CLI (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:45:00 +0000 (07:45 +0200)]
Init llama_context_params properly from CLI (#370)

2 years agoRemove temporary notice and update hot topics
Georgi Gerganov [Wed, 22 Mar 2023 05:34:02 +0000 (07:34 +0200)]
Remove temporary notice and update hot topics

2 years agoIntroduce C-style API (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:32:36 +0000 (07:32 +0200)]
Introduce C-style API (#370)

* Major refactoring - introduce C-style API

* Clean up

* Add <cassert>

* Add <iterator>

* Add <algorithm> ....

* Fix timing reporting and accumulation

* Measure eval time only for single-token calls

* Change llama_tokenize return meaning

2 years agoAdd SHA256SUMS file and instructions to README on how to obtain and verify the downloads
Gary Mulder [Mon, 20 Mar 2023 20:14:06 +0000 (20:14 +0000)]
Add SHA256SUMS file and instructions to README on how to obtain and verify the downloads

Hashes created using:

sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS

2 years agoFix bin dir for win ci
anzz1 [Tue, 21 Mar 2023 21:49:24 +0000 (23:49 +0200)]
Fix bin dir for win ci

2 years agospecify build type for ctest on windows (#371)
Erik Scholz [Tue, 21 Mar 2023 21:34:25 +0000 (22:34 +0100)]
specify build type for ctest on windows (#371)

2 years agoAdd notice about pending change
Georgi Gerganov [Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)]
Add notice about pending change

2 years agofix typo in chatLLaMa (#368)
Mathieu Nayrolles [Tue, 21 Mar 2023 20:52:27 +0000 (16:52 -0400)]
fix typo in chatLLaMa (#368)

The prompt contains a typo where 'alound' is used instead of 'aloud'.

2 years agoUpdate issue templates
Georgi Gerganov [Tue, 21 Mar 2023 17:47:27 +0000 (19:47 +0200)]
Update issue templates

2 years agoWe could use std::unordered_map over std::map (#305)
Fabio R. Sluzala [Tue, 21 Mar 2023 17:21:50 +0000 (14:21 -0300)]
We could use std::unordered_map over std::map (#305)

* Improve performance by changing std::map to std::unordered_map and std::map<id, token> id_to_token; to std::vector<token> id_to_token;

* fix last commit on gpt_vocab_init add vocab.id_to_token.resize(vocab.token_to_id.size());

* Removed include <map>

* Nest struct token score inside gpt_vocab

* renamed token to tok

2 years agoFix color codes emitting mid-UTF8 code. (#312)
Matvey Soloviev [Tue, 21 Mar 2023 17:11:01 +0000 (18:11 +0100)]
Fix color codes emitting mid-UTF8 code. (#312)
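
The heart of this fix is recognizing UTF-8 byte boundaries: an ANSI color escape must not be printed between a multi-byte character's lead byte and its continuation bytes. A minimal sketch of the byte classification involved (helper names are illustrative):

```cpp
// Sketch: a byte is a UTF-8 continuation byte iff its top two bits are 10.
inline bool is_utf8_continuation(unsigned char c) {
    return (c & 0xC0) == 0x80;
}

// How many continuation bytes follow a given lead byte.
inline int utf8_trailing_bytes(unsigned char lead) {
    if (lead < 0x80)          return 0; // ASCII
    if ((lead & 0xE0) == 0xC0) return 1;
    if ((lead & 0xF0) == 0xE0) return 2;
    if ((lead & 0xF8) == 0xF0) return 3;
    return 0; // invalid lead byte; treat as a single char
}
```
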

2 years agoImporter for GPTQ quantized LLaMA models (#301)
comex [Tue, 21 Mar 2023 16:42:25 +0000 (09:42 -0700)]
Importer for GPTQ quantized LLaMA models (#301)

* [WIP, broken] Importer for GPTQ quantized LLaMA models

Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

Current status: Something is busted.  The output starts out decent, but
quickly degrades into gibberish.  This doesn't happen with either the
original GPTQ-for-LLaMa using the same weights, or llama.cpp when using
weights quantized by its own quantizer.  Is there a bug in the
conversion script that somehow only comes into play with a large context
size?

I did notice one potential issue.  It's clearly not the main cause of
the gibberish, since it doesn't happen when using q4_1 weights quantized
by llama.cpp itself, but it seems concerning.  When doing a matrix
multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when
the multiplication is not done with BLAS, the intermediate results are
stored in the smaller format rather than f32.  This seems like an
unnecessary waste of precision, especially in the q4_1 case.

I was originally hoping to validate the results by matching the Python
implementation's output exactly, but precision and non-associativity
issues make this very difficult, including when performing matrix
multiplications and, especially, computing norms.

Anyway, design details:

The models being imported store per-layer weights in essentially q4_1
format, although the addend and scale are shared across an entire row
rather than every group of 32 weights.  This script duplicates the
addend and scale to match ggml's expectations, at the cost of wasting
some memory.

However, there are two differences which I accommodated by changing the
output format (and adding corresponding support to main.cpp) rather than
having the script match the existing one:

- The tok_embeddings and output weights (i.e. the weights that aren't
  per-layer) are f16 instead of q4_1.  They could be converted to q4_1,
  and the impact of the loss of precision would probably be low, but
  this would rule out exactly matching the Python implementation's
  output for validation.

- There is no sharding, since the input doesn't have it, and for a
  CPU-only implementation it seems more useful to avoid having to deal
  with multiple files.

The new format is differentiated from existing q4_1 format by changing
the 'f16' header flag to a new value, 4.  That said, I think a cleaner
approach would be to change main.cpp to support loading each tensor with
an arbitrary sharding configuration and type rather than hardcoding
specific combinations of types.  So far I've wasted too much time
debugging to try implementing this...

* Add missing permutation.  Now it works.

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoCompute perplexity over prompt (#270)
Gary Linscott [Tue, 21 Mar 2023 16:27:42 +0000 (09:27 -0700)]
Compute perplexity over prompt (#270)

* Compute perplexity over prompt

* More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)

* Output all perplexities

* Add timing/ETA
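
The quantity being computed is perplexity = exp(-(1/N) · Σ log p(token_i | context)). A scalar reference sketch of that formula (the input here is a vector of per-token probabilities, purely for illustration; the real code works from logits over the context window):

```cpp
#include <cmath>
#include <vector>

// Sketch: perplexity as the exponential of the mean negative log-likelihood.
double perplexity(const std::vector<double> & probs) {
    double nll = 0.0; // negative log-likelihood
    for (double p : probs) {
        nll -= std::log(p);
    }
    return std::exp(nll / probs.size());
}
```

A model that assigns probability 0.5 to every token has perplexity 2; perfect prediction gives 1.
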

2 years agoAdd chatLLaMa script (#198)
Jean-Christophe Hoelt [Tue, 21 Mar 2023 16:23:15 +0000 (18:23 +0200)]
Add chatLLaMa script (#198)

* Add chatLLaMa script

* Fix shellcheck errors and do some cleanup

* Move chatLLaMa script to `examples` directory

* Reduce chatLLaMa context size to 2048

Ref d7def1a7524f712e5ebb7cd02bab0f13aa56a7f9

* Include n_predict to 2048 in examples/chatLLaMa

2 years agomakefile: Fix CPU feature detection on Haiku (#218)
Alex von Gluck IV [Tue, 21 Mar 2023 16:21:06 +0000 (11:21 -0500)]
makefile: Fix CPU feature detection on Haiku (#218)

2 years agoEnable ANSI colors on Windows 10+ (#311)
anzz1 [Tue, 21 Mar 2023 16:14:46 +0000 (18:14 +0200)]
Enable ANSI colors on Windows 10+ (#311)

* Enable ANSI colors on Windows 10+

On older Windows versions the function will silently fail without any ill effects

* Do not call SetConsoleMode if the mode is already set

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoMinor style changes
Georgi Gerganov [Tue, 21 Mar 2023 16:10:32 +0000 (18:10 +0200)]
Minor style changes

2 years agoAdd chat.sh script
Georgi Gerganov [Tue, 21 Mar 2023 16:09:37 +0000 (18:09 +0200)]
Add chat.sh script

2 years agoCheck for reverse prompt by characters instead of tokens (#292) (#330)
tjohnman [Tue, 21 Mar 2023 16:05:06 +0000 (17:05 +0100)]
Check for reverse prompt by characters instead of tokens (#292) (#330)

* Check for reverse prompt by characters instead of tokens (#292)

* Update main.cpp

Wording.

* Cleanup.

* Remove unnecessary use of std::stringstream.

---------

Co-authored-by: Johnman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
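
The character-level check above amounts to a suffix test on the detokenized output: after each generated token is appended, ask whether the text now ends with the reverse prompt. This avoids missing matches when the prompt spans token boundaries. A minimal sketch:

```cpp
#include <string>

// Sketch: does the accumulated output text end with the reverse prompt?
bool ends_with(const std::string & text, const std::string & suffix) {
    return text.size() >= suffix.size() &&
           text.compare(text.size() - suffix.size(), suffix.size(), suffix) == 0;
}
```
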
2 years agoCheck for reverse prompt by characters instead of tokens (#292) (#330)
tjohnman [Tue, 21 Mar 2023 16:04:43 +0000 (17:04 +0100)]
Check for reverse prompt by characters instead of tokens (#292) (#330)

* Check for reverse prompt by characters instead of tokens (#292)

* Update main.cpp

Wording.

* Cleanup.

* Remove unnecessary use of std::stringstream.

---------

Co-authored-by: Johnman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoFix convert script, warnings, alpaca instructions, default params
Georgi Gerganov [Tue, 21 Mar 2023 15:59:16 +0000 (17:59 +0200)]
Fix convert script, warnings, alpaca instructions, default params

2 years agoAdd OpenBSD support (#314)
Kevin Lo [Tue, 21 Mar 2023 15:50:09 +0000 (09:50 -0600)]
Add OpenBSD support (#314)

2 years agofix typo in comment (#318)
Mack Straight [Tue, 21 Mar 2023 15:49:43 +0000 (08:49 -0700)]
fix typo in comment (#318)

2 years agoMakefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335)
Qingyou Meng [Tue, 21 Mar 2023 15:44:11 +0000 (23:44 +0800)]
Makefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335)

2 years agocmdline option for custom amount of model parts (--n_parts N) (#348)
anzz1 [Tue, 21 Mar 2023 15:42:43 +0000 (17:42 +0200)]
cmdline option for custom amount of model parts (--n_parts N) (#348)

* cmdline option for custom amount of model parts (--n_parts N)

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoUpdate IPFS links to quantized alpaca with new tokenizer format (#352)
Kevin Kwok [Tue, 21 Mar 2023 15:34:49 +0000 (08:34 -0700)]
Update IPFS links to quantized alpaca with new tokenizer format (#352)

2 years agoChange default repeat_penalty to 1.0
Georgi Gerganov [Tue, 21 Mar 2023 15:32:14 +0000 (17:32 +0200)]
Change default repeat_penalty to 1.0

I feel this penalty is not really helping.
Especially for the example from the README it makes results pretty bad
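
For reference, the early llama.cpp sampler applied the penalty roughly in the style of the CTRL paper — dividing positive logits and multiplying negative ones — so a penalty of 1.0 makes the transformation the identity, i.e. disables it. A sketch of that rule (illustrative, not the exact sampler code):

```cpp
// Sketch: repetition penalty on a single logit of an already-seen token.
// penalty == 1.0f leaves the logit unchanged.
float apply_repeat_penalty(float logit, float penalty) {
    return logit > 0.0f ? logit / penalty : logit * penalty;
}
```
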

2 years agoAdd tokenizer test + revert to C++11 (#355)
Georgi Gerganov [Tue, 21 Mar 2023 15:29:41 +0000 (17:29 +0200)]
Add tokenizer test + revert to C++11 (#355)

* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now

2 years agoAdd initial AVX512 support for dot product on Linux (#320)
Casey Primozic [Tue, 21 Mar 2023 14:35:42 +0000 (07:35 -0700)]
Add initial AVX512 support for dot product on Linux (#320)

 * Update Makefile to detect AVX512 support and add compiler flags if it's available
 * Based on existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
 * Perform 8 bit -> 16 bit sign extension and multiply+add on 32 values at time instead of 16
 * Use built-in AVX512 horizontal reduce add to get sum at the end
 * Manual unrolling on inner dot product loop to reduce loop counter overhead

2 years agoAdding missing features of CMakeLists.txt & Refactoring (#131)
nusu-github [Tue, 21 Mar 2023 00:37:16 +0000 (09:37 +0900)]
Adding missing features of CMakeLists.txt & Refactoring (#131)

* Add functionality to CMakeLists.txt

Refactoring:
1. Simplify options that are negations of negations:
LLAMA_NO_ACCELERATE -> LLAMA_ACCELERATE
2. Make AVX2 an optional setting in MSVC instead of forcing it on.
3. Make CMAKE_CXX_STANDARD the same as in the Makefile.
4. Use add_compile_options instead of adding options to CMAKE_C_FLAGS.
5. Make utils use target_link_libraries instead of directly referencing code.

Added features:
1. Added some options:
LLAMA_STATIC_LINK, LLAMA_NATIVE, LLAMA_LTO, LLAMA_GPROF, LLAMA_OPENBLAS

* Fix Accelerate link in CMake

* Windows build Fix

* C++11 to C++17

* Reflect the C and C++ standards individually

* Change the minimum CMake version to 3.12

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoNix flake: set meta.mainProgram to llama
Ben Siraphob [Mon, 20 Mar 2023 21:44:30 +0000 (16:44 -0500)]
Nix flake: set meta.mainProgram to llama

2 years agoFixed tokenizer.model not found error when model dir is symlink (#325)
Qingyou Meng [Mon, 20 Mar 2023 19:33:10 +0000 (03:33 +0800)]
Fixed tokenizer.model not found error when model dir is symlink (#325)

2 years agomove file magic/version to header, print expected version (#319)
Mack Straight [Mon, 20 Mar 2023 19:26:01 +0000 (12:26 -0700)]
move file magic/version to header, print expected version (#319)

2 years agoDocker - Fix publish docker image in GitHub Registry (#235)
Bernat Vadell [Mon, 20 Mar 2023 17:05:20 +0000 (18:05 +0100)]
Docker - Fix publish docker image in GitHub Registry (#235)

* fix publish permission

* try to fix docker pipeline using as password github_token & username repository_owner

2 years agosentencepiece bpe compatible tokenizer (#252)
Mack Straight [Mon, 20 Mar 2023 10:17:23 +0000 (03:17 -0700)]
sentencepiece bpe compatible tokenizer (#252)

* potential out of bounds read

* fix quantize

* style

* Update convert-pth-to-ggml.py

* mild cleanup

* don't need the space-prefixing here rn since main.cpp already does it

* new file magic + version header field

* readme notice

* missing newlines

Co-authored-by: slaren <redacted>
2 years agoAdd tqdm to Python requirements (#293)
Stephan Walter [Mon, 20 Mar 2023 08:24:11 +0000 (08:24 +0000)]
Add tqdm to Python requirements (#293)

* Add tqdm to Python requirements
* Remove torchvision torchaudio, add requests

2 years agobugfix: default should not be interactive (#304)
cocktailpeanut [Sun, 19 Mar 2023 21:44:20 +0000 (17:44 -0400)]
bugfix: default should not be interactive (#304)

2 years agoRename script
Georgi Gerganov [Sun, 19 Mar 2023 19:58:51 +0000 (21:58 +0200)]
Rename script

2 years agoAdd temporary helper script for Alpaca chat
Georgi Gerganov [Sun, 19 Mar 2023 19:57:28 +0000 (21:57 +0200)]
Add temporary helper script for Alpaca chat

2 years agofix coloring of last `n_batch` of prompt, and refactor line input (#221)
Rickey Bowers Jr [Sun, 19 Mar 2023 19:44:30 +0000 (13:44 -0600)]
fix coloring of last `n_batch` of prompt, and refactor line input (#221)

* fix coloring of last `n_batch` of prompt, and refactor line input
* forgot the newline that needs to be sent to the model
* (per #283) try to force flush of color reset in SIGINT handler

2 years agoSupport for multiple reverse prompts. (#299)
tjohnman [Sun, 19 Mar 2023 19:33:06 +0000 (20:33 +0100)]
Support for multiple reverse prompts. (#299)

Co-authored-by: Johnman <>
Co-authored-by: Johnman <redacted>
2 years agoImproved quantize script (#222)
Suaj Carrot [Sun, 19 Mar 2023 18:38:44 +0000 (12:38 -0600)]
Improved quantize script (#222)

* Improved quantize script

I improved the quantize script by adding error handling and by allowing multiple models to be selected for quantization at once on the command line. I also converted it to Python for generality as well as extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the reviews made by @mattsta. The parallelization suggestion is still to be revised, but code for it was still added (commented).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has been translated correctly to Python now.

* Added support for Windows and updated README to use this script

New code to set the name of the quantize script binary depending on the platform has been added (quantize.exe if working on Windows) and the README.md file has been updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo regarding the new filenames of the quantized models and removed the shell=True parameter in the subprocess.run call as it was conflicting with the list of parameters.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This was making the automatic help message to be suggesting the program's usage as being literally "$ Quantization Script [arguments]". It should now be something like "$ python3 quantize.py [arguments]".

2 years agoMake prompt randomization optional. (#300)
tjohnman [Sun, 19 Mar 2023 18:36:19 +0000 (19:36 +0100)]
Make prompt randomization optional. (#300)

Co-authored-by: Johnman <>
2 years agoRespect the maximum number of tokens in interactive. (#298)
tjohnman [Sun, 19 Mar 2023 18:31:17 +0000 (19:31 +0100)]
Respect the maximum number of tokens in interactive. (#298)

Co-authored-by: Johnman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoAdd --ignore-eos parameter (#181)
slaren [Sun, 19 Mar 2023 18:22:48 +0000 (19:22 +0100)]
Add --ignore-eos parameter (#181)

Co-authored-by: Georgi Gerganov <redacted>
2 years agointeractive mode: print '\n' in sigint_handler, this flush stdout thus ensure color...
Qingyou Meng [Sun, 19 Mar 2023 18:10:00 +0000 (02:10 +0800)]
interactive mode: print '\n' in sigint_handler; this flushes stdout and thus ensures the color reset. (#283)

2 years agoCommand line switch to use F16 for memory_k and memory_v (refactor of #154) (#294)
Erik Scholz [Sun, 19 Mar 2023 17:57:00 +0000 (18:57 +0100)]
Command line switch to use F16 for memory_k and memory_v (refactor of #154) (#294)

* Use F16 for memory_k and memory_v

* add command line switch to use f16 instead of f32 for memory k+v

---------

Co-authored-by: Ty Everett <redacted>
2 years agoUpdate hot topics to mention Alpaca support
Georgi Gerganov [Sun, 19 Mar 2023 17:51:55 +0000 (19:51 +0200)]
Update hot topics to mention Alpaca support

2 years agoFix off-by-one bug (#115)
Georgi Gerganov [Sun, 19 Mar 2023 17:46:32 +0000 (19:46 +0200)]
Fix off-by-one bug (#115)

2 years agoFix python stuff (#109)
Georgi Gerganov [Sun, 19 Mar 2023 17:33:18 +0000 (19:33 +0200)]
Fix python stuff (#109)

2 years agoRefactoring `convert-pth-to-ggml.py`: more concise and readable (#109)
qunash [Sun, 19 Mar 2023 17:17:39 +0000 (20:17 +0300)]
Refactoring `convert-pth-to-ggml.py`: more concise and readable (#109)

* Refactor get_n_parts function to simplify code and improve readability

* Use f-strings instead of concatenation

* Refactoring: more concise and readable

* modularize

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoDrop trailing new line from file prompts (#80)
Georgi Gerganov [Sun, 19 Mar 2023 17:04:44 +0000 (19:04 +0200)]
Drop trailing new line from file prompts (#80)

2 years agoAdd instruction for using Alpaca (#240)
Georgi Gerganov [Sun, 19 Mar 2023 16:49:50 +0000 (18:49 +0200)]
Add instruction for using Alpaca (#240)

2 years agoAdd "--instruct" argument for usage with Alpaca (#240)
Georgi Gerganov [Sun, 19 Mar 2023 16:37:02 +0000 (18:37 +0200)]
Add "--instruct" argument for usage with Alpaca (#240)

Also start adding prompts in "./prompts"

2 years agoChange RMSNorm eps to 1e-6 (#173)
Georgi Gerganov [Sun, 19 Mar 2023 15:30:00 +0000 (17:30 +0200)]
Change RMSNorm eps to 1e-6 (#173)

I think this is what is used in the Python code

2 years agoWarn user if a context size greater than 2048 tokens is specified (#274)
Ronsor [Sun, 19 Mar 2023 00:10:47 +0000 (17:10 -0700)]
Warn user if a context size greater than 2048 tokens is specified (#274)

LLaMA doesn't support more than 2048 token context sizes, and going above that produces terrible results.
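
A minimal sketch of the check this change introduces (function name and exact wording are illustrative):

```cpp
#include <cstdio>

// Sketch: LLaMA models of this era were trained with a 2048-token context
// window, so warn when the user asks for more. Returns whether it warned.
bool warn_if_context_too_large(int n_ctx) {
    if (n_ctx > 2048) {
        fprintf(stderr,
                "warning: model does not support context sizes greater than "
                "2048 tokens (%d specified); expect poor results\n", n_ctx);
        return true;
    }
    return false;
}
```
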

2 years agoFix typo in readme
Pavol Rusnak [Sat, 18 Mar 2023 21:39:46 +0000 (22:39 +0100)]
Fix typo in readme

2 years agoAdd note about Python 3.11 to readme
Pavol Rusnak [Sat, 18 Mar 2023 21:20:04 +0000 (22:20 +0100)]
Add note about Python 3.11 to readme

2 years agoAdd memory/disk requirements to readme
Pavol Rusnak [Sat, 18 Mar 2023 20:58:46 +0000 (21:58 +0100)]
Add memory/disk requirements to readme

2 years agoRemove unused code since n_vocab is model.hparams.n_vocab (#262)
Alex Nguyen [Sat, 18 Mar 2023 13:51:49 +0000 (20:51 +0700)]
Remove unused code since n_vocab is model.hparams.n_vocab (#262)

2 years agofixed warning with std::ignore about unused function result (#151)
Justin Suess [Sat, 18 Mar 2023 11:44:09 +0000 (07:44 -0400)]
fixed warning with std::ignore about unused function result (#151)

fixed warning with std::ignore about unused function result

2 years agoFix n^2 loop in tokenization (#254)
Gary Linscott [Sat, 18 Mar 2023 11:17:19 +0000 (04:17 -0700)]
Fix n^2 loop in tokenization (#254)

This causes long prompts to parse very slowly.

2 years agoCI Improvements (#230)
anzz1 [Sat, 18 Mar 2023 07:27:12 +0000 (09:27 +0200)]
CI Improvements (#230)

* CI Improvements

Manual build feature, autoreleases for Windows

* better CI naming convention

use branch name in releases and tags

2 years agoNix flake (#40)
Niklas Korz [Fri, 17 Mar 2023 22:03:48 +0000 (23:03 +0100)]
Nix flake (#40)

* Nix flake

* Nix: only add Accelerate framework on macOS

* Nix: development shell, direnv and compatibility

* Nix: use python packages supplied by withPackages

* Nix: remove channel compatibility

* Nix: fix ARM neon dotproduct on macOS

---------

Co-authored-by: Pavol Rusnak <redacted>
2 years agoImplement non-greedy tokenizer that tries to maximize token lengths (#242)
thement [Fri, 17 Mar 2023 20:05:58 +0000 (21:05 +0100)]
Implement non-greedy tokenizer that tries to maximize token lengths (#242)

* Implement non-greedy tokenizer that tries to maximize token lengths

* Insert single space in front of the prompt

- this is to match original llama tokenizer behavior

---------

Co-authored-by: Jakub Horak <redacted>
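
The non-greedy idea is: at each position, prefer the longest vocabulary entry that matches, instead of the first one found. A toy sketch under that assumption (the real tokenizer also handles token scores and byte fallbacks):

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch: longest-match tokenization over a toy vocabulary; -1 marks an
// unknown single byte.
std::vector<int> tokenize_longest(const std::string & text,
                                  const std::map<std::string, int> & vocab) {
    std::vector<int> out;
    size_t i = 0;
    while (i < text.size()) {
        size_t best_len = 1;
        int    best_id  = -1;
        for (const auto & kv : vocab) {
            const std::string & tok = kv.first;
            if (tok.size() >= best_len && text.compare(i, tok.size(), tok) == 0) {
                best_len = tok.size();
                best_id  = kv.second;
            }
        }
        out.push_back(best_id);
        i += best_len;
    }
    return out;
}
```
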
2 years agoDefault to 4 threads (#243)
Georgi Gerganov [Fri, 17 Mar 2023 19:46:46 +0000 (21:46 +0200)]
Default to 4 threads (#243)

2 years agoUpdate Contributing section
Georgi Gerganov [Fri, 17 Mar 2023 18:30:04 +0000 (20:30 +0200)]
Update Contributing section

2 years agoDon't tell users to use a bad number of threads (#243)
Stephan Walter [Fri, 17 Mar 2023 17:47:35 +0000 (17:47 +0000)]
Don't tell users to use a bad number of threads (#243)

The readme tells people to use the command line option "-t 8", causing 8
threads to be started. On systems with fewer than 8 cores, this causes a
significant slowdown. Remove the option from the example command lines
and use /proc/cpuinfo on Linux to determine a sensible default.
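
A portable sketch of picking that sensible default (the commit itself parses /proc/cpuinfo on Linux; std::thread::hardware_concurrency is the portable C++ equivalent, and a follow-up commit capped the default at 4):

```cpp
#include <algorithm>
#include <thread>

// Sketch: derive the default thread count from the hardware instead of
// hardcoding 8. hardware_concurrency() may return 0 when unknown.
int default_thread_count() {
    unsigned n = std::thread::hardware_concurrency();
    return n > 0 ? (int) std::min(n, 4u) : 4;
}
```
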

2 years agoadd pthread link to fix cmake build under linux (#114)
mmyjona [Fri, 17 Mar 2023 16:38:24 +0000 (00:38 +0800)]
add pthread link to fix cmake build under linux (#114)

* add pthread link to fix cmake build under linux

* add cmake to linux and macos platform

* separate make and cmake workflow

---------

Co-authored-by: Sebastián A <redacted>
2 years ago🚀 Dockerize llamacpp (#132)
Bernat Vadell [Fri, 17 Mar 2023 09:47:06 +0000 (10:47 +0100)]
🚀 Dockerize llamacpp (#132)

* feat: dockerize llamacpp

* feat: split build & runtime stages

* split dockerfile into main & tools

* add quantize into tool docker image

* Update .devops/tools.sh

Co-authored-by: Georgi Gerganov <redacted>
* add docker action pipeline

* change CI to publish at github docker registry

* fix runs-on name: macOS-latest should be macos-latest (lowercase)

* include docker versioned images

* fix github action docker

* fix docker.yml

* feat: include all-in-one command tool & update readme.md

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoQ4_1 quantization (#193)
Matvey Soloviev [Fri, 17 Mar 2023 04:48:39 +0000 (05:48 +0100)]
Q4_1 quantization (#193)

* Add AVX2 version of ggml_vec_dot_q4_1

* Small optimisations to q4_1 dot product (@Const-me)

* Rearrange Q4_1 quantization to work for multipart models. (Fix #152)

* Fix ggml_vec_mad_q4_1 too

* Fix non-vectorised q4_1 vec mul
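
In the Q4_1 scheme, each block of 32 floats is stored as a scale d, a minimum m, and 32 4-bit quants q in [0, 15], with x ≈ q·d + m. A scalar reference sketch of the quantization step (illustrative only — not the AVX2 kernels this commit adds, and not ggml's exact block layout):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch: quantize one block of 32 floats to 4-bit Q4_1 (scale + min).
void quantize_q4_1_block(const float * x, float & d, float & m, uint8_t * q) {
    float lo = x[0], hi = x[0];
    for (int i = 1; i < 32; i++) {
        lo = std::min(lo, x[i]);
        hi = std::max(hi, x[i]);
    }
    m = lo;
    d = (hi - lo) / 15.0f;
    for (int i = 0; i < 32; i++) {
        float v = d > 0.0f ? (x[i] - m) / d : 0.0f;
        q[i] = (uint8_t) std::min(15, (int) (v + 0.5f)); // round, clamp to 4 bits
    }
}
```

Reconstruction error per weight is bounded by half the scale d, which is why sharing the scale/min over only 32 weights (rather than a whole row, as in the GPTQ import format) preserves more precision.
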