git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log

Retire the ggml_mul_mat() branch for transposed src0 (#500)
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]

* Retire the ggml_mul_mat() for transposed src0

- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic in respect to num threads

* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)

* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON

* Fix dequantization - forgot to interleave the quants
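
A minimal sketch of the pattern this change relies on, assuming the public ggml API names (ggml_is_contiguous, ggml_cpy, ggml_new_tensor_2d); the helper is illustrative, not the commit's actual code:

    // Sketch: make src0 contiguous before ggml_mul_mat(), replacing the
    // retired transposed-src0 branch.
    struct ggml_tensor * mul_mat_contig(
            struct ggml_context * ctx,
            struct ggml_tensor  * src0,   // possibly transposed / non-contiguous
            struct ggml_tensor  * src1) {
        if (!ggml_is_contiguous(src0)) {
            // copy into a freshly allocated contiguous tensor of the same shape
            src0 = ggml_cpy(ctx, src0,
                    ggml_new_tensor_2d(ctx, src0->type, src0->ne[0], src0->ne[1]));
        }
        return ggml_mul_mat(ctx, src0, src1);
    }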

Disable prompt verbosity by default and add option to enable (#480)
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]

Add AVX2 implementation of dequantize_row_q4_0 (#467)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]

Don't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]

Add longer DAN prompt for testing big batch numbers
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]

Add timings for the prompt evaluation (#478)
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]

Remove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]

Remove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]

Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]

bounds checking for input prefix (#492)
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]

feat: '--in-prefix STRING' option (#426)
anzz1 [Sat, 25 Mar 2023 12:03:19 +0000 (14:03 +0200)]

Prefix user inputs with a string
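
A sketch of the bounds-checked handling that #492 above adds for this option; the invalid_param flag and params field mirror the style of the argument parser but are assumptions here:

    // Sketch: consume the flag's value only if one is actually present.
    if (arg == "--in-prefix") {
        if (++i >= argc) {
            invalid_param = true;  // flag given without a value: reject, don't read past argv
            break;
        }
        params.input_prefix = argv[i];
    }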

Add support for file load progress reporting callbacks (#434)
Jed Fox [Sat, 25 Mar 2023 05:26:28 +0000 (01:26 -0400)]

* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo
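
A usage sketch, assuming the callback ends up in llama_context_params as the bullets above describe (the exact field names are assumptions):

    // Sketch: register a load-progress callback before creating the context.
    llama_context_params cparams = llama_context_default_params();
    cparams.progress_callback = [](float progress, void * user_data) {
        fprintf(stderr, "\rloading: %3.0f%%", progress * 100.0f);
    };
    cparams.progress_callback_user_data = NULL;
    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", cparams);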

Add missing struct annotation (#483)
Doomsdayrs [Sat, 25 Mar 2023 05:21:24 +0000 (01:21 -0400)]

`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compiler issue when being parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.

Fix crash for 65B model with pre-allocated memory (#485)
Chris Kuehl [Sat, 25 Mar 2023 04:38:14 +0000 (23:38 -0500)]

Disable BLAS altogether - the bug is not just for quantized mat mul
Georgi Gerganov [Fri, 24 Mar 2023 21:47:06 +0000 (23:47 +0200)]

Disable BLAS branch in mul_mat - seems there is a bug
Georgi Gerganov [Fri, 24 Mar 2023 21:39:17 +0000 (23:39 +0200)]

Immediately start processing the prompt before user input has been provided (#476)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:58 +0000 (23:17 +0200)]

Reduce memory usage and allocate enough memory for largest context (#473)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)]

* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32

Temporarily bump the memory buffer size - hopefully fixes issues from 483bab2e
Georgi Gerganov [Fri, 24 Mar 2023 16:23:56 +0000 (18:23 +0200)]

Update README.md (#444)
Gary Mulder [Fri, 24 Mar 2023 15:23:09 +0000 (15:23 +0000)]

Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.

fix instruct mode (#445)
rabidcopy [Fri, 24 Mar 2023 15:22:39 +0000 (10:22 -0500)]

Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.

Properly free llama_context on failure
Georgi Gerganov [Fri, 24 Mar 2023 15:21:01 +0000 (17:21 +0200)]

additional optimizations for POWER9 (#454)
Cameron Kaiser [Fri, 24 Mar 2023 15:19:26 +0000 (08:19 -0700)]

Support calling mlock() on loaded model data on Linux and macOS (#453)
comex [Fri, 24 Mar 2023 15:19:05 +0000 (08:19 -0700)]

* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.
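
A minimal sketch of the underlying POSIX call on Linux/macOS (not the PR's actual loader code):

    #include <sys/mman.h>
    #include <stdio.h>

    // Pin a region of model data in RAM so the OS won't swap or compress it.
    static bool lock_model_memory(const void * addr, size_t size) {
        if (mlock(addr, size) != 0) {
            perror("mlock failed (consider raising RLIMIT_MEMLOCK)");
            return false;
        }
        return true;
    }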

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>

Add embedding mode with arg flag. Currently working (#282)
Luciano [Fri, 24 Mar 2023 15:05:13 +0000 (08:05 -0700)]

* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <redacted>
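
A usage sketch, assuming the flag exposes embeddings through the C API accessors of this era (llama_n_embd, llama_get_embeddings); error handling is omitted:

    // Sketch: create a context in embedding mode and read the embeddings
    // after evaluating a prompt.
    llama_context_params cparams = llama_context_default_params();
    cparams.embedding = true;
    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", cparams);
    // ... tokenize the prompt and run llama_eval(...) on it ...
    const int     n_embd = llama_n_embd(ctx);
    const float * embd   = llama_get_embeddings(ctx);
    for (int i = 0; i < n_embd; i++) {
        printf("%f ", embd[i]);
    }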

Add link to Roadmap discussion
Georgi Gerganov [Fri, 24 Mar 2023 07:13:35 +0000 (09:13 +0200)]

Revert "Fix memory allocation issues and seg faults"
Georgi Gerganov [Fri, 24 Mar 2023 04:22:28 +0000 (06:22 +0200)]

This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.

Will provide the correct fix later

Fix memory allocation issues and seg faults
Georgi Gerganov [Thu, 23 Mar 2023 22:11:53 +0000 (00:11 +0200)]

Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Georgi Gerganov [Thu, 23 Mar 2023 21:22:01 +0000 (23:22 +0200)]

Should make results reproducible for different number of threads and batch sizes

Fix quantize script not finding models in parent directory (#428)
Jed Fox [Thu, 23 Mar 2023 20:42:52 +0000 (16:42 -0400)]

Remove obsolete command from Docker script
Georgi Gerganov [Thu, 23 Mar 2023 20:39:44 +0000 (22:39 +0200)]

Obsolete
Georgi Gerganov [Thu, 23 Mar 2023 20:32:02 +0000 (22:32 +0200)]

Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)
rabidcopy [Thu, 23 Mar 2023 20:22:47 +0000 (15:22 -0500)]

* Improve interactive mode's coherence after EOS

Aims to improve coherence and ability to resume the interactive session when the user is given input back after an end of text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp
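
A sketch of the final approach described in the bullets above; llama_token_eos() is the C API call, while newline_token, embd_inp, and the tokenize helper are assumed local names:

    // Sketch: on end-of-text in interactive mode, substitute the newline
    // token and inject the first reverse prompt so the user regains control.
    if (id == llama_token_eos() && params.interactive) {
        id = newline_token;  // tokenized earlier from "\n"
        if (!params.antiprompt.empty()) {
            // tokenize and inject only the first reverse prompt
            auto rp = ::llama_tokenize(ctx, params.antiprompt.front(), false);
            embd_inp.insert(embd_inp.end(), rp.begin(), rp.end());
        }
    }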

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>

Fix GPTQ converter (#423)
Timmy Knight [Thu, 23 Mar 2023 20:18:13 +0000 (10:18 -1000)]

* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <redacted>

Generate library with CMake (#430)
nusu-github [Thu, 23 Mar 2023 20:16:48 +0000 (05:16 +0900)]

* Generate library with CMake

Add the BUILD_SHARED_LIBS option to allow the llama library to be generated.

* Turn ON PIC when BUILD_SHARED_LIBS is ON

Command line args bounds checking (#424)
anzz1 [Thu, 23 Mar 2023 17:54:28 +0000 (19:54 +0200)]

* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1

Fix Nix build
Ben Siraphob [Wed, 22 Mar 2023 05:37:02 +0000 (00:37 -0500)]

Revert "Delete SHA256SUMS for now" (#429)
Stephan Walter [Thu, 23 Mar 2023 14:15:48 +0000 (14:15 +0000)]

* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.

* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <redacted>

Fix Makefile echo escape codes (by removing them). (#418)
Kerfuffle [Thu, 23 Mar 2023 11:41:32 +0000 (05:41 -0600)]

Move model section from issue template to README.md (#421)
Gary Mulder [Thu, 23 Mar 2023 11:30:40 +0000 (11:30 +0000)]

* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from the issue template about obtaining models from FB and links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway

* Updated the perplexity section to point at the Perplexity scores #406 discussion

Delete SHA256SUMS for now (#416)
anzz1 [Thu, 23 Mar 2023 10:26:19 +0000 (12:26 +0200)]

Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved

Adjust repetition penalty ..
Georgi Gerganov [Thu, 23 Mar 2023 08:46:58 +0000 (10:46 +0200)]

Add link to recent podcast about whisper.cpp and llama.cpp
Georgi Gerganov [Thu, 23 Mar 2023 07:48:51 +0000 (09:48 +0200)]

CI: CMake: Separate build and test steps (#376)
anzz1 [Thu, 23 Mar 2023 02:20:34 +0000 (04:20 +0200)]

* CI: Separate Build and Test steps (CMake)

* CI: Make sure build passes before running tests (CMake)

* CI: Standardise step id names

Fix instruct mode broken by PR #354 (#409)
tjohnman [Thu, 23 Mar 2023 00:30:23 +0000 (01:30 +0100)]

Co-authored-by: Johnman <redacted>

Update issue template so people will use it (#404)
Gary Mulder [Wed, 22 Mar 2023 19:06:18 +0000 (19:06 +0000)]

Deduplicate q4 quantization functions (#383)
Stephan Walter [Wed, 22 Mar 2023 17:29:06 +0000 (17:29 +0000)]

* Deduplicate q4 quantization functions

* Use const; add basic test

* Re-enable quantization test

* Disable AVX2 flags in CI

---------

Co-authored-by: Georgi Gerganov <redacted>

fix: add POSIX functionality for Linux compilation (#51)
Valentyn Bezshapkin [Wed, 22 Mar 2023 17:20:25 +0000 (18:20 +0100)]

* fix: add POSIX functionality for Linux compilation

* fix: older standard for compatibility

Don't force immediate interactive without `-i` (#354)
tjohnman [Wed, 22 Mar 2023 17:16:35 +0000 (18:16 +0100)]

* Don't force immediate interactive without -i

Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.

This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.

The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).

* Update help output.

---------

Co-authored-by: Johnman <redacted>

cmake: make llama an actual library (#392)
Erik Scholz [Wed, 22 Mar 2023 16:37:10 +0000 (17:37 +0100)]

fix perplexity after c-api refactor (#390)
Erik Scholz [Wed, 22 Mar 2023 16:09:38 +0000 (17:09 +0100)]

* preallocate a buffer of fitting size for tokenization (utils.cpp)

* don't create a new std::string (especially here, where it's usually large)
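
A sketch of the preallocation pattern (llama_tokenize per the C API of this era; the sizing heuristic is an assumption):

    // Sketch: a buffer of text.size() tokens is always enough, since one
    // token covers at least one byte; shrink to the real count afterwards.
    std::vector<llama_token> tokens(text.size() + (add_bos ? 1 : 0));
    const int n = llama_tokenize(ctx, text.c_str(), tokens.data(), (int) tokens.size(), add_bos);
    tokens.resize(n);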

Add details on perplexity to README.md (#395)
Gary Linscott [Wed, 22 Mar 2023 15:53:54 +0000 (08:53 -0700)]

Add missing header for memcpy (#386)
Yusuf Kağan Hanoğlu [Wed, 22 Mar 2023 08:55:45 +0000 (11:55 +0300)]

fixed: memcpy is not defined

When seed <= 0 - use the clock to generate one
Georgi Gerganov [Wed, 22 Mar 2023 05:47:15 +0000 (07:47 +0200)]
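
A sketch of the fallback (the params field and RNG wiring are assumptions in the style of main.cpp):

    #include <ctime>
    #include <random>

    // Sketch: a non-positive seed means "derive one from the clock".
    if (params.seed <= 0) {
        params.seed = (int32_t) time(NULL);
    }
    std::mt19937 rng(params.seed);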

Init llama_context_params properly from CLI (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:45:00 +0000 (07:45 +0200)]

Remove temporary notice and update hot topics
Georgi Gerganov [Wed, 22 Mar 2023 05:34:02 +0000 (07:34 +0200)]

Introduce C-style API (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:32:36 +0000 (07:32 +0200)]

* Major refactoring - introduce C-style API

* Clean up

* Add <cassert>

* Add <iterator>

* Add <algorithm> ....

* Fix timing reporting and accumulation

* Measure eval time only for single-token calls

* Change llama_tokenize return meaning
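
A minimal end-to-end sketch of the new C-style API as introduced here (#370); error handling and includes are trimmed, and the sampling parameters are illustrative:

    // Sketch: load, tokenize, eval, sample one token, print, free.
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", cparams);

    std::vector<llama_token> tokens(64);
    const int n = llama_tokenize(ctx, "Hello", tokens.data(), (int) tokens.size(), true);
    tokens.resize(n);  // a negative n would mean the buffer was too small

    llama_eval(ctx, tokens.data(), (int) tokens.size(), 0 /* n_past */, 4 /* n_threads */);
    llama_token id = llama_sample_top_p_top_k(ctx, NULL, 0, 40, 0.95f, 0.80f, 1.10f);
    printf("%s", llama_token_to_str(ctx, id));
    llama_free(ctx);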

Add SHA256SUMS file and instructions to README on how to obtain and verify the downloads
Gary Mulder [Mon, 20 Mar 2023 20:14:06 +0000 (20:14 +0000)]

Hashes created using:

sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS

Fix bin dir for win ci
anzz1 [Tue, 21 Mar 2023 21:49:24 +0000 (23:49 +0200)]

specify build type for ctest on windows (#371)
Erik Scholz [Tue, 21 Mar 2023 21:34:25 +0000 (22:34 +0100)]

Add notice about pending change
Georgi Gerganov [Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)]

fix typo in chatLLaMa (#368)
Mathieu Nayrolles [Tue, 21 Mar 2023 20:52:27 +0000 (16:52 -0400)]

The prompt contains a typo where 'alound' is used instead of 'aloud'.

Update issue templates
Georgi Gerganov [Tue, 21 Mar 2023 17:47:27 +0000 (19:47 +0200)]

We could use std::unordered_map over std::map (#305)
Fabio R. Sluzala [Tue, 21 Mar 2023 17:21:50 +0000 (14:21 -0300)]

* Improve performance by changing std::map to std::unordered_map, and std::map<id, token> id_to_token to std::vector<token> id_to_token

* fix last commit on gpt_vocab_init: add vocab.id_to_token.resize(vocab.token_to_id.size());

* Removed include <map>

* Nest struct token score inside gpt_vocab

* renamed token to tok

Fix color codes emitting mid-UTF8 code. (#312)
Matvey Soloviev [Tue, 21 Mar 2023 17:11:01 +0000 (18:11 +0100)]

Importer for GPTQ quantized LLaMA models (#301)
comex [Tue, 21 Mar 2023 16:42:25 +0000 (09:42 -0700)]

* [WIP, broken] Importer for GPTQ quantized LLaMA models

Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

Current status: Something is busted.  The output starts out decent, but
quickly degrades into gibberish.  This doesn't happen with either the
original GPTQ-for-LLaMa using the same weights, or llama.cpp when using
weights quantized by its own quantizer.  Is there a bug in the
conversion script that somehow only comes into play with a large context
size?

I did notice one potential issue.  It's clearly not the main cause of
the gibberish, since it doesn't happen when using q4_1 weights quantized
by llama.cpp itself, but it seems concerning.  When doing a matrix
multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when
the multiplication is not done with BLAS, the intermediate results are
stored in the smaller format rather than f32.  This seems like an
unnecessary waste of precision, especially in the q4_1 case.

I was originally hoping to validate the results by matching the Python
implementation's output exactly, but precision and non-associativity
issues make this very difficult, including when performing matrix
multiplications and, especially, computing norms.

Anyway, design details:

The models being imported store per-layer weights in essentially q4_1
format, although the addend and scale are shared across an entire row
rather than every group of 32 weights.  This script duplicates the
addend and scale to match ggml's expectations, at the cost of wasting
some memory.

However, there are two differences which I accommodated by changing the
output format (and adding corresponding support to main.cpp) rather than
having the script match the existing one:

- The tok_embeddings and output weights (i.e. the weights that aren't
  per-layer) are f16 instead of q4_1.  They could be converted to q4_1,
  and the impact of the loss of precision would probably be low, but
  this would rule out exactly matching the Python implementation's
  output for validation.

- There is no sharding, since the input doesn't have it, and for a
  CPU-only implementation it seems more useful to avoid having to deal
  with multiple files.

The new format is differentiated from existing q4_1 format by changing
the 'f16' header flag to a new value, 4.  That said, I think a cleaner
approach would be to change main.cpp to support loading each tensor with
an arbitrary sharding configuration and type rather than hardcoding
specific combinations of types.  So far I've wasted too much time
debugging to try implementing this...

* Add missing permutation.  Now it works.

---------

Co-authored-by: Georgi Gerganov <redacted>

Compute perplexity over prompt (#270)
Gary Linscott [Tue, 21 Mar 2023 16:27:42 +0000 (09:27 -0700)]

* Compute perplexity over prompt

* More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)

* Output all perplexities

* Add timing/ETA
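
The quantity being computed, sketched from the definition (names like p_correct are assumptions; per-token probabilities are assumed already extracted from the logits):

    // Perplexity over N tokens: ppl = exp(-(1/N) * sum_i log p(token_i | context_i))
    double nll = 0.0;
    for (int i = 0; i < n_tokens; i++) {
        nll -= std::log(p_correct[i]);  // softmax probability of the true next token
    }
    const double ppl = std::exp(nll / n_tokens);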

Add chatLLaMa script (#198)
Jean-Christophe Hoelt [Tue, 21 Mar 2023 16:23:15 +0000 (18:23 +0200)]

* Add chatLLaMa script

* Fix shellcheck errors and do some cleanup

* Move chatLLaMa script to `examples` directory

* Reduce chatLLaMa context size to 2048

Ref d7def1a7524f712e5ebb7cd02bab0f13aa56a7f9

* Set n_predict to 2048 in examples/chatLLaMa

makefile: Fix CPU feature detection on Haiku (#218)
Alex von Gluck IV [Tue, 21 Mar 2023 16:21:06 +0000 (11:21 -0500)]

Enable ANSI colors on Windows 10+ (#311)
anzz1 [Tue, 21 Mar 2023 16:14:46 +0000 (18:14 +0200)]

* Enable ANSI colors on Windows 10+

On older versions the function will silently fail without any ill effects

* Do not call SetConsoleMode if the mode is already set

* Update main.cpp
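
A sketch of the Win32 sequence the bullets above describe (standard console API; the helper name is illustrative):

    #include <windows.h>

    // Enable VT processing on Windows 10+. On older versions the call fails
    // silently with no ill effects; the mode is only set if not already on.
    static void enable_ansi_colors(void) {
        HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD mode = 0;
        if (GetConsoleMode(h, &mode) && !(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) {
            SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
        }
    }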

---------

Co-authored-by: Georgi Gerganov <redacted>

Minor style changes
Georgi Gerganov [Tue, 21 Mar 2023 16:10:32 +0000 (18:10 +0200)]

Add chat.sh script
Georgi Gerganov [Tue, 21 Mar 2023 16:09:37 +0000 (18:09 +0200)]

Check for reverse prompt by characters instead of tokens (#292) (#330)
tjohnman [Tue, 21 Mar 2023 16:05:06 +0000 (17:05 +0100)]

* Check for reverse prompt by characters instead of tokens (#292)

* Update main.cpp

Wording.

* Cleanup.

* Remove unnecessary use of std::stringstream.
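
A sketch of the character-level check: match the reverse prompt against the tail of the detokenized output rather than against token ids (variable names are assumptions):

    // Sketch: detokenize recent output and compare strings, not tokens.
    std::string tail;
    for (llama_token tok : last_n_tokens) {
        tail += llama_token_to_str(ctx, tok);
    }
    for (const std::string & rp : params.antiprompt) {
        if (tail.size() >= rp.size() &&
            tail.compare(tail.size() - rp.size(), rp.size(), rp) == 0) {
            is_interacting = true;  // hand control back to the user
            break;
        }
    }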

---------

Co-authored-by: Johnman <redacted>
Co-authored-by: Georgi Gerganov <redacted>

Fix convert script, warnings, alpaca instructions, default params
Georgi Gerganov [Tue, 21 Mar 2023 15:59:16 +0000 (17:59 +0200)]

Add OpenBSD support (#314)
Kevin Lo [Tue, 21 Mar 2023 15:50:09 +0000 (09:50 -0600)]

fix typo in comment (#318)
Mack Straight [Tue, 21 Mar 2023 15:49:43 +0000 (08:49 -0700)]

Makefile: slight cleanup for Mac Intel; echo instead of run ./main -h (#335)
Qingyou Meng [Tue, 21 Mar 2023 15:44:11 +0000 (23:44 +0800)]

cmdline option for custom amount of model parts (--n_parts N) (#348)
anzz1 [Tue, 21 Mar 2023 15:42:43 +0000 (17:42 +0200)]

* cmdline option for custom amount of model parts (--n_parts N)

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>

Update IPFS links to quantized alpaca with new tokenizer format (#352)
Kevin Kwok [Tue, 21 Mar 2023 15:34:49 +0000 (08:34 -0700)]

Change default repeat_penalty to 1.0
Georgi Gerganov [Tue, 21 Mar 2023 15:32:14 +0000 (17:32 +0200)]

I feel this penalty is not really helping.
Especially for the example from the README it makes results pretty bad

Add tokenizer test + revert to C++11 (#355)
Georgi Gerganov [Tue, 21 Mar 2023 15:29:41 +0000 (17:29 +0200)]

* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now

Add initial AVX512 support for dot product on Linux (#320)
Casey Primozic [Tue, 21 Mar 2023 14:35:42 +0000 (07:35 -0700)]

* Update Makefile to detect AVX512 support and add compiler flags if it's available
* Based on existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
* Perform 8 bit -> 16 bit sign extension and multiply+add on 32 values at a time instead of 16
* Use built-in AVX512 horizontal reduce add to get sum at the end
* Manual unrolling on inner dot product loop to reduce loop counter overhead
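
For the reduce step specifically, a tiny sketch (AVX512F intrinsics; the helper is illustrative):

    #include <immintrin.h>

    // Horizontal sum of 16 floats using the built-in AVX512 reduce add.
    static inline float hsum_ps(__m512 v) {
        return _mm512_reduce_add_ps(v);
    }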

Adding missing features of CMakeLists.txt & Refactoring (#131)
nusu-github [Tue, 21 Mar 2023 00:37:16 +0000 (09:37 +0900)]

* Add functionality to CMakeLists.txt

Refactoring:
1. Simplify options that are negations of negations.
LLAMA_NO_ACCELERATE -> LLAMA_ACCELERATE
2. Make AVX2 optional in MSVC instead of forcing it on.
3. Align CMAKE_CXX_STANDARD with the Makefile.
4. Use add_compile_options instead of adding options to CMAKE_C_FLAGS.
5. Make utils use target_link_libraries instead of directly referencing code.

Added features:
1. Added some options:
LLAMA_STATIC_LINK, LLAMA_NATIVE, LLAMA_LTO, LLAMA_GPROF, LLAMA_OPENBLAS

* Fix Accelerate link in CMake

* Windows build Fix

* C++11 to C++17

* Reflects C/C++ standard individually

* Change the version to 3.12

---------

Co-authored-by: Georgi Gerganov <redacted>

Nix flake: set meta.mainProgram to llama
Ben Siraphob [Mon, 20 Mar 2023 21:44:30 +0000 (16:44 -0500)]

Fixed tokenizer.model not found error when model dir is symlink (#325)
Qingyou Meng [Mon, 20 Mar 2023 19:33:10 +0000 (03:33 +0800)]

move file magic/version to header, print expected version (#319)
Mack Straight [Mon, 20 Mar 2023 19:26:01 +0000 (12:26 -0700)]

Docker - Fix publish docker image in GitHub Registry (#235)
Bernat Vadell [Mon, 20 Mar 2023 17:05:20 +0000 (18:05 +0100)]

* fix publish permission

* try to fix docker pipeline by using github_token as password & repository_owner as username

sentencepiece bpe compatible tokenizer (#252)
Mack Straight [Mon, 20 Mar 2023 10:17:23 +0000 (03:17 -0700)]

* potential out of bounds read

* fix quantize

* style

* Update convert-pth-to-ggml.py

* mild cleanup

* don't need the space-prefixing here rn since main.cpp already does it

* new file magic + version header field

* readme notice

* missing newlines

Co-authored-by: slaren <redacted>

Add tqdm to Python requirements (#293)
Stephan Walter [Mon, 20 Mar 2023 08:24:11 +0000 (08:24 +0000)]

* Add tqdm to Python requirements
* Remove torchvision torchaudio, add requests

bugfix: default should not be interactive (#304)
cocktailpeanut [Sun, 19 Mar 2023 21:44:20 +0000 (17:44 -0400)]

Rename script
Georgi Gerganov [Sun, 19 Mar 2023 19:58:51 +0000 (21:58 +0200)]

Add temporary helper script for Alpaca chat
Georgi Gerganov [Sun, 19 Mar 2023 19:57:28 +0000 (21:57 +0200)]

fix coloring of last `n_batch` of prompt, and refactor line input (#221)
Rickey Bowers Jr [Sun, 19 Mar 2023 19:44:30 +0000 (13:44 -0600)]

* fix coloring of last `n_batch` of prompt, and refactor line input
* forgot the newline that needs to be sent to the model
* (per #283) try to force flush of color reset in SIGINT handler

Support for multiple reverse prompts. (#299)
tjohnman [Sun, 19 Mar 2023 19:33:06 +0000 (20:33 +0100)]

Co-authored-by: Johnman <>
Co-authored-by: Johnman <redacted>

Improved quantize script (#222)
Suaj Carrot [Sun, 19 Mar 2023 18:38:44 +0000 (12:38 -0600)]

* Improved quantize script

I improved the quantize script by adding error handling and by allowing many models to be selected for quantization at once from the command line. I also converted it to Python for generality as well as extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the reviews made by @mattsta. The parallelization suggestion is still to be revised, but code for it was still added (commented out).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has been translated correctly to Python now.

* Added support for Windows and updated README to use this script

New code to set the name of the quantize script binary depending on the platform has been added (quantize.exe if working on Windows) and the README.md file has been updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo regarding the new filenames of the quantized models and removed the shell=True parameter in the subprocess.run call as it was conflicting with the list of parameters.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This was making the automatic help message suggest the program's usage as being literally "$ Quantization Script [arguments]". It should now be something like "$ python3 quantize.py [arguments]".

Make prompt randomization optional. (#300)
tjohnman [Sun, 19 Mar 2023 18:36:19 +0000 (19:36 +0100)]

Co-authored-by: Johnman <>

Respect the maximum number of tokens in interactive. (#298)
tjohnman [Sun, 19 Mar 2023 18:31:17 +0000 (19:31 +0100)]

Co-authored-by: Johnman <redacted>
Co-authored-by: Georgi Gerganov <redacted>

Add --ignore-eos parameter (#181)
slaren [Sun, 19 Mar 2023 18:22:48 +0000 (19:22 +0100)]

Co-authored-by: Georgi Gerganov <redacted>
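
A sketch of one way the flag can work, assuming it suppresses the end-of-text logit before sampling (llama_token_eos() per the C API of this era; the params field is an assumption):

    // Sketch: with --ignore-eos, zero the end-of-text logit before sampling
    // so the token is strongly suppressed and generation keeps going.
    if (params.ignore_eos) {
        logits[llama_token_eos()] = 0.0f;
    }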

interactive mode: print '\n' in sigint_handler; this flushes stdout and thus ensures the color reset (#283)
Qingyou Meng [Sun, 19 Mar 2023 18:10:00 +0000 (02:10 +0800)]