git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 years agoreadme : add C#/.NET bindings repo (#1409)
Rinne [Fri, 12 May 2023 05:39:40 +0000 (13:39 +0800)]
readme : add C#/.NET bindings repo (#1409)

2 years agoggml : remove bit shuffling (#1405)
Georgi Gerganov [Thu, 11 May 2023 21:23:08 +0000 (00:23 +0300)]
ggml : remove bit shuffling (#1405)

* ggml : remove Q4_0 bit shuffling (ARM NEON)

* ggml : remove Q4_1 bit shuffling (ARM NEON + reference)

* ggml : nibbles_from_floats() + bytes_from_nibbles() (ARM NEON)

* ggml : remove Q4_2 bit shuffling (WIP, BROKEN)

* ggml : remove Q5_0 bit shuffling (ARM NEON)

* ggml : 2x faster scalar implementations

* ggml : remove Q5_1 bit shuffling (ARM NEON + scalar)

* ggml : simplify scalar dot

* ggml : remove WASM SIMD bit shuffling + remove vzip for ARM 32-bit

* ggml : fix Q4_1 quantization

* ggml : update cuBLAS + normalize variable names

* ggml : remove Q4_2 mode

* ggml : minor formatting

* ggml : fix Q5_0 quantization

* scripts : add script for measuring the time per token

* AVX implementations (#1370)

* ggml : uniform 5th bit extraction

* llama : produce error upon loading old model files

* llama : fix model magic/version write

* ggml : speed-up Q5_0 + Q5_1 at 4 threads

* ggml : preserve old Q4 and Q5 formats

* ggml : simplify Q8_1 - no need for low / high sums anymore

* ggml : fix Q8_0 and Q8_1 rounding

* Revert "AVX implementations (#1370)"

This reverts commit 948d124837f9d287d8490f41338e0e4cceb0814f.

* ggml : fix AVX2 implementation

* sha : update hashes for 7B and 13B

* readme : update timings + remove warning banner

* llama : update v2 PR number to 1405

* ggml : fix WASM comments

* ggml : back to original bit order

* readme : add note that Q4 and Q5 have been changed

* llama : fix return for unknown version

---------

Co-authored-by: Stephan Walter <redacted>
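
A rough sketch of the nibble packing this series converges on: two 4-bit quantized values share a byte, with the first half of the block in the low nibbles and the second half in the high nibbles, so the dot product can split the halves without shuffle/zip instructions. The helper names are hypothetical and this is illustrative only, not the actual ggml code.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: pack a block of 32 4-bit values so that the first 16
// occupy the low nibbles and the last 16 the high nibbles. Helper names are
// hypothetical, not ggml symbols.
static void pack_q4_block(const uint8_t vals[32], uint8_t qs[16]) {
    for (int j = 0; j < 16; ++j) {
        qs[j] = (uint8_t)((vals[j] & 0x0F) | ((vals[j + 16] & 0x0F) << 4));
    }
}

static void unpack_q4_block(const uint8_t qs[16], uint8_t vals[32]) {
    for (int j = 0; j < 16; ++j) {
        vals[j]      = qs[j] & 0x0F;  // low nibbles  -> first half of the block
        vals[j + 16] = qs[j] >> 4;    // high nibbles -> second half of the block
    }
}

int main() {
    uint8_t vals[32], qs[16], out[32];
    for (int i = 0; i < 32; ++i) vals[i] = (uint8_t)(i & 0x0F);
    pack_q4_block(vals, qs);
    unpack_q4_block(qs, out);
    for (int i = 0; i < 32; ++i) printf("%u ", (unsigned) out[i]);  // round-trips the inputs
    printf("\n");
    return 0;
}
```
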
2 years agoprompts : model agnostic DAN (#1304)
CRD716 [Thu, 11 May 2023 15:10:19 +0000 (10:10 -0500)]
prompts : model agnostic DAN (#1304)

* add model-agnostic dan prompt

* quick readme update

* save a token

* Revert "quick readme update"

This reverts commit 8dc342c069cbdca8ce63ad974becec6fc844e1e4.

2 years agomain : add option to save full output to session (#1338)
Evan Jones [Wed, 10 May 2023 15:37:14 +0000 (11:37 -0400)]
main : add option to save full output to session (#1338)

* main : add option to save full output to session

* split behavior into --session and --prompt-cache

* restore original implementation with new names

* PR comments

* move the check for incompatible parameters to gpt_params_parse

* Fix whitespace

Co-authored-by: DannyDaemonic <redacted>
---------

Co-authored-by: DannyDaemonic <redacted>
2 years agoLocale fix for Windows (#1379)
DannyDaemonic [Tue, 9 May 2023 17:53:28 +0000 (10:53 -0700)]
Locale fix for Windows (#1379)

2 years agouse pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler (#1314)
Sami Farin [Tue, 9 May 2023 12:29:20 +0000 (15:29 +0300)]
use pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler (#1314)

* use pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler

Tested with a 13B model.

* use _mm_pause() in busyloop

* use _mm_pause() in busyloop on x86_64 to reduce power consumption
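
A minimal sketch of the busy-wait pattern this commit introduces, assuming a plain spin on an atomic flag; the helper names are made up, and the real change lives in ggml's busy loop:

```cpp
#include <atomic>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>   // _mm_pause
#endif

// Hint to the CPU that we are spin-waiting so it can relax the pipeline
// (lower power and heat) instead of speculating aggressively.
static inline void spin_pause() {
#if defined(__x86_64__) || defined(_M_X64)
    _mm_pause();
#endif
}

// Example: wait until another thread flips `ready`.
static void busy_wait(const std::atomic<bool> & ready) {
    while (!ready.load(std::memory_order_acquire)) {
        spin_pause();
    }
}

int main() {
    std::atomic<bool> ready{true};  // already set, so busy_wait returns immediately
    busy_wait(ready);
    return 0;
}
```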

2 years agoInterface improvements and `--multiline-input` (previously `--author-mode`) (#1040)
DannyDaemonic [Tue, 9 May 2023 02:45:48 +0000 (19:45 -0700)]
Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040)

* Interface improvements
* Multiline input
* Track character width
* Works with all characters and control codes + Windows console fixes

2 years agoreadme : add notice about upcoming breaking change
Georgi Gerganov [Mon, 8 May 2023 19:52:18 +0000 (22:52 +0300)]
readme : add notice about upcoming breaking change

2 years agoreadme : add TOC and Pygmalion instructions (#1359)
AlpinDale [Mon, 8 May 2023 16:33:30 +0000 (21:03 +0430)]
readme : add TOC and Pygmalion instructions (#1359)

2 years agollama : fix hparams shadow (#1367)
Pavol Rusnak [Mon, 8 May 2023 14:48:21 +0000 (16:48 +0200)]
llama : fix hparams shadow (#1367)

fixes #1363

2 years agollama : require first token to be BOS (#1303)
Georgi Gerganov [Mon, 8 May 2023 14:41:54 +0000 (17:41 +0300)]
llama : require first token to be BOS (#1303)

* llama : require first token to be BOS

* scripts : add ppl-run-all.sh

* perplexity : add BOS for each chunk

* readme : update perplexity values after BOS fix

* perplexity : add clarifying comments
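
A small sketch of what "add BOS for each chunk" amounts to, assuming a LLaMA-style vocabulary where the BOS id is 1; the token ids and chunk size below are placeholders:

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int bos_id = 1;                                // LLaMA BOS token id (assumption)
    std::vector<int> text_tokens = {5, 6, 7, 8, 9, 10};  // placeholder token ids
    const size_t chunk_size = 3;

    // Evaluate the text in fixed-size chunks, making sure every chunk the model
    // sees starts with a BOS token.
    for (size_t start = 0; start < text_tokens.size(); start += chunk_size) {
        std::vector<int> chunk = {bos_id};
        for (size_t i = start; i < start + chunk_size && i < text_tokens.size(); ++i) {
            chunk.push_back(text_tokens[i]);
        }
        printf("chunk starting at %zu has %zu tokens (incl. BOS)\n", start, chunk.size());
    }
    return 0;
}
```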

2 years agoconvert: add ability to convert safetensors files (#1276)
ubik2 [Mon, 8 May 2023 11:54:26 +0000 (04:54 -0700)]
convert: add ability to convert safetensors files (#1276)

* when loading a safetensors file, ignore the metadata header
* check for safetensors files first, and only use PyTorch versions when safetensors aren't available
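
For context, a hedged sketch of the safetensors layout the converter has to deal with: an 8-byte little-endian length, a JSON header describing each tensor, then the raw tensor data. This is not the convert.py code, just an illustration in C++:

```cpp
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <string>

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s model.safetensors\n", argv[0]); return 1; }

    std::ifstream f(argv[1], std::ios::binary);
    if (!f) { fprintf(stderr, "failed to open %s\n", argv[1]); return 1; }

    // First 8 bytes: little-endian uint64 giving the JSON header size.
    uint8_t len_bytes[8];
    f.read(reinterpret_cast<char *>(len_bytes), 8);
    uint64_t header_len = 0;
    for (int i = 7; i >= 0; --i) header_len = (header_len << 8) | len_bytes[i];

    // Then the JSON header itself:
    // {"tensor.name": {"dtype": "F16", "shape": [...], "data_offsets": [a, b]}, ...}
    // Offsets are relative to the end of the header (8 + header_len bytes in).
    std::string header(header_len, '\0');
    f.read(&header[0], (std::streamsize) header_len);

    printf("header (%llu bytes): %.200s...\n", (unsigned long long) header_len, header.c_str());
    return 0;
}
```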

2 years agoDocumented CUDA reproducibility, added warning (#1346)
Johannes Gäßler [Mon, 8 May 2023 00:42:01 +0000 (02:42 +0200)]
Documented CUDA reproducibility, added warning (#1346)

2 years agoCI: add Windows CLBlast and OpenBLAS builds (#1277)
Henri Vasserman [Sun, 7 May 2023 11:20:09 +0000 (14:20 +0300)]
CI: add Windows CLBlast and OpenBLAS builds (#1277)

* Add OpenCL and CLBlast support

* Add OpenBLAS support

* Remove testing from matrix

* change build name to 'clblast'

2 years agoggml : Allow usage of CLBlast alongside Accelerate.framework (#1336)
swittk [Sun, 7 May 2023 03:03:23 +0000 (10:03 +0700)]
ggml : Allow usage of CLBlast alongside Accelerate.framework (#1336)

Minor edit in ggml.c, which would otherwise prevent OpenCL from loading at all if GGML_USE_ACCELERATE was defined.
Minor speedup in prompt eval time.

2 years agoRemove default arguments from sampling functions (#1343)
Jed Fox [Sat, 6 May 2023 21:01:47 +0000 (17:01 -0400)]
Remove default arguments from sampling functions (#1343)

2 years agomakefile: automatic Arch Linux detection (#1332)
DaniAndTheWeb [Fri, 5 May 2023 21:57:14 +0000 (23:57 +0200)]
makefile: automatic Arch Linux detection (#1332)

This commit ports a detection method used in koboldcpp's Makefile to automatically set the -lcblas option on Arch Linux.

2 years agoci : add cublas to windows release (#1271)
Erik Scholz [Fri, 5 May 2023 20:56:09 +0000 (22:56 +0200)]
ci : add cublas to windows release (#1271)

2 years agoreadme: add missing info (#1324)
Pavol Rusnak [Fri, 5 May 2023 14:43:36 +0000 (16:43 +0200)]
readme: add missing info (#1324)

2 years agoFix for OpenCL / CLBlast builds on macOS. (#1329)
Ionoclast Laboratories [Fri, 5 May 2023 12:18:21 +0000 (08:18 -0400)]
Fix for OpenCL / CLBlast builds on macOS. (#1329)

2 years agoConvert.py @staticmethod (#1327)
Benjamin Lecaillon [Fri, 5 May 2023 00:17:07 +0000 (02:17 +0200)]
Convert.py @staticmethod (#1327)

* Line 698 needs a @staticmethod decorator

otherwise unpickle.load() throws an error because the method is not callable

* Update convert.py

---------

Co-authored-by: Ivan Stepanov <redacted>
2 years agoquantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301)
slaren [Thu, 4 May 2023 22:58:56 +0000 (00:58 +0200)]
quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301)

2 years agoWrap exceptions in std::exception to produce verbose output on exceptions. (#1316)
Ivan Stepanov [Thu, 4 May 2023 16:56:27 +0000 (19:56 +0300)]
Wrap exceptions in std::exception to produce verbose output on exceptions. (#1316)

2 years agoconvert: support DT_BF16 tensors (#1309)
Ivan Stepanov [Thu, 4 May 2023 16:54:37 +0000 (19:54 +0300)]
convert: support DT_BF16 tensors (#1309)

Co-authored-by: Pavol Rusnak <redacted>
2 years agoreadme : add OpenBuddy link (#1321)
44670 [Thu, 4 May 2023 16:33:31 +0000 (00:33 +0800)]
readme : add OpenBuddy link (#1321)

2 years agomain : add --in-suffix option (#1318)
44670 [Thu, 4 May 2023 15:41:12 +0000 (23:41 +0800)]
main : add --in-suffix option (#1318)

* adding --in-suffix option

* print input suffix before generation

2 years agoggml : change immintrin.h to intrin.h for compatibility (#1307)
Ron Jailall [Thu, 4 May 2023 15:05:59 +0000 (11:05 -0400)]
ggml : change immintrin.h to intrin.h for compatibility (#1307)

* change immintrin.h to intrin.h for compatibility

Building on Windows 11 ARM throws an error on this line. It seems that using intrin.h covers both x86 and ARM.

* conditional def of intrin.h

* fix typo in ggml.c
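
A sketch of the kind of conditional include this amounts to (not the exact preprocessor block in ggml.c): MSVC's <intrin.h> covers both x86 and ARM targets, while GCC/Clang x86 builds include <immintrin.h> directly.

```cpp
// Illustrative guard only, not the exact block in ggml.c.
#if defined(_MSC_VER)
    #include <intrin.h>      // MSVC: covers x86 and ARM intrinsics
#elif defined(__x86_64__) || defined(__i386__)
    #include <immintrin.h>   // GCC/Clang on x86
#endif

#include <cstdio>

int main() {
    printf("intrinsics header selected at compile time\n");
    return 0;
}
```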

2 years agoOnly escape prompts when used with `-e` (#1311)
DannyDaemonic [Thu, 4 May 2023 12:08:25 +0000 (05:08 -0700)]
Only escape prompts when used with `-e` (#1311)

2 years agoUpdate main's README.md with new features (#1296)
DannyDaemonic [Thu, 4 May 2023 10:02:59 +0000 (03:02 -0700)]
Update main's README.md with new features (#1296)

2 years agofix #1224 reverse prompt and multi line (#1297)
Tomas [Thu, 4 May 2023 10:02:30 +0000 (17:02 +0700)]
fix #1224 reverse prompt and multi line (#1297)

* fix reverse prompt and multi line

* Code Formatting

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoggml : vectorize Q8_0 quantization
Georgi Gerganov [Wed, 3 May 2023 20:24:20 +0000 (23:24 +0300)]
ggml : vectorize Q8_0 quantization

https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531

2 years agoexamples : read chat prompts from a template file (#1196)
khimaros [Wed, 3 May 2023 17:58:11 +0000 (10:58 -0700)]
examples : read chat prompts from a template file (#1196)

2 years agominor : fix whitespaces (#1302)
Georgi Gerganov [Wed, 3 May 2023 17:09:42 +0000 (20:09 +0300)]
minor : fix whitespaces (#1302)

2 years agominor : fix trailing whitespaces
Georgi Gerganov [Wed, 3 May 2023 15:43:23 +0000 (18:43 +0300)]
minor : fix trailing whitespaces

2 years agoscripts : platform independent script to verify sha256 checksums (#1203)
KASR [Wed, 3 May 2023 15:31:28 +0000 (17:31 +0200)]
scripts : platform independent script to verify sha256 checksums (#1203)

* python script to verify the checksum of the llama models

Added Python script for verifying SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.

* Update README.md

update to the readme for improved readability and to explain the usage of the python checksum verification script

* update the verification script

I've extended the script based on suggestions by @prusnak

The script now checks the available RAM; if there is enough to check the file at once it will do so, otherwise the file is read in chunks.

* minor improvement

small change so that the available RAM is checked rather than the total RAM

* remove the part of the code that reads the file at once if enough ram is available

Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.

* Update verify-checksum-models.py

quick fix to pass the git check
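
The script itself is Python; as a sketch of the same chunked-hashing idea in C++, here is a version using OpenSSL's EVP API (build with -lcrypto). It never loads the whole model file, feeding the digest fixed-size chunks instead:

```cpp
#include <cstdio>
#include <vector>
#include <openssl/evp.h>

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    FILE * f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX * ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);

    std::vector<unsigned char> buf(1 << 20);  // 1 MiB chunks
    size_t n;
    while ((n = fread(buf.data(), 1, buf.size(), f)) > 0) {
        EVP_DigestUpdate(ctx, buf.data(), n);  // hash incrementally, chunk by chunk
    }
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len = 0;
    EVP_DigestFinal_ex(ctx, md, &md_len);
    EVP_MD_CTX_free(ctx);

    for (unsigned int i = 0; i < md_len; ++i) printf("%02x", md[i]);
    printf("  %s\n", argv[1]);
    return 0;
}
```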

2 years agoexamples : various prompt and example fixes (#1298)
CRD716 [Wed, 3 May 2023 15:26:47 +0000 (10:26 -0500)]
examples : various prompt and example fixes (#1298)

* fix dan.txt

* miku prompt improvements

* use common characters

2 years agollama : only copy used KV cache in get / set state (#1272)
Evan Jones [Wed, 3 May 2023 02:26:13 +0000 (22:26 -0400)]
llama : only copy used KV cache in get / set state (#1272)

* llama : only copy used KV cache in get / set state

* switch to ggml for copying k, v

* avoid designated initializers

2 years agoProcess escape sequences given in prompts (#1173)
DannyDaemonic [Wed, 3 May 2023 01:46:20 +0000 (18:46 -0700)]
Process escape sequences given in prompts (#1173)
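
A hedged sketch of the kind of escape processing `-e` enables, so that a literal "\n" typed on the command line becomes a real newline; the real implementation may handle more sequences:

```cpp
#include <cstdio>
#include <string>

// Rough sketch only: translate common backslash escapes in a prompt string.
static std::string process_escapes(const std::string & in) {
    std::string out;
    out.reserve(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '\\' && i + 1 < in.size()) {
            switch (in[++i]) {
                case 'n':  out += '\n'; break;
                case 't':  out += '\t'; break;
                case '\\': out += '\\'; break;
                case '"':  out += '"';  break;
                default:   out += '\\'; out += in[i]; break;  // leave unknown sequences alone
            }
        } else {
            out += in[i];
        }
    }
    return out;
}

int main() {
    printf("%s", process_escapes("User:\\nHello\\tthere\\n").c_str());
    return 0;
}
```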

2 years agoHandle signals properly on Windows (#1123)
DannyDaemonic [Wed, 3 May 2023 01:01:57 +0000 (18:01 -0700)]
Handle signals properly on Windows (#1123)

2 years agoCall sh on build-info.sh (#1294)
DannyDaemonic [Wed, 3 May 2023 00:52:35 +0000 (17:52 -0700)]
Call sh on build-info.sh (#1294)

2 years agofix build-info.h for git submodules (#1289)
kuvaus [Wed, 3 May 2023 00:43:43 +0000 (03:43 +0300)]
fix build-info.h for git submodules (#1289)

* make git build info work with submodules

---------

Co-authored-by: Green Sky <redacted>
2 years agofix missing parameters in `llama_init_from_gpt_params` (#1293)
slaren [Tue, 2 May 2023 23:36:45 +0000 (01:36 +0200)]
fix missing parameters in `llama_init_from_gpt_params` (#1293)

2 years agoexamples : add llama_init_from_gpt_params() common function (#1290)
Ron Evans [Tue, 2 May 2023 20:39:51 +0000 (22:39 +0200)]
examples : add llama_init_from_gpt_params() common function (#1290)

Signed-off-by: deadprogram <redacted>
2 years agollama : fix compile warnings
Georgi Gerganov [Tue, 2 May 2023 20:09:08 +0000 (23:09 +0300)]
llama : fix compile warnings

2 years agoggml : fix 32-bit ARM
Georgi Gerganov [Tue, 2 May 2023 19:14:50 +0000 (22:14 +0300)]
ggml : fix 32-bit ARM

2 years agoexamples : improve vertical alignment of a few variables (#1286)
Ron Evans [Tue, 2 May 2023 17:53:52 +0000 (19:53 +0200)]
examples : improve vertical alignment of a few variables (#1286)

Signed-off-by: deadprogram <redacted>
2 years agoggml : fix ppc64le build error and make cmake detect Power processors (#1284)
Marvin Gießing [Tue, 2 May 2023 16:42:16 +0000 (18:42 +0200)]
ggml : fix ppc64le build error and make cmake detect Power processors (#1284)

* Fix ppc64le build issue

* Added support to detect ppc64* processors

2 years agollama : allow 0 as a seed number. (#1275)
Robert Brisita [Tue, 2 May 2023 16:23:44 +0000 (12:23 -0400)]
llama : allow 0 as a seed number. (#1275)

2 years agomain : switch input_noecho to input_echo to remove negation (#979)
Ron Evans [Tue, 2 May 2023 16:13:26 +0000 (18:13 +0200)]
main : switch input_noecho to input_echo to remove negation (#979)

Signed-off-by: deadprogram <redacted>
2 years agoggml: add names to tensors (#1268)
slaren [Tue, 2 May 2023 14:03:00 +0000 (16:03 +0200)]
ggml: add names to tensors (#1268)

* ggml: add names to tensors

* minor improvements to dot file formatting

2 years agoAdd git-based build information for better issue tracking (#1232)
DannyDaemonic [Mon, 1 May 2023 16:23:47 +0000 (09:23 -0700)]
Add git-based build information for better issue tracking (#1232)

* Add git-based build information for better issue tracking

* macOS fix

* "build (hash)" and "CMAKE_SOURCE_DIR" changes

* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages

* Fix conditional dependency on missing target

* Broke out build-info.cmake, added find_package fallback, added build info to all examples, and added dependencies to the Makefile

* 4 space indenting for cmake, attempt to clean up my mess in Makefile

* Short hash, less fancy Makefile, and don't modify build-info.h if its contents wouldn't change

2 years agocuBLAS: refactor and optimize f16 mat mul performance (#1259)
slaren [Mon, 1 May 2023 16:11:07 +0000 (18:11 +0200)]
cuBLAS: refactor and optimize f16 mat mul performance (#1259)

* cuBLAS: refactor, convert fp16 to fp32 on device

* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16

* fix build

* cuBLAS: update block_q5_1

2 years agollama : update stubs for systems without mmap and mlock (#1266)
xloem [Mon, 1 May 2023 12:58:51 +0000 (08:58 -0400)]
llama : update stubs for systems without mmap and mlock (#1266)

Co-authored-by: John Doe <redacted>
2 years agoggml : fix ggml_used_mem() (#1264)
Kerfuffle [Mon, 1 May 2023 11:56:07 +0000 (05:56 -0600)]
ggml : fix ggml_used_mem() (#1264)

2 years agollama : fix session load / save (#1263)
Georgi Gerganov [Mon, 1 May 2023 11:54:59 +0000 (14:54 +0300)]
llama : fix session load / save (#1263)

2 years agocuBLAS: fall back to pageable memory if pinned alloc fails (#1233)
slaren [Mon, 1 May 2023 11:32:22 +0000 (13:32 +0200)]
cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)

* cuBLAS: fall back to pageable memory if pinned alloc fails

* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
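
A sketch of the fallback pattern described above, not the actual ggml-cuda allocator: try pinned (page-locked) host memory first, degrade to pageable memory if the allocation fails, and skip pinning entirely when GGML_CUDA_NO_PINNED is set.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Prefer pinned host memory for faster host-to-device copies, but degrade
// gracefully to ordinary malloc if pinning fails or is disabled.
static void * host_alloc(size_t size, bool * is_pinned) {
    *is_pinned = false;
    if (getenv("GGML_CUDA_NO_PINNED") == nullptr) {
        void * ptr = nullptr;
        if (cudaMallocHost(&ptr, size) == cudaSuccess) {
            *is_pinned = true;
            return ptr;
        }
        fprintf(stderr, "warning: pinned alloc of %zu bytes failed, using pageable memory\n", size);
    }
    return malloc(size);
}

static void host_free(void * ptr, bool is_pinned) {
    if (is_pinned) cudaFreeHost(ptr);
    else           free(ptr);
}

int main() {
    bool pinned = false;
    void * buf = host_alloc(1 << 20, &pinned);
    printf("allocated 1 MiB of %s host memory\n", pinned ? "pinned" : "pageable");
    host_free(buf, pinned);
    return 0;
}
```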

2 years agollama : let context be const when accessing const data (#1261)
Alex Klinkhamer [Mon, 1 May 2023 07:24:20 +0000 (00:24 -0700)]
llama : let context be const when accessing const data (#1261)

2 years agoggml : fix UB (int << 31)
Georgi Gerganov [Sun, 30 Apr 2023 19:28:51 +0000 (22:28 +0300)]
ggml : fix UB (int << 31)

2 years agobuild: add armv{6,7,8} support to cmake (#1251)
Pavol Rusnak [Sun, 30 Apr 2023 18:48:38 +0000 (20:48 +0200)]
build: add armv{6,7,8} support to cmake (#1251)

- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality

2 years agocommon : better default number of threads (#934)
jon-chuang [Sun, 30 Apr 2023 18:41:35 +0000 (14:41 -0400)]
common : better default number of threads (#934)

* commit

* fix

* try-catch

* apply code review

* improve

* improve

* add macos headers

* done

* remove color

* fix windows

* minor

* fix

* Apply suggestions from code review

Co-authored-by: DannyDaemonic <redacted>
* remove

* minor

* minor

---------

Co-authored-by: jon-chuang <redacted>
Co-authored-by: DannyDaemonic <redacted>
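
The actual change queries the number of physical/performance cores through platform-specific APIs; as a very rough stand-in, a sketch that derives a default from std::thread::hardware_concurrency() (logical processors) with a crude SMT adjustment:

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>

// Crude stand-in only: halve the logical processor count as an SMT guess,
// and fall back to a conservative default when the count is unknown.
static int default_n_threads() {
    const unsigned logical = std::thread::hardware_concurrency();
    if (logical == 0) return 4;
    return (int) std::max(1u, logical / 2);
}

int main() {
    printf("default n_threads = %d\n", default_n_threads());
    return 0;
}
```
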
2 years agoggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)
0cc4m [Sun, 30 Apr 2023 18:34:52 +0000 (20:34 +0200)]
ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)

* Implement q5_0, q5_1 and q8_0

* Work around q5_0 OpenCL issue

* Fix q8_0 dequant kernel

* Move cl kernels into ggml-opencl.c

* Use two memcpy calls for q5_0 buffer transfer

2 years agoggml : add Q5 WASM SIMD + GGML_FTYPE
Georgi Gerganov [Sun, 30 Apr 2023 16:07:00 +0000 (19:07 +0300)]
ggml : add Q5 WASM SIMD + GGML_FTYPE

2 years agoVarious fixes to mat_mul benchmark (#1253)
Stephan Walter [Sun, 30 Apr 2023 12:32:37 +0000 (12:32 +0000)]
Various fixes to mat_mul benchmark (#1253)

2 years agoggml : fix labels for GGML_OP_ALIBI
Georgi Gerganov [Sun, 30 Apr 2023 07:25:46 +0000 (10:25 +0300)]
ggml : fix labels for GGML_OP_ALIBI

2 years agoggml : fix 32-bit ARM NEON
Georgi Gerganov [Sat, 29 Apr 2023 18:34:23 +0000 (21:34 +0300)]
ggml : fix 32-bit ARM NEON

2 years agoggml : use vzip instead of vuzp for consistency
Georgi Gerganov [Sat, 29 Apr 2023 18:12:56 +0000 (21:12 +0300)]
ggml : use vzip instead of vuzp for consistency

2 years agoggml : fix visibility and unused warnings
Georgi Gerganov [Sat, 29 Apr 2023 16:28:36 +0000 (19:28 +0300)]
ggml : fix visibility and unused warnings

2 years agoggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)
Georgi Gerganov [Sat, 29 Apr 2023 15:43:42 +0000 (18:43 +0300)]
ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)

2 years agoggml : adjust mul_mat_f16 work memory (#1226)
Georgi Gerganov [Sat, 29 Apr 2023 15:43:28 +0000 (18:43 +0300)]
ggml : adjust mul_mat_f16 work memory (#1226)

* llama : minor - remove explicit int64_t cast

* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS

* ggml : add asserts to guard for incorrect wsize

2 years agobuild : fix reference to old llama_util.h
Georgi Gerganov [Sat, 29 Apr 2023 10:53:12 +0000 (13:53 +0300)]
build : fix reference to old llama_util.h

2 years agoexamples : fix save-load-state + rename llama-util.h
Georgi Gerganov [Sat, 29 Apr 2023 10:48:11 +0000 (13:48 +0300)]
examples : fix save-load-state + rename llama-util.h

2 years agocommon : change default parameters to pre-#1126 (#1223)
Georgi Gerganov [Sat, 29 Apr 2023 06:51:06 +0000 (09:51 +0300)]
common : change default parameters to pre-#1126 (#1223)

2 years agollama : new sampling algorithms (#1126)
Ivan Stepanov [Sat, 29 Apr 2023 05:34:41 +0000 (08:34 +0300)]
llama : new sampling algorithms (#1126)

* Sample interface, new samplers.

New samplers:
- locally typical sampling
- tail free sampling
- frequency and presence penalty
- mirostat

Ignore EOS fix: -inf should be used.

* mirostat

* Added --logit-bias and --no-penalize-nl, removed std::span

* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)

* Save and load example adjust

* Tests

* Windows build fix

* Windows test fix
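
As one example of the new samplers' ideas, a sketch of frequency and presence penalties in the OpenAI style (an assumed definition; this is not the llama.cpp sampling API): tokens already present in the recent context get their logits pushed down once per occurrence, plus a flat amount for having appeared at all.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

static void apply_repetition_penalties(
        std::vector<float> & logits,
        const std::vector<int> & last_tokens,
        float alpha_frequency,
        float alpha_presence) {
    // Count how often each token appears in the recent context.
    std::unordered_map<int, int> counts;
    for (int tok : last_tokens) counts[tok]++;

    // Penalize per occurrence (frequency) and per appearance (presence).
    for (const auto & [tok, count] : counts) {
        if (tok >= 0 && tok < (int) logits.size()) {
            logits[tok] -= (float) count * alpha_frequency + alpha_presence;
        }
    }
}

int main() {
    std::vector<float> logits = {1.0f, 2.0f, 3.0f, 4.0f};
    std::vector<int> last_tokens = {2, 2, 3};        // token 2 seen twice, token 3 once
    apply_repetition_penalties(logits, last_tokens, 0.5f, 0.1f);
    for (float l : logits) printf("%.2f ", l);       // 1.00 2.00 1.90 3.40
    printf("\n");
    return 0;
}
```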

2 years agocuBLAS: use host pinned memory and dequantize while copying (#1207)
slaren [Sat, 29 Apr 2023 00:04:18 +0000 (02:04 +0200)]
cuBLAS: use host pinned memory and dequantize while copying (#1207)

* cuBLAS: dequantize simultaneously while copying memory

* cuBLAS: use host pinned memory

* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory

* cuBLAS: also pin kv cache

* fix rebase

2 years agocuBLAS: non-contiguous tensor support (#1215)
Henri Vasserman [Fri, 28 Apr 2023 23:31:56 +0000 (02:31 +0300)]
cuBLAS: non-contiguous tensor support (#1215)

* Cuda: non-contiguous tensor support

* remove extra stuff

* rename

* fix error

* more fixes, now OpenBLAS and CLBlast build too

* now then?

2 years agoRemove Q4_3 which is no better than Q5 (#1218)
Stephan Walter [Fri, 28 Apr 2023 23:10:43 +0000 (23:10 +0000)]
Remove Q4_3 which is no better than Q5 (#1218)

2 years agoreadme : update hot topics
Georgi Gerganov [Fri, 28 Apr 2023 18:32:52 +0000 (21:32 +0300)]
readme : update hot topics

2 years agoggml : sync ggml (ggml_alibi)
Georgi Gerganov [Fri, 28 Apr 2023 17:37:43 +0000 (20:37 +0300)]
ggml : sync ggml (ggml_alibi)

2 years agoexamples : add Jeopardy example (#1168)
CRD716 [Fri, 28 Apr 2023 16:13:33 +0000 (11:13 -0500)]
examples : add Jeopardy example (#1168)

* Basic Setup

* Prevent Results.txt from coming up

* Prefixes, Line separators, etc

* editorcheck

* introduction to give more consistent results

* Basic graph thing

* Grading, ready for testing!

* Y'all ready to get funky?

* fix column removal stuff

* missed a few

2 years agollama : add session file format and saved sessions in main (#1169)
Evan Jones [Fri, 28 Apr 2023 15:59:37 +0000 (11:59 -0400)]
llama : add session file format and saved sessions in main (#1169)

2 years agoggml : add helper debug printf in soft_max
Georgi Gerganov [Fri, 28 Apr 2023 14:58:44 +0000 (17:58 +0300)]
ggml : add helper debug printf in soft_max

2 years agoggml : add CLBlast support (#1164)
0cc4m [Fri, 28 Apr 2023 14:57:16 +0000 (16:57 +0200)]
ggml : add CLBlast support (#1164)

* Allow use of OpenCL GPU-based BLAS using ClBlast instead of OpenBLAS for context processing

* Improve ClBlast implementation, avoid recreating buffers, remove redundant transfers

* Finish merge of ClBlast support

* Move CLBlast implementation to separate file

Add buffer reuse code (adapted from slaren's cuda implementation)

* Add q4_2 and q4_3 CLBlast support, improve code

* Double CLBlast speed by disabling OpenBLAS thread workaround

Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
* Fix device selection env variable names

* Fix cast in opencl kernels

* Add CLBlast to CMakeLists.txt

* Replace buffer pool with static buffers a, b, qb, c

Fix compile warnings

* Fix typos, use GGML_TYPE defines, improve code

* Improve btype dequant kernel selection code, add error if type is unsupported

* Improve code quality

* Move internal stuff out of header
* Use internal enums instead of CLBlast enums
* Remove leftover C++ includes and defines
* Make event use easier to read

Co-authored-by: Henri Vasserman <redacted>
* Use c compiler for opencl files

* Simplify code, fix include

* First check error, then release event

* Make globals static, fix indentation

* Rename dequant kernels file to conform with other file names

* Fix import cl file name

---------

Co-authored-by: Concedo <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Henri Vasserman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoCorrecting link to w64devkit (#1214)
Folko-Ven [Fri, 28 Apr 2023 14:22:48 +0000 (19:22 +0500)]
Correcting link to w64devkit (#1214)

Correcting link to w64devkit (change seeto to skeeto).

2 years agoAdd Manjaro CUDA include and lib dirs to Makefile (#1212)
Johannes Gäßler [Fri, 28 Apr 2023 13:40:32 +0000 (15:40 +0200)]
Add Manjaro CUDA include and lib dirs to Makefile (#1212)

2 years agoadd avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)
Yann Follet [Fri, 28 Apr 2023 11:59:48 +0000 (19:59 +0800)]
add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)

2 years agoggml : slightly faster AVX2 implementation for Q5 (#1197)
Stephan Walter [Wed, 26 Apr 2023 20:26:42 +0000 (20:26 +0000)]
ggml : slightly faster AVX2 implementation for Q5 (#1197)

2 years agoreadme : add quantization info
Georgi Gerganov [Wed, 26 Apr 2023 20:24:42 +0000 (23:24 +0300)]
readme : add quantization info

2 years agoggml : add Q5_0 and Q5_1 quantization (#1187)
Georgi Gerganov [Wed, 26 Apr 2023 20:14:13 +0000 (23:14 +0300)]
ggml : add Q5_0 and Q5_1 quantization (#1187)

* ggml : add Q5_0 quantization (cuBLAS only)

* ggml : fix Q5_0 qh -> uint32_t

* ggml : fix q5_0 histogram stats

* ggml : q5_0 scalar dot product

* ggml : q5_0 ARM NEON dot

* ggml : q5_0 more efficient ARM NEON using uint64_t masks

* ggml : rename Q5_0 -> Q5_1

* ggml : adding Q5_0 mode

* quantize : add Q5_0 and Q5_1 to map

* ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)

---------

Co-authored-by: Stephan Walter <redacted>
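
A simplified illustration of the Q5 idea referred to above: each value stores its low 4 bits in a packed nibble and its 5th bit in the matching bit of a 32-bit qh word. The layout and scaling details of the real ggml blocks differ; this is illustrative only.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const int QK = 32;
    uint8_t vals[QK];
    for (int i = 0; i < QK; ++i) vals[i] = (uint8_t) i;  // 5-bit values 0..31

    uint8_t  qs[QK / 2] = {0};  // low nibbles: first half of block; high nibbles: second half
    uint32_t qh = 0;            // bit i = 5th bit of value i

    for (int j = 0; j < QK / 2; ++j) {
        qs[j] = (uint8_t)((vals[j] & 0x0F) | ((vals[j + QK / 2] & 0x0F) << 4));
    }
    for (int i = 0; i < QK; ++i) {
        qh |= (uint32_t)((vals[i] >> 4) & 1) << i;
    }

    // Reconstruct each 5-bit value from its nibble plus the extracted high bit.
    for (int i = 0; i < QK; ++i) {
        const uint8_t lo = (i < QK / 2) ? (uint8_t)(qs[i] & 0x0F) : (uint8_t)(qs[i - QK / 2] >> 4);
        const uint8_t hi = (uint8_t)((qh >> i) & 1);
        printf("%u ", (unsigned)(lo | (hi << 4)));
    }
    printf("\n");  // prints 0 1 2 ... 31
    return 0;
}
```
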
2 years agoAllow setting the rng seed after initialization. (#1184)
Ásgeir Bjarni Ingvarsson [Wed, 26 Apr 2023 20:08:43 +0000 (20:08 +0000)]
Allow setting the rng seed after initialization. (#1184)

The llama_set_state_data function restores the rng state to what it
was at the time llama_copy_state_data was called. But users may want
to restore the state and proceed with a different seed.

2 years agoUpdating build instructions to include BLAS support (#1183)
DaniAndTheWeb [Wed, 26 Apr 2023 20:03:03 +0000 (22:03 +0200)]
Updating build instructions to include BLAS support (#1183)

* Updated build information

First update to the build instructions to include BLAS.

* Update README.md

* Update information about BLAS

* Better BLAS explanation

Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.

* Better BLAS explanation

* BLAS for Mac

Specifying that BLAS is already supported on Macs using the Accelerate Framework.

* Clarify the effect of BLAS

* Windows Make instructions

Added the instructions to build with Make on Windows

* Fixing typo

* Fix trailing whitespace

2 years agoquantize : use `map` to assign quantization type from `string` (#1191)
Pavol Rusnak [Wed, 26 Apr 2023 16:43:27 +0000 (18:43 +0200)]
quantize : use `map` to assign quantization type from `string` (#1191)

instead of `int` (the `int` option is still supported)

This allows the following usage:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`

instead of:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
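
A sketch of the described mapping; the set of names and the numeric values below are illustrative (only q4_0 = 2 is confirmed by the example above), and the real table lives in the quantize tool:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <unordered_map>

// Accept either a type name ("q4_0") or the old numeric form ("2").
static int parse_ftype(const std::string & arg) {
    static const std::unordered_map<std::string, int> ftype_map = {
        {"q4_0", 2}, {"q4_1", 3}, {"q5_0", 8}, {"q5_1", 9}, {"q8_0", 7},  // illustrative values
    };
    const auto it = ftype_map.find(arg);
    if (it != ftype_map.end()) {
        return it->second;           // name given, e.g. "q4_0"
    }
    return atoi(arg.c_str());        // fall back to the numeric form (validate in real code)
}

int main() {
    printf("q4_0 -> %d\n", parse_ftype("q4_0"));
    printf("2    -> %d\n", parse_ftype("2"));
    return 0;
}
```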

2 years agoUpdate SHA256SUMS after quantization change (#1181)
Stephan Walter [Tue, 25 Apr 2023 21:41:56 +0000 (21:41 +0000)]
Update SHA256SUMS after quantization change (#1181)

Co-authored-by: Pavol Rusnak <redacted>
2 years agopy : cast lora_alpha to int in convert-lora-to-ggml (#1170)
ostix360 [Tue, 25 Apr 2023 21:33:08 +0000 (23:33 +0200)]
py : cast lora_alpha to int in convert-lora-to-ggml (#1170)

Co-authored-by: Pavol Rusnak <redacted>
2 years agonix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)
Pavol Rusnak [Tue, 25 Apr 2023 21:19:57 +0000 (23:19 +0200)]
nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)

2 years agoggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)
Georgi Gerganov [Tue, 25 Apr 2023 20:40:51 +0000 (23:40 +0300)]
ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)

* ggml : add Q8_0 quantization format (rename the old one to Q8_1)

* tests : fix test-quantize-fns

* ggml : finalize Q8_0 implementation

* ggml : use q4_0_q8_0 and q4_2_q8_0

* ggml : fix Q8_0 dot product bug (ARM)

* ggml : Q8_0 unroll x2

* ggml : fix bug - using wrong block type

* ggml : extend quantize_fns_t with "vec_dot_type"

* ggml : fix Q8_0 to use 255 values out of 256

* ggml : fix assert using wrong QK4_2 instead of QK4_3
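
A reference-style sketch of the Q8_0 idea: 32 values share one scale and each value becomes a signed 8-bit integer in [-127, 127]. The struct here is illustrative and not necessarily byte-identical to ggml's:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

static const int QK8 = 32;

struct block_q8 {
    float  d;          // scale
    int8_t qs[QK8];    // quantized values
};

static void quantize_block_q8(const float * x, block_q8 * b) {
    // Scale so the largest magnitude maps to +/-127 (255 usable codes, no offset).
    float amax = 0.0f;
    for (int i = 0; i < QK8; ++i) {
        const float v = std::fabs(x[i]);
        if (v > amax) amax = v;
    }
    b->d = amax / 127.0f;
    const float id = b->d != 0.0f ? 1.0f / b->d : 0.0f;
    for (int i = 0; i < QK8; ++i) {
        b->qs[i] = (int8_t) std::round(x[i] * id);
    }
}

int main() {
    float x[QK8];
    for (int i = 0; i < QK8; ++i) x[i] = 0.1f * (i - 16);
    block_q8 b;
    quantize_block_q8(x, &b);
    printf("d = %f, first quant = %d, dequant = %f (orig %f)\n",
           b.d, b.qs[0], b.qs[0] * b.d, x[0]);
    return 0;
}
```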

2 years agoggml : use full range for Q4_0 and Q4_2 quantization (#729)
unbounded [Tue, 25 Apr 2023 17:20:46 +0000 (19:20 +0200)]
ggml : use full range for Q4_0 and Q4_2 quantization (#729)

* Use full range for q4_0 quantization

By keeping the sign of the highest magnitude, we can make sure the
highest value maps to -8, which is currently unused.
This is a bit of a freebie since it is fully backwards compatible with
the current format.

* Update quantize_row_q4_0 for AVX/AVX2

* Update quantize_row_q4_0 for WASM

Untested

* Update quantize_row_q4_0 for Arm NEON

* Update quantize_row_q4_0 for PowerPC

Untested

* Use full range for q4_2 quantization
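
A reference-style sketch of the "full range" trick (illustrative, not the SIMD paths): keep the sign of the largest-magnitude value when computing the scale, so that value lands exactly on -8, the 4-bit code a symmetric max/7 scheme never uses.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

static const int QK4 = 32;

static void quantize_block_q4_full_range(const float * x, float * d_out, uint8_t * q_out) {
    // Find the value with the largest magnitude, keeping its sign.
    float max = 0.0f, amax = 0.0f;
    for (int i = 0; i < QK4; ++i) {
        const float v = std::fabs(x[i]);
        if (v > amax) { amax = v; max = x[i]; }
    }

    const float d  = max / -8.0f;                  // sign of `max` is preserved here
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    *d_out = d;

    for (int i = 0; i < QK4; ++i) {
        int q = (int)(x[i] * id + 8.5f);           // quantize, then clamp to [0, 15]
        if (q < 0)  q = 0;
        if (q > 15) q = 15;
        q_out[i] = (uint8_t) q;                    // stored code; dequant is (q - 8) * d
    }
}

int main() {
    float x[QK4];
    for (int i = 0; i < QK4; ++i) x[i] = 0.25f * (i - 20);  // largest magnitude is x[0] = -5.0
    float d; uint8_t q[QK4];
    quantize_block_q4_full_range(x, &d, q);
    printf("d = %f, q[0] = %u, dequant of x[0] = %f\n", d, (unsigned) q[0], (q[0] - 8) * d);
    return 0;
}
```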

2 years agoggml : fix bug in ggml_compute_forward_sum_f32 (#1162)
xaedes [Mon, 24 Apr 2023 21:02:02 +0000 (23:02 +0200)]
ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)

The sum over all rows is now computed instead of just the last row

2 years agoggml : export symbols (#1155)
Georgi Gerganov [Mon, 24 Apr 2023 19:18:25 +0000 (22:18 +0300)]
ggml : export symbols (#1155)

2 years agoexamples : add save_load_state example (#1150)
xaedes [Mon, 24 Apr 2023 16:23:31 +0000 (18:23 +0200)]
examples : add save_load_state example (#1150)

* add save_load_state example

* use <cstdio> instead of <iostream> and fprintf / printf instead of cout

* renamed save-load-state example files replacing underscores by dashes

2 years agollama : increase scratch buffer size for 65B (ref #1152)
Georgi Gerganov [Mon, 24 Apr 2023 15:47:03 +0000 (18:47 +0300)]
llama : increase scratch buffer size for 65B (ref #1152)

Temporary solution