git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
DannyDaemonic [Sun, 23 Apr 2023 15:37:02 +0000 (08:37 -0700)]
Added README.md for main with examples and explanations (#1139)
Georgi Gerganov [Sun, 23 Apr 2023 15:32:52 +0000 (18:32 +0300)]
ggml : do not print perf ops that have not been used at all
Georgi Gerganov [Sun, 23 Apr 2023 15:15:39 +0000 (18:15 +0300)]
ggml : better PERF prints + support "LLAMA_PERF=1 make"
Stephan Walter [Sun, 23 Apr 2023 11:01:03 +0000 (11:01 +0000)]
Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)
Pavol Rusnak [Sun, 23 Apr 2023 08:21:26 +0000 (10:21 +0200)]
readme : update gpt4all instructions (#980)
Yishuo Wang [Sun, 23 Apr 2023 07:57:05 +0000 (15:57 +0800)]
A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119)
Georgi Gerganov [Sat, 22 Apr 2023 13:31:56 +0000 (16:31 +0300)]
ggml : fix Q4_3 cuBLAS
Stephan Walter [Sat, 22 Apr 2023 13:12:29 +0000 (13:12 +0000)]
ci : trigger CI for drafts, but not most PR actions (#1125)
Stephan Walter [Sat, 22 Apr 2023 10:54:13 +0000 (10:54 +0000)]
Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)
unbounded [Sat, 22 Apr 2023 09:10:39 +0000 (11:10 +0200)]
ggml : unit test for quantization functions (#953)
* Unit test for quantization functions
Use the ggml_internal_get_quantize_fn function to loop through all
quantization formats and run a sanity check on the result.
Also add a microbenchmark that times these functions directly without
running the rest of the GGML graph.
* test-quantize-fns: CI fixes
Fix issues uncovered in CI
- need to use sizes divisible by 32*8 for loop unrolling
- use intrinsic header that should work on Mac
* test-quantize: remove
Per PR comment, subsumed by test-quantize-fns
* test-quantize: fix for q8_0 intermediates
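A minimal sketch of the round-trip sanity check described above, with a stand-in 4-bit-style quantize/dequantize pair in place of the real function table from ggml_internal_get_quantize_fn (whose layout is not shown in this log):
```
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in 4-bit-style quantize/dequantize pair; the real test instead loops
// over every format via ggml_internal_get_quantize_fn.
static void quantize_ref(const float * x, int8_t * q, float * d, int n) {
    float amax = 0.0f;
    for (int i = 0; i < n; ++i) amax = std::max(amax, std::fabs(x[i]));
    *d = amax / 7.0f;                           // scale for a signed 4-bit range
    for (int i = 0; i < n; ++i) {
        q[i] = (int8_t) std::max(-8L, std::min(7L, std::lround(x[i] / *d)));
    }
}

static void dequantize_ref(const int8_t * q, float d, float * y, int n) {
    for (int i = 0; i < n; ++i) y[i] = q[i] * d;
}

int main() {
    const int n = 32 * 8;                       // divisible by 32*8, per the CI fix above
    std::vector<float> x(n), y(n);
    std::vector<int8_t> q(n);
    for (int i = 0; i < n; ++i) x[i] = 0.1f + 2.0f * std::cos(i + 0.3f);

    float d = 0.0f;
    quantize_ref(x.data(), q.data(), &d, n);
    dequantize_ref(q.data(), d, y.data(), n);

    double err2 = 0.0;
    for (int i = 0; i < n; ++i) err2 += (x[i] - y[i]) * (x[i] - y[i]);
    const double rmse = std::sqrt(err2 / n);
    std::printf("rmse = %f\n", rmse);           // sanity check on the result
    return rmse < 0.1 ? 0 : 1;                  // loose, arbitrary threshold
}
```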
wbpxre150 [Sat, 22 Apr 2023 08:56:35 +0000 (16:56 +0800)]
llama : print timings on ctrl+c exit (#1021)
* print timings on ctrl+c exit
* remove redundant free memory call.
* add global pointer to ctx.
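A sketch of how those three bullets fit together, with stand-ins for the llama.h pieces (stdio in a signal handler is simplified here and not strictly async-signal-safe):
```
#include <csignal>
#include <cstdio>
#include <unistd.h>

// Stand-ins for the llama.h context and timing report used by the real example.
struct llama_context {};
static void llama_print_timings(llama_context *) { std::puts("...timings..."); }

static llama_context * g_ctx = nullptr;  // the "global pointer to ctx" bullet

static void sigint_handler(int) {
    if (g_ctx) {
        llama_print_timings(g_ctx);      // print timings on ctrl+c exit
    }
    _exit(130);                          // exit immediately from the handler
}

int main() {
    llama_context ctx;
    g_ctx = &ctx;
    std::signal(SIGINT, sigint_handler);
    pause();                             // wait for Ctrl+C
}
```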
eiery [Sat, 22 Apr 2023 08:27:05 +0000 (04:27 -0400)]
llama : have n_batch default to 512 (#1091)
* set default n_batch to 512 when using BLAS
* spacing
* alternate implementation of setting different n_batch for BLAS
* set n_batch to 512 for all cases
Howard Su [Sat, 22 Apr 2023 08:18:20 +0000 (16:18 +0800)]
cmake : fix build under Windows when enabling BUILD_SHARED_LIBS (#1100)
* Fix build under Windows when enabling BUILD_SHARED_LIBS
* Make the AVX512 test on Windows build the shared libs
Georgi Gerganov [Sat, 22 Apr 2023 08:08:12 +0000 (11:08 +0300)]
ggml : fix AVX build + update to new Q8_0 format
Georgi Gerganov [Sat, 22 Apr 2023 07:55:35 +0000 (10:55 +0300)]
ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)
* ggml : prefer vzip to vuzp
This way we always use the same type of instruction across all quantizations
* ggml : alternative Q4_3 implementation using modified Q8_0
* ggml : fix Q4_3 scalar implementation
* ggml : slight improvement of Q4_3 - no need for loop unrolling
* ggml : fix AVX paths for Q8_0 quantization
Stephan Walter [Sat, 22 Apr 2023 07:37:05 +0000 (07:37 +0000)]
ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)
* AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring
* finish AVX vectorization of quantize_row_q8_0
* Rename hsum_int_8 to hsum_i32_8
Clint Herron [Sat, 22 Apr 2023 06:54:33 +0000 (02:54 -0400)]
examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)
* Moving parameters to separate lines for readability.
* Increasing repeat_penalty to 1.1 to make alpaca more usable by default.
* Adding trailing newline.
xaedes [Sat, 22 Apr 2023 06:21:32 +0000 (08:21 +0200)]
llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105)
* reserve correct size for logits
* add functions to get and set the whole llama state:
including rng, logits, embedding and kv_cache
* remove unused variables
* remove trailing whitespace
* fix comment
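Usage of the new API looks roughly like this (a sketch; the entry-point names follow the PR description, so consult llama.h for the authoritative signatures):
```
#include <cstddef>
#include <cstdint>
#include <vector>

#include "llama.h"

// Snapshot the complete state (rng, logits, embedding, kv_cache) and
// restore it later, e.g. to rewind an interactive session.
void snapshot_and_restore(llama_context * ctx) {
    const size_t n_state = llama_get_state_size(ctx);

    std::vector<uint8_t> buf(n_state);
    llama_copy_state_data(ctx, buf.data()); // save

    // ... evaluate more tokens here ...

    llama_set_state_data(ctx, buf.data()); // rewind to the saved point
}
```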
slaren [Fri, 21 Apr 2023 19:59:17 +0000 (21:59 +0200)]
Improve cuBLAS performance by using a memory pool (#1094)
* Improve cuBLAS performance by using a memory pool
* Move cuda specific definitions to ggml-cuda.h/cu
* Add CXX flags to nvcc
* Change memory pool synchronization mechanism to a spin lock
General code cleanup
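A generic sketch of that synchronization mechanism (a small free-list pool guarded by an atomic spin lock); the real pool in ggml-cuda recycles cudaMalloc'd device buffers rather than host allocations:
```
#include <atomic>
#include <cstddef>
#include <cstdlib>

struct pool_entry { void * ptr; size_t size; };

static pool_entry       g_pool[16] = {};
static std::atomic_flag g_lock     = ATOMIC_FLAG_INIT;

static void * pool_alloc(size_t size) {
    while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
    for (auto & e : g_pool) {
        if (e.ptr && e.size >= size) {       // reuse a big-enough free buffer
            void * p = e.ptr;
            e.ptr = nullptr;
            g_lock.clear(std::memory_order_release);
            return p;
        }
    }
    g_lock.clear(std::memory_order_release);
    return std::malloc(size);                // miss: allocate a fresh buffer
}

static void pool_free(void * ptr, size_t size) {
    while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
    for (auto & e : g_pool) {
        if (!e.ptr) {                        // park the buffer for reuse
            e.ptr  = ptr;
            e.size = size;
            g_lock.clear(std::memory_order_release);
            return;
        }
    }
    g_lock.clear(std::memory_order_release);
    std::free(ptr);                          // pool full: really free it
}
```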
apaz [Fri, 21 Apr 2023 18:48:06 +0000 (13:48 -0500)]
llama : fixed rlimit error message (#888)
源文雨 [Fri, 21 Apr 2023 18:27:06 +0000 (02:27 +0800)]
cmake : link threads publicly to ggml (#1042)
* fix: ld link test-tokenizer-0 error
```
cmake3 --build . --config Release
[ 5%] Built target ggml
[ 16%] Built target llama
[ 22%] Linking CXX executable ../bin/test-tokenizer-0
../libllama.a(ggml.c.o): in function 'ggml_graph_compute':
ggml.c:(.text+0xf2db): undefined reference to 'pthread_create'
ggml.c:(.text+0xf9d4): undefined reference to 'pthread_join'
collect2: error: ld returned 1 exit status
gmake[2]: *** [bin/test-tokenizer-0] Error 1
gmake[1]: *** [tests/CMakeFiles/test-tokenizer-0.dir/all] Error 2
gmake: *** [all] Error 2
```
* Update CMakeLists.txt
* Update CMakeLists.txt
* Update CMakeLists.txt
Alex Klinkhamer [Fri, 21 Apr 2023 18:18:09 +0000 (11:18 -0700)]
main : evaluate tokens in batches after swapping context (#1014)
* examples : evaluate tokens in batches after swapping context
* Update examples/main/main.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
xaedes [Fri, 21 Apr 2023 15:25:21 +0000 (17:25 +0200)]
llama : remember and restore kv cache data pointers (#1104)
because their value is stored in buf and overwritten by memcpy
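In sketch form (structure and field names simplified from the real llama.cpp code), the fix is:
```
#include <cstddef>
#include <cstring>

struct tensor { void * data; };

// Simplified: the kv cache is one big allocation (buf), and the k/v tensor
// structs (including their data pointers) live inside it, so a raw memcpy
// into buf clobbers those pointers unless they are saved first.
struct kv_cache {
    char *   buf;
    size_t   buf_size;
    tensor * k;   // points into buf
    tensor * v;   // points into buf
};

void kv_cache_set(kv_cache & c, const char * src) {
    void * k_data = c.k->data;              // remember the data pointers ...
    void * v_data = c.v->data;

    std::memcpy(c.buf, src, c.buf_size);    // ... because this overwrites them ...

    c.k->data = k_data;                     // ... then restore them
    c.v->data = v_data;
}
```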
Kawrakow [Fri, 21 Apr 2023 15:18:26 +0000 (17:18 +0200)]
ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)
* A faster version for Q4_1 x Q8_0 dot products
The idea behind this is that Q8_0 quantized
values get used many times in the matrix multiplications
where they are involved. In the current implementations,
when we are evaluating the dot products, we need to compute
the sum of the quants in the Q8_0 vector, so the same
operation is repeated many times. Here we pre-compute
the sum during Q8_0 quantization, store it in the
now modified block_q8_0 struct, and then reuse this
result in the subsequent dot products.
In a synthetic benchmark (just compute a bunch of dot
products), this change speeds up the Q4_1 * Q8_0 dot
product by 80%, making the performance identical to
Q4_0 * Q8_0.
In practical application, I see a ~15% gain in speed for
token prediction on M2, and ~5% gain on Ryzen 7950X.
The speed gain in the prompt evaluation is much bigger
(around 50%).
I have only done the change for the scalar version,
ARM_NEON, and AVX2, so we still need an AVX implementation.
* Cleaning up
---------
Co-authored-by: Iwan Kawrakow <redacted>
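A scalar sketch of the trick (block layouts simplified: nibbles unpacked to one byte each, and the precomputed sum stored as a plain float field):
```
#include <cstdint>

#define QK 32

struct block_q4_1 {
    float   d;       // scale
    float   m;       // min
    uint8_t q[QK];   // quants in [0, 15] (ggml packs these as nibbles)
};

struct block_q8_0 {
    float  d;        // scale
    float  s;        // sum of q[i], precomputed once during quantization
    int8_t q[QK];    // quants
};

// value_q4_1[i] = d4*q4[i] + m4 and value_q8_0[i] = d8*q8[i], so per block
// dot = d4*d8*sum(q4*q8) + m4*d8*sum(q8), and sum(q8) is just block->s.
float vec_dot_q4_1_q8_0(int nb, const block_q4_1 * x, const block_q8_0 * y) {
    float acc = 0.0f;
    for (int b = 0; b < nb; ++b) {
        int sumi = 0;
        for (int i = 0; i < QK; ++i) {
            sumi += (int) x[b].q[i] * (int) y[b].q[i];
        }
        acc += x[b].d * y[b].d * sumi + x[b].m * y[b].d * y[b].s;
    }
    return acc;
}
```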
slaren [Fri, 21 Apr 2023 12:57:57 +0000 (14:57 +0200)]
Show perplexity ETA in hours and minutes (#1096)
Georgi Gerganov [Fri, 21 Apr 2023 07:23:36 +0000 (10:23 +0300)]
llama : fix comment for "output.weight" tensor
Stephan Walter [Thu, 20 Apr 2023 21:56:44 +0000 (21:56 +0000)]
Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)
* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B
---------
Co-authored-by: Pavol Rusnak <redacted>
Georgi Gerganov [Thu, 20 Apr 2023 20:32:59 +0000 (23:32 +0300)]
ggml : sync ggml (add GPT-NeoX RoPE implementation)
Georgi Gerganov [Thu, 20 Apr 2023 18:58:05 +0000 (21:58 +0300)]
ggml : fix bug in ggml_compute_forward_dup_f32()
slaren [Thu, 20 Apr 2023 18:49:53 +0000 (20:49 +0200)]
Add Q4_3 support to cuBLAS (#1086)
Georgi Gerganov [Thu, 20 Apr 2023 18:43:50 +0000 (21:43 +0300)]
ggml : do not break cuBLAS build (Q4_3 is not yet implemented)
Georgi Gerganov [Thu, 20 Apr 2023 17:44:05 +0000 (20:44 +0300)]
ggml : fix Q4_3 quantization
Broke it during conflict resolution in last PR
Kawrakow [Thu, 20 Apr 2023 17:42:27 +0000 (19:42 +0200)]
llama : multi-threaded quantization (#1075)
* Multi-threading quantization.
Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.
* Multi-threading for quantize-stats
It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.
* Reviewer comments
* Avoiding compiler confusion
After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started to warn me that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So,
making it a constexpr to make every compiler happy.
* Still fighting with lambda captures in MSVC
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
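The capture issue distills to something like this (an illustration, not the actual quantization code): a const int is usable inside a lambda without being captured, which makes clang/GCC warn about a redundant capture, while MSVC rejects the uncaptured use; constexpr satisfies all three.
```
#include <thread>
#include <vector>

void quantize_chunks(int nchunk) {
    constexpr int chunk_size = 32 * 512;  // constexpr: no capture needed anywhere

    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([t, nchunk]() {       // chunk_size deliberately not captured
            for (int i = t; i < nchunk; i += 4) {
                const int first = i * chunk_size;  // fine on clang, GCC and MSVC
                (void) first;                      // ... do the real work here
            }
        });
    }
    for (auto & w : workers) {
        w.join();
    }
}

int main() { quantize_chunks(16); }
```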
Georgi Gerganov [Thu, 20 Apr 2023 17:35:53 +0000 (20:35 +0300)]
ggml : add Q4_3 quantization (#1082)
Ivan Komarov [Thu, 20 Apr 2023 15:15:18 +0000 (17:15 +0200)]
ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074)
[Accelerate](https://developer.apple.com/documentation/accelerate) is an Apple framework which can only be used on macOS, and the CMake build [ignores](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt#L102) the `LLAMA_ACCELERATE` variable when run on non-Apple platforms. This implies setting `LLAMA_ACCELERATE` is a no-op on Ubuntu and can be removed.
This will reduce visual noise in CI check results (in addition to reducing the number of checks we have to run for every PR). Right now every sanitized build is duplicated twice for no good reason (e.g., we have `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, ON)` and `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, OFF)`).
源文雨 [Thu, 20 Apr 2023 13:28:43 +0000 (21:28 +0800)]
fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)
Stephan Walter [Thu, 20 Apr 2023 06:45:41 +0000 (06:45 +0000)]
AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)
slaren [Thu, 20 Apr 2023 01:14:14 +0000 (03:14 +0200)]
Improve cuBLAS performance by dequantizing on the GPU (#1065)
CRD716 [Wed, 19 Apr 2023 19:52:14 +0000 (14:52 -0500)]
Minor: Readme fixed grammar, spelling, and misc updates (#1071)
Kawrakow [Wed, 19 Apr 2023 18:20:14 +0000 (20:20 +0200)]
Q4_2 quantization with rmse-optimized scale and quants (#1062)
* Q4_2 quantization with rmse-optimized scale and quants
For quantize-stats we get
q4_2: rmse 0.00159301, maxerr 0.17480469, 95pct<0.0030, median<0.0012
For 7B perplexity with BLAS enabled we get 6.2038 after 655 chunks.
Quantization is slow (~90 seconds on my Mac for 7B) as it is not
multi-threaded as in PR #896.
* ggml : satisfy the sanitizer builds
Not sure why this makes them fail
* Better follow ggml conventions for function names
* Fixed type as per reviewer comment
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 19 Apr 2023 17:10:08 +0000 (20:10 +0300)]
ggml : use 8-bit precision for Q4_1 intermediate results (#1047)
* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)
* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmlaq_n_f32
56 ms/token with Q4_1 !
* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)
* gitignore : ignore ppl-*.txt files
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Wed, 19 Apr 2023 16:07:54 +0000 (19:07 +0300)]
readme : add warning about Q4_2 and Q4_3
Stephan Walter [Wed, 19 Apr 2023 16:06:37 +0000 (16:06 +0000)]
ggml : Q4 cleanup - remove 4-bit dot product code (#1061)
* Q4 cleanup
* Remove unused AVX512 Q4_0 code
slaren [Wed, 19 Apr 2023 09:22:45 +0000 (11:22 +0200)]
Add NVIDIA cuBLAS support (#1044)
slaren [Tue, 18 Apr 2023 22:53:24 +0000 (00:53 +0200)]
Multi-threaded ggml_cpy (#1035)
* Multi-threaded ggml_cpy
* Update ggml.c
Co-authored-by: Georgi Gerganov <redacted>
* Also fix wdata offset in ggml_compute_forward_add_q_f32
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 18 Apr 2023 20:54:57 +0000 (23:54 +0300)]
ggml : add new Q4_2 quantization (ARM only) (#1046)
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
- 4 threads: ~100ms -> ~90ms
- 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
Georgi Gerganov [Tue, 18 Apr 2023 20:11:23 +0000 (23:11 +0300)]
ggml : scratch that - vmlaq_n_f32 is always better
Had a background process that was messing with the timings
Georgi Gerganov [Tue, 18 Apr 2023 20:00:08 +0000 (23:00 +0300)]
gitignore : vdot
Georgi Gerganov [Tue, 18 Apr 2023 19:59:17 +0000 (22:59 +0300)]
ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators
Kawrakow [Tue, 18 Apr 2023 19:00:14 +0000 (21:00 +0200)]
Adding a simple program to measure speed of dot products (#1041)
On my Mac, the direct Q4_1 product is marginally slower
(~69 vs ~55 us for Q4_0). The SIMD-ified ggml version
is now almost 2X slower (~121 us).
On a Ryzen 7950X CPU, the direct product for Q4_1 quantization
is faster than the AVX2 implementation (~60 vs ~62 us).
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Tue, 18 Apr 2023 17:10:26 +0000 (20:10 +0300)]
readme : update hot topics about new LoRA functionality
Georgi Gerganov [Mon, 17 Apr 2023 15:00:10 +0000 (18:00 +0300)]
ci : do not run on drafts
Ivan Komarov [Tue, 18 Apr 2023 01:15:50 +0000 (03:15 +0200)]
Do not close file after mmap (Windows version) (#1034)
Atsushi Tatsuma [Mon, 17 Apr 2023 19:34:35 +0000 (04:34 +0900)]
readme : add Ruby bindings (#1029)
Cameron [Mon, 17 Apr 2023 18:26:23 +0000 (11:26 -0700)]
add 4_0 to default outfile namestr dict (#1031)
this came up when trying to convert the gpt4all-lora-unfiltered-quantized.bin file
slaren [Mon, 17 Apr 2023 15:28:55 +0000 (17:28 +0200)]
Add LoRA support (#820)
Arik Poznanski [Mon, 17 Apr 2023 14:41:53 +0000 (17:41 +0300)]
llama : well-defined static initialization of complex objects (#927)
* Replaced static initialization of complex objects with initialization on first use. This prevents undefined behavior at program startup, for example a crash in Release builds that works in Debug builds (see the sketch below)
* replaced use of auto with exact type to avoid using -std=c++14
* Made the accessor functions for static maps static const
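The pattern reduces to the following (a generic sketch; the real maps hold llama.cpp's lookup tables):
```
#include <map>
#include <string>

// Before: a complex object with static storage duration, whose construction
// order relative to globals in other translation units is unspecified
// (the static initialization order fiasco).
// static std::map<std::string, int> g_lookup = { {"a", 1}, {"b", 2} };

// After: a static const accessor whose local is constructed exactly once,
// on first use, regardless of initialization order elsewhere.
static const std::map<std::string, int> & get_lookup() {
    static const std::map<std::string, int> lookup = { {"a", 1}, {"b", 2} };
    return lookup;
}

int main() {
    return get_lookup().at("a");  // forces first-use construction
}
```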
Georgi Gerganov [Mon, 17 Apr 2023 14:31:06 +0000 (17:31 +0300)]
quantize-stats : fix bug in --type argument
Georgi Gerganov [Mon, 17 Apr 2023 13:16:23 +0000 (16:16 +0300)]
ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c
Ivan Komarov [Mon, 17 Apr 2023 13:10:57 +0000 (15:10 +0200)]
Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)
slaren [Sun, 16 Apr 2023 19:27:38 +0000 (21:27 +0200)]
Fix: do not close file on mmap (#1017)
Georgi Gerganov [Sun, 16 Apr 2023 10:58:48 +0000 (13:58 +0300)]
stdout : vertical align outputs for better readability
Pavol Rusnak [Sun, 16 Apr 2023 10:13:00 +0000 (12:13 +0200)]
examples: add missing <ctime> include for time() (#1011)
nanahi [Sun, 16 Apr 2023 09:13:42 +0000 (17:13 +0800)]
Fix msys2 build error and warnings (#1009)
comex [Sat, 15 Apr 2023 21:53:21 +0000 (14:53 -0700)]
convert.py: Fix loading safetensors and ggml format on Windows (#991)
Calling `mmap.mmap` on Windows apparently resets the file offset of the
raw file object (and makes the BufferedReader return a *negative* file
offset). For safetensors, avoid using the file offset after calling
mmap. For GGML format, explicitly save and restore the offset.
Fixes #966.
Stephan Walter [Sat, 15 Apr 2023 18:28:56 +0000 (18:28 +0000)]
Fix potential int8 overflow in non-SIMD vec_dot (#986)
Stephan Walter [Sat, 15 Apr 2023 16:25:38 +0000 (16:25 +0000)]
Refactor ggml.c for future tensor types (#1001)
Georgi Gerganov [Sat, 15 Apr 2023 14:53:22 +0000 (17:53 +0300)]
ggml : add Q8_0 quantization for intermediate results (#951)
* ggml : add Q8_0 quantization for intermediate results
* quantize-stats : fix test + add it to Makefile default
* Q8: use int8_t, AVX/AVX2 optimizations
* ggml : fix quantize_row_q8_0() ARM_NEON rounding
* minor : updates after rebase to latest master
* quantize-stats : delete obsolete strings
* ggml : fix q4_1 dot func
---------
Co-authored-by: Stephan Walter <redacted>
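The scalar reference for this kind of 8-bit intermediate quantization looks roughly as follows (block layout simplified relative to ggml's block_q8_0; note the round-to-nearest that the ARM_NEON rounding fix above concerns):
```
#include <algorithm>
#include <cmath>
#include <cstdint>

#define QK 32

struct block_q8_0 {
    float  d;        // scale: amax / 127
    int8_t qs[QK];   // quants
};

void quantize_row_q8_0_ref(const float * x, block_q8_0 * y, int k) {
    const int nb = k / QK;                   // k must be a multiple of QK
    for (int b = 0; b < nb; ++b) {
        float amax = 0.0f;                   // absolute max within the block
        for (int i = 0; i < QK; ++i) {
            amax = std::max(amax, std::fabs(x[b*QK + i]));
        }
        const float d  = amax / 127.0f;
        const float id = d != 0.0f ? 1.0f / d : 0.0f;
        y[b].d = d;
        for (int i = 0; i < QK; ++i) {
            // round to nearest, not truncate
            y[b].qs[i] = (int8_t) std::lround(x[b*QK + i] * id);
        }
    }
}
```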
Georgi Gerganov [Sat, 15 Apr 2023 11:25:45 +0000 (14:25 +0300)]
ggml : use posix_memalign on non-Windows env
Ivan Komarov [Sat, 15 Apr 2023 05:51:54 +0000 (07:51 +0200)]
benchmark : fix result validation in benchmark-q4_0-matmult (#987)
katsu560 [Sat, 15 Apr 2023 05:51:11 +0000 (14:51 +0900)]
cmake : add finding the OpenBLAS header file (#992)
Pavol Rusnak [Fri, 14 Apr 2023 19:58:43 +0000 (21:58 +0200)]
Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)
This reverts commit f4d277ae17247ee51129ef1a9ff74d377cc90b1b.
Pavol Rusnak [Fri, 14 Apr 2023 19:46:49 +0000 (21:46 +0200)]
py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976)
Stephan Walter [Fri, 14 Apr 2023 19:39:48 +0000 (19:39 +0000)]
make : fix dependencies, use auto variables (#983)
Pavol Rusnak [Fri, 14 Apr 2023 18:05:37 +0000 (20:05 +0200)]
Expose type name from ggml (#970)
Avoid duplication of type names in utils
Co-authored-by: Håkon H. Hitland <redacted>
Tomáš Pazdiora [Fri, 14 Apr 2023 15:19:17 +0000 (17:19 +0200)]
main : alternative instruct mode (Vicuna support, etc.) (#863)
* Add support for configs, add configurable prefixes / suffixes, deprecate instruct mode, add stop prompt
* Add multiline mode, update text input.
* bugfix
* update implementation
* typos
* Change --multiline implementation to be toggled by EOF.
* bugfix
* default multiline mode
* add more configs
* update formatting
* update formatting
* apply suggestions
Kerfuffle [Fri, 14 Apr 2023 14:43:55 +0000 (08:43 -0600)]
ggml : add unary and binary map operations (#874)
* GGML map ops proof of concept.
* Various cleanups.
Add handling for task setting.
Add handling for ggml_compute_backward.
Rename functions to ggml_map_unary_f32 and ggml_map_binary_f32
Fix compiler warnings related to casting function pointers and `void *`
Reorder functions and definitions based on the GGML op number.
Use typedefs for map op function pointer types.
* Fix position of map ops cases in ggml_compute_forward
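A hedged usage sketch of the resulting API (the typedef and entry point below follow the PR description; ggml.h is authoritative):
```
#include "ggml.h"

// Per the PR: typedef void (*ggml_unary_op_f32_t)(const int, float *, const float *);
static void square_f32(const int n, float * dst, const float * src) {
    for (int i = 0; i < n; ++i) {
        dst[i] = src[i] * src[i];
    }
}

// Wraps an arbitrary elementwise function as a node in the ggml graph.
struct ggml_tensor * build_square(struct ggml_context * ctx, struct ggml_tensor * x) {
    return ggml_map_unary_f32(ctx, x, square_f32);
}
```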
Pavol Rusnak [Fri, 14 Apr 2023 13:37:11 +0000 (15:37 +0200)]
py : cleanup dependencies (#962)
after #545 we do not need torch, tqdm and requests in the dependencies
Pavol Rusnak [Fri, 14 Apr 2023 12:23:21 +0000 (14:23 +0200)]
py : fix flake8 and isort nitpicks (#960)
Georgi Gerganov [Fri, 14 Apr 2023 10:31:29 +0000 (13:31 +0300)]
ggml : minor
Georgi Gerganov [Fri, 14 Apr 2023 10:31:15 +0000 (13:31 +0300)]
ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN
comex [Fri, 14 Apr 2023 07:03:03 +0000 (00:03 -0700)]
py : new conversion script (#545)
Current status: Working, except for the latest GPTQ-for-LLaMa format
that includes `g_idx`. This turns out to require changes to GGML, so
for now it only works if you use the `--outtype` option to dequantize it
back to f16 (which is pointless except for debugging).
I also included some cleanup for the C++ code.
This script is meant to replace all the existing conversion scripts
(including the ones that convert from older GGML formats), while also
adding support for some new formats. Specifically, I've tested with:
- [x] `LLaMA` (original)
- [x] `llama-65b-4bit`
- [x] `alpaca-native`
- [x] `alpaca-native-4bit`
- [x] LLaMA converted to 'transformers' format using
`convert_llama_weights_to_hf.py`
- [x] `alpaca-native` quantized with `--true-sequential --act-order
--groupsize 128` (dequantized only)
- [x] same as above plus `--save_safetensors`
- [x] GPT4All
- [x] stock unversioned ggml
- [x] ggmh
There's enough overlap in the logic needed to handle these different
cases that it seemed best to move to a single script.
I haven't tried this with Alpaca-LoRA because I don't know where to find
it.
Useful features:
- Uses multiple threads for a speedup in some cases (though the Python
GIL limits the gain, and sometimes it's disk-bound anyway).
- Combines split models into a single file (both the intra-tensor split
of the original and the inter-tensor split of 'transformers' format
files). Single files are more convenient to work with and more
friendly to future changes to use memory mapping on the C++ side. To
accomplish this without increasing memory requirements, it has some
custom loading code which avoids loading whole input files into memory
at once.
- Because of the custom loading code, it no longer depends on PyTorch,
which might make installing dependencies slightly easier or faster...
although it still depends on NumPy and sentencepiece, so I don't know
if there's any meaningful difference. In any case, I also added a
requirements.txt file to lock the dependency versions in case of any
future breaking changes.
- Type annotations checked with mypy.
- Some attempts to be extra user-friendly:
- The script tries to be forgiving with arguments, e.g. you can
specify either the model file itself or the directory containing
it.
- The script doesn't depend on config.json / params.json, just in
case the user downloaded files individually and doesn't have those
handy. But you still need tokenizer.model and, for Alpaca,
added_tokens.json.
- The script tries to give a helpful error message if
added_tokens.json is missing.
Georgi Gerganov [Fri, 14 Apr 2023 06:45:42 +0000 (09:45 +0300)]
ggml : fix q4_1 dot product types
Howard Su [Fri, 14 Apr 2023 06:24:52 +0000 (14:24 +0800)]
ggml : optimize rope function to avoid call powf in the tight loop (#807)
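The optimization boils down to replacing one powf call per element with a single powf plus a running multiply, since the RoPE thetas form a geometric sequence (a sketch, not the exact ggml code):
```
#include <cmath>

// theta_i = pos * base^(-2i/n_dims) is geometric in i, so compute the ratio
// once and advance with a multiply instead of calling powf in the tight loop.
void rope_thetas(float * theta, int n_dims, int pos) {
    // before: theta[i] = pos * powf(10000.0f, -2.0f*i/n_dims) for every i
    const float theta_scale = powf(10000.0f, -2.0f / n_dims);
    float t = (float) pos;                  // the i = 0 term
    for (int i = 0; i < n_dims/2; ++i) {
        theta[i] = t;
        t *= theta_scale;                   // next element of the sequence
    }
}
```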
Gary Linscott [Thu, 13 Apr 2023 21:50:42 +0000 (14:50 -0700)]
perplexity : add support for batch size to `--perplexity` (#407)
* Add support to batch size for perplexity
* Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.
* update from merge
* Remove perplexity from main
* updates
* Update batch size for efficiency
CRD716 [Thu, 13 Apr 2023 15:39:25 +0000 (10:39 -0500)]
common : remove unnecessary includes (#947)
Georgi Gerganov [Thu, 13 Apr 2023 15:36:40 +0000 (18:36 +0300)]
ggml : add GGML_DEFAULT_N_THREADS
Georgi Gerganov [Thu, 13 Apr 2023 15:32:36 +0000 (18:32 +0300)]
ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)
* ggml : speed-up q4_1 ARM_NEON by ~5%
* ggml : implement vaddvq when missing
* ggml : implement vminvq and vmaxvq when missing
* ggml : implement vzip when missing
* ggml : fix comment
* ggml : try to use correct ifdef
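For reference, "implement vaddvq when missing" means providing a fallback like this on 32-bit ARM, where the ARMv8-only horizontal-add intrinsic does not exist (a sketch, guarded roughly as the "correct ifdef" bullet suggests):
```
#include <arm_neon.h>

#if !defined(__aarch64__)
// Horizontal add of a float32x4_t: built in on AArch64, emulated on 32-bit ARM.
static inline float vaddvq_f32(float32x4_t v) {
    return vgetq_lane_f32(v, 0) + vgetq_lane_f32(v, 1) +
           vgetq_lane_f32(v, 2) + vgetq_lane_f32(v, 3);
}
#endif
```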
Georgi Gerganov [Thu, 13 Apr 2023 15:04:45 +0000 (18:04 +0300)]
llama : merge llama_internal.h into llama.h
Hide it behind an #ifdef
Georgi Gerganov [Thu, 13 Apr 2023 15:01:22 +0000 (18:01 +0300)]
gitignore : benchmark
Stephan Walter [Thu, 13 Apr 2023 14:59:50 +0000 (14:59 +0000)]
ggml : optimize non-SIMD Q4_0 vector dot product (#703)
Pavol Rusnak [Thu, 13 Apr 2023 14:08:32 +0000 (16:08 +0200)]
ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)
which allow us to use the aligned_alloc or _aligned_malloc functions
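In sketch form (details may differ from the real ggml.c definitions):
```
#include <stdlib.h>

#ifdef _WIN32
#include <malloc.h>
#define GGML_ALIGNED_MALLOC(size) _aligned_malloc(size, 16)
#define GGML_ALIGNED_FREE(ptr)    _aligned_free(ptr)
#else
// C11 aligned_alloc requires size to be a multiple of the alignment, which is
// why buffers are padded to a multiple of GGML_MEM_ALIGN (see the commit above).
#define GGML_ALIGNED_MALLOC(size) aligned_alloc(16, size)
#define GGML_ALIGNED_FREE(ptr)    free(ptr)
#endif
```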
CRD716 [Thu, 13 Apr 2023 14:03:57 +0000 (09:03 -0500)]
fix whitespace (#944)
CRD716 [Thu, 13 Apr 2023 13:59:53 +0000 (08:59 -0500)]
readme : remove python 3.10 warning (#929)
Genkagaku.GPT [Thu, 13 Apr 2023 13:54:27 +0000 (21:54 +0800)]
readme : llama node binding (#911)
* chore: add nodejs binding
* chore: add nodejs binding
Pavol Rusnak [Thu, 13 Apr 2023 13:49:05 +0000 (15:49 +0200)]
flake.nix: add all binaries from bin (#848)
Judd [Thu, 13 Apr 2023 13:43:22 +0000 (21:43 +0800)]
zig : update build.zig (#872)
* update
* update readme
* minimize the changes.
---------
Co-authored-by: zjli2019 <redacted>
Vladimir [Thu, 13 Apr 2023 13:24:30 +0000 (15:24 +0200)]
ggml : update cblas_sgemm columns var to be more reasonable (#838)
niansa/tuxifan [Thu, 13 Apr 2023 13:03:39 +0000 (15:03 +0200)]
examples : add -n to alpaca and gpt4all scripts (#706)
anzz1 [Thu, 13 Apr 2023 12:48:21 +0000 (15:48 +0300)]
cmake : add explicit F16C option (x86) (#576)
Fixes building for x86 processors missing the F16C featureset.
MSVC is not included, as on MSVC F16C is implied by AVX2/AVX512.