git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Jhen-Jie Hong [Mon, 11 Sep 2023 11:49:06 +0000 (19:49 +0800)]
cmake : support build for iOS/tvOS (#3116)
* cmake : support build for iOS/tvOS
* ci : add iOS/tvOS build into macOS-latest-cmake
* ci : split ios/tvos jobs
Johannes Gäßler [Mon, 11 Sep 2023 11:00:24 +0000 (13:00 +0200)]
CUDA: add device number to error messages (#3112)
Kawrakow [Mon, 11 Sep 2023 07:30:11 +0000 (09:30 +0200)]
metal : PP speedup (#3084)
* Minor speed gains for all quantization types
* metal: faster kernel_scale via float4
* Various other speedups for "small" kernels
* metal: faster soft_max via float4
* metal: faster diagonal infinity
Although, to me it looks like one should simply
fuse scale + diagonal infinity + soft_max on the
KQ tensor.
* Another faster f16 x f32 matrix multiply kernel
* Reverting the diag infinity change
It does work for PP, but somehow it fails for TG.
Need to look more into it.
* metal: add back faster diagonal infinity
This time more carefully
* metal : minor (readability)
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Erik Scholz [Sun, 10 Sep 2023 15:06:53 +0000 (17:06 +0200)]
convert: remove most of the n_mult usage in convert.py (#3098)
kchro3 [Sat, 9 Sep 2023 09:12:10 +0000 (02:12 -0700)]
metal : support for Swift (#3078)
* Metal support for Swift
* update
* add a toggle for arm/arm64
* set minimum versions for all platforms
* update to use newLibraryWithURL
* bump version
Co-authored-by: Jhen-Jie Hong <redacted>
---------
Co-authored-by: Jhen-Jie Hong <redacted>
Jhen-Jie Hong [Sat, 9 Sep 2023 08:46:04 +0000 (16:46 +0800)]
metal : support build for iOS/tvOS (#3089)
takov751 [Fri, 8 Sep 2023 16:06:26 +0000 (17:06 +0100)]
flake : add train-text-from-scratch to flake.nix (#3042)
Ikko Eltociear Ashimine [Fri, 8 Sep 2023 16:04:32 +0000 (01:04 +0900)]
readme : fix typo (#3043)
* readme : fix typo
acceleation -> acceleration
* Update README.md
---------
Co-authored-by: Georgi Gerganov <redacted>
Kawrakow [Fri, 8 Sep 2023 16:01:04 +0000 (18:01 +0200)]
metal : Q3_K speedup (#2995)
* Slightly faster Q3_K and Q5_K on metal
* Another Q3_K speedup on metal
Combined with previous commit, we are now +9.6% for TG.
PP is not affected as this happens via the matrix multiplication
templates.
* Slowly progressing on Q3_K on metal
We are now 13% faster than master
* Another small improvement for Q3_K on metal
---------
Co-authored-by: Iwan Kawrakow <redacted>
Cebtenzzre [Fri, 8 Sep 2023 15:43:35 +0000 (11:43 -0400)]
examples : make n_ctx warning work again (#3066)
This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by default (#2901)").
Georgi Gerganov [Fri, 8 Sep 2023 15:18:04 +0000 (18:18 +0300)]
readme : update hot topics
Georgi Gerganov [Fri, 8 Sep 2023 14:58:07 +0000 (17:58 +0300)]
sync : ggml (CUDA GLM RoPE + POSIX) (#3082)
ggml-ci
Przemysław Pawełczyk [Fri, 8 Sep 2023 12:09:21 +0000 (14:09 +0200)]
build : do not use _GNU_SOURCE gratuitously (#2035)
* Do not use _GNU_SOURCE gratuitously.
What is needed to build llama.cpp and examples is availability of
stuff defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some stuff from BSD that is not specified in POSIX.1.
Well, that was true until NUMA support was added recently,
so enable GNU libc extensions for Linux builds to cover that.
Not having feature test macros in the source code gives greater flexibility
to those wanting to reuse it in 3rd-party apps, as they can build it with
the FTMs set by the Makefile here, or with other FTMs depending on their needs.
It builds without issues in Alpine (musl libc), Ubuntu (glibc), MSYS2.
* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK
* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK
* make : use BSD-specific FTMs to enable alloca on BSDs
* make : fix OpenBSD build by exposing newer POSIX definitions
* cmake : follow recent FTM improvements from Makefile
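The build-system approach described above can be sketched as follows; this is a hypothetical illustration of relying on a portable feature test macro instead of `_GNU_SOURCE`, not code from llama.cpp:

```c
/* Hypothetical illustration: instead of hard-coding _GNU_SOURCE in the
 * source, the build system supplies a portable feature test macro,
 * e.g. cc -D_XOPEN_SOURCE=600, exposing POSIX.1-2001 + XSI
 * declarations on conforming libcs (glibc, musl, ...). */
#ifndef _XOPEN_SOURCE
#define _XOPEN_SOURCE 600 /* fallback if the Makefile did not set it */
#endif

#include <time.h>

/* clock_gettime() is declared by POSIX.1-2001; no GNU extension needed. */
double monotonic_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + 1e-9 * (double)ts.tv_nsec;
}
```

A translation unit like this builds unchanged under glibc, musl (Alpine), and MSYS2, which is the flexibility the commit message describes.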
hongbo.mo [Fri, 8 Sep 2023 10:57:55 +0000 (18:57 +0800)]
docker : add git to full-cuda.Dockerfile and main-cuda.Dockerfile (#3044)
Yui [Fri, 8 Sep 2023 10:32:55 +0000 (12:32 +0200)]
Update deprecated GGML TheBloke links to GGUF (#3079)
slaren [Fri, 8 Sep 2023 02:04:56 +0000 (04:04 +0200)]
ggml-alloc : correctly check mmap return value for errors (#3075)
Kunshang Ji [Fri, 8 Sep 2023 01:46:56 +0000 (09:46 +0800)]
enable CPU HBM (#2603)
* add cpu hbm support
* add memalign 0 byte check
* Update ggml.c
* Update llama.cpp
* ggml : allow ggml_init with 0 size
* retrigger ci
* fix code style
---------
Co-authored-by: Georgi Gerganov <redacted>
Cebtenzzre [Thu, 7 Sep 2023 18:27:42 +0000 (14:27 -0400)]
convert : fix F32 ftype not being saved (#3048)
Cebtenzzre [Thu, 7 Sep 2023 17:22:29 +0000 (13:22 -0400)]
fix some warnings from gcc and clang-tidy (#3038)
Co-authored-by: xaedes <redacted>
Cebtenzzre [Thu, 7 Sep 2023 14:15:01 +0000 (10:15 -0400)]
make : improve test target (#3031)
Cebtenzzre [Thu, 7 Sep 2023 14:13:50 +0000 (10:13 -0400)]
make : fix CPPFLAGS (#3035)
slaren [Thu, 7 Sep 2023 13:52:34 +0000 (15:52 +0200)]
llama-bench : use two tokens in the warmup run for prompt evals (#3059)
Kawrakow [Thu, 7 Sep 2023 13:45:01 +0000 (15:45 +0200)]
metal : parallel RoPE on Metal (#3024)
* Parallel RoPE on metal
* PR suggestion
---------
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Thu, 7 Sep 2023 13:42:42 +0000 (15:42 +0200)]
metal : correct fix of kernel_norm (#3060)
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Thu, 7 Sep 2023 12:49:09 +0000 (15:49 +0300)]
metal : fix kernel_norm (fixes Falcon on Metal) (#3057)
* metal : fix kernel_norm
ggml-ci
* metal : put warning in kernel_norm to not combine the loops
* metal : restore original F16 mat-vec multiplication
It works after the norm fixes
* common : don't do warm-up with more than n_batch tokens (close #3058)
ggml-ci
* metal : minor
Przemysław Pawełczyk [Thu, 7 Sep 2023 08:15:06 +0000 (10:15 +0200)]
ggml : posixify madvise and pagesize (#3037)
* llama : use posix_madvise() instead of madvise() derived from BSD
sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' llama.cpp
* ggml : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml.c
* metal : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml-metal.m
Georgi Gerganov [Wed, 6 Sep 2023 09:40:57 +0000 (12:40 +0300)]
k-quants : fix zero-weight guard in Q6_K (ref #3040)
Kerfuffle [Wed, 6 Sep 2023 08:49:11 +0000 (02:49 -0600)]
convert-llama-ggml-to-gguf: Try to handle files older than GGJTv3 (#3023)
* convert-llama-ggmlv3-to-gguf: Try to handle files older than GGJTv3
* Better error messages for files that cannot be converted
* Add file type to GGUF output
* Rename to convert-llama-ggml-to-gguf.py
* Include original file type information in description
* Improve some informational output
Cebtenzzre [Tue, 5 Sep 2023 22:21:10 +0000 (18:21 -0400)]
build : add LLAMA_METAL_NDEBUG flag (#3033)
Cebtenzzre [Tue, 5 Sep 2023 19:12:00 +0000 (15:12 -0400)]
make : use new flag variables for recent changes (#3019)
Cebtenzzre [Tue, 5 Sep 2023 19:10:27 +0000 (15:10 -0400)]
examples : replace fprintf to stdout with printf (#3017)
Erik Scholz [Tue, 5 Sep 2023 17:41:00 +0000 (19:41 +0200)]
convert: fix convert.py not working with int filename_stem (#3028)
* fix implicit int to string conversion
* convert : remove an obsolete pyright comment
---------
Co-authored-by: Cebtenzzre <redacted>
Kawrakow [Tue, 5 Sep 2023 07:55:33 +0000 (09:55 +0200)]
Guard against all weights in a super-block being zero (#3010)
* Guard against all weights in a super-block being zero
* Also guard against extremely small weights
Closes #2982
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Tue, 5 Sep 2023 07:46:39 +0000 (10:46 +0300)]
llama : update logic for number of threads when using BLAS
Georgi Gerganov [Tue, 5 Sep 2023 05:46:17 +0000 (08:46 +0300)]
speculative : add grammar support (#2991)
* speculative : add grammar support
* grammars : add json_arr.gbnf
* grammar : add comments to new grammar file
* grammar : remove one nested level
* common : warm-up with 2 tokens - seems to work better
* speculative : print draft token pieces
* speculative : reuse grammar parser + better logs and comments
* speculative : avoid grammar_mem
* make : fix speculative build
Georgi Gerganov [Mon, 4 Sep 2023 19:50:50 +0000 (22:50 +0300)]
py : minor
Georgi Gerganov [Mon, 4 Sep 2023 19:26:24 +0000 (22:26 +0300)]
build : on Mac OS enable Metal by default (#2901)
* build : on Mac OS enable Metal by default
* make : try to fix build on Linux
* make : move targets back to the top
* make : fix target clean
* llama : enable GPU inference by default with Metal
* llama : fix vocab_only logic when GPU is enabled
* common : better `n_gpu_layers` assignment
* readme : update Metal instructions
* make : fix merge conflict remnants
* gitignore : metal
slaren [Mon, 4 Sep 2023 12:59:52 +0000 (14:59 +0200)]
ggml-opencl : store GPU buffer in ggml_tensor::extra (#2994)
Cebtenzzre [Mon, 4 Sep 2023 10:40:18 +0000 (06:40 -0400)]
llama-bench : make cpp file non-executable (#2999)
Leng Yue [Mon, 4 Sep 2023 10:39:57 +0000 (03:39 -0700)]
make : add speculative example (#3003)
Aarni Koskela [Mon, 4 Sep 2023 08:28:55 +0000 (10:28 +0200)]
server : add a subtle loading animation to the edit box (#2466)
* editorconfig: add override for the server HTML (which already is 2-space indented)
* server: add a subtle loading animation to the edit box
Jiahao Li [Mon, 4 Sep 2023 06:53:30 +0000 (14:53 +0800)]
2x faster (rms) norm cuda kernels (3.7% e2e improvement) (#2985)
* 2x faster (rms) norm cuda kernels
* Fix code style
slaren [Sun, 3 Sep 2023 18:34:09 +0000 (20:34 +0200)]
ggml-alloc : use virtual memory for measurement (#2973)
* ggml-alloc : use virtual memory for measurement
* compatibility fixes for MAP_ANONYMOUS
* fallback to fixed address for systems without virtual memory
Georgi Gerganov [Sun, 3 Sep 2023 12:12:08 +0000 (15:12 +0300)]
speculative : PoC for speeding-up inference via speculative sampling (#2926)
* speculative : initial example
* speculative : print encoding speed
* speculative : add --draft CLI arg
Georgi Gerganov [Sun, 3 Sep 2023 10:42:56 +0000 (13:42 +0300)]
perplexity : fix ETA by warming up the model with an empty run
Kerfuffle [Sun, 3 Sep 2023 10:38:43 +0000 (04:38 -0600)]
gguf(python): Fix special vocab handling when id < 0 (#2984)
Georgi Gerganov [Sun, 3 Sep 2023 10:23:33 +0000 (13:23 +0300)]
metal : restore 363f0bf and fix reduce in F16_F32 kernels (#2986)
Alon [Sun, 3 Sep 2023 10:19:01 +0000 (13:19 +0300)]
cov : disable comment in PRs (#2989)
opparco [Sun, 3 Sep 2023 10:18:09 +0000 (19:18 +0900)]
llama : fix bpe tokenize from byte (#2889)
Georgi Gerganov [Sun, 3 Sep 2023 09:40:56 +0000 (12:40 +0300)]
metal : revert 6af0bab until we fix it
This restores the generated text to be the same as before #2959
Alon [Sun, 3 Sep 2023 08:48:49 +0000 (11:48 +0300)]
cov : add Code Coverage and codecov.io integration (#2928)
* update .gitignore
* makefile: add coverage support (lcov, gcovr)
* add code-coverage workflow
* update code coverage workflow
* run on ubuntu 20.04
* use gcc-8
* check why the job hang
* add env vars
* add LLAMA_CODE_COVERAGE=1 again
* - add CODECOV_TOKEN
- add missing make lcov-report
* install lcov
* update make file -pb flag
* remove unused GGML_NITER from workflows
* wrap coverage output files in COV_TARGETS
Wentai Zhang [Sun, 3 Sep 2023 08:46:44 +0000 (16:46 +0800)]
opencl : fix a bug in ggml_cl_pool_malloc() for ggml_cl_mul_mat_f32() (#2955)
Co-authored-by: Wentai Zhang <redacted>
Kawrakow [Sun, 3 Sep 2023 08:06:22 +0000 (11:06 +0300)]
metal : more optimizations (#2959)
* Very minor speedup via simd-group synchronization in f16 x f32
* Another very minor speedup on metal
* Quite significant PP speedup on metal
* Another attempt
* Minor
* Massive improvement for TG for fp16
* ~4-5% improvement for Q8_0 TG on metal
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
kchro3 [Sun, 3 Sep 2023 06:21:05 +0000 (23:21 -0700)]
swift : add support for k-quants (#2983)
Kerfuffle [Sun, 3 Sep 2023 05:52:13 +0000 (23:52 -0600)]
convert.py : BPE fixes (#2938)
* convert.py: BPE fixes?
* Remove unnecessary conditional in addl token error handling
Ido S [Sun, 3 Sep 2023 05:50:51 +0000 (08:50 +0300)]
docs : add `catai` to `README.md` (#2967)
momonga [Sun, 3 Sep 2023 05:36:28 +0000 (14:36 +0900)]
examples : fix gpt-neox (#2943)
Co-authored-by: mmnga <redacted>
kchro3 [Sun, 3 Sep 2023 05:27:25 +0000 (22:27 -0700)]
swift : add missing c file to Package.swift (#2978)
Cebtenzzre [Sun, 3 Sep 2023 05:26:59 +0000 (01:26 -0400)]
make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS (#2886)
* make : remove unused -DGGML_BIG_ENDIAN
* make : put preprocessor stuff in CPPFLAGS
* make : pass Raspberry Pi arch flags to g++ as well
* make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS
* make : fix inverted conditional
Kerfuffle [Sat, 2 Sep 2023 17:53:55 +0000 (11:53 -0600)]
logging: Fix creating empty file even when disabled (#2966)
* logging: Fix creating empty file even when disabled
* Minor formatting fix
Co-authored-by: staviq <redacted>
---------
Co-authored-by: staviq <redacted>
bandoti [Sat, 2 Sep 2023 12:53:18 +0000 (09:53 -0300)]
readme : update clblast instructions (#2903)
* Update Windows CLBlast instructions
* Update Windows CLBlast instructions
* Remove trailing whitespace
Karsten Weiss [Sat, 2 Sep 2023 12:29:09 +0000 (14:29 +0200)]
metal : show all Metal device instances in the system (#2952)
* ggml_metal_init: Show all Metal device instances in the system
Also show the default Metal device that was picked.
* Update ggml-metal.m
---------
Co-authored-by: Georgi Gerganov <redacted>
Jhen-Jie Hong [Sat, 2 Sep 2023 12:23:45 +0000 (20:23 +0800)]
k-quants : fix build on armv7 (android only) (#2920)
* k-quants : fix build on armv7
* ggml : cleanup unused arm32 specific impl
* k-quants : avoid some unused vzero / mzero define
* ggml-alloc : use 4g for MEASURE_MAX_SIZE in 32-bit arm
Jhen-Jie Hong [Sat, 2 Sep 2023 00:31:46 +0000 (08:31 +0800)]
server : avoid antiprompt in probabilities of final response (#2849)
Engininja2 [Fri, 1 Sep 2023 21:33:19 +0000 (15:33 -0600)]
cuda : vsubss4 for older versions of ROCm/clang (#2942)
ZHAOKAI WANG [Fri, 1 Sep 2023 14:06:44 +0000 (22:06 +0800)]
readme : quick start command fix (#2908)
* quick start command fix
* quick start win command fix
Kerfuffle [Fri, 1 Sep 2023 14:02:48 +0000 (08:02 -0600)]
Allow quantize to only copy tensors, some other improvements (#2931)
* Allow quantize tool to only copy tensors to allow repackaging models.
* Slightly better logic when requantizing.
* Change help message to go to `stdout`.
Georgi Gerganov [Fri, 1 Sep 2023 14:00:40 +0000 (17:00 +0300)]
llama2c : rename function
Cebtenzzre [Fri, 1 Sep 2023 13:53:14 +0000 (09:53 -0400)]
make : use unaligned vector moves on MinGW (#2945)
Fixes #2922
m3ndax [Fri, 1 Sep 2023 13:47:27 +0000 (15:47 +0200)]
minor : add const qualifiers (#2853)
* made the methods const
# Conflicts:
# examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
* made method const
* Update convert-llama2c-to-ggml.cpp
removed write_raw and write_u32
* llama2c : remove misleading const
---------
Co-authored-by: Georgi Gerganov <redacted>
Konstantin Herud [Fri, 1 Sep 2023 13:36:14 +0000 (15:36 +0200)]
docs : add java-llama.cpp to README.md (#2935)
Cebtenzzre [Fri, 1 Sep 2023 13:34:50 +0000 (09:34 -0400)]
build : fix most gcc and clang warnings (#2861)
* fix most gcc and clang warnings
* baby-llama : remove commented opt_params_adam
* fix some MinGW warnings
* fix more MinGW warnings
Ben Siraphob [Fri, 1 Sep 2023 13:32:14 +0000 (09:32 -0400)]
examples : add C grammar (#2357)
Tameem [Fri, 1 Sep 2023 13:27:40 +0000 (18:27 +0500)]
ggml : add RISC-V vector intrinsics support (#2929)
* added support for RISCV CFLAGS & native compile + cross compile options
* Add RISC-V Vector Intrinsics Support
Added RVV intrinsics for following
ggml_vec_dot_q4_0_q8_0
ggml_vec_dot_q4_1_q8_1
ggml_vec_dot_q5_0_q8_0
ggml_vec_dot_q5_1_q8_1
ggml_vec_dot_q8_0_q8_0
Co-authored-by: Sharafat <redacted>
Signed-off-by: Ahmad Tameem <redacted>
---------
Signed-off-by: Ahmad Tameem <redacted>
Co-authored-by: moiz.hussain <redacted>
Co-authored-by: Sharafat <redacted>
Georgi Gerganov [Fri, 1 Sep 2023 10:42:41 +0000 (13:42 +0300)]
metal : slight speed-up for add and mul kernels (#2917)
staviq [Fri, 1 Sep 2023 09:07:06 +0000 (11:07 +0200)]
logs : fix mingw-like builds (fixes #2898) (#2911)
* fix mingw-like builds
* formatting
* make LOG_COMPAT easier to override and extend
* simplify win detection
* fix for #2940
Cebtenzzre [Fri, 1 Sep 2023 09:03:49 +0000 (05:03 -0400)]
llama2c : fix segfault and alloc-dealloc-mismatch (#2913)
* llama2c : fix segfault if vocab is not found
* llama2c : fix mismatch between new[] and delete
* llama2c : fix basename on Windows
* llama2c : use a destructor to prevent memory leaks
Kawrakow [Fri, 1 Sep 2023 08:15:57 +0000 (11:15 +0300)]
metal: somewhat faster f16 x f32 matrix multiply kernel (#2951)
* Somewhat faster f16 x f32 matrix multiply kernel
* Better use 32 thread groups for f16 x f32
---------
Co-authored-by: Iwan Kawrakow <redacted>
Cebtenzzre [Fri, 1 Sep 2023 02:13:51 +0000 (22:13 -0400)]
convert : fix another python 3.8 issue (#2949)
slaren [Thu, 31 Aug 2023 23:32:09 +0000 (01:32 +0200)]
remove convert-llama-7b-pth-to-gguf.py and convert-llama-hf-to-gguf.py (#2906)
Kerfuffle [Thu, 31 Aug 2023 22:49:24 +0000 (16:49 -0600)]
scripts: Use local gguf package when running from repo (#2927)
* scripts: Use local gguf when running from repo
DannyDaemonic [Thu, 31 Aug 2023 11:21:45 +0000 (04:21 -0700)]
@vxiiduu's fix for PrefetchVirtualMemory (#2930)
Reimplement fix for `PrefetchVirtualMemory`.
Co-authored-by: vxiiduu <redacted>
Cebtenzzre [Thu, 31 Aug 2023 05:02:23 +0000 (01:02 -0400)]
convert : fix python 3.8 support, modernize type annotations (#2916)
* convert : fix python 3.8 support
* convert : sort imports
* convert : fix required parameters in convert-llama-ggmlv3-to-gguf
* convert : fix mypy errors in convert-llama-ggmlv3-to-gguf
* convert : use PEP 585 generics and PEP 604 unions
Now that we have `from __future__ import annotations`, we can use this
modern syntax in Python 3.7 instead of restricting support to Python 3.9
or 3.10 respectively.
* gguf.py : a tuple is already a tuple
* add mypy.ini
* convert : add necessary `type: ignore` comments
* gguf-py: bump version
Johannes Gäßler [Wed, 30 Aug 2023 19:46:19 +0000 (21:46 +0200)]
CUDA: mul_mat_q=true llama_context_params default (#2912)
Henri Vasserman [Wed, 30 Aug 2023 16:14:53 +0000 (19:14 +0300)]
[Docker] fix tools.sh argument passing. (#2884)
* [Docker] fix tools.sh argument passing.
This should allow passing multiple arguments to containers with
the full image that are using the tools.sh frontend.
Fix from https://github.com/ggerganov/llama.cpp/issues/2535#issuecomment-1697091734
Georgi Gerganov [Wed, 30 Aug 2023 10:29:40 +0000 (13:29 +0300)]
convert.py : use dir name to name the llama
Georgi Gerganov [Wed, 30 Aug 2023 09:52:46 +0000 (12:52 +0300)]
examples : fix underscore in beam-search + .gitignore (close #2900)
M. Yusuf Sarıgöz [Wed, 30 Aug 2023 09:47:40 +0000 (12:47 +0300)]
gguf : add workflow for Pypi publishing (#2896)
* gguf : add workflow for Pypi publishing
* gguf : add workflow for Pypi publishing
* fix trailing whitespace
alonfaraj [Wed, 30 Aug 2023 09:42:51 +0000 (12:42 +0300)]
make : add test and update CI (#2897)
* build ci: run make test
* makefile:
- add all
- add test
* enable tests/test-tokenizer-0-llama
* fix path to model
* remove gcc-8 from macos build test
* Update Makefile
* Update Makefile
Gilad S [Wed, 30 Aug 2023 08:40:12 +0000 (11:40 +0300)]
docs : add `node-llama-cpp` to `README.md` (#2885)
Kerfuffle [Wed, 30 Aug 2023 08:25:50 +0000 (02:25 -0600)]
convert : various script cleanups/fixes + merges and special token handling (#2842)
* convert: Fix permute calls and method/func definitions
* Cleanups for gguf-py
* Minor types cleanups.
* Initial implementation of handling merges and special tokens
* convert: Handle special tokens and merges in vocab only mode
convert: Vocab only mode no longer requires loading model tensors
* gguf: Refactor tensor name mapping
* convert: Fix type hint for special_token_types in SpecialVocab
* Use common special vocab handling in various conversion scripts
* First pass at implementing suggested changes
* Second pass
* gguf: SpecialVocab: Fix issue with special token content not in a dict
gguf: SpecialVocab: Allow skipping handling of merges
* convert-falcon-hf-to-gguf: Support --vocab-only option, bail out if no tokenizer.json
* convert-gptneox-hf-to-gguf and convert: Only handle merges for BPE tokenizer
* gguf: SpecialVocab: Actually set load_merges in object
* Uniform args parsing and vocab only mode for convert examples
* convert.py: Set gpt2 as tokenizer model when using BPE
* Squish last type warning in gguf.py - yay!
chaihahaha [Wed, 30 Aug 2023 06:50:55 +0000 (14:50 +0800)]
llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879)
staviq [Wed, 30 Aug 2023 06:29:32 +0000 (08:29 +0200)]
main : log file (#2748)
* initial, base LOG macro
* add *.log to .gitignore
* added basic log file handler
* reverted log auto endline to better mimic printf
* remove atomics and add dynamic log target
* log_enable/disable, LOG_TEE, basic usage doc
* update .gitignore
* mv include to common, params, help msg
* log tostring helpers, token vectors pretty prints
* main: replaced fprintf/LOG_TEE, some trace logging
* LOG_DISABLE_LOGS compile flag, wrapped f in macros
* fix LOG_TEELN and configchecker
* stub LOG_DUMP_CMDLINE for WIN32 for now
* fix msvc
* cleanup main.cpp:273
* fix stray whitespace after master sync
* log : fix compile warnings
- do not use C++20 stuff
- use PRIu64 to print uint64_t
- avoid string copies by using const ref
- fix ", ##__VA_ARGS__" warnings
- compare strings with == and !=
* log : do not append to existing log + disable file line func by default
* log : try to fix Windows build
* main : wip logs
* main : add trace log
* review: macro f lowercase, str append to sstream
* review: simplify ifs and str comparisons
* fix MSVC, formatting, FMT/VAL placeholders
* review: if/else cleanup
* review: if/else cleanup (2)
* replace _ prefix with _impl suffix
---------
Co-authored-by: Georgi Gerganov <redacted>
Cebtenzzre [Wed, 30 Aug 2023 06:20:26 +0000 (02:20 -0400)]
tests : add a C compliance test (#2848)
* tests : add a C compliance test
* make : build C compliance test by default
* make : fix clean and make sure C test fails on clang
* make : move -Werror=implicit-int to CFLAGS
slaren [Tue, 29 Aug 2023 21:24:42 +0000 (23:24 +0200)]
ggml : add view_src and view_offs to ggml_tensor for views (#2874)
* ggml : add view_src and view_offs
* update ggml-alloc to use view_src
* update ggml_diag_mask to work correctly with automatic inplace
* exclude other ops that set an inplace flag from automatic inplace
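The view mechanism named in this commit can be sketched like so; the struct below is illustrative only (field names follow the commit subject) and is not ggml's real `ggml_tensor` definition:

```c
/* A view tensor keeps a pointer to the tensor that owns the buffer
 * (view_src) plus a byte offset into that buffer (view_offs). */
#include <stddef.h>

struct toy_tensor {
    void              *data;      /* owned buffer, or NULL for views  */
    struct toy_tensor *view_src;  /* tensor whose buffer we alias     */
    size_t             view_offs; /* byte offset into view_src's data */
};

/* Resolve the address a (possibly viewed) tensor actually reads from. */
void *toy_tensor_data(const struct toy_tensor *t) {
    if (t->view_src != NULL) {
        return (char *)t->view_src->data + t->view_offs;
    }
    return t->data;
}
```

Recording the source explicitly lets an allocator know a view carries no storage of its own, which is what the ggml-alloc update in the second bullet exploits.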
slaren [Tue, 29 Aug 2023 21:17:34 +0000 (23:17 +0200)]
remove outdated references to -eps and -gqa from README (#2881)
Kawrakow [Tue, 29 Aug 2023 20:55:45 +0000 (23:55 +0300)]
Tell users attempting to run perplexity with too few tokens to use more (#2882)
Closes #2858
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Tue, 29 Aug 2023 20:55:03 +0000 (23:55 +0300)]
10X faster BPE tokenizer (#2876)
* 10X faster BPE tokenizer
* Remove comment that no longer applies
---------
Co-authored-by: Iwan Kawrakow <redacted>
maddes8cht [Tue, 29 Aug 2023 13:51:02 +0000 (15:51 +0200)]
py : fix "usage" messages (#2873)
in the convert-to-gguf Python scripts
jameswu2014 [Tue, 29 Aug 2023 09:48:41 +0000 (17:48 +0800)]
convert.py : fix baichuan7B support (#2870)
* [Fix]: convert.py support baichuan7B
* convert.py : fix trailing whitespaces
---------
Co-authored-by: Georgi Gerganov <redacted>