git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Georgi Gerganov [Wed, 13 Dec 2023 12:05:38 +0000 (14:05 +0200)]
readme : update hot topics
slaren [Wed, 13 Dec 2023 12:04:25 +0000 (13:04 +0100)]
llama : add Mixtral support (#4406)
* convert : support Mixtral as LLAMA arch
* convert : fix n_ff typo
* llama : model loading
* ggml : sync latest ggml_mul_mat_id
* llama : update graph to support MoE
* llama : fix cur -> cur_expert
* llama : first working version
* llama : fix expert weighting in the FFN
* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)
* ggml : add n_as argument to ggml_mul_mat_id
* ggml : fix ggml_get_rows to take into account ne02 / ne11
* metal : add more general support for ggml_get_rows + tests
* llama : add basic support for offloading moe with CUDA
* metal : add/mul/div use general kernel when src1 not cont
* metal : reduce the kernel launches for ggml_mul_mat_id
* ggml : get_rows : support non-contiguous tensors with gaps, generalize up to 3D
* ggml : update get_rows f16 and q
* cuda : support non-contiguous src1 in get_rows
* llama : offload missing ffn_moe_silu
* metal : fix ggml_get_rows to work with non-cont src1
* metal : add indirect mat-vec kernels for all quantization types
* llama : do not quantize expert gating tensors
* llama : add n_expert and n_expert_used to hparams + change quants
* test-backend-ops : add moe test
* cuda : fix get_rows when ncols is odd
* convert : determine n_ctx correctly
* metal : fix ggml_mul_mat_id for F32
* test-backend-ops : make experts more evenly probable (test_moe)
* test-backend-ops : cleanup, add moe test for batches
* test-backend-ops : add cpy from f32 -> all types test
* test-backend-ops : fix dequantize block offset
* llama : fix hard-coded number of experts
* test-backend-ops : simplify and disable slow tests to avoid CI timeout
* test-backend-ops : disable MOE test with thread sanitizer
* cuda : fix mul_mat_id with multi gpu
* convert : use 1e6 rope_freq_base for mixtral
* convert : fix style
* convert : support safetensors format
* gguf-py : bump version
* metal : add cpy f16 -> f32 kernel
* metal : fix binary ops for ne10 % 4 != 0
* test-backend-ops : add one more sum_rows test
* ggml : do not use BLAS with ggml_mul_mat_id
* convert-hf : support for mixtral-instruct (#4428)
* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct
* convert : use sentencepiece tokenizer for Mixtral-instruct
* convert : make flake8 happy
* metal : fix soft_max kernels
ref: https://github.com/ggerganov/ggml/pull/621/commits/1914017863d2f9ab8ecc0281cc2a56d683668b92
* metal : limit kernels to not use more than the allowed threads
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Radek Pilar <redacted>
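The Mixtral commit above adds the MoE feed-forward path: the router produces one logit per expert, the top n_expert_used experts are kept, and their FFN outputs are combined with renormalized softmax weights. Below is a minimal, self-contained C++ sketch of that routing for a single token; the expert_ffn callback and the flat std::vector representation are illustrative assumptions, not the actual ggml graph built by the commit.
```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Sketch of Mixtral-style MoE routing for a single token.
// x           : input activation of size n_embd
// gate_logits : router output, one logit per expert (n_expert)
// expert_ffn  : stand-in for the per-expert FFN (gate/up -> SiLU -> down)
std::vector<float> moe_ffn(const std::vector<float> & x,
                           const std::vector<float> & gate_logits,
                           int n_expert_used,
                           const std::function<std::vector<float>(int, const std::vector<float> &)> & expert_ffn) {
    const int n_expert = (int) gate_logits.size();

    // softmax over the gating logits
    std::vector<float> probs(n_expert);
    const float max_l = *std::max_element(gate_logits.begin(), gate_logits.end());
    float sum = 0.0f;
    for (int e = 0; e < n_expert; ++e) {
        probs[e] = std::exp(gate_logits[e] - max_l);
        sum += probs[e];
    }
    for (float & p : probs) { p /= sum; }

    // pick the top n_expert_used experts
    std::vector<int> order(n_expert);
    for (int e = 0; e < n_expert; ++e) { order[e] = e; }
    std::partial_sort(order.begin(), order.begin() + n_expert_used, order.end(),
                      [&](int a, int b) { return probs[a] > probs[b]; });

    // renormalize the selected weights and accumulate the expert outputs
    float selected_sum = 0.0f;
    for (int i = 0; i < n_expert_used; ++i) { selected_sum += probs[order[i]]; }

    std::vector<float> out(x.size(), 0.0f);
    for (int i = 0; i < n_expert_used; ++i) {
        const int   e = order[i];
        const float w = probs[e] / selected_sum;
        const std::vector<float> y = expert_ffn(e, x);
        for (size_t j = 0; j < out.size(); ++j) {
            out[j] += w * y[j];
        }
    }
    return out;
}
```
The renormalization over the selected experts corresponds to the "fix expert weighting in the FFN" item in the commit list.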
kalomaze [Tue, 12 Dec 2023 10:12:35 +0000 (04:12 -0600)]
server : tweak default sampling parameters (#4367)
* Set a more typical Top P setting as the default
* Update temp max
Richard Kiss [Tue, 12 Dec 2023 09:53:36 +0000 (01:53 -0800)]
english : use `typos` to fix comments and logs (#4354)
Jared Van Bortel [Tue, 12 Dec 2023 09:27:26 +0000 (04:27 -0500)]
build : target Windows 8 for standard mingw-w64 (#4405)
* build : target Windows 8 for standard mingw-w64
* make : fix missing console.o deps
This was causing a link error with `make all` on Windows.
crasm [Tue, 12 Dec 2023 09:25:57 +0000 (04:25 -0500)]
llama : document logits_all deprecation (#4418)
llama_context_params.logits_all is a parameter for controlling
llama_eval. This documents that logits_all should not be used with
llama_decode and llama_batch.
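A hedged sketch of what that recommendation looks like in practice: with llama_decode, per-token logits are requested through the logits flags of llama_batch instead of setting logits_all. The batch fields and helper calls follow llama.h as of this point in the log; error handling is left as a stub.
```cpp
#include "llama.h"

// Request logits only for the last token of the batch via llama_batch,
// instead of using llama_context_params.logits_all (llama_eval-era API).
static void decode_prompt(llama_context * ctx, const llama_token * tokens, int n_tokens) {
    llama_batch batch = llama_batch_init(n_tokens, /*embd =*/ 0, /*n_seq_max =*/ 1);

    for (int i = 0; i < n_tokens; ++i) {
        batch.token   [i]    = tokens[i];
        batch.pos     [i]    = i;
        batch.n_seq_id[i]    = 1;
        batch.seq_id  [i][0] = 0;
        batch.logits  [i]    = (i == n_tokens - 1); // logits only where needed
    }
    batch.n_tokens = n_tokens;

    if (llama_decode(ctx, batch) != 0) {
        // handle failure
    }

    // logits for the last token of the batch
    const float * logits = llama_get_logits_ith(ctx, n_tokens - 1);
    (void) logits;

    llama_batch_free(batch);
}
```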
Vladimir Zorin [Tue, 12 Dec 2023 09:25:29 +0000 (11:25 +0200)]
server : fix local model name in server (#4420)
Taikono-Himazin [Tue, 12 Dec 2023 09:24:32 +0000 (18:24 +0900)]
ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424)
Yueh-Po Peng [Sun, 10 Dec 2023 22:27:38 +0000 (06:27 +0800)]
Update README.md (#4388)
Fix small typo.
Xiang (Kevin) Li [Sat, 9 Dec 2023 21:29:27 +0000 (16:29 -0500)]
grammar : revert the replacement of llama_token_to_piece with id_to_token (#4396)
Georgi Gerganov [Thu, 7 Dec 2023 20:26:54 +0000 (22:26 +0200)]
sync : ggml (new ops, tests, backend, etc.) (#4359)
* sync : ggml (part 1)
* sync : ggml (part 2, CUDA)
* sync : ggml (part 3, Metal)
* ggml : build fixes
ggml-ci
* cuda : restore lost changes
* cuda : restore lost changes (StableLM rope)
* cmake : enable separable compilation for CUDA
ggml-ci
* ggml-cuda : remove device side dequantize
* Revert "cmake : enable separable compilation for CUDA"
This reverts commit 09e35d04b1c4ca67f9685690160b35bc885a89ac.
* cuda : remove assert for rope
* tests : add test-backend-ops
* ggml : fix bug in ggml_concat
* ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`
* ci : try to fix macOS
* ggml-backend : remove backend self-registration
* ci : disable Metal for macOS cmake build
ggml-ci
* metal : fix "supports family" call
* metal : fix assert
* metal : print resource path
ggml-ci
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Thu, 7 Dec 2023 11:03:17 +0000 (13:03 +0200)]
llama : per-layer KV cache + quantum K cache (#4309)
* per-layer KV
* remove unnecessary copies
* less code duplication, offload k and v separately
* llama : offload KV cache per-layer
* llama : offload K shift tensors
* llama : offload for rest of the model arches
* llama : enable offload debug temporarily
* llama : keep the KV related layers on the device
* llama : remove mirrors, perform Device -> Host when partial offload
* common : add command-line arg to disable KV cache offloading
* llama : update session save/load
* llama : support quantum K cache (#4312)
* llama : support quantum K cache (wip)
* metal : add F32 -> Q8_0 copy kernel
* cuda : add F32 -> Q8_0 copy kernel
ggml-ci
* cuda : use mmv kernel for quantum cache ops
* llama : pass KV cache type through API
* llama : fix build
ggml-ci
* metal : add F32 -> Q4_0 copy kernel
* metal : add F32 -> Q4_1 copy kernel
* cuda : wip
* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels
* llama-bench : support type_k/type_v
* metal : use mm kernel only for quantum KV cache
* cuda : add comment
* llama : remove memory_f16 and kv_f16 flags
---------
Co-authored-by: slaren <redacted>
* readme : add API change notice
---------
Co-authored-by: slaren <redacted>
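Both features from this commit surface as run-time options; a usage sketch follows, assuming the flag spellings --no-kv-offload (common) and -ctk/-ctv for llama-bench's type_k/type_v, with a placeholder model path. Check --help of the actual build for the exact names.
```console
# keep the KV cache on the host instead of offloading it per layer
./main -m models/model.gguf --no-kv-offload -p "Hello"

# benchmark with a Q8_0 K cache and an F16 V cache
./llama-bench -m models/model.gguf -ctk q8_0 -ctv f16
```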
Hongyu Ouyang [Thu, 7 Dec 2023 10:25:22 +0000 (02:25 -0800)]
train : fix #4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (#4351)
On commit b1108 (44c117f4) xaedes added
ggml_allocr * alloc = NULL;
... (many lines in between)
if (alloc) {
    ggml_allocr_free(alloc);
}
This is correct, but it is easy to lose track of that cleanup after the many lines in between.
On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.
alloc = ggml_allocr_new(...)
... (short lines of code)
ggml_allocr_free(alloc)
This happens a few times, but alloc is never set to NULL, and many lines below,
we still have
if (alloc) {
    ggml_allocr_free(alloc);
}
which causes a double-free.
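A condensed sketch of the pattern described above and of the straightforward fix (reset the pointer after the eager free so the old guarded cleanup becomes a no-op). The allocator constructor shown is only a stand-in for however alloc is actually created in that file.
```cpp
#include "ggml-alloc.h"

void broken_pattern(void) {
    ggml_allocr * alloc = NULL;

    // ... later the allocator is created and freed eagerly ...
    alloc = ggml_allocr_new_measure(/*alignment =*/ 32);
    // ... use alloc ...
    ggml_allocr_free(alloc);     // eager free, but alloc is NOT reset

    // ... many lines later, the old cleanup from b1108 is still there ...
    if (alloc) {
        ggml_allocr_free(alloc); // double free: alloc still holds the stale pointer
    }
}

void fixed_pattern(void) {
    ggml_allocr * alloc = NULL;

    alloc = ggml_allocr_new_measure(/*alignment =*/ 32);
    // ... use alloc ...
    ggml_allocr_free(alloc);
    alloc = NULL;                // reset so the guarded free below is a no-op

    if (alloc) {
        ggml_allocr_free(alloc);
    }
}
```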
Georgi Gerganov [Wed, 6 Dec 2023 18:21:59 +0000 (20:21 +0200)]
server : recognize cache_prompt parameter in OAI API (#4347)
Georgi Gerganov [Wed, 6 Dec 2023 08:41:03 +0000 (10:41 +0200)]
common : fix compile warning
stduhpf [Wed, 6 Dec 2023 08:08:17 +0000 (09:08 +0100)]
speculative : support `--color` (#4343)
* speculative: add some colors
* minor : add braces
---------
Co-authored-by: Georgi Gerganov <redacted>
Marcus Dunn [Tue, 5 Dec 2023 20:55:12 +0000 (10:55 -1000)]
grammar : pre-computed pieces + reserve mem + less string copies (#4330)
* reserve space for codepoints
* improvement for the appended 0
* used precomputed token text for grammar sample
* reserve candidates_decoded
* reserve candidates_grammar
* remove candidates_decoded
* Revert "remove candidates_decoded"
This reverts commit 3773328080e6a139ee83198329a13cf4ff61d707.
* changed decode_utf8 to take src by ref
Kerfuffle [Tue, 5 Dec 2023 17:19:18 +0000 (10:19 -0700)]
llama : allow overriding GGUF metadata when loading model (#4092)
* feat: Allow overriding GGUF metadata when loading model
* Fix the one time GCC is stricter than clang about something
* Step1
* Refactor... basically everything!
* Nuke obsolete GetArrayLen struct
* simplify std::string specialization
* Various cleanups
Add informational output when overrides are applied
Warn user when an override with the wrong type is specified
* Fix broken logic for parsing bool KV overrides
Fix issue where overrides didn't apply when key missing in GGUF metadata
Resolve merge changes
* llama : rearrange model params
* Update new GET_KEY call
Add note that metadata KV overrides aren't reflected in initial metadata KV info dump
---------
Co-authored-by: cebtenzzre <redacted>
Co-authored-by: Georgi Gerganov <redacted>
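A usage sketch of the override mechanism added in the commit above, assuming the flag is spelled --override-kv and takes KEY=TYPE:VALUE with int/float/bool types; the key and value below are purely illustrative.
```console
# force a metadata value at load time instead of editing the GGUF file
./main -m models/model.gguf --override-kv llama.expert_used_count=int:3 -p "Hello"
```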
MaggotHATE [Tue, 5 Dec 2023 10:05:51 +0000 (15:05 +0500)]
sampling : custom samplers order (#4285)
* Samplers sequence order w parameter
* Cleaned commented code
* Fixed formatting
* Rewrote with unordered_map
* Revert and rewrite, too many problems and safeguards would be needed
* Fixed code style
* Code style fixes according to review
* More readable samplers input string, fixed help
* Style fix in sampler_queue
* Formatting fixes
* Fixing whitespaces
kchro3 [Tue, 5 Dec 2023 07:29:46 +0000 (23:29 -0800)]
swift : revert compiler checks for swift package (#4332)
Daniel Bevenius [Mon, 4 Dec 2023 16:04:21 +0000 (17:04 +0100)]
simple : update error message for KV cache check (#4324)
This commit updates the error message that is printed when the
KV cache is not big enough to hold all the prompt and generated
tokens. Specifically it removes the reference to n_parallel and
replaces it with n_len.
Signed-off-by: Daniel Bevenius <redacted>
Miwa / Ensan [Mon, 4 Dec 2023 16:03:49 +0000 (01:03 +0900)]
swift : fix concatenation method to avoid invalid UTF8 stringification (#4325)
Miwa / Ensan [Mon, 4 Dec 2023 13:43:45 +0000 (22:43 +0900)]
swift : fix prompt tokenization logic (#4321)
Ikko Eltociear Ashimine [Mon, 4 Dec 2023 07:57:35 +0000 (16:57 +0900)]
grammar-parser : fix typo (#4318)
preceeding -> preceding
Georgi Gerganov [Sun, 3 Dec 2023 13:56:35 +0000 (15:56 +0200)]
ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (#4308)
* ggml : fix soft max out-of-bounds access
ggml-ci
* ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()
ggml-ci
Georgi Gerganov [Sun, 3 Dec 2023 13:56:22 +0000 (15:56 +0200)]
ggml : fix soft max out-of-bounds access (#4307)
ggml-ci
Ed Lee [Sun, 3 Dec 2023 09:10:43 +0000 (01:10 -0800)]
server : fix OpenAI API `stop` field to be optional (#4299)
(cherry picked from commit Mozilla-Ocho/llamafile@e8c92bcb84ae3bcbf0d617b7ee6a5413bcbd58af)
Rickard Edén [Sun, 3 Dec 2023 09:03:25 +0000 (10:03 +0100)]
py : add grammar to oai like api (#4294)
Georgi Gerganov [Sun, 3 Dec 2023 08:58:16 +0000 (10:58 +0200)]
llama : pad KV cache size (#4280)
* llama : pad KV cache size to 32
* metal : try to improve batched decoding
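The padding itself is simple arithmetic; below is a one-function sketch of rounding the requested cache size up to the next multiple of 32, which is the alignment the batched-decoding kernels can then rely on (the helper name is made up for illustration).
```cpp
#include <cstdint>

// round the requested KV cache size up to a multiple of 32 (illustrative helper)
static uint32_t pad_kv_size(uint32_t n) {
    const uint32_t pad = 32;
    return ((n + pad - 1) / pad) * pad; // e.g. 4097 -> 4128
}
```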
Georgi Gerganov [Fri, 1 Dec 2023 18:39:12 +0000 (20:39 +0200)]
llama : avoid using "optional" keyword (#4283)
Georgi Gerganov [Fri, 1 Dec 2023 18:35:03 +0000 (20:35 +0200)]
llama : support optional tensors (#4283)
Miwa / Ensan [Fri, 1 Dec 2023 18:19:45 +0000 (03:19 +0900)]
swift : fix token_to_piece implementation (#4278)
* Fix token_to_piece implementation in Swift
* Fix errors
Jared Van Bortel [Fri, 1 Dec 2023 18:18:35 +0000 (13:18 -0500)]
build : enable libstdc++ assertions for debug builds (#4275)
CausalLM [Fri, 1 Dec 2023 18:17:06 +0000 (02:17 +0800)]
llama : support attention bias on LLaMA architecture (#4283)
* Support attention_bias on LLaMA architecture
QKVO bias, should fix InternLM (https://github.com/ggerganov/llama.cpp/issues/3133) and works for LLaMAfied Qwen models (https://github.com/ggerganov/llama.cpp/pull/3743#issuecomment-1825923608).
* check existence of qkvo bias while loading llama models
Tested on LLaMA2, CUDA and CPU.
* Update llama.cpp
Shijie [Fri, 1 Dec 2023 18:16:31 +0000 (02:16 +0800)]
llama : add Qwen support (#4281)
* enable qwen to llama.cpp
* llama : do not GPU split bias tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 1 Dec 2023 16:42:11 +0000 (18:42 +0200)]
llama : fix integer overflow during quantization (#4284)
happens with multi-threaded quantization of Qwen-72B
ggml-ci
Daniel Bevenius [Fri, 1 Dec 2023 09:41:56 +0000 (10:41 +0100)]
py : add requirements file for convert-hf-to-gguf.py (#4277)
This commit adds a requirements file for the convert-hf-to-gguf.py
script, and also adds the torch and transformers packages to it.
The motivation for this is that currently running convert-hf-to-gguf.py
will produce the following error:
```console
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt
Collecting numpy==1.24.4
Collecting sentencepiece==0.1.98
Collecting gguf>=0.1.0
Installing collected packages: sentencepiece, numpy, gguf
Successfully installed gguf-0.5.1 numpy-1.24.4 sentencepiece-0.1.98
(venv) $ python convert-hf-to-gguf.py --help
Traceback (most recent call last):
File "llama.cpp/convert-hf-to-gguf.py", line 16, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
```
With this commit, and using requirements-hf-to-gguf.txt instead of
requirements.txt, the script can be run and shows the help output.
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Fri, 1 Dec 2023 08:51:24 +0000 (10:51 +0200)]
ggml : add ggml_soft_max_ext (#4256)
* metal : implement soft_max_ext
* cuda : implement soft_max_ext
* ggml : implement soft_max_ext (CPU)
* batched-bench : print threads
ggml-ci
* metal : simplify soft_max encoding
ggml-ci
* cuda : use 512 threads for soft_max instead of 32
* ggml : update soft max cpu
* cuda : do warp-based block reduce
* cuda : increase max block size to 1024
* cuda : fix warp reduction initialization of shared mem
* metal : warp-based reduction for soft max kernel
* metal : warp-based reduce for rms_norm
* metal : simplify soft max kernel
ggml-ci
* alloc : fix build with debug
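For reference, here is a scalar sketch of what the fused op computes on one row, softmax(scale * x + mask); the real CPU/CUDA/Metal kernels are vectorized and warp-reduced as the bullets above describe, so this is only the mathematical contract, not the implementation.
```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// reference: y = softmax(scale * x + mask), computed on one row
std::vector<float> soft_max_ext_ref(const std::vector<float> & x,
                                    const std::vector<float> & mask,
                                    float scale) {
    const size_t n = x.size();
    std::vector<float> y(n);

    float max_v = -INFINITY;
    for (size_t i = 0; i < n; ++i) {
        y[i]  = scale * x[i] + (mask.empty() ? 0.0f : mask[i]);
        max_v = std::max(max_v, y[i]);
    }

    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        y[i] = std::exp(y[i] - max_v); // subtract max for numerical stability
        sum += y[i];
    }
    for (size_t i = 0; i < n; ++i) {
        y[i] /= sum;
    }
    return y;
}
```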
Ziad Ben Hadj-Alouane [Thu, 30 Nov 2023 22:25:49 +0000 (17:25 -0500)]
server : add --log-disable to disable logging to file (#4260)
* add --log-disable to disable logging to file in the server example
* typo fix
Ziad Ben Hadj-Alouane [Thu, 30 Nov 2023 22:25:04 +0000 (17:25 -0500)]
server : add single-client multi-prompt support (#4232)
* add multiprompt support
* cleanup
* more cleanup
* remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests
* remove all references to mutex_multitasks
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* change to set
---------
Co-authored-by: Jared Van Bortel <redacted>
WillCorticesAI [Thu, 30 Nov 2023 22:23:44 +0000 (17:23 -0500)]
make : fix Apple clang determination bug (#4272)
Co-authored-by: Will Findley <redacted>
Jared Van Bortel [Thu, 30 Nov 2023 22:23:08 +0000 (17:23 -0500)]
build : fix build info generation and cleanup Makefile (#3920)
* cmake : fix joining of REAL_GIT_DIR
* fix includes with help from include-what-you-use
* make : remove unneeded deps and add test-rope target
* fix C includes in C++ source files
* Revert "fix includes with help from include-what-you-use"
This reverts commit 635e9fadfd516d4604a0fecf4a854bfb25ad17ae.
John [Thu, 30 Nov 2023 22:11:14 +0000 (23:11 +0100)]
llava : ShareGPT4V compatibility (vision encoder only loading) (#4172)
* ShareGPT4 compatibility (vision encoder only loading)
Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
Corrects the argument parsing for --img_mean and --img_std (which were previously accessed without ever being parsed)
Defines defaults for img_mean and img_std which are equal to the llava 1.5 CLIP encoder, so you do not have to provide them
* Update convert-image-encoder-to-gguf.py
Andrew Godfrey [Thu, 30 Nov 2023 21:56:19 +0000 (13:56 -0800)]
main : pass LOG_TEE callback to llama.cpp log (#4033)
* main : Call llama_log_set to use LOG_TEE
* tabs to spaces
vodkaslime [Thu, 30 Nov 2023 21:49:21 +0000 (05:49 +0800)]
readme : fix (#4135)
* fix: readme
* chore: resolve comments
* chore: resolve comments
Juraj Bednar [Thu, 30 Nov 2023 21:46:01 +0000 (22:46 +0100)]
docker : add finetune option (#4211)
Miwa / Ensan [Thu, 30 Nov 2023 21:45:17 +0000 (06:45 +0900)]
batched.swift : update README.md (#4214)
docs: update how to run
Li Tan [Thu, 30 Nov 2023 21:44:11 +0000 (13:44 -0800)]
cmake : fix the metal file folder path (#4217)
Dawid Wysocki [Thu, 30 Nov 2023 21:43:32 +0000 (22:43 +0100)]
readme : fix typo (#4253)
llama.cpp uses GitHub Actions, not Gitlab Actions.
Daniel Bevenius [Thu, 30 Nov 2023 21:43:08 +0000 (22:43 +0100)]
llama : fix alignment of general.name in print meta (#4254)
* llama: fix alignment of general.name in print meta
This commit fixes the alignment of the general.name field in the
llm_load_print_meta function.
Currently the output looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```
And with this commit it looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name     = LLaMA v2
```
Signed-off-by: Daniel Bevenius <redacted>
* llama: fix alignment of special tokens
Signed-off-by: Daniel Bevenius <redacted>
---------
Signed-off-by: Daniel Bevenius <redacted>
slaren [Thu, 30 Nov 2023 21:42:23 +0000 (22:42 +0100)]
convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258)
tarcey [Thu, 30 Nov 2023 21:40:23 +0000 (22:40 +0100)]
llama : fix typical sampling (#4261)
Typical sampling was broken because after copying new_candidates into candidates, the "sorted" bool is left at "true", but the new data is no longer sorted according to probability. Patch to set "sorted" to false.
Test: Generating with temp=0.0001 (approx. argmax) should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).
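A stripped-down sketch of the fix: after the filtered new_candidates are copied back into the caller's array, the sorted flag has to be cleared because the surviving entries are no longer ordered by probability. llama_token_data_array is the public sampling struct from llama.h; the helper wrapping it is illustrative.
```cpp
#include <cstring>
#include <vector>

#include "llama.h"

// End of a llama_sample_typical-style filter (sketch): copy the kept
// candidates back and clear `sorted`, since they are no longer ordered by p.
static void copy_back(llama_token_data_array * candidates,
                      const std::vector<llama_token_data> & new_candidates) {
    std::memcpy(candidates->data, new_candidates.data(),
                new_candidates.size() * sizeof(llama_token_data));
    candidates->size   = new_candidates.size();
    candidates->sorted = false; // the essence of the one-line fix in #4261
}
```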
rhjdvsgsgks [Thu, 30 Nov 2023 20:50:40 +0000 (20:50 +0000)]
py : fix oai proxy (#3972)
* fix oai proxy
fix generation not being stopped when the bot stops talking in chat mode
fix possible case where `slot_id` does not exist
respond to CORS (and preflight) requests
* oai proxy: workaround for some clients (such as Chatbox)
* use the stop sequence as separator instead of the hardcoded `\n`
Georgi Gerganov [Wed, 29 Nov 2023 09:00:17 +0000 (11:00 +0200)]
examples : add readme files
Peter Sugihara [Wed, 29 Nov 2023 07:16:34 +0000 (23:16 -0800)]
readme : add FreeChat (#4248)
Jared Van Bortel [Tue, 28 Nov 2023 09:51:11 +0000 (04:51 -0500)]
ggml : restore abort() in GGML_ASSERT (#4242)
Georgi Gerganov [Tue, 28 Nov 2023 08:32:03 +0000 (10:32 +0200)]
ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (#4240)
* ggml : use blas even if src0 is not F32
* llama : use n_threads_batch only when n_tokens >= 32
ggml-ci
* llama : revert n_threads_batch logic
ggml-ci
bandoti [Mon, 27 Nov 2023 19:25:42 +0000 (15:25 -0400)]
cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)
* Split CPP generation from build-info query
* Remove blank lines
* Add BUILD_SHARED_LIBS option
Kasumi [Mon, 27 Nov 2023 17:39:42 +0000 (01:39 +0800)]
readme : add Amica to UI list (#4230)
Bailey Chittle [Mon, 27 Nov 2023 14:56:52 +0000 (09:56 -0500)]
examples : iOS example with swift ui (#4159)
* copy to llama.cpp as subdir
* attempt enabling metal, fails
* ggml metal compiles!
* Update README.md
* initial conversion to new format, utf8 errors?
* bug fixes, but now has an invalid memory access :(
* added O3, now has insufficient memory access
* begin sync with master
* update to match latest code, new errors
* fixed it!
* fix for loop conditionals, increase result size
* fix current workflow errors
* attempt a llama.swiftui workflow
* Update .github/workflows/build.yml
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Jared Van Bortel [Mon, 27 Nov 2023 03:58:43 +0000 (22:58 -0500)]
ggml : fix -Warray-bounds warning with gcc (#4231)
Georgi Gerganov [Sun, 26 Nov 2023 19:51:46 +0000 (21:51 +0200)]
lookahead : support `-n -1` infinite generation
Georgi Gerganov [Sun, 26 Nov 2023 18:42:51 +0000 (20:42 +0200)]
readme : update hot topics
Georgi Gerganov [Sun, 26 Nov 2023 18:33:07 +0000 (20:33 +0200)]
lookahead : add example for lookahead decoding (#4207)
* lookahead : init
* lookahead : generate and store n-grams
* lookahead : use loop instead recursion to generate n-grams
* lookahead : initial working implementation
* lookahead : filter repeating n-grams
* lookahead : use deterministic init
* lookahead : add to Makefile
* lookahead : fix a bug in the seq_id of the lookahead tokens
* lookahead : add comments
---------
Co-authored-by: slaren <redacted>
Xiao-Yong Jin [Sun, 26 Nov 2023 08:30:02 +0000 (02:30 -0600)]
metal : fix yarn (#4220)
get the correct n_orig_ctx in metal
Galunid [Sat, 25 Nov 2023 21:45:02 +0000 (22:45 +0100)]
scripts : Use mmap in torch load (#4202)
* Use mmap in torch load, prefer .bin files when loading
* Revert .bin > .safetensors preference
Marcus Dunn [Sat, 25 Nov 2023 16:58:23 +0000 (08:58 -0800)]
llama : grammar `reserve` space in `decode_utf8` (#4210)
* reserve space for codepoints
* improvement for the appended 0
crasm [Sat, 25 Nov 2023 15:47:07 +0000 (10:47 -0500)]
Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)
Georgi Gerganov [Sat, 25 Nov 2023 10:02:13 +0000 (12:02 +0200)]
readme : update hot topics
Georgi Gerganov [Sat, 25 Nov 2023 09:29:06 +0000 (11:29 +0200)]
server : OAI API compatibility (#4198)
* Add openai-compatible POST /v1/chat/completions API endpoint to server example
* fix code style
* Update server README.md
* Improve server README.md
* Fix server.cpp code style according to review
* server : some style changes
* server : indentation
* server : enable special tokens during tokenization by default
* server : minor code style
* server : change random string generator
* straightforward /v1/models endpoint
---------
Co-authored-by: kir-gadjello <redacted>
Co-authored-by: Tobi Lütke <redacted>
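A minimal request against the new endpoint; the port assumes the server default (8080), and the model name is simply echoed back by the server, so any string works.
```console
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "gpt-3.5-turbo",
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user",   "content": "Hello!"}
          ]
        }'
```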
slaren [Fri, 24 Nov 2023 17:10:01 +0000 (18:10 +0100)]
llama : set metal log callback correctly (#4204)
slaren [Fri, 24 Nov 2023 17:04:31 +0000 (18:04 +0100)]
ggml-cuda : support stablelm rope (#4156)
* ggml-cuda : support stablelm rope
* remove unused freq_base kernel parameter
* add n_dims parameter to llm_build_k_shift, default to n_rot via overload
* llama : fix llm_build_k_shift args
---------
Co-authored-by: Georgi Gerganov <redacted>
Galunid [Fri, 24 Nov 2023 14:02:49 +0000 (15:02 +0100)]
convert : fix tensors using grad in some models (#4173)
eastriver [Fri, 24 Nov 2023 09:25:10 +0000 (18:25 +0900)]
main.swift : fix eos checking (#4197)
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context as its argument.
Aaryaman Vasishta [Fri, 24 Nov 2023 07:52:39 +0000 (16:52 +0900)]
readme : use PATH for Windows ROCm (#4195)
* Update README.md to use PATH for Windows ROCm
* Update README.md
* Update README.md
Haohui Mai [Thu, 23 Nov 2023 21:56:53 +0000 (13:56 -0800)]
Fix incorrect format strings and uninitialized variables. (#4133)
* Fix incorrect format strings and uninitialized variables.
* Address comments
* Add the missing include statement
Georgi Gerganov [Thu, 23 Nov 2023 17:07:56 +0000 (19:07 +0200)]
llama : KV cache view API + better KV cache management (#4170)
* llama : keep track of used KV cells + better KV cache management
* llama : zero KV cache used upon clear
ggml-ci
* llama : allow exporting a view of the KV cache (#4180)
* Allow exporting a view of the KV cache
* Allow dumping the sequences per cell in common
* Track max contiguous cells value and position as well
* Fix max contiguous empty cells index calculation
Make dump functions deal with lengths or sequence counts > 10 better
* Fix off by one error in dump_kv_cache_view
* Add doc comments for KV cache view functions
Eliminate cell sequence struct; use llama_seq_id directly
Minor cleanups
* common : add -dkvc arg for enabling kv cache dumps
---------
Co-authored-by: Kerfuffle <redacted>
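A usage sketch of the view API added in the commit above, assuming the llama_kv_cache_view_init / _update / _free entry points and a few of the counters mentioned in the bullets (used_cells, token_count, max_contiguous); treat the exact field names as assumptions and check llama.h of the actual build.
```cpp
#include <cstdio>

#include "llama.h"

// Periodically inspect KV cache occupancy while decoding (sketch).
static void report_kv_usage(llama_context * ctx) {
    // tracking one sequence per cell is enough for this example
    llama_kv_cache_view view = llama_kv_cache_view_init(ctx, /*n_max_seq =*/ 1);

    llama_kv_cache_view_update(ctx, &view);
    printf("KV cells used: %d, tokens: %d, max contiguous free run: %d\n",
           view.used_cells, view.token_count, view.max_contiguous);

    llama_kv_cache_view_free(&view);
}
```
The same counters are what the -dkvc option dumps from common.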
Georgi Gerganov [Thu, 23 Nov 2023 11:51:22 +0000 (13:51 +0200)]
readme : update hot topics
Daniel Bevenius [Thu, 23 Nov 2023 11:34:20 +0000 (12:34 +0100)]
examples : fix typo in parallel example doc comment (#4181)
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Thu, 23 Nov 2023 09:35:04 +0000 (11:35 +0200)]
docs : add llama-star arch idea
Galunid [Tue, 21 Nov 2023 15:22:30 +0000 (16:22 +0100)]
stablelm : simplify + speedup generation (#4153)
Galunid [Mon, 20 Nov 2023 18:30:00 +0000 (19:30 +0100)]
finetune - update readme to mention llama support only (#4148)
Aaryaman Vasishta [Mon, 20 Nov 2023 15:02:46 +0000 (00:02 +0900)]
readme : update ROCm Windows instructions (#4122)
* Update README.md
* Update README.md
Co-authored-by: Jared Van Bortel <redacted>
---------
Co-authored-by: Jared Van Bortel <redacted>
Seb C [Mon, 20 Nov 2023 13:56:59 +0000 (00:26 +1030)]
main : Add ChatML functionality to main example (#4046)
Co-authored-by: Sebastian Cramond <redacted>
Galunid [Mon, 20 Nov 2023 10:35:47 +0000 (11:35 +0100)]
ci : add flake8 to github actions (python linting) (#4129)
Disabled rules:
* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned
* E211 Whitespace before '(' - disabled because we often use 'C' Style where values are aligned
* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned
* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard
* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned
* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned
* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard
* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard
* E266 Too many leading '#' for block comment - sometimes used as "section" separator
* E501 Line too long - disabled because it's broken so often it seems like a standard
* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use # noqa instead)
* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use # noqa instead)
Branden Butler [Mon, 20 Nov 2023 09:50:04 +0000 (03:50 -0600)]
speculative : fix prompt tokenization in speculative example (#4025)
* Support special tokens and not adding BOS to prompt in speculative
* Adapt to new should_add_bos function
* Ensure tgt and dft have same add_bos setting
Georgi Gerganov [Sun, 19 Nov 2023 17:16:07 +0000 (19:16 +0200)]
Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"
This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.
Clark Saben [Sun, 19 Nov 2023 16:56:38 +0000 (11:56 -0500)]
finetune : add --n-gpu-layers flag info to --help (#4128)
SoftwareRenderer [Sun, 19 Nov 2023 16:54:10 +0000 (11:54 -0500)]
server : relay error messages (#4131)
kchro3 [Sun, 19 Nov 2023 16:52:57 +0000 (08:52 -0800)]
common : comma should be semicolon (#4137)
Georgi Gerganov [Sun, 19 Nov 2023 16:50:49 +0000 (18:50 +0200)]
gitignore : tokenize
slaren [Sun, 19 Nov 2023 10:10:52 +0000 (11:10 +0100)]
gguf-py : export chat templates (#4125)
* gguf-py : export chat templates
* llama.cpp : escape new lines in gguf kv info prints
* gguf-py : bump version
* gguf-py : check chat_template type
* gguf-py : initialize chat_template
Kerfuffle [Sat, 18 Nov 2023 21:48:17 +0000 (14:48 -0700)]
tokenize example: Respect normal add BOS token behavior (#4126)
Allow building with Makefile
Galunid [Sat, 18 Nov 2023 20:08:33 +0000 (21:08 +0100)]
scripts : Remove missed baichuan convert script (#4127)
Kerfuffle [Sat, 18 Nov 2023 15:11:18 +0000 (08:11 -0700)]
Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124)
* ggml-cuda.cu: Clean up warnings when compiling with clang
* ggml-cuda.cu: Move static items into anonymous namespace
* ggml-cuda.cu: Fix use of namespace start macro
* Revert "ggml-cuda.cu: Fix use of namespace start macro"
This reverts commit 26c11490266c096e3e5731e05270a8f73a5b2874.
* Revert "ggml-cuda.cu: Move static items into anonymous namespace"
This reverts commit e29757e0f7535d1ac314300f0324684cc785e06c.
slaren [Fri, 17 Nov 2023 19:39:11 +0000 (20:39 +0100)]
llama : increase max nodes (#4115)
Roger Meier [Fri, 17 Nov 2023 16:11:23 +0000 (17:11 +0100)]
build : support ppc64le build for make and CMake (#3963)
* build: support ppc64le build for make and CMake
* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 17 Nov 2023 16:01:38 +0000 (18:01 +0200)]
tokenize : fix trailing whitespace
zakkor [Fri, 17 Nov 2023 15:36:44 +0000 (17:36 +0200)]
examples : add tokenize (#4039)
Don Mahurin [Fri, 17 Nov 2023 15:32:34 +0000 (07:32 -0800)]
convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089)
Co-authored-by: Don Mahurin <@>