slaren [Thu, 30 Nov 2023 21:42:23 +0000 (22:42 +0100)]
convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258)
tarcey [Thu, 30 Nov 2023 21:40:23 +0000 (22:40 +0100)]
llama : fix typical sampling (#4261)
Typical sampling was broken because after copying new_candidates into candidates, the "sorted" bool was left at "true", but the new data is no longer sorted according to probability. The patch sets "sorted" to false.
Test: Generating with temp=0.0001 (approx. argmax) should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).
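A minimal self-contained sketch of the fix described above (simplified types, not the actual llama.cpp structs): once the locally-typical subset replaces the candidate list, the entries are ordered by typicality rather than probability, so the sorted flag must be cleared.
```cpp
#include <utility>
#include <vector>

struct token_data  { int id; float p; };                            // stand-in for llama_token_data
struct token_array { std::vector<token_data> data; bool sorted; };  // stand-in for the candidates array

// new_candidates holds the tokens kept by typical sampling, which are
// generally *not* in probability order.
static void apply_typical_sketch(token_array & candidates, std::vector<token_data> new_candidates) {
    candidates.data   = std::move(new_candidates);
    candidates.sorted = false; // the fix: previously this was left at true
}
```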
rhjdvsgsgks [Thu, 30 Nov 2023 20:50:40 +0000 (20:50 +0000)]
py : fix oai proxy (#3972)
* fix oai proxy
fix generation not being stopped when the bot stops talking in chat mode
fix case where `slot_id` may not exist
respond to CORS (and preflight) requests
* oai proxy: workaround for some clients (such as Chatbox)
* use stop as separator to replace hardcoded `\n`
Georgi Gerganov [Wed, 29 Nov 2023 09:00:17 +0000 (11:00 +0200)]
examples : add readme files
Peter Sugihara [Wed, 29 Nov 2023 07:16:34 +0000 (23:16 -0800)]
readme : add FreeChat (#4248)
Jared Van Bortel [Tue, 28 Nov 2023 09:51:11 +0000 (04:51 -0500)]
ggml : restore abort() in GGML_ASSERT (#4242)
Georgi Gerganov [Tue, 28 Nov 2023 08:32:03 +0000 (10:32 +0200)]
ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (#4240)
* ggml : use blas even if src0 is not F32
* llama : use n_threads_batch only when n_tokens >= 32
ggml-ci
* llama : revert n_threads_batch logic
ggml-ci
bandoti [Mon, 27 Nov 2023 19:25:42 +0000 (15:25 -0400)]
cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)
* Split CPP generation from build-info query
* Remove blank lines
* Add BUILD_SHARED_LIBS option
Kasumi [Mon, 27 Nov 2023 17:39:42 +0000 (01:39 +0800)]
readme : add Amica to UI list (#4230)
Bailey Chittle [Mon, 27 Nov 2023 14:56:52 +0000 (09:56 -0500)]
examples : iOS example with swift ui (#4159)
* copy to llama.cpp as subdir
* attempt enabling metal, fails
* ggml metal compiles!
* Update README.md
* initial conversion to new format, utf8 errors?
* bug fixes, but now has an invalid memory access :(
* added O3, now has insufficient memory access
* begin sync with master
* update to match latest code, new errors
* fixed it!
* fix for loop conditionals, increase result size
* fix current workflow errors
* attempt a llama.swiftui workflow
* Update .github/workflows/build.yml
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Jared Van Bortel [Mon, 27 Nov 2023 03:58:43 +0000 (22:58 -0500)]
ggml : fix -Warray-bounds warning with gcc (#4231)
Georgi Gerganov [Sun, 26 Nov 2023 19:51:46 +0000 (21:51 +0200)]
lookahead : support `-n -1` infinite generation
Georgi Gerganov [Sun, 26 Nov 2023 18:42:51 +0000 (20:42 +0200)]
readme : update hot topics
Georgi Gerganov [Sun, 26 Nov 2023 18:33:07 +0000 (20:33 +0200)]
lookahead : add example for lookahead decoding (#4207)
* lookahead : init
* lookahead : generate and store n-grams
* lookahead : use loop instead of recursion to generate n-grams
* lookahead : initial working implementation
* lookahead : filter repeating n-grams
* lookahead : use deterministic init
* lookahead : add to Makefile
* lookahead : fix a bug in the seq_id of the lookahead tokens
* lookahead : add comments
---------
Co-authored-by: slaren <redacted>
Xiao-Yong Jin [Sun, 26 Nov 2023 08:30:02 +0000 (02:30 -0600)]
metal : fix yarn (#4220)
get the correct n_orig_ctx in metal
Galunid [Sat, 25 Nov 2023 21:45:02 +0000 (22:45 +0100)]
scripts : Use mmap in torch load (#4202)
* Use mmap in torch load, prefer .bin files when loading
* Revert .bin > .safetensors preference
Marcus Dunn [Sat, 25 Nov 2023 16:58:23 +0000 (08:58 -0800)]
llama : grammar `reserve` space in `decode_utf8` (#4210)
* reserve space for codepoints
* improvement for the appended 0
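A hedged illustration of the reservation idea (hypothetical helper, not the grammar code in llama.cpp): size the output for the worst case of one code point per byte, plus one slot for the appended 0.
```cpp
#include <cstdint>
#include <string>
#include <vector>

static std::vector<uint32_t> decode_utf8_sketch(const std::string & src) {
    std::vector<uint32_t> code_points;
    // Worst case is one code point per byte; +1 for the terminating 0 appended below.
    code_points.reserve(src.size() + 1);
    for (size_t i = 0; i < src.size(); ) {
        const uint8_t lead = static_cast<uint8_t>(src[i]);
        const int     len  = lead < 0x80 ? 1 : lead < 0xE0 ? 2 : lead < 0xF0 ? 3 : 4;
        uint32_t cp = len == 1 ? lead : lead & (0x7F >> len);
        for (int j = 1; j < len && i + j < src.size(); ++j) {
            cp = (cp << 6) | (static_cast<uint8_t>(src[i + j]) & 0x3F);
        }
        code_points.push_back(cp);
        i += len;
    }
    code_points.push_back(0); // sentinel expected by the grammar matcher
    return code_points;
}
```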
crasm [Sat, 25 Nov 2023 15:47:07 +0000 (10:47 -0500)]
Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)
Georgi Gerganov [Sat, 25 Nov 2023 10:02:13 +0000 (12:02 +0200)]
readme : update hot topics
Georgi Gerganov [Sat, 25 Nov 2023 09:29:06 +0000 (11:29 +0200)]
server : OAI API compatibility (#4198)
* Add openai-compatible POST /v1/chat/completions API endpoint to server example
* fix code style
* Update server README.md
* Improve server README.md
* Fix server.cpp code style according to review
* server : some style changes
* server : indentation
* server : enable special tokens during tokenization by default
* server : minor code style
* server : change random string generator
* straightforward /v1/models endpoint
---------
Co-authored-by: kir-gadjello <redacted>
Co-authored-by: Tobi Lütke <redacted>
slaren [Fri, 24 Nov 2023 17:10:01 +0000 (18:10 +0100)]
llama : set metal log callback correctly (#4204)
slaren [Fri, 24 Nov 2023 17:04:31 +0000 (18:04 +0100)]
ggml-cuda : support stablelm rope (#4156)
* ggml-cuda : support stablelm rope
* remove unused freq_base kernel parameter
* add n_dims parameter to llm_build_k_shift, default to n_rot via overload
* llama : fix llm_build_k_shift args
---------
Co-authored-by: Georgi Gerganov <redacted>
Galunid [Fri, 24 Nov 2023 14:02:49 +0000 (15:02 +0100)]
convert : fix tensors using grad in some models (#4173)
eastriver [Fri, 24 Nov 2023 09:25:10 +0000 (18:25 +0900)]
main.swift : fix eos checking (#4197)
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context as its parameter.
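A hedged C-level sketch of the fix (the example itself is Swift; the llama.h names below follow the API of this period and should be checked against the header): obtain the model from the context before asking for the EOS token.
```cpp
#include "llama.h"

// Returns true when the sampled token is the model's end-of-sequence token.
static bool is_eos(const llama_context * ctx, llama_token id) {
    // llama_token_eos() expects the model, not the context.
    return id == llama_token_eos(llama_get_model(ctx));
}
```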
Aaryaman Vasishta [Fri, 24 Nov 2023 07:52:39 +0000 (16:52 +0900)]
readme : use PATH for Windows ROCm (#4195)
* Update README.md to use PATH for Windows ROCm
* Update README.md
* Update README.md
Haohui Mai [Thu, 23 Nov 2023 21:56:53 +0000 (13:56 -0800)]
Fix incorrect format strings and uninitialized variables. (#4133)
* Fix incorrect format strings and uninitialized variables.
* Address comments
* Add the missing include statement
Georgi Gerganov [Thu, 23 Nov 2023 17:07:56 +0000 (19:07 +0200)]
llama : KV cache view API + better KV cache management (#4170)
* llama : keep track of used KV cells + better KV cache management
* llama : zero KV cache used upon clear
ggml-ci
* llama : allow exporting a view of the KV cache (#4180)
* Allow exporting a view of the KV cache
* Allow dumping the sequences per cell in common
* Track max contiguous cells value and position as well
* Fix max contiguous empty cells index calculation
Make dump functions handle lengths or sequence counts > 10 better
* Fix off by one error in dump_kv_cache_view
* Add doc comments for KV cache view functions
Eliminate cell sequence struct; use llama_seq_id directly
Minor cleanups
* common : add -dkvc arg for enabling kv cache dumps
---------
Co-authored-by: Kerfuffle <redacted>
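A hedged usage sketch of the KV cache view API introduced by this change (function names as added to llama.h and common/common.h here; verify signatures against the headers):
```cpp
#include "common.h"
#include "llama.h"

// Take a snapshot of KV cache occupancy and print it, as the -dkvc flag does in the examples.
static void inspect_kv_cache(llama_context * ctx) {
    llama_kv_cache_view view = llama_kv_cache_view_init(ctx, /*n_max_seq =*/ 4);

    llama_kv_cache_view_update(ctx, &view); // refresh after each decode step
    dump_kv_cache_view(view);               // compact per-cell summary
    dump_kv_cache_view_seqs(view, 40);      // sequence ids per cell, 40 cells per row

    llama_kv_cache_view_free(&view);
}
```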
Georgi Gerganov [Thu, 23 Nov 2023 11:51:22 +0000 (13:51 +0200)]
readme : update hot topics
Daniel Bevenius [Thu, 23 Nov 2023 11:34:20 +0000 (12:34 +0100)]
examples : fix typo in parallel example doc comment (#4181)
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Thu, 23 Nov 2023 09:35:04 +0000 (11:35 +0200)]
docs : add llama-star arch idea
Galunid [Tue, 21 Nov 2023 15:22:30 +0000 (16:22 +0100)]
stablelm : simplify + speedup generation (#4153)
Galunid [Mon, 20 Nov 2023 18:30:00 +0000 (19:30 +0100)]
finetune - update readme to mention llama support only (#4148)
Aaryaman Vasishta [Mon, 20 Nov 2023 15:02:46 +0000 (00:02 +0900)]
readme : update ROCm Windows instructions (#4122)
* Update README.md
* Update README.md
Co-authored-by: Jared Van Bortel <redacted>
---------
Co-authored-by: Jared Van Bortel <redacted>
Seb C [Mon, 20 Nov 2023 13:56:59 +0000 (00:26 +1030)]
main : Add ChatML functionality to main example (#4046)
Co-authored-by: Sebastian Cramond <redacted>
Galunid [Mon, 20 Nov 2023 10:35:47 +0000 (11:35 +0100)]
ci : add flake8 to github actions (python linting) (#4129)
Disabled rules:
* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned
* E211 Whitespace before '(' - disabled because we often use 'C' Style where values are aligned
* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned
* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard
* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned
* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned
* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard
* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard
* E266 Too many leading '#' for block comment - sometimes used as "section" separator
* E501 Line too long - disabled because it's broken so often it seems like a standard
* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use # noqa instead)
* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use # noqa instead)
Branden Butler [Mon, 20 Nov 2023 09:50:04 +0000 (03:50 -0600)]
speculative : fix prompt tokenization in speculative example (#4025)
* Support special tokens and not adding BOS to prompt in speculative
* Adapt to new should_add_bos function
* Ensure tgt and dft have same add_bos setting
Georgi Gerganov [Sun, 19 Nov 2023 17:16:07 +0000 (19:16 +0200)]
Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"
This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.
Clark Saben [Sun, 19 Nov 2023 16:56:38 +0000 (11:56 -0500)]
finetune : add --n-gpu-layers flag info to --help (#4128)
SoftwareRenderer [Sun, 19 Nov 2023 16:54:10 +0000 (11:54 -0500)]
server : relay error messages (#4131)
kchro3 [Sun, 19 Nov 2023 16:52:57 +0000 (08:52 -0800)]
common : comma should be semicolon (#4137)
Georgi Gerganov [Sun, 19 Nov 2023 16:50:49 +0000 (18:50 +0200)]
gitignore : tokenize
slaren [Sun, 19 Nov 2023 10:10:52 +0000 (11:10 +0100)]
gguf-py : export chat templates (#4125)
* gguf-py : export chat templates
* llama.cpp : escape new lines in gguf kv info prints
* gguf-py : bump version
* gguf-py : check chat_template type
* gguf-py : initialize chat_template
Kerfuffle [Sat, 18 Nov 2023 21:48:17 +0000 (14:48 -0700)]
tokenize example: Respect normal add BOS token behavior (#4126)
Allow building with Makefile
Galunid [Sat, 18 Nov 2023 20:08:33 +0000 (21:08 +0100)]
scripts : Remove missed baichuan convert script (#4127)
Kerfuffle [Sat, 18 Nov 2023 15:11:18 +0000 (08:11 -0700)]
Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124)
* ggml-cuda.cu: Clean up warnings when compiling with clang
* ggml-cuda.cu: Move static items into anonymous namespace
* ggml-cuda.cu: Fix use of namespace start macro
* Revert "ggml-cuda.cu: Fix use of namespace start macro"
This reverts commit 26c11490266c096e3e5731e05270a8f73a5b2874.
* Revert "ggml-cuda.cu: Move static items into anonymous namespace"
This reverts commit e29757e0f7535d1ac314300f0324684cc785e06c.
slaren [Fri, 17 Nov 2023 19:39:11 +0000 (20:39 +0100)]
llama : increase max nodes (#4115)
Roger Meier [Fri, 17 Nov 2023 16:11:23 +0000 (17:11 +0100)]
build : support ppc64le build for make and CMake (#3963)
* build: support ppc64le build for make and CMake
* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 17 Nov 2023 16:01:38 +0000 (18:01 +0200)]
tokenize : fix trailing whitespace
zakkor [Fri, 17 Nov 2023 15:36:44 +0000 (17:36 +0200)]
examples : add tokenize (#4039)
Don Mahurin [Fri, 17 Nov 2023 15:32:34 +0000 (07:32 -0800)]
convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089)
Co-authored-by: Don Mahurin <@>
John [Fri, 17 Nov 2023 15:24:30 +0000 (16:24 +0100)]
py : Falcon HF compatibility (#4104)
Falcon HF compatibility
Jannis Schönleber [Fri, 17 Nov 2023 15:24:07 +0000 (16:24 +0100)]
common : improve yaml log escaping (#4080)
* logging: improve escaping in yaml output
* logging: include review feedback
Huawei Lin [Fri, 17 Nov 2023 15:22:56 +0000 (10:22 -0500)]
llava : fix compilation warning that fread return value is not used (#4069)
Jiří Podivín [Fri, 17 Nov 2023 15:20:53 +0000 (16:20 +0100)]
py : remove superfluous import statements (#4076)
Signed-off-by: Jiri Podivin <redacted>
Co-authored-by: Jiri Podivin <redacted>
Jiří Podivín [Fri, 17 Nov 2023 15:19:16 +0000 (16:19 +0100)]
train : move number of gpu layers argument parsing to common/train.cpp (#4074)
- introduces help entry for the argument
- removes the '--gpu-layers' form to simplify usage and documentation.
Signed-off-by: Jiri Podivin <redacted>
Co-authored-by: Jiri Podivin <redacted>
slaren [Fri, 17 Nov 2023 15:17:37 +0000 (16:17 +0100)]
llama : add functions to get the model's metadata (#4013)
* llama : add functions to get the model's metadata
* format -> std::to_string
* better documentation
gwjr [Fri, 17 Nov 2023 14:48:19 +0000 (14:48 +0000)]
finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079)
* Remove logically superfluous assertions and order by dimension
* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()
* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace
* Add openBLAS support for sgemm() in compute_forward_out_prod()
Andrew Godfrey [Fri, 17 Nov 2023 10:23:11 +0000 (02:23 -0800)]
finetune : zero the loraB initial vectors (#4082)
* finetune : zero the loraB initial vectors
Without this, the first iteration starts out far from the base model instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper in using a different distribution for the other vector, in some cases).
* tabs to spaces
* Use ggml_set_zero instead of adding a new function
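A minimal sketch of the initialization described above (tensor names are placeholders, not the finetune example's actual variables): zero loraB so the adapter contributes nothing at step 0, keeping the random init only on loraA.
```cpp
#include "ggml.h"

static void init_lora_pair_sketch(struct ggml_tensor * loraA, struct ggml_tensor * loraB) {
    (void) loraA;          // loraA keeps its random initialization (done elsewhere)
    ggml_set_zero(loraB);  // loraB starts at zero, so loraA * loraB adds nothing at first
}
```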
Andrew Godfrey [Fri, 17 Nov 2023 08:01:15 +0000 (00:01 -0800)]
cuda : get_row_rounding F32 (#4095)
* Fix #4017
* Update ggml-cuda.cu
Co-authored-by: Jared Van Bortel <redacted>
* Update ggml-cuda.cu
Co-authored-by: Jared Van Bortel <redacted>
---------
Co-authored-by: Jared Van Bortel <redacted>
Georgi Gerganov [Fri, 17 Nov 2023 08:00:15 +0000 (10:00 +0200)]
llama : fix data units (#4101)
* llama : fix data units
ggml-ci
* Revert "llama : fix data units"
This reverts commit f5feac831fe225ed7f3db938d115732a49dccfc4.
* llama : disambiguate data units
ggml-ci
Kerfuffle [Fri, 17 Nov 2023 02:14:37 +0000 (19:14 -0700)]
Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)
* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.
* Respect add_bos_token GGUF metadata value
* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
texmex76 [Thu, 16 Nov 2023 15:01:48 +0000 (16:01 +0100)]
gguf : fix potential infinite loops while parsing (#4100)
Co-authored-by: Bernhard Gstrein <redacted>
Jared Van Bortel [Wed, 15 Nov 2023 16:34:47 +0000 (11:34 -0500)]
llama : restore prefix space in llama tokenizer (#4081)
slaren [Wed, 15 Nov 2023 12:58:13 +0000 (13:58 +0100)]
ggml-cuda : increase max graph size (#4084)
Michael Potter [Tue, 14 Nov 2023 17:34:41 +0000 (09:34 -0800)]
Fix MacOS Sonoma model quantization (#4052)
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Galunid [Tue, 14 Nov 2023 10:17:12 +0000 (11:17 +0100)]
stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
afrideva [Tue, 14 Nov 2023 01:03:40 +0000 (17:03 -0800)]
convert.py: also look for plain model.safetensors (#4043)
* add safetensors to convert.py help message
* Check for single-file safetensors model
* Update convert.py "model" option help message
* revert convert.py help message change
M. Yusuf Sarıgöz [Mon, 13 Nov 2023 15:20:52 +0000 (18:20 +0300)]
llava : fix regression for square images in #3613 (#4056)
Georgi Gerganov [Mon, 13 Nov 2023 14:55:52 +0000 (16:55 +0200)]
ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)
ggml-ci
Georgi Gerganov [Mon, 13 Nov 2023 12:18:08 +0000 (14:18 +0200)]
readme : update hot topics
Georgi Gerganov [Mon, 13 Nov 2023 12:16:23 +0000 (14:16 +0200)]
sync : ggml (backend v2) (#3912)
* sync : ggml (backend v2) (wip)
* sync : migrate examples and llama.cpp to dynamic graphs (wip)
* sync : update tests + fix max op params to 64
ggml-ci
* sync : ggml-cuda
ggml-ci
* llama : fix save/load state context size
ggml-ci
* sync : try to fix build on tvOS
* sync : pass custom graph sizes in training examples
* sync : update graph copies to new ggml API
* sync : update sync-ggml.sh with new files
* scripts : fix header in sync script
* train : fix context size calculations
* llama : increase inference graph size up to 4096 nodes
* train : allocate grads for backward graphs
* train : allocate grads for gb_tmp
Kerfuffle [Mon, 13 Nov 2023 08:58:15 +0000 (01:58 -0700)]
Add ReLU and SQR CUDA ops to (partially) fix Persimmon offloading (#4041)
* Add ReLU and SQR CUDA ops to fix Persimmon offloading
* Persimmon loader: More helpful error on CUDA/ROCM when offloading too many layers
Kerfuffle [Sun, 12 Nov 2023 23:39:37 +0000 (16:39 -0700)]
gguf-py: gguf_writer: Use bytearray to build metadata (#4051)
* gguf-py: gguf_writer: Use BytesIO to build metadata
* Use bytearray instead
Bump gguf-py package version
Richard Kiss [Sun, 12 Nov 2023 06:04:58 +0000 (22:04 -0800)]
Fix some documentation typos/grammar mistakes (#4032)
* typos
* Update examples/parallel/README.md
Co-authored-by: Kerfuffle <redacted>
---------
Co-authored-by: Kerfuffle <redacted>
M. Yusuf Sarıgöz [Sat, 11 Nov 2023 15:35:31 +0000 (18:35 +0300)]
Fix gguf-convert-endian script (#4037)
* Fix gguf-convert-endian script
* Bump version and update description
Alexey Parfenov [Sat, 11 Nov 2023 05:48:21 +0000 (05:48 +0000)]
server : fix crash when prompt exceeds context size (#3996)
Kerfuffle [Sat, 11 Nov 2023 05:04:50 +0000 (22:04 -0700)]
gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)
* gguf-py: Refactor and add file reading support
* Replay changes from #3871
Credit to @cebtenzzre for that pull
* Various type annotation fixes.
* sort imports with isort (again)
* Fix missing return statement in add_tensor
* style cleanup with flake8
* fix NamedTuple and Enum usage
* Fix an issue with state init in GGUFReader
Move examples to an examples/ directory
Clean up examples
Add an example of modifying keys in a GGUF file
Update documentation with info on examples
Try to support people importing gguf/gguf.py directly
* Damagage is not a word.
* Clean up gguf-py/examples/modify_gguf.py whitespace
Co-authored-by: Jared Van Bortel <redacted>
* Update gguf-py/examples/modify_gguf.py formatting
Co-authored-by: Jared Van Bortel <redacted>
* Update gguf-py/gguf/gguf_reader.py type hint
Co-authored-by: Jared Van Bortel <redacted>
* Make examples executable, formatting changes
* Add more information to GGUFReader and examples comments
* Include a gguf Python package version bump
* Add convert-gguf-endian.py script
* cleanup
* gguf-py : bump minor version
* Reorganize scripts
* Make GGUFReader endian detection less arbitrary
* Add JSON dumping support to gguf-dump.py
Which I kind of regret now
* A few gguf-dump.py cleanups
* Murder accidental tuple in gguf-py/scripts/gguf-dump.py
Co-authored-by: Jared Van Bortel <redacted>
* cleanup
* constants : remove unneeded type annotations
* fix python 3.8 compat
* Set up gguf- scripts in pyproject.toml
* And include scripts/__init__.py, derp
* convert.py: We can't currently support Q8_0 on big endian.
* gguf-py: SpecialVocab: Always try available sources for special token ids
gguf-py: SpecialVocab: Try to load merges from merges.txt if not in tokenizer.json
gguf-py: SpecialVocab: Add 'add_bos_token' type bools to GGUF metadata
* cleanup
* Promote add_X_token to GGUF metadata for BOS and EOS
---------
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Jared Van Bortel <redacted>
Jhen-Jie Hong [Fri, 10 Nov 2023 22:49:33 +0000 (06:49 +0800)]
server : allow continue edit on completion mode (#3950)
* server : allow continue edit on completion mode
* server : handle abort case in runCompletion
* server : style improvement
Galunid [Fri, 10 Nov 2023 13:24:54 +0000 (14:24 +0100)]
Unbreak persimmon after #3837 (#4010)
Galunid [Thu, 9 Nov 2023 10:09:29 +0000 (11:09 +0100)]
scripts: Generalize convert scripts (#3838)
* Replace convert-*-hf-to-gguf.py files with convert-hf-to-gguf.py
Mihai [Thu, 9 Nov 2023 02:00:34 +0000 (04:00 +0200)]
server : add min_p param (#3877)
* Update server.cpp with min_p after it was introduced in https://github.com/ggerganov/llama.cpp/pull/3841
* Use spaces instead of tabs
* Update index.html.hpp after running deps.sh
* Fix test - fix line ending
slaren [Wed, 8 Nov 2023 12:15:14 +0000 (13:15 +0100)]
ggml-alloc : fix backend assignments of views (#3982)
Jared Van Bortel [Tue, 7 Nov 2023 17:43:04 +0000 (12:43 -0500)]
gguf : track writer state, free unneeded tensors, cleanup (#3871)
Georgi Gerganov [Tue, 7 Nov 2023 17:25:32 +0000 (19:25 +0200)]
make : do not add linker flags when compiling static llava lib (#3977)
xaedes [Tue, 7 Nov 2023 08:04:51 +0000 (09:04 +0100)]
ggml : fix backward rope after YaRN (#3974)
* fix backward process of rope
rope backward process was broken after YaRN RoPE (#2268) implementation, due to missing changes in backward functions.
the code for the backward process is nearly identical to the forward process:
the only difference is the sign of the sin-values.
to avoid future regressions remove the near-duplicate backward functions and reuse the forward code:
for this a new function argument `bool forward` was added to `ggml_compute_forward_rope_f32` and `ggml_compute_forward_rope_f16`.
the sin-values will be negated when forward is false.
* fix finetune rope call to use correct default attn_factor of 1.0f
* remove unused `ggml_rope_xpos_back`
it is better to have only one `ggml_rope_back` function that accepts all rope parameters, so that `ggml_compute_backward` can propagate all parameters without having to switch between different rope_back variants.
* fix comments explaining the sine sign in ggml_forward_rope
* add missing function arguments in declaration
* fix function argument type in declaration
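A toy, self-contained sketch of the shared-code idea described in this entry (not the actual ggml_compute_forward_rope_f32 implementation): forward and backward rotation use one path, differing only in the sign of the sin term.
```cpp
#include <cmath>

// Rotate one (x0, x1) pair by theta; the backward pass applies the inverse rotation,
// which is the same formula with the sin term negated.
static void rope_rotate_pair(float & x0, float & x1, float theta, bool forward) {
    const float cos_t = cosf(theta);
    const float sin_t = forward ? sinf(theta) : -sinf(theta);

    const float r0 = x0 * cos_t - x1 * sin_t;
    const float r1 = x0 * sin_t + x1 * cos_t;
    x0 = r0;
    x1 = r1;
}
```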
Matthew Tejo [Tue, 7 Nov 2023 07:43:59 +0000 (23:43 -0800)]
Use params when loading models in llava-cli (#3976)
llava-cli was loading models with default params and ignoring settings
from the cli. This switches to a generic function to load the params
from the cli options.
Meng Zhang [Tue, 7 Nov 2023 06:49:08 +0000 (22:49 -0800)]
cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)
* prototyping the idea of supporting running on CPU for a GGML_USE_CUBLAS=ON build
* doc: add comments to ggml_cublas_loaded()
* fix defined(...)
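A hedged sketch of how a caller might use the hook mentioned above (ggml_cublas_loaded() is the function added by this PR; exact call sites in llama.cpp may differ): only offload when cuBLAS actually initialized, otherwise stay on the CPU path.
```cpp
#ifdef GGML_USE_CUBLAS
#include "ggml-cuda.h"
#endif

static bool use_gpu_offload() {
#ifdef GGML_USE_CUBLAS
    return ggml_cublas_loaded(); // false when no usable CUDA device was found at runtime
#else
    return false;
#endif
}
```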
Damian Stewart [Mon, 6 Nov 2023 21:36:23 +0000 (22:36 +0100)]
llava : expose as a shared library for downstream projects (#3613)
* wip llava python bindings compatibility
* add external llava API
* add base64 in-prompt image support
* wip refactor image loading
* refactor image load out of llava init
* cleanup
* further cleanup; move llava-cli into its own file and rename
* move base64.hpp into common/
* collapse clip and llava libraries
* move llava into its own subdir
* wip
* fix bug where base64 string was not removed from the prompt
* get libllava to output in the right place
* expose llava methods in libllama.dylib
* cleanup memory usage around clip_image_*
* cleanup and refactor *again*
* update headerdoc
* build with cmake, not tested (WIP)
* Editorconfig
* Editorconfig
* Build with make
* Build with make
* Fix cyclical deps on Windows
* attempt to fix build on Windows
* attempt to fix build on Windows
* Upd TODOs
* attempt to fix build on Windows+CUDA
* Revert changes in cmake
* Fix according to review comments
* Support building as a shared library
* address review comments
---------
Co-authored-by: M. Yusuf Sarıgöz <redacted>
Co-authored-by: Jared Van Bortel <redacted>
slaren [Sun, 5 Nov 2023 17:45:16 +0000 (18:45 +0100)]
ggml-cuda : fix f16 mul mat (#3961)
* ggml-cuda : fix f16 mul mat
ggml-ci
* silence common.cpp warning (bonus)
Kerfuffle [Sun, 5 Nov 2023 17:06:06 +0000 (10:06 -0700)]
Allow common process_escapes to handle \x sequences (#3928)
* Allow common process_escapes to handle \x sequences
* Fix edge case when second hex digit is NUL
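A self-contained sketch of \x escape parsing with the edge case called out above (hypothetical helper, not the actual common.cpp code): only emit a byte when both hex digits are present and valid, and treat 0x00 like any other value.
```cpp
#include <string>

static int hex_val(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

static std::string process_escapes_sketch(const std::string & in) {
    std::string out;
    for (size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '\\' && i + 1 < in.size() && in[i + 1] == 'x') {
            const int hi = i + 2 < in.size() ? hex_val(in[i + 2]) : -1;
            const int lo = i + 3 < in.size() ? hex_val(in[i + 3]) : -1;
            if (hi >= 0 && lo >= 0) {
                out.push_back(static_cast<char>((hi << 4) | lo)); // works for 0x00 as well
                i += 3;
                continue;
            }
        }
        out.push_back(in[i]); // not a complete \xNN escape: copy through unchanged
    }
    return out;
}
```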
Thái Hoàng Tâm [Sun, 5 Nov 2023 16:15:27 +0000 (23:15 +0700)]
server : fix typo for --alias shortcut from -m to -a (#3958)
Jared Van Bortel [Sun, 5 Nov 2023 15:08:57 +0000 (10:08 -0500)]
cuda : fix disabling device with --tensor-split 1,0 (#3951)
Co-authored-by: slaren <redacted>
Meng Zhang [Sun, 5 Nov 2023 12:40:08 +0000 (04:40 -0800)]
llama : mark LLM_ARCH_STARCODER as full offload supported (#3945)
as done in https://github.com/ggerganov/llama.cpp/pull/3827
Eve [Sun, 5 Nov 2023 08:03:09 +0000 (08:03 +0000)]
cmake : MSVC instruction detection (fixed up #809) (#3923)
* Add detection code for avx
* Only check hardware when option is ON
* Modify per code review suggestions
* Local builds will detect CPU
* Fixes CMake style to use lowercase like everywhere else
* cleanup
* fix merge
* linux/gcc version for testing
* msvc combines avx2 and fma into /arch:AVX2 so check for both
* cleanup
* msvc only version
* style
* Update FindSIMD.cmake
---------
Co-authored-by: Howard Su <redacted>
Co-authored-by: Jeremy Dunn <redacted>
Eve [Sun, 5 Nov 2023 07:46:44 +0000 (07:46 +0000)]
ci : use intel sde when ci cpu doesn't support avx512 (#3949)
slaren [Sun, 5 Nov 2023 07:12:13 +0000 (08:12 +0100)]
cuda : revert CUDA pool stuff (#3944)
* Revert "cuda : add ROCM aliases for CUDA pool stuff (#3918)"
This reverts commit 629f917cd6b96ba1274c49a8aab163b1b189229d.
* Revert "cuda : use CUDA memory pool with async memory allocation/deallocation when available (#3903)"
This reverts commit d6069051de7165a4e06662c89257f5d2905bb156.
ggml-ci
Kerfuffle [Sat, 4 Nov 2023 22:20:34 +0000 (16:20 -0600)]
gguf-py: Support 01.AI Yi models (#3943)
Peter Sugihara [Fri, 3 Nov 2023 19:18:18 +0000 (12:18 -0700)]
metal : round up to 16 to fix MTLDebugComputeCommandEncoder assertion (#3938)
Xiao-Yong Jin [Fri, 3 Nov 2023 18:00:31 +0000 (13:00 -0500)]
ggml-metal: fix yarn rope (#3937)
slaren [Fri, 3 Nov 2023 11:13:09 +0000 (12:13 +0100)]
ggml-cuda : move row numbers to x grid dim in mmv kernels (#3921)