git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Erik Garrison [Thu, 21 Dec 2023 19:45:32 +0000 (13:45 -0600)]
cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449)
* AMD ROCm: handle UMA memory VRAM expansions
This resolves #2797 by allowing ROCm AMD GPU users with a UMA to
dynamically expand the VRAM allocated to the GPU.
Without this, AMD ROCm users with shared CPU/GPU memory usually are
stuck with the BIOS-set (or fixed) framebuffer VRAM, making it
impossible to load more than 1-2 layers.
Note that the model is duplicated in RAM because it's loaded once for
the CPU and then copied into a second set of allocations that are
managed by the HIP UMA system. We can fix this later.
* clarify build process for ROCm on linux with cmake
* avoid using deprecated ROCm hipMallocHost
* keep simplifying the change required for UMA
* cmake: enable UMA-compatible allocation when LLAMA_HIP_UMA=ON
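The gist of the change is a pair of allocation defines; a minimal sketch of the idea, assuming the cmake flag LLAMA_HIP_UMA maps to a GGML_HIP_UMA-style compile define (the exact macro names in ggml-cuda.cu may differ):
```cpp
// Hypothetical sketch: with LLAMA_HIP_UMA=ON, route device allocations through
// HIP managed (unified) memory so the GPU can grow into system RAM instead of
// being capped at the BIOS-reserved framebuffer carve-out.
#if defined(GGML_USE_HIPBLAS) && defined(GGML_HIP_UMA)
#define cudaMalloc hipMallocManaged   // allocate from the unified pool
#else
#define cudaMalloc hipMalloc          // classic fixed-VRAM allocation
#endif
// and, per the bullet above, the non-deprecated host allocator:
#define cudaMallocHost(ptr, size) hipHostMalloc(ptr, size, hipHostMallocDefault)
```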
arlo-phoenix [Thu, 21 Dec 2023 19:13:25 +0000 (20:13 +0100)]
ggml-cuda: Fix HIP build by adding define for __trap (#4569)
Regression of commit 139882392258671ffe5acdfcadc0bc08572d6eef: HIP doesn't have __trap, only abort.
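A sketch of the shape of that workaround, placed among the other CUDA-to-HIP compatibility defines (the exact define may differ):
```cpp
// HIP has no __trap() device intrinsic; abort() is the closest equivalent.
#if defined(GGML_USE_HIPBLAS)
#define __trap abort
#endif
```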
Jared Van Bortel [Thu, 21 Dec 2023 17:55:34 +0000 (12:55 -0500)]
common : remove incorrect --model-draft default (#4568)
Johannes Gäßler [Thu, 21 Dec 2023 17:42:59 +0000 (18:42 +0100)]
CUDA: mul_mat_id always on GPU for batches >= 32 (#4553)
Georgi Gerganov [Thu, 21 Dec 2023 17:27:14 +0000 (19:27 +0200)]
readme : update coding guidelines
howlger [Thu, 21 Dec 2023 17:07:34 +0000 (18:07 +0100)]
py : open merges file as 'utf-8' (#4566)
Otherwise, on Windows converting bling-phi-2-v0 (<https://huggingface.co/llmware/bling-phi-2-v0>) via convert-hf-to-gguf.py will fail with the following error:
```
Traceback (most recent call last):
File "C:\Users\User\git\gguf\convert-hf-to-gguf.py", line 1061, in <module>
model_instance.set_vocab()
File "C:\Users\User\git\gguf\convert-hf-to-gguf.py", line 52, in set_vocab
self._set_vocab_gpt2()
File "C:\Users\User\git\gguf\convert-hf-to-gguf.py", line 264, in _set_vocab_gpt2
special_vocab = gguf.SpecialVocab(dir_model, load_merges=True)
File "C:\Users\User\git\gguf\gguf\vocab.py", line 33, in __init__
self._load(Path(path))
File "C:\Users\User\git\gguf\gguf\vocab.py", line 81, in _load
self._try_load_merges_txt(path)
File "C:\Users\User\git\gguf\gguf\vocab.py", line 95, in _try_load_merges_txt
for line in fp:
File "C:\Users\User\miniconda3\envs\gguf\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1415: character maps to <undefined>
```
bobqianic [Thu, 21 Dec 2023 17:06:44 +0000 (17:06 +0000)]
cuda : better error message for ggml_get_rows (#4561)
* Update ggml-cuda.cu
* Update ggml-cuda.cu
* Update ggml-cuda.cu
---------
Co-authored-by: Georgi Gerganov <redacted>
slaren [Thu, 21 Dec 2023 17:02:30 +0000 (18:02 +0100)]
cuda : replace asserts in wrong architecture checks with __trap (#4556)
* cuda : replace asserts in wrong architecture checks with __trap
* make bad_arch noreturn, remove returns
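A minimal sketch of the resulting pattern, assuming the helper is named bad_arch as the bullets suggest:
```cpp
#include <cstdio>

// Called from kernels built for an architecture that cannot run the requested
// op; __trap() aborts the kernel with a device-side trap instead of a device
// assert, and noreturn lets callers omit dummy return statements.
static __device__ __attribute__((noreturn)) void bad_arch() {
    printf("ERROR: ggml-cuda was compiled without support for the current GPU architecture.\n");
    __trap();
}
```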
Johannes Gäßler [Thu, 21 Dec 2023 16:34:17 +0000 (17:34 +0100)]
llama : disable per-tensor info prints on model load (#4562)
LoganDark [Thu, 21 Dec 2023 09:59:27 +0000 (01:59 -0800)]
Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554)
Johannes Gäßler [Wed, 20 Dec 2023 14:41:22 +0000 (15:41 +0100)]
CUDA: Faster Mixtral prompt processing (#4538)
* CUDA: make MoE tensors contiguous for batch size>1
* Update ggml-cuda.cu
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Eric Sommerlade [Tue, 19 Dec 2023 16:17:01 +0000 (16:17 +0000)]
ggml : fixed check for _MSC_VER (#4535)
Co-authored-by: Eric Sommerlade <redacted>
arlo-phoenix [Mon, 18 Dec 2023 21:33:45 +0000 (22:33 +0100)]
ggml-cuda: Fix HIP build (#4528)
regression of #4490
Adds defines for two new datatypes: cublasComputeType_t and cudaDataType_t.
Currently uses the deprecated hipblasDatatype_t, since the newer HIP types are very recent.
Georgi Gerganov [Mon, 18 Dec 2023 18:17:43 +0000 (20:17 +0200)]
llama.swiftui : add tinyllama 1.1B F16
Georgi Gerganov [Mon, 18 Dec 2023 18:05:12 +0000 (20:05 +0200)]
llama.swiftui : add more models
Ebey Abraham [Mon, 18 Dec 2023 17:27:47 +0000 (17:27 +0000)]
llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)
* phi2 implementation
* fix breaking change
* phi-2 : various fixes
* phi-2 : use layer norm eps
* py : whitespaces
* llama : fix meta KV override bug
* convert : phi don't add BOS token
* convert : revert "added_tokens_decoder" change
* phi-2 : scale Q instead of KQ for better precision
* ggml : fix NeoX rope to rotate just first n_dims
* cuda : less diff in the rope_neox kernel
* ggml : add ggml_mul_mat_set_prec
ggml-ci
* Update ggml-cuda.cu
Co-authored-by: slaren <redacted>
* Update ggml-cuda.cu
Co-authored-by: slaren <redacted>
* cuda : ggml_cuda_op_mul_mat_cublas support F32 precision
* cuda : remove obsolete comment
---------
Co-authored-by: Ebey Abraham <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: slaren <redacted>
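The precision bullet deserves a note: scaling Q by 1/sqrt(n_embd_head) before the K·Q matmul keeps the much larger score matrix in a safe range, instead of scaling it down after the fact. A plain scalar sketch of the idea (illustrative, not the ggml graph code):
```cpp
#include <cmath>
#include <cstddef>

// Pre-scale the query so every attention score dot(q, k_i) is already divided
// by sqrt(d); with F16 accumulation the unscaled products can overflow first.
void scale_q(float * q, size_t d) {
    const float s = 1.0f / sqrtf((float) d);
    for (size_t i = 0; i < d; ++i) {
        q[i] *= s;
    }
}
```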
hankcs [Mon, 18 Dec 2023 13:14:58 +0000 (05:14 -0800)]
llama : fix try_override for bool_value which always returns true (#4519)
Jared Van Bortel [Mon, 18 Dec 2023 00:39:02 +0000 (19:39 -0500)]
decode : fix logits_valid for legacy API (#4516)
Georgi Gerganov [Sun, 17 Dec 2023 18:16:23 +0000 (20:16 +0200)]
readme : update hot topics
Georgi Gerganov [Sun, 17 Dec 2023 17:38:41 +0000 (19:38 +0200)]
llama.swiftui : add bench functionality (#4483)
* llama.swiftui : add bench button
* llama.swiftui : initial bench functionality
* force to use n_gpu_layers on simulator
* add download buttons & expose llamaState.loadModel
* update project.pbxproj
* comment #Preview & fix editorconfig check
* gitignore : xcode stuff
* llama.swiftui : UX improvements
* llama.swiftui : avoid data copy via "downloadTask"
* llama.swiftui : remove model from project
* llama : remove "mostly" from model infos
* llama.swiftui : improve bench
---------
Co-authored-by: jhen <redacted>
Jared Van Bortel [Sun, 17 Dec 2023 15:45:46 +0000 (10:45 -0500)]
gguf-py : fail fast on nonsensical special token IDs (#4489)
Matheus Gabriel Alves Silva [Sun, 17 Dec 2023 15:23:33 +0000 (12:23 -0300)]
build : Check the ROCm installation location (#4485)
* build : Check the ROCm installation location
* more generic approach
* fixup! It was returning the path instead of the command output
* fixup! Trailing whitespace
slaren [Sun, 17 Dec 2023 15:05:56 +0000 (16:05 +0100)]
finetune : keep allocs alive until all allocations are done (#4486)
olexiyb [Sun, 17 Dec 2023 15:02:16 +0000 (17:02 +0200)]
server : disable llm logs if SERVER_VERBOSE is off (#3792)
AdithyanI [Sun, 17 Dec 2023 14:57:56 +0000 (15:57 +0100)]
server : fix grammar being ignored (#4494)
Fix bug in identifying the grammar.
Alexey Parfenov [Sun, 17 Dec 2023 14:56:09 +0000 (14:56 +0000)]
server : fix possible ambiguity in content type charset (#4501)
mzcu [Sun, 17 Dec 2023 14:54:37 +0000 (15:54 +0100)]
server : allow requests larger than 8K (#4500)
Bach Le [Sun, 17 Dec 2023 10:57:33 +0000 (18:57 +0800)]
Link to cublas dynamically on Windows even with LLAMA_STATIC (#4506)
slaren [Sat, 16 Dec 2023 17:58:46 +0000 (18:58 +0100)]
lora : add support for non-llama models (#3333)
* lora : add support for non-llama models
ggml-ci
* avoid leaking ggml_context on failure
cleanup
ggml-ci
* lora : allow 1d tensors
* lora : include embd and output layers in size calculation
* fix style
Jared Van Bortel [Sat, 16 Dec 2023 03:16:15 +0000 (22:16 -0500)]
llama : sanity checks for access to logits (#4274)
Co-authored-by: Georgi Gerganov <redacted>
ShadovvBeast [Fri, 15 Dec 2023 11:49:01 +0000 (13:49 +0200)]
server : add optional API Key Authentication example (#4441)
* Add API key authentication for enhanced server-client security
* server : to snake_case
---------
Co-authored-by: Georgi Gerganov <redacted>
slaren [Fri, 15 Dec 2023 11:45:50 +0000 (12:45 +0100)]
ggml : group mul_mat_id rows by matrix (cpu only) (#4480)
* ggml : group mul_mat_id rows by matrix (cpu only)
* remove mmid parameters from mm forward
* store row groups in wdata and calculate only once in GGML_TASK_INIT
ggml-ci
slaren [Thu, 14 Dec 2023 19:05:21 +0000 (20:05 +0100)]
ggml : use ggml_row_size where possible (#4472)
* ggml : use ggml_row_size where possible
ggml-ci
* ggml : move ggml_nbytes_split to ggml-cuda.cu
slaren [Thu, 14 Dec 2023 15:52:08 +0000 (16:52 +0100)]
ggml : remove n_dims from ggml_tensor (#4469)
ggml-ci
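With the field removed, dimensionality is derived from the shape on demand; a sketch of such a helper, close to what ggml ended up exposing (assumed — check ggml.h):
```cpp
#include "ggml.h"

// Highest dimension with more than one element, counting from 1; scalars and
// plain vectors both report 1-D.
int ggml_n_dims(const struct ggml_tensor * tensor) {
    for (int i = GGML_MAX_DIMS - 1; i >= 1; --i) {
        if (tensor->ne[i] > 1) {
            return i + 1;
        }
    }
    return 1;
}
```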
wonjun Jang [Thu, 14 Dec 2023 12:44:49 +0000 (21:44 +0900)]
py : add protobuf dependency (#4466)
LostRuins [Thu, 14 Dec 2023 12:13:33 +0000 (20:13 +0800)]
ggml : add ggml_row_size() (fixes llama out of space) (#4461)
* Fixes "Not enough space in the context's memory pool" encountered on certain models, which seems to be caused by some imprecision related to the automatic casting of floating point values
* do not cast to size_t, instead just use doubles
* ggml : add ggml_row_size(), deprecate ggml_type_sizef()
* ggml : fix row size compute to avoid overflows
* tests : fix sizey -> sizez
---------
Co-authored-by: Georgi Gerganov <redacted>
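The root cause and the fix sit side by side in one helper; a sketch, assuming the usual ggml type/block-size accessors:
```cpp
#include <assert.h>
#include "ggml.h"

// Old code computed (size_t)(ggml_type_sizef(type) * ne), a float product that
// can land a byte short for block-quantized types; this does it exactly.
size_t ggml_row_size(enum ggml_type type, int64_t ne) {
    assert(ne % ggml_blck_size(type) == 0); // a row must hold whole blocks
    return ggml_type_size(type)*ne/ggml_blck_size(type);
}
```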
Georgi Gerganov [Thu, 14 Dec 2023 08:35:29 +0000 (10:35 +0200)]
ggml : fix OpenCL broadcast requirement for ggml_mul (close #4453)
wonjun Jang [Thu, 14 Dec 2023 08:09:34 +0000 (17:09 +0900)]
convert : support loading vocab from fast tokenizer config (#3633)
* Add HFVocab into convert.py
* Update convert.py
* Update convert.py
* add bytes_to_unicode function
* change add_meta_vocab function
* remove debug code
* remove byte_encoder
* Add newline between classes
* Check for tokenizer.json when tokenizer.model does not exist.
* Move transformers dependency to local code
* Add error context with 'raise from'
* Add fast tokenizer option to BpeVocab
* Update convert.py
* Add VocabLoader and remove *Vocab class
* Add transformers dependency
* remove added tokens and check newline token to decide spm or bpe
* Update convert.py
* Add special token type
* Update convert.py
* Update convert.py
* Update convert.py
* Fix typo in convert.py
* Fix when params.n_vocab < tokenizer vocab size
* update vocab class
* change function name
* Remove unused variables/functions, add types to class variables and methods, delete blank lines
* fix flake8 warnings
* code style cleanup
* make mypy happy
* change exception
---------
Co-authored-by: Jared Van Bortel <redacted>
BarfingLemurs [Thu, 14 Dec 2023 07:38:49 +0000 (02:38 -0500)]
readme : update supported model list (#4457)
shibe2 [Wed, 13 Dec 2023 19:57:15 +0000 (23:57 +0400)]
server : fix handling of characters that span multiple tokens when streaming (#4446)
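The usual shape of this fix is to withhold the trailing bytes of an incomplete UTF-8 sequence until the continuation bytes arrive with a later token; a hedged sketch of such a check (illustrative, not the server's exact code):
```cpp
#include <string>

// Number of bytes at the end of s that form an incomplete UTF-8 sequence and
// should be held back from the stream until the next token completes them.
static size_t utf8_incomplete_tail(const std::string & s) {
    for (size_t back = 1; back <= 3 && back <= s.size(); ++back) {
        const unsigned char c = s[s.size() - back];
        if ((c & 0xC0) == 0x80) {
            continue;                              // continuation byte, keep scanning
        }
        const size_t need = (c & 0xE0) == 0xC0 ? 2
                          : (c & 0xF0) == 0xE0 ? 3
                          : (c & 0xF8) == 0xF0 ? 4 : 1;
        return need > back ? back : 0;             // withhold the cut-off sequence
    }
    return 0;
}
```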
Georgi Gerganov [Wed, 13 Dec 2023 19:54:54 +0000 (21:54 +0200)]
sync : ggml (SD ops, tests, kernels) (#4444)
* sync : ggml (SD ops, tests, kernels)
ggml-ci
* cuda : restore im2col
ggml-ci
* metal : fix accuracy of dequantization kernels
ggml-ci
* cuda : restore correct im2col
ggml-ci
* metal : try to fix moe test by reducing expert size
ggml-ci
* cuda : fix bin bcast when src1 and dst have different types
ggml-ci
---------
Co-authored-by: slaren <redacted>
Jared Van Bortel [Wed, 13 Dec 2023 17:10:10 +0000 (12:10 -0500)]
build : detect host compiler and cuda compiler separately (#4414)
Siwen Yu [Wed, 13 Dec 2023 12:50:14 +0000 (20:50 +0800)]
common : add `--version` option to show build info in CLI (#4433)
Georgi Gerganov [Wed, 13 Dec 2023 12:05:38 +0000 (14:05 +0200)]
readme : update hot topics
slaren [Wed, 13 Dec 2023 12:04:25 +0000 (13:04 +0100)]
llama : add Mixtral support (#4406)
* convert : support Mixtral as LLAMA arch
* convert : fix n_ff typo
* llama : model loading
* ggml : sync latest ggml_mul_mat_id
* llama : update graph to support MoE
* llama : fix cur -> cur_expert
* llama : first working version
* llama : fix expert weighting in the FFN
* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)
* ggml : add n_as argument to ggml_mul_mat_id
* ggml : fix ggml_get_rows to take into account ne02 / ne11
* metal : add more general support for ggml_get_rows + tests
* llama : add basic support for offloading moe with CUDA
* metal : add/mul/div use general kernel when src1 not cont
* metal : reduce the kernel launches for ggml_mul_mat_id
* ggml : get_rows : support non-contiguous tensors with gaps, generalize up to 3D
* ggml : update get_rows f16 and q
* cuda : support non-contiguous src1 in get_rows
* llama : offload missing ffn_moe_silu
* metal : fix ggml_get_rows to work with non-cont src1
* metal : add indirect mat-vec kernels for all quantization types
* llama : do not quantize expert gating tensors
* llama : add n_expert and n_expert_used to hparams + change quants
* test-backend-ops : add moe test
* cuda : fix get_rows when ncols is odd
* convert : determine n_ctx correctly
* metal : fix ggml_mul_mat_id for F32
* test-backend-ops : make experts more evenly probable (test_moe)
* test-backend-ops : cleanup, add moe test for batches
* test-backend-ops : add cpy from f32 -> all types test
* test-backend-ops : fix dequantize block offset
* llama : fix hard-coded number of experts
* test-backend-ops : simplify and disable slow tests to avoid CI timeout
* test-backend-ops : disable MOE test with thread sanitizer
* cuda : fix mul_mat_id with multi gpu
* convert : use 1e6 rope_freq_base for mixtral
* convert : fix style
* convert : support safetensors format
* gguf-py : bump version
* metal : add cpy f16 -> f32 kernel
* metal : fix binary ops for ne10 % 4 != 0
* test-backend-ops : add one more sum_rows test
* ggml : do not use BLAS with ggml_mul_mat_id
* convert-hf : support for mixtral-instruct (#4428)
* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct
* convert : use sentencepiece tokenizer for Mixtral-instruct
* convert : make flake8 happy
* metal : fix soft_max kernels
ref: https://github.com/ggerganov/ggml/pull/621/commits/1914017863d2f9ab8ecc0281cc2a56d683668b92
* metal : limit kernels to not use more than the allowed threads
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Radek Pilar <redacted>
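For orientation, the MoE feed-forward added here reduces to: softmax the router logits, keep the top n_expert_used experts, and blend their FFN outputs with the renormalized weights. A plain C++ reference of the math only; the actual graph is built from ggml ops such as ggml_mul_mat_id:
```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

using vec = std::vector<float>;
using ffn = std::function<vec(const vec &)>;

vec moe_ffn(const vec & x, const vec & router_logits,
            int n_expert_used, const std::vector<ffn> & experts) {
    // rank experts by router logit and keep the top n_expert_used
    std::vector<int> idx(router_logits.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + n_expert_used, idx.end(),
        [&](int a, int b) { return router_logits[a] > router_logits[b]; });

    // softmax over the selected logits only -> renormalized gating weights
    vec w(n_expert_used);
    float sum = 0.0f;
    for (int k = 0; k < n_expert_used; ++k) {
        w[k] = std::exp(router_logits[idx[k]] - router_logits[idx[0]]);
        sum += w[k];
    }

    // weighted sum of the selected experts' outputs
    vec out(x.size(), 0.0f);
    for (int k = 0; k < n_expert_used; ++k) {
        const vec y = experts[idx[k]](x);
        for (size_t i = 0; i < x.size(); ++i) {
            out[i] += (w[k]/sum) * y[i];
        }
    }
    return out;
}
```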
kalomaze [Tue, 12 Dec 2023 10:12:35 +0000 (04:12 -0600)]
server : tweak default sampling parameters (#4367)
* Set a more typical Top P setting as the default
* Update temp max
Richard Kiss [Tue, 12 Dec 2023 09:53:36 +0000 (01:53 -0800)]
english : use `typos` to fix comments and logs (#4354)
Jared Van Bortel [Tue, 12 Dec 2023 09:27:26 +0000 (04:27 -0500)]
build : target Windows 8 for standard mingw-w64 (#4405)
* build : target Windows 8 for standard mingw-w64
* make : fix missing console.o deps
This was causing a link error with `make all` on Windows.
crasm [Tue, 12 Dec 2023 09:25:57 +0000 (04:25 -0500)]
llama : document logits_all deprecation (#4418)
llama_context_params.logits_all is a parameter for controlling
llama_eval. This documents that logits_all should not be used with
llama_decode and llama_batch.
Vladimir Zorin [Tue, 12 Dec 2023 09:25:29 +0000 (11:25 +0200)]
server : fix local model name in server (#4420)
Taikono-Himazin [Tue, 12 Dec 2023 09:24:32 +0000 (18:24 +0900)]
ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424)
Yueh-Po Peng [Sun, 10 Dec 2023 22:27:38 +0000 (06:27 +0800)]
Update README.md (#4388)
Fix small typo.
Xiang (Kevin) Li [Sat, 9 Dec 2023 21:29:27 +0000 (16:29 -0500)]
grammar : revert the replacement of llama_token_to_piece with id_to_token (#4396)
Georgi Gerganov [Thu, 7 Dec 2023 20:26:54 +0000 (22:26 +0200)]
sync : ggml (new ops, tests, backend, etc.) (#4359)
* sync : ggml (part 1)
* sync : ggml (part 2, CUDA)
* sync : ggml (part 3, Metal)
* ggml : build fixes
ggml-ci
* cuda : restore lost changes
* cuda : restore lost changes (StableLM rope)
* cmake : enable separable compilation for CUDA
ggml-ci
* ggml-cuda : remove device side dequantize
* Revert "cmake : enable separable compilation for CUDA"
This reverts commit 09e35d04b1c4ca67f9685690160b35bc885a89ac.
* cuda : remove assert for rope
* tests : add test-backend-ops
* ggml : fix bug in ggml_concat
* ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`
* ci : try to fix macOS
* ggml-backend : remove backend self-registration
* ci : disable Metal for macOS cmake build
ggml-ci
* metal : fix "supports family" call
* metal : fix assert
* metal : print resource path
ggml-ci
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Thu, 7 Dec 2023 11:03:17 +0000 (13:03 +0200)]
llama : per-layer KV cache + quantum K cache (#4309)
* per-layer KV
* remove unnecessary copies
* less code duplication, offload k and v separately
* llama : offload KV cache per-layer
* llama : offload K shift tensors
* llama : offload for rest of the model arches
* llama : enable offload debug temporarily
* llama : keep the KV related layers on the device
* llama : remove mirrors, perform Device -> Host when partial offload
* common : add command-line arg to disable KV cache offloading
* llama : update session save/load
* llama : support quantum K cache (#4312)
* llama : support quantum K cache (wip)
* metal : add F32 -> Q8_0 copy kernel
* cuda : add F32 -> Q8_0 copy kernel
ggml-ci
* cuda : use mmv kernel for quantum cache ops
* llama : pass KV cache type through API
* llama : fix build
ggml-ci
* metal : add F32 -> Q4_0 copy kernel
* metal : add F32 -> Q4_1 copy kernel
* cuda : wip
* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels
* llama-bench : support type_k/type_v
* metal : use mm kernel only for quantum KV cache
* cuda : add comment
* llama : remove memory_f16 and kv_f16 flags
---------
Co-authored-by: slaren <redacted>
* readme : add API change notice
---------
Co-authored-by: slaren <redacted>
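A hedged usage sketch of the API change; the field names below are how this PR appears to expose the cache types and the offload toggle in llama_context_params, so verify against llama.h of the matching revision:
```cpp
#include "llama.h"

int main() {
    llama_context_params cparams = llama_context_default_params();
    cparams.type_k      = GGML_TYPE_Q8_0; // quantize the K cache (new in this PR)
    cparams.type_v      = GGML_TYPE_F16;  // V cache stays F16
    cparams.offload_kqv = true;           // per-layer KV cache offload toggle
    // ... load a model and create the context as usual ...
    return 0;
}
```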
Hongyu Ouyang [Thu, 7 Dec 2023 10:25:22 +0000 (02:25 -0800)]
train : fix #4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (#4351)
On commit b1108 (44c117f4) xaedes added
ggml_allocr * alloc = NULL;
... (many lines in between)
if (alloc) {
ggml_allocr_free(alloc);
}
This is correct, but with many lines in between it's easy to lose the context.
On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.
alloc = ggml_allocr_new(...)
... (short lines of code)
ggml_allocr_free(alloc)
This happens a few times, but alloc is never set to NULL, and many lines below,
we still have
if (alloc) {
ggml_allocr_free(alloc);
}
which causes a double-free.
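Distilled to a stand-alone reproduction, the bug and the one-line fix look like this (malloc/free stand in for ggml_allocr_new/ggml_allocr_free):
```cpp
#include <cstdlib>

int main() {
    void * alloc = nullptr;

    alloc = std::malloc(64);  // eager allocate/use/free from the later commit
    /* ... short-lived use ... */
    std::free(alloc);
    alloc = nullptr;          // the fix: without this, alloc dangles

    /* ... many lines later, the original guarded cleanup ... */
    if (alloc) {
        std::free(alloc);     // double free whenever the pointer was not reset
    }
    return 0;
}
```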
Georgi Gerganov [Wed, 6 Dec 2023 18:21:59 +0000 (20:21 +0200)]
server : recognize cache_prompt parameter in OAI API (#4347)
Georgi Gerganov [Wed, 6 Dec 2023 08:41:03 +0000 (10:41 +0200)]
common : fix compile warning
stduhpf [Wed, 6 Dec 2023 08:08:17 +0000 (09:08 +0100)]
speculative : support `--color` (#4343)
* speculative: add some colors
* minor : add braces
---------
Co-authored-by: Georgi Gerganov <redacted>
Marcus Dunn [Tue, 5 Dec 2023 20:55:12 +0000 (10:55 -1000)]
grammar : pre-computed pieces + reserve mem + less string copies (#4330)
* reserve space for codepoints
* improvement for the appended 0
* used precomputed token text for grammar sample
* reserve candidates_decoded
* reserve candidates_grammar
* remove candidates_decoded
* Revert "remove candidates_decoded"
This reverts commit 3773328080e6a139ee83198329a13cf4ff61d707.
* changed decode_utf8 to take src by ref
Kerfuffle [Tue, 5 Dec 2023 17:19:18 +0000 (10:19 -0700)]
llama : allow overriding GGUF metadata when loading model (#4092)
* feat: Allow overriding GGUF metadata when loading model
* Fix the one time GCC is stricter than clang about something
* Step1
* Refactor... basically everything!
* Nuke obsolete GetArrayLen struct
* simplify std::string specialization
* Various cleanups
Add informational output when overrides are applied
Warn user when an override with the wrong type is specified
* Fix broken logic for parsing bool KV overrides
Fix issue where overrides didn't apply when key missing in GGUF metadata
Resolve merge changes
* llama : rearrange model params
* Update new GET_KEY call
Add note that metadata KV overrides aren't reflected in initial metadata KV info dump
---------
Co-authored-by: cebtenzzre <redacted>
Co-authored-by: Georgi Gerganov <redacted>
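The plumbing is essentially a tagged union carried from the CLI into the loader; a sketch of the shape it appears to take in llama.h (names assumed from the PR, verify against the header):
```cpp
#include <cstdint>

enum llama_model_kv_override_type {
    LLAMA_KV_OVERRIDE_INT,
    LLAMA_KV_OVERRIDE_FLOAT,
    LLAMA_KV_OVERRIDE_BOOL,
};

struct llama_model_kv_override {
    char key[128];                        // GGUF metadata key to replace
    enum llama_model_kv_override_type tag;
    union {
        int64_t int_value;
        double  float_value;
        bool    bool_value;
    };
};
```
On the command line this surfaces as an option of the form `--override-kv KEY=TYPE:VALUE`.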
MaggotHATE [Tue, 5 Dec 2023 10:05:51 +0000 (15:05 +0500)]
sampling : custom samplers order (#4285)
* Samplers sequence order w parameter
* Cleaned commented code
* Fixed formatting
* Rewrote with unordered_map
* Revert and rewrite, too many problems and safeguards would be needed
* Fixed code style
* Code style fixes according to review
* More readable samplers input string, fixed help
* Style fix in sampler_queue
* Formatting fixes
* Fixing whitespaces
kchro3 [Tue, 5 Dec 2023 07:29:46 +0000 (23:29 -0800)]
swift : revert compiler checks for swift package (#4332)
Daniel Bevenius [Mon, 4 Dec 2023 16:04:21 +0000 (17:04 +0100)]
simple : update error message for KV cache check (#4324)
This commit updates the error message that is printed when the
KV cache is not big enough to hold all the prompt and generated
tokens. Specifically it removes the reference to n_parallel and
replaces it with n_len.
Signed-off-by: Daniel Bevenius <redacted>
Miwa / Ensan [Mon, 4 Dec 2023 16:03:49 +0000 (01:03 +0900)]
swift : fix concatenation method to avoid invalid UTF8 stringification (#4325)
Miwa / Ensan [Mon, 4 Dec 2023 13:43:45 +0000 (22:43 +0900)]
swift : fix prompt tokenization logic (#4321)
Ikko Eltociear Ashimine [Mon, 4 Dec 2023 07:57:35 +0000 (16:57 +0900)]
grammar-parser : fix typo (#4318)
preceeding -> preceding
Georgi Gerganov [Sun, 3 Dec 2023 13:56:35 +0000 (15:56 +0200)]
ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (#4308)
* ggml : fix soft max out-of-bounds access
ggml-ci
* ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()
ggml-ci
Georgi Gerganov [Sun, 3 Dec 2023 13:56:22 +0000 (15:56 +0200)]
ggml : fix soft max out-of-bounds access (#4307)
ggml-ci
Ed Lee [Sun, 3 Dec 2023 09:10:43 +0000 (01:10 -0800)]
server : fix OpenAI API `stop` field to be optional (#4299)
(cherry picked from commit Mozilla-Ocho/llamafile@e8c92bcb84ae3bcbf0d617b7ee6a5413bcbd58af)
Rickard Edén [Sun, 3 Dec 2023 09:03:25 +0000 (10:03 +0100)]
py : add grammar to oai like api (#4294)
Georgi Gerganov [Sun, 3 Dec 2023 08:58:16 +0000 (10:58 +0200)]
llama : pad KV cache size (#4280)
* llama : pad KV cache size to 32
* metal : try to improve batched decoding
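The padding itself is a one-liner; a sketch, assuming ggml's usual round-up style:
```cpp
#include <cstdint>

// Round the active KV length up to a multiple of 32 so the batched decoding
// kernels always see fixed-size blocks.
static int64_t kv_pad_32(int64_t n_kv) {
    return (n_kv + 31) & ~int64_t(31);
}
```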
Georgi Gerganov [Fri, 1 Dec 2023 18:39:12 +0000 (20:39 +0200)]
llama : avoid using "optional" keyword (#4283)
Georgi Gerganov [Fri, 1 Dec 2023 18:35:03 +0000 (20:35 +0200)]
llama : support optional tensors (#4283)
Miwa / Ensan [Fri, 1 Dec 2023 18:19:45 +0000 (03:19 +0900)]
swift : fix token_to_piece implementation (#4278)
* Fix token_to_piece implementation in Swift
* Fix errors
Jared Van Bortel [Fri, 1 Dec 2023 18:18:35 +0000 (13:18 -0500)]
build : enable libstdc++ assertions for debug builds (#4275)
CausalLM [Fri, 1 Dec 2023 18:17:06 +0000 (02:17 +0800)]
llama : support attention bias on LLaMA architecture (#4283)
* Support attention_bias on LLaMA architecture
QKVO bias, should fix InternLM (https://github.com/ggerganov/llama.cpp/issues/3133) and works for LLaMAfied Qwen models (https://github.com/ggerganov/llama.cpp/pull/3743#issuecomment-1825923608).
* check existence of qkvo bias while loading llama models
Tested on LLaMA2, CUDA and CPU.
* Update llama.cpp
Shijie [Fri, 1 Dec 2023 18:16:31 +0000 (02:16 +0800)]
llama : add Qwen support (#4281)
* enable Qwen in llama.cpp
* llama : do not GPU split bias tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 1 Dec 2023 16:42:11 +0000 (18:42 +0200)]
llama : fix integer overflow during quantization (#4284)
happens with multi-threaded quantization of Qwen-72B
ggml-ci
Daniel Bevenius [Fri, 1 Dec 2023 09:41:56 +0000 (10:41 +0100)]
py : add requirements file for convert-hf-to-gguf.py (#4277)
This commit adds a requirements file for the convert-hf-to-gguf.py
script, and also adds the torch and transformers packages to it.
The motivation for this is that currently running convert-hf-to-gguf.py
will produce the following error:
```console
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt
Collecting numpy==1.24.4
Collecting sentencepiece==0.1.98
Collecting gguf>=0.1.0
Installing collected packages: sentencepiece, numpy, gguf
Successfully installed gguf-0.5.1 numpy-1.24.4 sentencepiece-0.1.98
(venv) $ python convert-hf-to-gguf.py --help
Traceback (most recent call last):
File "llama.cpp/convert-hf-to-gguf.py", line 16, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
```
With this commit, and using requirements-hf-to-gguf.txt instead of
requirements.txt, the script can be run and shows the help output.
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Fri, 1 Dec 2023 08:51:24 +0000 (10:51 +0200)]
ggml : add ggml_soft_max_ext (#4256)
* metal : implement soft_max_ext
* cuda : implement soft_max_ext
* ggml : implement soft_max_ext (CPU)
* batched-bench : print threads
ggml-ci
* metal : simplify soft_max encoding
ggml-ci
* cuda : use 512 threads for soft_max instead of 32
* ggml : update soft max cpu
* cuda : do warp-based block reduce
* cuda : increase max block size to 1024
* cuda : fix warp reduction initialization of shared mem
* metal : warp-based reduction for soft max kernel
* metal : warp-based reduce for rms_norm
* metal : simplify soft max kernel
ggml-ci
* alloc : fix build with debug
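What the fused op computes, written out as a scalar reference: a row-wise softmax of a*scale + mask in a single pass (a plain C++ sketch of the semantics, not of the kernels):
```cpp
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> soft_max_ext(const std::vector<float> & a,
                                const std::vector<float> & mask, float scale) {
    std::vector<float> s(a.size());
    float max_v = -INFINITY;
    for (size_t i = 0; i < a.size(); ++i) {
        s[i]  = a[i]*scale + (mask.empty() ? 0.0f : mask[i]);
        max_v = std::max(max_v, s[i]);
    }
    float sum = 0.0f;
    for (float & v : s) { v = std::exp(v - max_v); sum += v; } // stable exp
    for (float & v : s) { v /= sum; }
    return s;
}
```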
Ziad Ben Hadj-Alouane [Thu, 30 Nov 2023 22:25:49 +0000 (17:25 -0500)]
server : add --log-disable to disable logging to file (#4260)
* add --log-disable to disable logging to file in the server example
* typo fix
Ziad Ben Hadj-Alouane [Thu, 30 Nov 2023 22:25:04 +0000 (17:25 -0500)]
server : add single-client multi-prompt support (#4232)
* add multiprompt support
* cleanup
* more cleanup
* remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests
* remove all references to mutex_multitasks
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* Update examples/server/server.cpp
Co-authored-by: Jared Van Bortel <redacted>
* change to set
---------
Co-authored-by: Jared Van Bortel <redacted>
WillCorticesAI [Thu, 30 Nov 2023 22:23:44 +0000 (17:23 -0500)]
make : fix Apple clang determination bug (#4272)
Co-authored-by: Will Findley <redacted>
Jared Van Bortel [Thu, 30 Nov 2023 22:23:08 +0000 (17:23 -0500)]
build : fix build info generation and cleanup Makefile (#3920)
* cmake : fix joining of REAL_GIT_DIR
* fix includes with help from include-what-you-use
* make : remove unneeded deps and add test-rope target
* fix C includes in C++ source files
* Revert "fix includes with help from include-what-you-use"
This reverts commit 635e9fadfd516d4604a0fecf4a854bfb25ad17ae.
John [Thu, 30 Nov 2023 22:11:14 +0000 (23:11 +0100)]
llava : ShareGPT4V compatibility (vision encoder only loading) (#4172)
* ShareGPT4 compatibility (vision encoder only loading)
Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
Corrects the argument parsing for --img_mean and --img_std (which were previously accessed without ever being parsed)
Defines defaults for img_mean and img_std which are equal to the llava 1.5 CLIP encoder, so you do not have to provide them
* Update convert-image-encoder-to-gguf.py
Andrew Godfrey [Thu, 30 Nov 2023 21:56:19 +0000 (13:56 -0800)]
main : pass LOG_TEE callback to llama.cpp log (#4033)
* main : Call llama_log_set to use LOG_TEE
* tabs to spaces
vodkaslime [Thu, 30 Nov 2023 21:49:21 +0000 (05:49 +0800)]
readme : fix (#4135)
* fix: readme
* chore: resolve comments
* chore: resolve comments
Juraj Bednar [Thu, 30 Nov 2023 21:46:01 +0000 (22:46 +0100)]
docker : add finetune option (#4211)
Miwa / Ensan [Thu, 30 Nov 2023 21:45:17 +0000 (06:45 +0900)]
batched.swift : update README.md (#4214)
docs: update how to run
Li Tan [Thu, 30 Nov 2023 21:44:11 +0000 (13:44 -0800)]
cmake : fix the metal file folder path (#4217)
Dawid Wysocki [Thu, 30 Nov 2023 21:43:32 +0000 (22:43 +0100)]
readme : fix typo (#4253)
llama.cpp uses GitHub Actions, not Gitlab Actions.
Daniel Bevenius [Thu, 30 Nov 2023 21:43:08 +0000 (22:43 +0100)]
llama : fix alignment of general.name in print meta (#4254)
* llama: fix alignment of general.name in print meta
This commit fixes the alignment of the general.name field in the
llm_load_print_meta function.
Currently the output looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```
And with this commit it looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name     = LLaMA v2
```
Signed-off-by: Daniel Bevenius <redacted>
* llama: fix alignment of special tokens
Signed-off-by: Daniel Bevenius <redacted>
---------
Signed-off-by: Daniel Bevenius <redacted>
slaren [Thu, 30 Nov 2023 21:42:23 +0000 (22:42 +0100)]
convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258)
tarcey [Thu, 30 Nov 2023 21:40:23 +0000 (22:40 +0100)]
llama : fix typical sampling (#4261)
Typical sampling was broken because after copying new_candidates into candidates, the "sorted" bool is left at "true", but the new data is no longer sorted according to probability. The patch sets "sorted" to false.
Test: Generating with temp=0.0001 (approx. argmax) should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).
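A minimal sketch of the patch's effect, with the token-data-array fields named after llama.h (the surrounding code is illustrative):
```cpp
#include <algorithm>
#include <cstddef>

struct llama_token_data       { int id; float logit; float p; };
struct llama_token_data_array { llama_token_data * data; size_t size; bool sorted; };

// After typical sampling rebuilds the candidate list, the entries are ordered
// by typicality, not probability, so the stale sorted flag must be cleared.
void replace_candidates(llama_token_data_array * candidates,
                        const llama_token_data * new_candidates, size_t new_size) {
    std::copy(new_candidates, new_candidates + new_size, candidates->data);
    candidates->size   = new_size;
    candidates->sorted = false; // the fix
}
```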
rhjdvsgsgks [Thu, 30 Nov 2023 20:50:40 +0000 (20:50 +0000)]
py : fix oai proxy (#3972)
* fix oai proxy
fix generation not being stopped when the bot stops talking in chat mode
fix possibly nonexistent `slot_id`
add responses for CORS (and preflight) requests
* oai proxy: workaround for some clients (such as Chatbox)
* use stop as separator to replace hardcoded `\n`
Georgi Gerganov [Wed, 29 Nov 2023 09:00:17 +0000 (11:00 +0200)]
examples : add readme files
Peter Sugihara [Wed, 29 Nov 2023 07:16:34 +0000 (23:16 -0800)]
readme : add FreeChat (#4248)
Jared Van Bortel [Tue, 28 Nov 2023 09:51:11 +0000 (04:51 -0500)]
ggml : restore abort() in GGML_ASSERT (#4242)
Georgi Gerganov [Tue, 28 Nov 2023 08:32:03 +0000 (10:32 +0200)]
ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (#4240)
* ggml : use blas even if src0 is not F32
* llama : use n_threads_batch only when n_tokens >= 32
ggml-ci
* llama : revert n_threads_batch logic
ggml-ci