git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Peter Sugihara [Fri, 3 Nov 2023 19:18:18 +0000 (12:18 -0700)]
metal : round up to 16 to fix MTLDebugComputeCommandEncoder assertion (#3938)
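As a hedged illustration of the rounding referred to in this entry (the actual buffer and call site live in ggml-metal.m and are not shown here), rounding a byte length up to the next multiple of 16 can look like:
```c++
// Hypothetical helper, not the actual ggml-metal code: rounds n up to the
// next multiple of 16 (valid because 16 is a power of two).
static inline size_t round_up_16(size_t n) {
    return (n + 15) & ~(size_t) 15;
}
```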
Xiao-Yong Jin [Fri, 3 Nov 2023 18:00:31 +0000 (13:00 -0500)]
ggml-metal: fix yarn rope (#3937)
slaren [Fri, 3 Nov 2023 11:13:09 +0000 (12:13 +0100)]
ggml-cuda : move row numbers to x grid dim in mmv kernels (#3921)
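A likely motivation, stated here as an assumption rather than taken from the commit: gridDim.y and gridDim.z are limited to 65535 blocks, while gridDim.x allows up to 2^31-1, so very tall matrices no longer hit the launch limit. A minimal host-side sketch of such a launch configuration (the real mmv kernels and their parameters are in ggml-cuda.cu):
```c++
#include <cuda_runtime.h>

void launch_mmv_example(int nrows, cudaStream_t stream) {
    const dim3 block_dims(32, 1, 1);    // one warp per block (illustrative)
    const dim3 grid_dims(nrows, 1, 1);  // row count in the x grid dimension
    // mmv_kernel<<<grid_dims, block_dims, 0, stream>>>(...);  // hypothetical kernel
    (void) block_dims; (void) grid_dims; (void) stream;
}
```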
Georgi Gerganov [Fri, 3 Nov 2023 07:41:17 +0000 (09:41 +0200)]
speculative : change default p_accept to 0.5 + CLI args (#3919)
ggml-ci
Georgi Gerganov [Fri, 3 Nov 2023 07:24:00 +0000 (09:24 +0200)]
common : YAYF (yet another YARN fix) (#3925)
ggml-ci
cebtenzzre [Fri, 3 Nov 2023 06:31:58 +0000 (02:31 -0400)]
llama : change yarn_ext_factor placeholder to -1 (#3922)
Kerfuffle [Thu, 2 Nov 2023 19:58:22 +0000 (13:58 -0600)]
cuda : add ROCM aliases for CUDA pool stuff (#3918)
Andrei [Thu, 2 Nov 2023 19:40:31 +0000 (15:40 -0400)]
cmake : fix relative path to git submodule index (#3915)
Georgi Gerganov [Thu, 2 Nov 2023 18:44:12 +0000 (20:44 +0200)]
readme : add notice about #3912
Georgi Gerganov [Thu, 2 Nov 2023 18:32:11 +0000 (20:32 +0200)]
cuda : fix const ptrs warning causing ROCm build issues (#3913)
Oleksii Maryshchenko [Thu, 2 Nov 2023 17:10:39 +0000 (18:10 +0100)]
cuda : use CUDA memory pool with async memory allocation/deallocation when available (#3903)
* Using cuda memory pools for async alloc/dealloc.
* If the CUDA device doesn't support memory pools, then use the old implementation (see the sketch after this entry).
* Removed redundant cublasSetStream
---------
Co-authored-by: Oleksii Maryshchenko <redacted>
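A minimal sketch of the approach described above, using the stream-ordered allocator from the CUDA runtime (the real integration in ggml-cuda.cu is more involved; the fallback path and helper names here are illustrative):
```c++
#include <cuda_runtime.h>

// Allocates device memory asynchronously on `stream` when the device supports
// memory pools (CUDA >= 11.2), otherwise falls back to plain cudaMalloc.
void * pool_alloc(size_t size, int device, cudaStream_t stream) {
    int pools_supported = 0;
    cudaDeviceGetAttribute(&pools_supported, cudaDevAttrMemoryPoolsSupported, device);

    void * ptr = nullptr;
    if (pools_supported) {
        cudaMallocAsync(&ptr, size, stream);  // stream-ordered allocation
    } else {
        cudaMalloc(&ptr, size);               // old path
    }
    return ptr;
}

void pool_free(void * ptr, int pools_supported, cudaStream_t stream) {
    if (pools_supported) {
        cudaFreeAsync(ptr, stream);
    } else {
        cudaFree(ptr);
    }
}
```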
Georgi Gerganov [Thu, 2 Nov 2023 14:22:30 +0000 (16:22 +0200)]
gguf : print error for GGUFv1 files (#3908)
slaren [Thu, 2 Nov 2023 12:10:33 +0000 (13:10 +0100)]
cmake : disable LLAMA_NATIVE by default (#3906)
Georgi Gerganov [Thu, 2 Nov 2023 09:20:21 +0000 (11:20 +0200)]
gguf : remove special-case code for GGUFv1 (#3901)
ggml-ci
Georgi Gerganov [Thu, 2 Nov 2023 07:54:18 +0000 (09:54 +0200)]
llm : prevent 1-D tensors from being GPU split (#3697)
cebtenzzre [Thu, 2 Nov 2023 06:50:16 +0000 (02:50 -0400)]
build : link against build info instead of compiling against it (#3879)
* cmake : fix build when .git does not exist
* cmake : simplify BUILD_INFO target
* cmake : add missing dependencies on BUILD_INFO
* build : link against build info instead of compiling against it
* zig : make build info a .cpp source instead of a header
Co-authored-by: Matheus C. França <redacted>
* cmake : revert change to CMP0115
---------
Co-authored-by: Matheus C. França <redacted>
Georgi Gerganov [Thu, 2 Nov 2023 06:35:10 +0000 (08:35 +0200)]
cuda : check if this fixes Pascal card regression (#3882)
Georgi Gerganov [Thu, 2 Nov 2023 06:33:37 +0000 (08:33 +0200)]
metal : fix build errors and kernel sig after #2268 (#3898)
cebtenzzre [Thu, 2 Nov 2023 05:49:44 +0000 (01:49 -0400)]
cuda : fix RoPE after #2268 (#3897)
cebtenzzre [Wed, 1 Nov 2023 23:29:14 +0000 (19:29 -0400)]
llama : fix llama_context_default_params after #2268 (#3893)
slaren [Wed, 1 Nov 2023 22:10:09 +0000 (23:10 +0100)]
ggml-cuda : compute ptrs for cublasGemmBatchedEx in a kernel (#3891)
* ggml-cuda : compute ptrs for cublasGemmBatchedEx in a kernel
* fix warnings
cebtenzzre [Wed, 1 Nov 2023 22:04:33 +0000 (18:04 -0400)]
llama : implement YaRN RoPE scaling (#2268)
Co-authored-by: cebtenzzre <redacted>
Co-authored-by: Jeffrey Quesnelle <redacted>
Georgi Gerganov [Wed, 1 Nov 2023 21:08:30 +0000 (23:08 +0200)]
llm : fix llm_build_kqv taking unused tensor (benign, #3837)
Georgi Gerganov [Wed, 1 Nov 2023 21:00:50 +0000 (23:00 +0200)]
llm : fix falcon norm after refactoring (#3837)
Georgi Gerganov [Wed, 1 Nov 2023 19:25:00 +0000 (21:25 +0200)]
metal : multi-simd softmax (#3710)
ggml-ci
Georgi Gerganov [Wed, 1 Nov 2023 19:15:55 +0000 (21:15 +0200)]
common : minor (#3715)
Georgi Gerganov [Wed, 1 Nov 2023 18:11:02 +0000 (20:11 +0200)]
llm : add llm_build_context (#3881)
* llm : add llm_build_context
* llm : deduce norm eps based on type + explicit max_alibi_bias, clamp_kqv
* llm : restore the non-graph llm_build_ functional API
ggml-ci
* llm : cleanup + comments
bandoti [Wed, 1 Nov 2023 17:42:01 +0000 (14:42 -0300)]
common : allow caller to handle help/argument exceptions (#3715)
* Allow caller to handle help/argument exceptions
* Prepend newline to usage output
* Add new gpt_params_parse_ex function to hide arg-parse impl
* Fix issue blocking success case
* exit instead of returning false
* Update common/common.h
Co-authored-by: Georgi Gerganov <redacted>
* Update common/common.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
staviq [Wed, 1 Nov 2023 14:18:27 +0000 (15:18 +0100)]
log : make generating separate log files optional (#3787)
* impl --log-new, --log-append
* Update common/log.h
Co-authored-by: cebtenzzre <redacted>
* Update common/log.h
Co-authored-by: cebtenzzre <redacted>
* Apply suggestions from code review
Co-authored-by: cebtenzzre <redacted>
---------
Co-authored-by: cebtenzzre <redacted>
l3utterfly [Wed, 1 Nov 2023 13:40:43 +0000 (21:40 +0800)]
sampling : null grammar field after reset (#3885)
Georgi Gerganov [Wed, 1 Nov 2023 11:50:45 +0000 (13:50 +0200)]
ggml : fix UNUSED macro (#3762)
Andrew Godfrey [Wed, 1 Nov 2023 11:49:04 +0000 (04:49 -0700)]
finetune : add -ngl parameter (#3762)
* Add '-ngl' support to finetune.cpp
* Add fprintf in ggml_cuda_op_add
When I tried CUDA offloading during finetuning following the readme, I got an assert here.
This probably isn't an important case because inference later gives a warning saying you should use f16 or f32 instead when using lora
* Add 'finetune.sh', which currently fails when using GPU
"error: operator (): Finetuning on tensors with type 'f16' is not yet supported"
* tweak finetune.sh
* Suppress some warnings in ggml.c
* Add f16 implementation to ggml_compute_forward_add_f16_f32
* Add an f16 case to ggml_add_cast_impl and llama_build_lora_finetune_graphs
* finetune.sh: Edit comments
* Add "add_f16_f32_f32_cuda"
* Tweak an error message
* finetune.sh: Add an optional LLAMA_MODEL_DIR variable
* finetune.sh: Add an optional LLAMA_TRAINING_DIR variable
* train : minor
* tabs to spaces
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: cebtenzzre <redacted>
Georgi Gerganov [Wed, 1 Nov 2023 09:29:07 +0000 (11:29 +0200)]
scripts : add server-llm.sh (#3868)
* scripts : add deploy-server.sh
* scripts : rename to server-llm.sh
* scripts : working curl pipe
Adrian Hesketh [Wed, 1 Nov 2023 09:28:28 +0000 (09:28 +0000)]
server : re-enable completion and embeddings at the same time (#3876)
Georgi Gerganov [Wed, 1 Nov 2023 06:04:02 +0000 (08:04 +0200)]
llama : refactor graph build code (#3837)
* llama : factor out ggml-alloc from graph build functions
ggml-ci
* metal : disable kernel load log
* llama : factor out tensor offloading outside the build call (wip)
ggml-ci
* llama : offload rest of the models
ggml-ci
* llama : update offload log messages to print node index
* llama : comments
* llama : support offloading result_norm + comments
* llama : factor graph input into a function
* llama : do tensor offload only with CUDA
* llama : fix res_norm offloading
* llama : try to optimize offloading code
* llama : fix non-CUDA build
* llama : try to fix build
* llama : move refact in correct place + optimize graph input
* llama : refactor tensor offloading as callback
* llama : add layer index to all tensor names
* llama : add functional header
* llama : comment
ggml-ci
* llama : remove obsolete map for layer counting
* llama : add llm_build helper functions (#3848)
* llama : add llm_build_norm helper function
ggml-ci
* llama : add llm_build_ffn helper function (#3849)
ggml-ci
* llama : add llm_build_k_shift helper
ggml-ci
* llama : fix offloading after recent changes
* llama : add llm_build_kv_store helper
ggml-ci
* llama : remove obsolete offload names
* llama : fix llm_build_k_shift to use n_head_kv instead of n_head
* llama : simplify falcon Q, K, V computation
* llama : remove obsolete comments in build graphs
* llama : add llm_build_kqv helper
ggml-ci
* llama : minor
* llama : add LLAMA_OFFLOAD_DEBUG + fix starcoder offloading
* llama : fix input allocation logic
* llama : update offload functions for KQ tensors
* llama : normalize tensor names
ggml-ci
* llama : enable warning about not offloaded tensors
* llama : remove extra ; + deduplicate gate_b logic
* llama : add llm_build_inp_embd helper
kalomaze [Tue, 31 Oct 2023 19:44:49 +0000 (14:44 -0500)]
samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841)
* Introduce the new Min-P sampler by @kalomaze
The Min-P sampling method was designed as an alternative to Top-P, and aims to ensure a balance of quality and variety. The parameter *p* represents the minimum probability for a token to be considered, relative to the probability of the most likely token.
* Min-P enabled and set to 0.05 default
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: cebtenzzre <redacted>
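A self-contained sketch of the Min-P rule described above (not the llama.cpp implementation; `candidate` and `min_p_filter` are illustrative names): keep only tokens whose probability is at least *p* times that of the most likely token.
```c++
#include <algorithm>
#include <vector>

struct candidate { int id; float p; };  // token id and its probability

// Drop tokens with probability below p * max_prob, but never keep fewer
// than min_keep tokens.
void min_p_filter(std::vector<candidate> & cands, float p, size_t min_keep = 1) {
    if (cands.empty()) return;
    std::sort(cands.begin(), cands.end(),
              [](const candidate & a, const candidate & b) { return a.p > b.p; });
    const float threshold = p * cands.front().p;  // relative to the most likely token
    size_t n = cands.size();
    while (n > min_keep && cands[n - 1].p < threshold) {
        n--;
    }
    cands.resize(n);
}
```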
Tungsten842 [Tue, 31 Oct 2023 17:24:03 +0000 (18:24 +0100)]
flake.nix: fix for rocm 5.7 (#3853)
Georgi Gerganov [Mon, 30 Oct 2023 17:19:15 +0000 (19:19 +0200)]
ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861)
* ggml : move FP16 <-> FP32 stuff to ggml-impl.h
ggml-ci
* tests : fix ARM build
* ggml : explicitly initialize deprecated type traits
* ggml : add math.h to ggml-impl.h
* ggml : remove duplicate static assert macros
* ggml : prefix lookup tables with ggml_
ggml-ci
* ggml-impl : move extern "C" to start of file
Kerfuffle [Sun, 29 Oct 2023 17:31:40 +0000 (11:31 -0600)]
Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)
* Extend llama_kv_cache_seq_rm to allow matching any sequence
* Replace llama_kv_cache_tokens_rm with llama_kv_cache_clear
Use llama_kv_cache_clear for cache clearing
Change calls to llama_kv_cache_tokens_rm that want to delete by position to use llama_kv_cache_seq_rm functionality
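A usage sketch of the extended API as I read this change (signatures paraphrased from llama.h of that period; the negative-value conventions below are the assumption being illustrated):
```c++
#include "llama.h"

// Remove cells with pos >= p0 in every sequence (seq_id < 0 matches any
// sequence, p1 < 0 means "to the end"), e.g. after rewinding generation.
void rewind_all_sequences(llama_context * ctx, llama_pos p0) {
    llama_kv_cache_seq_rm(ctx, -1, p0, -1);
}

// Clear the whole cache; replaces the removed llama_kv_cache_tokens_rm.
void reset_cache(llama_context * ctx) {
    llama_kv_cache_clear(ctx);
}
```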
cebtenzzre [Sun, 29 Oct 2023 16:33:47 +0000 (12:33 -0400)]
make : remove unnecessary dependency on build-info.h (#3842)
Georgi Gerganov [Sun, 29 Oct 2023 16:32:51 +0000 (18:32 +0200)]
llama : fix kv shift bug (#3835)
ggml-ci
Georgi Gerganov [Sun, 29 Oct 2023 16:32:28 +0000 (18:32 +0200)]
ggml : quantization refactoring (#3833)
* ggml : factor all quantization code in ggml-quants
ggml-ci
* ggml-quants : fix Zig and Swift builds + quantize tool
ggml-ci
* quantize : --pure option for disabling k-quant mixtures
---------
Co-authored-by: cebtenzzre <redacted>
Erik Scholz [Sat, 28 Oct 2023 14:41:07 +0000 (16:41 +0200)]
flake : update flake.lock for newer transformers version + provide extra dev shell (#3797)
* flake : update flake.lock for newer transformers version + provide extra dev shell with torch and transformers (for most convert-xxx.py scripts)
Aarni Koskela [Sat, 28 Oct 2023 12:43:01 +0000 (15:43 +0300)]
metal : try cwd for ggml-metal.metal if bundle lookup fails (#3793)
* Try cwd for ggml-metal if bundle lookup fails
When building with `-DBUILD_SHARED_LIBS=ON -DLLAMA_METAL=ON -DLLAMA_BUILD_SERVER=ON`,
`server` would fail to load `ggml-metal.metal` because `[bundle pathForResource:...]`
returns `nil`. In that case, fall back to `ggml-metal.metal` in the cwd instead of
passing `null` as a path.
Follows up on #1782
* Update ggml-metal.m
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sat, 28 Oct 2023 12:25:33 +0000 (15:25 +0300)]
issues : change label from bug to bug-unconfirmed (#3748)
Georgi Gerganov [Sat, 28 Oct 2023 12:25:15 +0000 (15:25 +0300)]
convert : ignore tokens if their IDs are within [0, vocab_size) (#3831)
Kerfuffle [Sat, 28 Oct 2023 11:54:24 +0000 (05:54 -0600)]
llama : allow quantizing k-quants to fall back when tensor size incompatible (#3747)
* Allow quantizing k-quants to fall back when tensor size incompatible
* quantizing: Add warning when tensors were incompatible with k-quants
Clean up k-quants state passing a bit
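A sketch of the fallback idea (the enum values and the chosen fallback type are illustrative, not the exact mapping in llama.cpp): k-quants operate on 256-element super-blocks, so tensors whose row size is not a multiple of 256 get a compatible non-k type instead of aborting.
```c++
enum qtype { Q4_K, Q5_K, Q4_1, Q5_1 };  // illustrative subset of quant types
static const long long QK_K = 256;      // k-quant super-block size

qtype pick_quant_type(qtype wanted, long long n_per_row) {
    if ((wanted == Q4_K || wanted == Q5_K) && n_per_row % QK_K != 0) {
        // hypothetical fallback mapping for incompatible row sizes
        return wanted == Q4_K ? Q4_1 : Q5_1;
    }
    return wanted;
}
```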
Georgi Gerganov [Sat, 28 Oct 2023 11:23:11 +0000 (14:23 +0300)]
llama : add option for greedy sampling with probs (#3813)
* llama : add option for greedy sampling with probs
* llama : add comment about llama_sample_token_greedy() missing probs
* sampling : temp == 0.0 -> no probs, temp < 0.0 -> probs
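A sketch of the temperature convention introduced here (function names are from the sampling API of that period; treat the exact semantics as my paraphrase): temp == 0.0 means greedy sampling without probabilities, temp < 0.0 means greedy sampling with softmax applied first so the chosen token carries a probability.
```c++
#include "llama.h"

llama_token sample_greedy(llama_context * ctx, llama_token_data_array * cands, float temp) {
    if (temp < 0.0f) {
        llama_sample_softmax(ctx, cands);         // fill normalized probabilities
    }
    return llama_sample_token_greedy(ctx, cands); // pick the most likely token
}
```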
Henk Poley [Sat, 28 Oct 2023 10:16:33 +0000 (12:16 +0200)]
common : print that one line of the syntax help *also* to standard output (#3823)
Georgi Gerganov [Sat, 28 Oct 2023 09:06:08 +0000 (12:06 +0300)]
starcoder : add GPU offloading (#3827)
* starcoder : do not GPU split 1D bias tensors
* starcoder : offload layers to GPU
ggml-ci
Kerfuffle [Fri, 27 Oct 2023 21:40:07 +0000 (15:40 -0600)]
speculative : ensure draft and target model vocab matches (#3812)
* speculative: Ensure draft and target model vocab matches
* Tolerate small differences when checking dft vs tgt vocab
cebtenzzre [Fri, 27 Oct 2023 21:33:53 +0000 (17:33 -0400)]
llama : correctly report GGUFv3 format (#3818)
Thibault Terrasson [Fri, 27 Oct 2023 14:37:41 +0000 (16:37 +0200)]
simple : fix batch handling (#3803)
Georgi Gerganov [Fri, 27 Oct 2023 14:01:23 +0000 (17:01 +0300)]
cuda : improve text-generation and batched decoding performance (#3776)
* cuda : prints wip
* cuda : new cublas gemm branch for multi-batch quantized src0
* cuda : add F32 sgemm branch
* cuda : fine-tune >= VOLTA params + use MMQ only for small batches
* cuda : remove duplicated cuBLAS GEMM code
* cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros
* build : add compile option to force use of MMQ kernels
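A hedged sketch of the dispatch idea in this entry (the helper and the batch-size threshold are illustrative; the actual decision lives in ggml-cuda.cu): with GGML_CUDA_FORCE_MMQ defined the custom MMQ kernels are always used, otherwise cuBLAS with tensor cores is preferred for larger batches.
```c++
// Illustrative only; the batch-size threshold is an assumption, not the real constant.
static bool use_mmq(int batch_size, bool has_tensor_cores) {
#ifdef GGML_CUDA_FORCE_MMQ
    (void) batch_size; (void) has_tensor_cores;
    return true;
#else
    return !(has_tensor_cores && batch_size > 32);
#endif
}
```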
Georgi Gerganov [Thu, 26 Oct 2023 19:53:37 +0000 (22:53 +0300)]
server : do not release slot on image input (#3798)
Georgi Gerganov [Wed, 25 Oct 2023 07:26:27 +0000 (10:26 +0300)]
batched-bench : print params at start
Georgi Gerganov [Wed, 25 Oct 2023 07:09:16 +0000 (10:09 +0300)]
log : disable pid in log filenames
cebtenzzre [Tue, 24 Oct 2023 20:10:43 +0000 (16:10 -0400)]
server : add parameter -tb N, --threads-batch N (#3584) (#3768)
Co-authored-by: Michael Coppola <redacted>
Co-authored-by: Michael Coppola <redacted>
Georgi Gerganov [Tue, 24 Oct 2023 20:08:20 +0000 (23:08 +0300)]
server : do not block system prompt update (#3767)
* server : do not block system prompt update
* server : update state machine logic to process system prompts
* server : minor
Georgi Gerganov [Tue, 24 Oct 2023 18:51:20 +0000 (21:51 +0300)]
sync : ggml (conv ops + cuda MSVC fixes) (#3765)
ggml-ci
John Smith [Tue, 24 Oct 2023 17:48:45 +0000 (01:48 +0800)]
cmake : add missed dependencies (#3763)
Georgi Gerganov [Tue, 24 Oct 2023 13:48:37 +0000 (16:48 +0300)]
cuda : add batched cuBLAS GEMM for faster attention (#3749)
* cmake : add helper for faster CUDA builds
* batched : add NGL arg
* ggml : skip nops in compute_forward
* cuda : minor indentation
* cuda : batched cuBLAS GEMMs for src0 F16 and src1 F32 (attention ops)
* Apply suggestions from code review
These changes plus:
```c++
#define cublasGemmBatchedEx hipblasGemmBatchedEx
```
are needed to compile with ROCm. I haven't done performance testing, but it seems to work.
I couldn't figure out how to propose a change for lines outside what the pull changed; also, this is the first time I've tried to create a multi-part review, so please forgive me if I mess something up.
* cuda : add ROCm / hipBLAS cublasGemmBatchedEx define
* cuda : add cublasGemmStridedBatchedEx for non-broadcasted cases
* cuda : reduce mallocs in cublasGemmBatchedEx branch
* cuda : add TODO for calling cublas from kernel + using mem pool
---------
Co-authored-by: Kerfuffle <redacted>
Galunid [Tue, 24 Oct 2023 07:17:17 +0000 (09:17 +0200)]
Add more tokenizer tests (#3742)
* Add more tokenizer tests
* Add starcoder
* Update test vocab files
* Restrict bpe tokenizer tests to unicode planes
* Update comment
* Comment cosmetics
* Remove bloom vocab/test
Georgi Gerganov [Tue, 24 Oct 2023 06:46:50 +0000 (09:46 +0300)]
metal : handle ggml_scale for n%4 != 0 (close #3754)
ggml-ci
Georgi Gerganov [Mon, 23 Oct 2023 20:46:05 +0000 (23:46 +0300)]
Revert "make : add optional CUDA_NATIVE_ARCH (#2482)"
This reverts commit 96981f37b1e3f450d9e63e571514217bf60f0a7f.
See: https://github.com/ggerganov/llama.cpp/pull/2482#issuecomment-1775975866
M. Yusuf Sarıgöz [Mon, 23 Oct 2023 19:57:16 +0000 (22:57 +0300)]
issues : separate bug and enhancement template + no default title (#3748)
Galunid [Mon, 23 Oct 2023 19:46:00 +0000 (21:46 +0200)]
Update special token handling in conversion scripts for gpt2 derived tokenizers (#3746)
We still have the heads up in `README.md` regarding `bpe` tokenizers and this patch is needed for
- a couple of tokenizer tests
- some more `special` and `non-special` added tokens handling (as far as I understand it)
* Update special token handling
* Add mpt
Marcus Dunn [Mon, 23 Oct 2023 19:40:03 +0000 (12:40 -0700)]
llama : remove token functions with `context` args in favor of `model` (#3720)
* added `llama_model_token_*` variants to all the `llama_token_*` functions.
* added `LLAMA_API`
* formatting
Co-authored-by: Georgi Gerganov <redacted>
* removed old `llama_token` functions
* changed 3 more functions to take in model
- `llama_token_get_text`
- `llama_token_get_score`
- `llama_token_get_type`
* added back docs
* fixed main.cpp
* changed token functions to use new model variants
* changed token functions to use new model variants
---------
Co-authored-by: Georgi Gerganov <redacted>
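A small usage sketch of the direction described above (the functions named in the commit message now take the model; the exact const-qualification is paraphrased):
```c++
#include <cstdio>
#include "llama.h"

void print_token_info(const llama_model * model, llama_token id) {
    const char * text  = llama_token_get_text (model, id);
    const float  score = llama_token_get_score(model, id);
    printf("token %d: '%s' (score %.3f)\n", id, text, score);
}
```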
Galunid [Mon, 23 Oct 2023 15:47:03 +0000 (17:47 +0200)]
Fix baichuan convert script not detecting model (#3739)
It seems nobody objects.
Alex [Sun, 22 Oct 2023 19:56:53 +0000 (15:56 -0400)]
make : add optional CUDA_NATIVE_ARCH (#2482)
Use the environment variable `CUDA_NATIVE_ARCH` if present to set NVCC arch. Otherwise, use `native`.
Georgi Gerganov [Sun, 22 Oct 2023 19:53:08 +0000 (22:53 +0300)]
server : parallel decoding and multimodal (#3677)
* implementing parallel decoding in server example
* crash fixed
* save dev progress
* refactored sampling function
* completion endpoint working
* multiple client support
* grammar + no stream completion
* cached prompt support
* chat.mjs support cached prompt + some fixes
* server ui now support multiple clients
* unused change reverted
* fixed timings per slot
* add context swap
* add changes to README.md
* llava multimodal integration
* fixed tokens probs
* add multimodal input - alpha
* refactor code + remove unused comments + improved README.md
* fix compilation errors with llvm
* notify the user from server ui that multimodality is unavailable
* some ci fixes
* fix ci make build undefined ref errors
* fix prompts longer than ctx, as proposed in #3639
* fixed premature end due to stop word
* context shift fixed
* fix llava implementation
* sync README.md changes
* readme change
* update api like OpenAI
* multimodal support enabled by default
* fix make build errors
* fix multiple clients
* fix zig build
* new sampling API
* latest changes of sampling API
* server : coding-style normalization
* server : coding-style normalization (part 2)
* server : remove beam-search functionality
* server : bug fix in ingest_images
n_tokens is incremented internally by llama_batch_add
* server : use refs + use llama_batch_clear()
* server : snake case
* server : minor sync
* added thread safe pipeline
* server : batch has to be allocated for n_parallel sequences
* server : no need for atomic int - already using mutex
* server : logs + minor code style
* server : fix multibyte handle in partial response (#3706)
* fix image load + view image in chat
* make : silence stb warnings
* clip : link to ggml, not to llama
* server : fix switch fallthrough
* server : fix crash in Debug on macOS (I have no idea why this fixes it!?)
* server : refactor ctx_sampling init + n_ctx + names
* server : bug fix for prompt caching
* Do not save/load image_data to localStorage
* editorconfig : new line in index.html
* server : completion requests remember slot_id
* Update readme to document multimodal in server
* server : minor style
* Update readme to document multimodal in server
* server : hide ctx_sampling->prev behind API (#3696)
* server : apply fix from #3722
* server : fix slot reuse
* server : add comment about changing slot_state to bool
---------
Co-authored-by: FSSRepo <redacted>
Co-authored-by: Damian Stewart <redacted>
Co-authored-by: Steward Garcia <redacted>
Co-authored-by: Jhen-Jie Hong <redacted>
Co-authored-by: M. Yusuf Sarıgöz <redacted>
goerch [Sun, 22 Oct 2023 19:21:42 +0000 (21:21 +0200)]
Add test for MPT tokenization (#3728)
* Add test for MPT tokenization
* Revert code motion
* Remove unnecessary restriction in test case
* Clarify logic in conversion
Ian Scrivener [Sun, 22 Oct 2023 18:16:43 +0000 (05:16 +1100)]
readme : remove unsupported node.js library (#3703)
- https://github.com/Atome-FE/llama-node is quite out of date
- doesn't support recent/current llama.cpp functionality
Kerfuffle [Sun, 22 Oct 2023 18:14:56 +0000 (12:14 -0600)]
llama : validate special token ids are in range when loading GGUF model (#3635)
* Add validation for special token ids to llama.cpp
Small optimization for llama_byte_to_token SPM mode
* Fix BPE newline check, only I could break something so simple
* Killll meeeeee
* Account for GGUF_KEY_KEY only setting when the key exists
* Minor code cleanups.
* Fix convert.py error msg when added tokens are out of range
* Make gguf SpecialVocab vocab size-aware
Update conversion scripts accordingly
* Avoid a string copy
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
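A minimal sketch of the range check this change adds (hypothetical helper name, not the llama.cpp code): a special token id from the GGUF metadata is only accepted if it falls inside the model vocabulary.
```c++
#include <cstdint>

bool special_token_id_is_valid(int32_t id, int32_t n_vocab) {
    return id >= 0 && id < n_vocab;
}
```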
vvhg1 [Sun, 22 Oct 2023 18:09:51 +0000 (20:09 +0200)]
main : escape prompt for cfg_negative_prompt and consecutive inputs in main with interactive (#3623)
* infill tokens correction
* server infill tokens correction
* removing any leading whitespace from infill suffix and removing leading space token from suffix when params.escape
* removing any leading whitespace from infill suffix and removing leading space token from suffix when params.escape
* only rm when params.escape, rm space if possible which is added back or rm added space token
* only rm when params.escape, rm space if possible which is added back or rm added space token
* Revert "only rm when params.escape, rm space if possible which is added back or rm added space token"
This reverts commit 63ba0b621f21077c0e3bc6ba6a327534123cb738.
* fix interactive prompt escaping and fix server infill leading space handling
* rm unnecessary bool check
* process escapes for neg prompt and interactive consec prompts
* removed unnecessary static string escape
Georgi Gerganov [Sun, 22 Oct 2023 05:37:20 +0000 (08:37 +0300)]
batched : add len CLI argument
shibe2 [Thu, 12 Oct 2023 12:01:23 +0000 (16:01 +0400)]
CLBlast: Add outer loops over src0 for broadcasting in mulmat
Reduce repeated dequantization of the same data.
Georgi Gerganov [Fri, 20 Oct 2023 18:07:23 +0000 (21:07 +0300)]
sampling : refactor init to use llama_sampling_params (#3696)
* sampling : refactor init to use llama_sampling_params
* llama : combine repetition, frequency and presence penalties in 1 call
* examples : remove embd-input and gptneox-wip
* sampling : rename penalty params + reduce size of "prev" vector
* sampling : add llama_sampling_print helper
* sampling : hide prev behind API and apply #3661
ggml-ci
Qin Yue Chen [Fri, 20 Oct 2023 11:19:40 +0000 (06:19 -0500)]
gguf : support big endian platform (#3552)
* check whether platform is s390x; if yes, do not import immintrin.h
* support s390x big endian
* support --bigendian option for s390x
1. verified with baichuan7b-chat with float 16 on s390x
2. verified with baichuan7b-chat
3. verified with chinese-alpaca-2-13b-f16
* update format based on editor-config checker result
* Update convert-baichuan-hf-to-gguf.py
* 1. check in ggml.c if endianness does not match
2. update GGUF version
3. change get_pack_prefix to property
4. update information log
* always use "GGUF" as beginng of GGUF file
* Compare "GGUF" with file header char by char
1. Set GGUF_MAGIC to "GGUF" string instead of int value
2. Compare "GGUF" char by char to ensure its byte order
3. Move bytes swap code from convert.py to gguf.py write_tensor_data
---------
Co-authored-by: Georgi Gerganov <redacted>
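A sketch of the byte-order-safe magic check described above: comparing the four magic bytes individually (or via memcmp) gives the same answer on little- and big-endian hosts, unlike comparing a packed 32-bit integer.
```c++
#include <cstdio>
#include <cstring>

bool has_gguf_magic(FILE * f) {
    char magic[4];
    if (fread(magic, 1, sizeof(magic), f) != sizeof(magic)) {
        return false;
    }
    return memcmp(magic, "GGUF", 4) == 0;
}
```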
Georgi Gerganov [Fri, 20 Oct 2023 10:06:10 +0000 (13:06 +0300)]
server : fix uninitialized sampling context (close #3685)
Herman Semenov [Fri, 20 Oct 2023 10:02:12 +0000 (10:02 +0000)]
ggml : fix rope + llama minor optimizations (#3560)
* Minor fixes and fixed memleak
* Using const auto references in range-based loop C++17
cebtenzzre [Fri, 20 Oct 2023 05:32:08 +0000 (01:32 -0400)]
convert : restore compat with old Falcon models (#3680)
M. Yusuf Sarıgöz [Thu, 19 Oct 2023 16:40:41 +0000 (19:40 +0300)]
multimodal : add BakLLaVA conversion support (#3682)
M. Yusuf Sarıgöz [Thu, 19 Oct 2023 13:59:11 +0000 (16:59 +0300)]
llava : avoid segfault in case of non-existent mmproj file (#3674)
Georgi Gerganov [Wed, 18 Oct 2023 18:44:43 +0000 (21:44 +0300)]
readme : update hot topics
Georgi Gerganov [Wed, 18 Oct 2023 15:49:40 +0000 (18:49 +0300)]
speculative : bug fixes
Georgi Gerganov [Wed, 18 Oct 2023 13:21:57 +0000 (16:21 +0300)]
speculative : add tree-based sampling example (#3624)
* sampling : one sequence per sampling context
ggml-ci
* speculative : add tree-based sampling support
ggml-ci
* speculative : reuse the n_parallel CLI param
* speculative : refactor sampling
* examples : fix build after sampling refactoring
ggml-ci
* batched : fix n_seq_id
* sampling : fix malloc
ggml-ci
* swift : fix build
ggml-ci
* swift : try to fix build
ggml-ci
* prompts : add assistant.txt
* common : add llama_batch_add() and llama_batch_clear() helpers
* speculative : minor refactor
ggml-ci
* minor : comments + rename
ggml-ci
* speculative : fix off-by-one for n_drafted
* speculative : fix the n_drafted fix + p constants
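A usage sketch for the llama_batch_add()/llama_batch_clear() helpers mentioned in the list above (signatures paraphrased from common/common.h of that period):
```c++
#include <vector>
#include "common.h"

// Fill a batch with a prompt on sequence 0, requesting logits only for the
// last token.
void prepare_prompt_batch(llama_batch & batch, const std::vector<llama_token> & prompt) {
    llama_batch_clear(batch);
    for (size_t i = 0; i < prompt.size(); ++i) {
        llama_batch_add(batch, prompt[i], (llama_pos) i, { 0 }, i == prompt.size() - 1);
    }
}
```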
Jhen-Jie Hong [Wed, 18 Oct 2023 12:21:48 +0000 (07:21 -0500)]
metal : implement q5_0 and q5_1 kernels (#3648)
* metal : implement dequantize_q5_0
* metal : block_q_n_dot_y for block_q5_0 (broken)
* metal : revert unnecessary change
* metal : implement dequantize_q5_1
* metal : block_q_n_dot_y for q5_1 (broken)
* metal : fix block_q_n_dot_y
* minor : spaces / formatting
---------
Co-authored-by: Georgi Gerganov <redacted>
shibe2 [Wed, 18 Oct 2023 12:09:22 +0000 (16:09 +0400)]
opencl : fix element-wise multiplication (#3656)
slaren [Tue, 17 Oct 2023 20:24:50 +0000 (22:24 +0200)]
fix embeddings when using CUDA (#3657)
Georgi Gerganov [Tue, 17 Oct 2023 19:34:26 +0000 (22:34 +0300)]
llama : avoid fprintf in favor of LLAMA_LOG (#3538)
BarfingLemurs [Tue, 17 Oct 2023 18:13:21 +0000 (14:13 -0400)]
readme : update hot-topics & models, detail windows release in usage (#3615)
* Update README.md
* Update README.md
* Update README.md
* move "Running on Windows" section below "Prepare data and run"
---------
Co-authored-by: Georgi Gerganov <redacted>
shibe2 [Wed, 11 Oct 2023 17:30:06 +0000 (21:30 +0400)]
CLBlast: Fix temporary buffer size for f16 conversion (wsize)
Fix buffer overflow.
Reduce the size to fit just one 2D slice.
Assert sufficient size.
slaren [Tue, 17 Oct 2023 17:00:58 +0000 (19:00 +0200)]
train-text-from-scratch : fix assert failure in ggml-alloc (#3618)
Georgi Gerganov [Tue, 17 Oct 2023 16:52:53 +0000 (19:52 +0300)]
editorconfig : remove trailing spaces
coezbek [Tue, 17 Oct 2023 16:51:02 +0000 (18:51 +0200)]
server : documentation of JSON return value of /completion endpoint (#3632)
* Added documentation of JSON return value of /completion endpoint
* Update examples/server/README.md
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 17 Oct 2023 16:12:46 +0000 (19:12 +0300)]
save-load-state : fix example + add ci test (#3655)
* save-load-state : fix example (close #3606)
* ci : add test for save-load-state example
ggml-ci
ldwang [Tue, 17 Oct 2023 15:52:33 +0000 (23:52 +0800)]
readme : add Aquila2 links (#3610)
Signed-off-by: ldwang <redacted>
Co-authored-by: ldwang <redacted>
staviq [Tue, 17 Oct 2023 15:11:01 +0000 (17:11 +0200)]
tokenizer : special token handling (#3538)
* Rewrite special token handling from #1931
* shorten param name, add st verification by type
* use offsets instead of copy by substr
* formatting, remove copying iterator on delete
* llama : normalize code-style
* swift fix
* print pfx/sfx if verb, main: split pfx input sfx
* don't add space when using special tokens
* minor : comment + spacing
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 17 Oct 2023 06:19:28 +0000 (09:19 +0300)]
k-quants : fix quantization ranges (#3646)