git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Rinne [Wed, 19 Jul 2023 07:06:40 +0000 (15:06 +0800)]
llama : extend API to get max devices at runtime (#2253)
wzy [Wed, 19 Jul 2023 07:01:55 +0000 (15:01 +0800)]
flake : update flake.nix (#2270)
When `isx86_32 || isx86_64`, use MKL; otherwise use OpenBLAS.
According to
https://discourse.nixos.org/t/rpath-of-binary-contains-a-forbidden-reference-to-build/12200/3,
add -DCMAKE_SKIP_BUILD_RPATH=ON
Fixes #2261: Nix doesn't provide mkl-sdl.pc.
When building with -DBUILD_SHARED_LIBS=ON and -DLLAMA_BLAS_VENDOR=Intel10_lp64,
replace mkl-sdl.pc with mkl-dynamic-lp64-iomp.pc.
wzy [Wed, 19 Jul 2023 07:01:11 +0000 (15:01 +0800)]
cmake : install targets (#2256)
fix #2252
Georgi Gerganov [Tue, 18 Jul 2023 11:24:43 +0000 (14:24 +0300)]
ci : integrate with ggml-org/ci (#2250)
* ci : run ctest
ggml-ci
* ci : add open llama 3B-v2 tests
ggml-ci
* ci : disable wget progress output
ggml-ci
* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations
ggml-ci
* tests : try to fix tail free sampling test
ggml-ci
* ci : add K-quants
ggml-ci
* ci : add short perplexity tests
ggml-ci
* ci : add README.md
* ppl : add --chunks argument to limit max number of chunks
ggml-ci
* ci : update README
Georgi Gerganov [Tue, 18 Jul 2023 08:50:49 +0000 (11:50 +0300)]
llama : shorten quantization descriptions
Jiahao Li [Mon, 17 Jul 2023 17:39:29 +0000 (01:39 +0800)]
Support dup & cont ops on CUDA (#2242)
Alex Klinkhamer [Sun, 16 Jul 2023 21:01:45 +0000 (14:01 -0700)]
llama : fix t_start_sample_us initialization warning (#2238)
Qingyou Meng [Sun, 16 Jul 2023 19:57:28 +0000 (03:57 +0800)]
ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG (#2219)
* fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG
* remove ifdef GGML_PERF; update fmt
Jiří Podivín [Sun, 16 Jul 2023 19:54:47 +0000 (21:54 +0200)]
py : turn verify-checksum-models.py into executable (#2245)
README.md was adjusted to reflect the change.
Signed-off-by: Jiri Podivin <redacted>
Xiao-Yong Jin [Sat, 15 Jul 2023 10:34:16 +0000 (06:34 -0400)]
llama : add custom RoPE (#2054)
* Implement customizable RoPE
The original RoPE has pre-defined parameters
theta_i = 10000^(−2(i−1)/d), for i in [1, 2, ..., d/2]
Our customizable RoPE, ggml_rope_custom_inplace, uses
theta_i = scale * base^(−2(i−1)/d), for i in [1, 2, ..., d/2]
with defaults that match the original (a short numeric sketch follows this entry):
scale = 1.0
base = 10000
The new command line arguments
--rope-freq-base
--rope-freq-scale
set the two new RoPE parameters.
Recent research shows that changing these two parameters extends the context limit with minimal loss.
1. Extending Context to 8K
kaiokendev
https://kaiokendev.github.io/til#extending-context-to-8k
2. Extending Context Window of Large Language Models via Positional Interpolation
Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
https://arxiv.org/abs/2306.15595
3. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
https://www.reddit.com/user/bloc97
https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
For the bold, try adding the following command line parameters to your favorite model:
-c 16384 --rope-freq-base 80000 --rope-freq-scale 0.5
* ggml-metal: fix custom rope
* common: fix argument names in help
* llama: increase MEM_REQ_EVAL for MODEL_3B
It avoids crashes for quantized weights on CPU.
A better way to calculate the required buffer size is still needed.
* llama: make MEM_REQ_EVAL depend on n_ctx
* server: use proper Content-Type in curl examples
Without the header Content-Type: application/json, curl will POST with
Content-Type: application/x-www-form-urlencoded.
Although our simple server doesn't care, the bundled httplib.h limits such
payloads via CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH (8192).
With Content-Type: application/json, we can send large JSON data.
* style : minor fixes, mostly indentations
* ggml : fix asserts
---------
Co-authored-by: Georgi Gerganov <redacted>
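For reference, a minimal, self-contained sketch of the frequency schedule described above; the helper name rope_thetas is made up for illustration and this is not the actual ggml_rope_custom_inplace code:
```cpp
// A minimal numeric sketch of the customizable RoPE frequency schedule
// described in the commit above; illustrative only, not the ggml kernel.
#include <cmath>
#include <cstdio>
#include <vector>

// theta_i = scale * base^(-2(i-1)/d), for i in [1, 2, ..., d/2]
static std::vector<float> rope_thetas(int d, float freq_base = 10000.0f, float freq_scale = 1.0f) {
    std::vector<float> thetas(d / 2);
    for (int i = 1; i <= d / 2; ++i) {
        thetas[i - 1] = freq_scale * std::pow(freq_base, -2.0f * (i - 1) / d);
    }
    return thetas;
}

int main() {
    // Defaults reproduce the original RoPE; passing e.g. (128, 80000.0f, 0.5f)
    // corresponds to --rope-freq-base 80000 --rope-freq-scale 0.5.
    const auto t = rope_thetas(128);
    printf("theta_1 = %g, theta_%zu = %g\n", t.front(), t.size(), t.back());
}
```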
Dave Della Costa [Fri, 14 Jul 2023 19:13:38 +0000 (15:13 -0400)]
flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224)
wzy [Fri, 14 Jul 2023 19:05:08 +0000 (03:05 +0800)]
make : use pkg-config for OpenBLAS (#2222)
Bach Le [Fri, 14 Jul 2023 19:00:58 +0000 (03:00 +0800)]
cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220)
Evan Miller [Fri, 14 Jul 2023 18:55:56 +0000 (14:55 -0400)]
ggml : fix static_assert with older compilers #2024 (#2218)
Bach Le [Fri, 14 Jul 2023 18:55:24 +0000 (02:55 +0800)]
llama : add functions that work directly on model (#2197)
* Remove vocab reference from context
* Add functions that work directly with the model
Ali Chraghi [Fri, 14 Jul 2023 18:50:58 +0000 (11:50 -0700)]
build.zig : install config header (#2216)
Shangning Xu [Fri, 14 Jul 2023 18:40:05 +0000 (02:40 +0800)]
examples : fixed path typos in embd-input (#2214)
Jiahao Li [Fri, 14 Jul 2023 18:38:24 +0000 (02:38 +0800)]
cuda : support broadcast add & mul (#2192)
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Fri, 14 Jul 2023 17:44:08 +0000 (19:44 +0200)]
CUDA: mul_mat_vec_q kernels for k-quants (#2203)
James Reynolds [Fri, 14 Jul 2023 17:34:40 +0000 (11:34 -0600)]
make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)
Fixes https://github.com/ggerganov/llama.cpp/issues/2166 by moving commands after the CFLAGS are changed.
Georgi Gerganov [Fri, 14 Jul 2023 13:36:41 +0000 (16:36 +0300)]
ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)
Kawrakow [Fri, 14 Jul 2023 09:46:21 +0000 (12:46 +0300)]
Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)
* 3-5% faster Q4_0 on Metal
* 7-25% faster Q4_1 on Metal
* Oops, forgot to delete the original Q4_1 kernel
---------
Co-authored-by: Iwan Kawrakow <redacted>
Howard Su [Thu, 13 Jul 2023 13:58:25 +0000 (21:58 +0800)]
Revert "Support using mmap when applying LoRA (#2095)" (#2206)
Has perf regression when mlock is used.
This reverts commit
2347463201a9f4159ae95b737e1544dd300569c8 .
Howard Su [Thu, 13 Jul 2023 13:58:09 +0000 (21:58 +0800)]
Fix compile error on Windows CUDA (#2207)
Bodo Graumann [Thu, 13 Jul 2023 13:49:14 +0000 (15:49 +0200)]
devops : add missing quotes to bash script (#2193)
This prevents accidentally expanding arguments that contain spaces.
Shouzheng Liu [Wed, 12 Jul 2023 20:10:55 +0000 (16:10 -0400)]
metal : new q4_0 matrix-vector kernel (#2188)
Prefetch data to improve GPU utilization. ~48% faster for 33B model.
Georgi Gerganov [Wed, 12 Jul 2023 17:51:29 +0000 (20:51 +0300)]
ggml : broadcast mul_mat + conv batch support (#2199)
* ggml : broadcast mul_mat + conv batch support
* ggml : apply mul_mat broadcast fix by @jploski
Georgi Gerganov [Wed, 12 Jul 2023 17:27:03 +0000 (20:27 +0300)]
ggml : add ggml_pool_1d and ggml_pool_2d
Georgi Gerganov [Wed, 12 Jul 2023 17:26:18 +0000 (20:26 +0300)]
cuda : add gelu support
Howard Su [Wed, 12 Jul 2023 12:18:40 +0000 (20:18 +0800)]
FP16 is supported in CM=6.0 (#2177)
* FP16 is supported in CM=6.0
* Build PTX code for both 60 and 61
Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Wed, 12 Jul 2023 08:38:52 +0000 (10:38 +0200)]
Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)
Georgi Gerganov [Wed, 12 Jul 2023 07:54:19 +0000 (10:54 +0300)]
ggml : revert CUDA broadcast changes from #2183 (#2191)
Georgi Gerganov [Tue, 11 Jul 2023 19:53:34 +0000 (22:53 +0300)]
ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)
Spencer Sutton [Tue, 11 Jul 2023 16:31:10 +0000 (12:31 -0400)]
ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)
* Add ggml changes
* Update train-text-from-scratch for change
* mpi : adapt to new ggml_tensor->src
---------
Co-authored-by: Georgi Gerganov <redacted>
Bach Le [Tue, 11 Jul 2023 16:18:43 +0000 (00:18 +0800)]
llama : add classifier-free guidance (#2135)
* Initial implementation
* Remove debug print
* Restore signature of llama_init_from_gpt_params
* Free guidance context
* Make freeing of guidance_ctx conditional
* Make Classifier-Free Guidance a sampling function
* Correct typo. CFG already means context-free grammar.
* Record sampling time in llama_sample_classifier_free_guidance
* Shift all values by the max value before applying logsoftmax
* Fix styling based on review
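As a rough illustration of the sampling step above, the sketch below combines conditional and guidance logits in log-probability space, shifting by the max before the log-softmax for stability. The combination formula and the cfg_combine helper are assumptions for illustration; the actual entry point is llama_sample_classifier_free_guidance.
```cpp
// Hedged sketch of classifier-free guidance over two logit vectors.
// Assumes the combination l = l_guidance + scale * (l_cond - l_guidance)
// applied after a log-softmax; the real llama.cpp code may differ in detail.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Shift by the max value before applying log-softmax (numerical stability).
static void log_softmax(std::vector<float> & v) {
    const float vmax = *std::max_element(v.begin(), v.end());
    double sum = 0.0;
    for (float x : v) sum += std::exp(x - vmax);
    const float log_sum = (float) std::log(sum);
    for (float & x : v) x = (x - vmax) - log_sum;
}

static std::vector<float> cfg_combine(std::vector<float> logits,
                                      std::vector<float> guidance_logits,
                                      float scale) {
    log_softmax(logits);
    log_softmax(guidance_logits);
    for (size_t i = 0; i < logits.size(); ++i) {
        logits[i] = guidance_logits[i] + scale * (logits[i] - guidance_logits[i]);
    }
    return logits; // hand these to the usual sampling functions afterwards
}
```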
Jinwoo Jeong [Tue, 11 Jul 2023 16:12:35 +0000 (01:12 +0900)]
docker : add '--server' option (#2174)
Chad Brewbaker [Tue, 11 Jul 2023 16:03:06 +0000 (11:03 -0500)]
readme : fix zig build instructions (#2171)
Howard Su [Tue, 11 Jul 2023 14:37:01 +0000 (22:37 +0800)]
Support using mmap when applying LoRA (#2095)
* Support using mmap when applying LoRA
* Fix Linux
* Update comment to reflect LoRA support with mmap
LostRuins [Tue, 11 Jul 2023 14:01:08 +0000 (22:01 +0800)]
Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)
* This allows LLaMA models that were previously incompatible with K-quants to function mostly as normal. The issue arises when a model has a vocab size != 32000, e.g. 32001, which is not divisible by 256 or 64. Since the problematic dimensions only apply to `tok_embeddings.weight` and `output.weight` (dimensions 4096 x n_vocab), we can simply quantize these layers to Q8_0, while the majority of the hidden layers are still K-quanted since they have compatible dimensions.
* Fix indentation
Co-authored-by: Georgi Gerganov <redacted>
* As an alternative, to avoid failing on Metal due to its lack of Q8_0 support, quantize tok_embeddings.weight to Q4_0 and retain output.weight as F16 instead. This results in a net gain of about 55 MB for a 7B model compared to the previous approach, but should minimize the adverse impact on model quality (a sketch of the fallback follows this entry).
---------
Co-authored-by: Georgi Gerganov <redacted>
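A hypothetical sketch of the fallback described above; the enum and the pick_type helper are made-up names, and the real decision lives in the quantization code:
```cpp
// Hypothetical sketch: when a tensor's row size is not divisible by the
// K-quant super-block size (QK_K == 256), fall back to a non-K format for
// the two problematic tensors. Names and the exact policy are illustrative.
#include <cstdint>
#include <string>

enum fallback_type { TYPE_Q4_0, TYPE_Q8_0, TYPE_F16, TYPE_K_QUANT };

static fallback_type pick_type(const std::string & name, int64_t ne0, fallback_type wanted) {
    if (ne0 % 256 == 0) {
        return wanted; // dimensions are K-quant compatible
    }
    // e.g. n_vocab == 32001: keep Metal happy by avoiding Q8_0 here
    if (name == "tok_embeddings.weight") return TYPE_Q4_0;
    if (name == "output.weight")         return TYPE_F16;
    return TYPE_Q8_0;
}
```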
Evan Miller [Mon, 10 Jul 2023 15:49:56 +0000 (11:49 -0400)]
mpi : add support for distributed inference via MPI (#2099)
* MPI support, first cut
* fix warnings, update README
* fixes
* wrap includes
* PR comments
* Update CMakeLists.txt
* Add GH workflow, fix test
* Add info to README
* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)
* mpi : add names for layer inputs + prep ggml_mpi_graph_compute()
* mpi : move all MPI logic into ggml-mpi
Not tested yet
* mpi : various fixes - communication now works but results are wrong
* mpi : fix output tensor after MPI compute (still not working)
* mpi : fix inference
* mpi : minor
* Add OpenMPI to GH action
* [mpi] continue-on-error: true
* mpi : fix after master merge
* [mpi] Link MPI C++ libraries to fix OpenMPI
* tests : fix new llama_backend API
* [mpi] use MPI_INT32_T
* mpi : factor out recv / send in functions and reuse
* mpi : extend API to allow usage with outer backends (e.g. Metal)
---------
Co-authored-by: Georgi Gerganov <redacted>
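As a very rough sketch of the pipeline idea behind this entry (each rank evaluates a slice of layers and forwards activations to the next rank), assuming plain point-to-point sends; this does not reflect the actual ggml-mpi interface:
```cpp
// Illustrative only: pipeline-style layer split across MPI ranks.
// Rank r receives activations from r-1, runs its local layers, then
// sends the result to r+1. The actual ggml-mpi code differs.
#include <mpi.h>
#include <vector>

static void forward_slice(std::vector<float> & act, int n_layers_local) {
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank > 0) {
        MPI_Recv(act.data(), (int) act.size(), MPI_FLOAT, rank - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    for (int l = 0; l < n_layers_local; ++l) {
        // ... evaluate this rank's layer l on `act` (placeholder) ...
    }
    if (rank + 1 < size) {
        MPI_Send(act.data(), (int) act.size(), MPI_FLOAT, rank + 1, 0, MPI_COMM_WORLD);
    }
}
```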
oobabooga [Sun, 9 Jul 2023 08:59:53 +0000 (05:59 -0300)]
llama : remove "first token must be BOS" restriction (#2153)
Nigel Bosch [Sun, 9 Jul 2023 08:56:18 +0000 (03:56 -0500)]
main : escape prompt prefix/suffix (#2151)
JackJollimore [Sun, 9 Jul 2023 08:20:43 +0000 (05:20 -0300)]
readme : update Termux instructions (#2147)
File paths matter when running models inside Termux on Android devices; llama.cpp performance improves when loading a .bin from the $HOME directory.
clyang [Sun, 9 Jul 2023 08:12:20 +0000 (16:12 +0800)]
ggml : fix building with Intel MKL asking for "cblas.h" (#2104) (#2115)
* Fix building with Intel MKL asking for "cblas.h"
* Use angle brackets to indicate the system library
rankaiyx [Sun, 9 Jul 2023 07:38:42 +0000 (15:38 +0800)]
readme : add more docs indexes (#2127)
* Update README.md to add more docs indexes
* Update README.md to add more docs indexes
Johannes Gäßler [Sat, 8 Jul 2023 18:01:44 +0000 (20:01 +0200)]
Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)
Johannes Gäßler [Fri, 7 Jul 2023 22:25:15 +0000 (00:25 +0200)]
CUDA: add __restrict__ to mul mat vec kernels (#2140)
dylan [Fri, 7 Jul 2023 18:25:25 +0000 (11:25 -0700)]
docker : add support for CUDA in docker (#1461)
Co-authored-by: canardleteer <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 7 Jul 2023 18:23:57 +0000 (21:23 +0300)]
ci : switch threads to 1 (#2138)
Qingyou Meng [Fri, 7 Jul 2023 16:24:01 +0000 (00:24 +0800)]
ggml : change ggml_graph_compute() API to not require context (#1999)
* ggml_graph_compute: deprecate using ggml_context, try resolve issue #287
* rewrite: no longer consider backward compatibility; plan and make_plan
* minor: rename ctx as plan; const
* remove ggml_graph_compute from tests/test-grad0.c, but current change breaks backward
* add static ggml_graph_compute_sugar()
* minor: update comments
* reusable buffers
* ggml : more consistent naming + metal fixes
* ggml : fix docs
* tests : disable grad / opt + minor naming changes
* ggml : add ggml_graph_compute_with_ctx()
- backwards compatible API
- deduplicates a lot of copy-paste
* ci : enable test-grad0
* examples : factor out plan allocation into a helper function
* llama : factor out plan stuff into a helper function
* ci : fix env
* llama : fix duplicate symbols + refactor example benchmark
* ggml : remove obsolete assert + refactor n_tasks section
* ggml : fix indentation in switch
* llama : avoid unnecessary bool
* ggml : remove comments from source file and match order in header
---------
Co-authored-by: Georgi Gerganov <redacted>
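Roughly, the calling convention after this change looks like the sketch below. It assumes the plan-based API is ggml_graph_plan / ggml_graph_compute plus the backwards-compatible ggml_graph_compute_with_ctx(), and that ggml_cplan exposes a work_size / work_data pair the caller provides; consult ggml.h for the authoritative declarations.
```cpp
// Sketch only: assumed shape of the plan-based API introduced here.
// See ggml.h for the exact signatures and struct fields.
#include "ggml.h"
#include <cstdint>
#include <vector>

static void compute_graph(struct ggml_context * ctx, struct ggml_cgraph * gf, int n_threads) {
    // Explicit path: build a plan and supply the work buffer yourself.
    struct ggml_cplan plan = ggml_graph_plan(gf, n_threads);
    std::vector<uint8_t> work(plan.work_size);
    plan.work_data = work.data();
    ggml_graph_compute(gf, &plan);

    // Backwards-compatible path: let ggml allocate the work buffer in ctx.
    ggml_graph_compute_with_ctx(ctx, gf, n_threads);
}
```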
Georgi Gerganov [Fri, 7 Jul 2023 15:36:37 +0000 (18:36 +0300)]
ggml : remove sched_yield() call in ggml_graph_compute_thread() (#2134)
Aarni Koskela [Fri, 7 Jul 2023 13:12:49 +0000 (16:12 +0300)]
convert.py: add mapping for safetensors bf16 (#1598)
Fixes #1473
Howard Su [Fri, 7 Jul 2023 03:34:18 +0000 (11:34 +0800)]
Fix OpenCL by wrapping #if-else-endif with \n (#2086)
Georgi Gerganov [Thu, 6 Jul 2023 16:41:31 +0000 (19:41 +0300)]
ggml : fix restrict usage
Judd [Thu, 6 Jul 2023 16:23:49 +0000 (00:23 +0800)]
convert : update for baichuan (#2081)
1. guess n_layers;
2. relax warnings on context size;
3. add a note that derived models are also supported.
Co-authored-by: Judd <redacted>
tslmy [Thu, 6 Jul 2023 16:17:50 +0000 (09:17 -0700)]
alpaca.sh : update model file name (#2074)
The original file name, `ggml-alpaca-7b-q4.bin`, implied the first-generation GGML format. After the breaking changes (mentioned in https://github.com/ggerganov/llama.cpp/issues/382), `llama.cpp` now requires GGML V3, and those model files are named `*ggmlv3*.bin`. We should change the example to a model file that actually works, so that it is more likely to run out-of-the-box for more people and fewer people waste time downloading the old Alpaca model.
Tobias Lütke [Wed, 5 Jul 2023 20:51:13 +0000 (16:51 -0400)]
Expose generation timings from server & update completions.js (#2116)
* use JavaScript generators as a much cleaner API
Also add ways to access completion as promise and EventSource
* export llama_timings as struct and expose them in server
* update readme, update baked includes
* llama : uniform variable names + struct init
---------
Co-authored-by: Georgi Gerganov <redacted>
Jesse Jojo Johnson [Wed, 5 Jul 2023 18:03:19 +0000 (18:03 +0000)]
Update Server Instructions (#2113)
* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text
---------
Co-authored-by: Jesse Johnson <redacted>
Georgi Gerganov [Wed, 5 Jul 2023 17:44:11 +0000 (20:44 +0300)]
ggml : fix bug introduced in #1237
Georgi Gerganov [Wed, 5 Jul 2023 17:20:05 +0000 (20:20 +0300)]
tests : fix test-grad0
Stephan Walter [Wed, 5 Jul 2023 16:13:06 +0000 (16:13 +0000)]
ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)
* Generalize quantize_fns for simpler FP16 handling
* Remove call to ggml_cuda_mul_mat_get_wsize
* ci : disable FMA for mac os actions
---------
Co-authored-by: Georgi Gerganov <redacted>
Jesse Jojo Johnson [Wed, 5 Jul 2023 15:13:35 +0000 (15:13 +0000)]
Update server instructions for web front end (#2103)
Co-authored-by: Jesse Johnson <redacted>
Johannes Gäßler [Wed, 5 Jul 2023 12:19:42 +0000 (14:19 +0200)]
Quantized dot products for CUDA mul mat vec (#2067)
Howard Su [Wed, 5 Jul 2023 10:31:23 +0000 (18:31 +0800)]
llama: Don't double count the sampling time (#2107)
Johannes Gäßler [Wed, 5 Jul 2023 06:58:05 +0000 (08:58 +0200)]
Fixed OpenCL offloading prints (#2082)
Nigel Bosch [Tue, 4 Jul 2023 23:33:33 +0000 (18:33 -0500)]
embd-input: Fix input embedding example unsigned int seed (#2105)
Georgi Gerganov [Tue, 4 Jul 2023 19:25:22 +0000 (22:25 +0300)]
readme : add link to web chat PR
Georgi Gerganov [Tue, 4 Jul 2023 18:54:11 +0000 (21:54 +0300)]
ggml : sync latest (new ops, macros, refactoring) (#2106)
- add ggml_argmax()
- add ggml_tanh()
- add ggml_elu()
- refactor ggml_conv_1d() and variants
- refactor ggml_conv_2d() and variants
- add helper macros to reduce code duplication in ggml.c
jwj7140 [Tue, 4 Jul 2023 18:06:12 +0000 (03:06 +0900)]
Add an API example using server.cpp similar to OAI. (#2009)
* add api_like_OAI.py
* add evaluated token count to server
* add /v1/ endpoints binding
Tobias Lütke [Tue, 4 Jul 2023 14:05:27 +0000 (10:05 -0400)]
Simple webchat for server (#1998)
* expose simple web interface on root domain
* embed index and add --path for choosing static dir
* allow server to multithread
because web browsers send a lot of garbage requests, we want the server
to multithread when serving 404s for favicons etc. To avoid overloading
llama, we just take a mutex when it's invoked.
* let's try this with the xxd tool instead and see if msvc is happier with that
* enable server in Makefiles
* add /completion.js file to make it easy to use the server from js
* slightly nicer css
* rework state management into session, expose historyTemplate to settings
---------
Co-authored-by: Georgi Gerganov <redacted>
Henri Vasserman [Tue, 4 Jul 2023 12:38:04 +0000 (15:38 +0300)]
Allow old Make to build server. (#2098)
Also make server build by default.
Tested with Make 3.82
ZhouYuChen [Tue, 4 Jul 2023 12:15:16 +0000 (20:15 +0800)]
Update Makefile: clean simple (#2097)
Erik Scholz [Mon, 3 Jul 2023 23:50:12 +0000 (01:50 +0200)]
CI: make the brew update temporarily optional. (#2092)
until they decide to fix the brew installation in the macOS runners.
See the open issues, e.g. https://github.com/actions/runner-images/pull/7710
Govlzkoy [Mon, 3 Jul 2023 23:50:00 +0000 (07:50 +0800)]
[ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088)
Henri Vasserman [Mon, 3 Jul 2023 21:05:23 +0000 (00:05 +0300)]
fix server crashes (#2076)
Howard Su [Mon, 3 Jul 2023 18:43:55 +0000 (02:43 +0800)]
Fix crash of test-tokenizer-0 under Debug build (#2064)
* Fix crash of test-tokenizer-0 under Debug build
* Change per comment
Howard Su [Mon, 3 Jul 2023 11:58:58 +0000 (19:58 +0800)]
[llama] No need to check file version when loading vocab score (#2079)
WangHaoranRobin [Sun, 2 Jul 2023 21:38:44 +0000 (05:38 +0800)]
server: add option to output probabilities for completion (#1962)
* server: add option to output probabilities for completion
* server: fix issue when handling probability output for incomplete tokens for multibyte character generation
* server: fix llama_sample_top_k order
* examples/common.h: put all bool variables in gpt_params together
Georgi Gerganov [Sun, 2 Jul 2023 06:46:46 +0000 (09:46 +0300)]
ggml : fix build with OpenBLAS (close #2066)
Johannes Gäßler [Sat, 1 Jul 2023 19:49:44 +0000 (21:49 +0200)]
Better CUDA synchronization logic (#2057)
Johannes Gäßler [Sat, 1 Jul 2023 19:47:26 +0000 (21:47 +0200)]
Test-based VRAM scratch size + context adjustment (#2056)
Daniel Drake [Sat, 1 Jul 2023 18:31:44 +0000 (20:31 +0200)]
cmake : don't force -mcpu=native on aarch64 (#2063)
It's currently not possible to cross-compile llama.cpp for aarch64
because CMakeLists.txt forces -mcpu=native for that target.
-mcpu=native doesn't make sense if your build host is not the
target architecture, and clang rejects it for that reason, aborting the
build. This can be easily reproduced using the current Android NDK to build
for aarch64 on an x86_64 host.
If there is no specific CPU-tuning target for aarch64, then -mcpu
should be omitted completely. I think that makes sense: there is not
enough variance in the aarch64 instruction set to warrant a fixed -mcpu
optimization at this point. And if someone is building natively and wishes
to enable any possible optimizations for the host device, then there is
already the LLAMA_NATIVE option available.
Fixes #495.
Aaron Miller [Sat, 1 Jul 2023 18:14:59 +0000 (11:14 -0700)]
metal : release buffers when freeing metal context (#2062)
Judd [Sat, 1 Jul 2023 17:00:25 +0000 (01:00 +0800)]
convert : add support of baichuan-7b (#2055)
Co-authored-by: Judd <redacted>
Georgi Gerganov [Sat, 1 Jul 2023 16:05:09 +0000 (19:05 +0300)]
llama : fix return value of llama_load_session_file_internal (#2022)
Rand Xie [Sat, 1 Jul 2023 16:02:58 +0000 (00:02 +0800)]
llama : catch llama_load_session_file_internal exceptions (#2022)
* convert checks in llama_load_session_file to throw exceptions and handle them
* make llama_load_session_file_internal static
* address review feedback to avoid using exceptions (a sketch of the boundary pattern follows this entry)
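A minimal sketch of the catch-at-the-boundary idea named in the commit subject: the internal loader may throw and the public wrapper converts failures into a boolean. Function names are simplified here, and the final code may differ after the review feedback mentioned above.
```cpp
// Sketch: keep the throwing implementation internal and convert failures
// to a boolean at the public API boundary. Names are illustrative.
#include <cstdio>
#include <stdexcept>

static bool load_session_file_internal(const char * path) {
    if (path == nullptr) {
        throw std::runtime_error("invalid session file path");
    }
    // ... parse and validate the session file here ...
    return true;
}

bool load_session_file(const char * path) {
    try {
        return load_session_file_internal(path);
    } catch (const std::exception & err) {
        fprintf(stderr, "error loading session file: %s\n", err.what());
        return false;
    }
}
```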
Georgi Gerganov [Sat, 1 Jul 2023 15:46:00 +0000 (18:46 +0300)]
embd-input : fix returning ptr to temporary
Georgi Gerganov [Sat, 1 Jul 2023 15:45:44 +0000 (18:45 +0300)]
train : fix compile warning
Qingyou Meng [Sat, 1 Jul 2023 15:42:43 +0000 (23:42 +0800)]
ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)
Will not be scheduled unless explicitly enabled.
Howard Su [Thu, 29 Jun 2023 13:15:15 +0000 (21:15 +0800)]
Use unsigned for random seed (#2006)
* Use unsigned for the random seed. Keep -1 as the value that selects a time-based seed (a small sketch follows this entry).
Co-authored-by: Georgi Gerganov <redacted>
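A minimal sketch of the convention described above; resolve_seed is a made-up helper name, not the actual API.
```cpp
// Sketch: unsigned seed where -1 (i.e. UINT32_MAX after conversion) selects
// a time-based seed. The helper name is illustrative only.
#include <cstdint>
#include <ctime>

static uint32_t resolve_seed(uint32_t seed) {
    if (seed == (uint32_t) -1) {
        return (uint32_t) std::time(nullptr); // time-based seed
    }
    return seed;
}
```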
LostRuins [Thu, 29 Jun 2023 03:56:43 +0000 (11:56 +0800)]
Porting the improved K-Quant CUDA kernels to OpenCL (#1966)
* Added broken new q4k quant
* xx + ib0
* Fix q2_k fast kernel
* Use preprocessor for QK_K
* Add q6_k fast matmul kernel
* ported q3k speedup successfully
* ported q2k and q5k speedups
* remove old dot kernels and template
* fixed global const struct types
* fixing address spaces
* fixed string too long CI issue
---------
Co-authored-by: 0cc4m <redacted>
m3ndax [Wed, 28 Jun 2023 18:39:08 +0000 (20:39 +0200)]
llama : replacing auto &kv with const auto &kv (#2041)
* Replacing auto &kv with const auto &kv
* Create codacy.yml
* Delete codacy.yml
Salvador E. Tropea [Wed, 28 Jun 2023 17:27:31 +0000 (14:27 -0300)]
cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)
- Not used
Salvador E. Tropea [Wed, 28 Jun 2023 17:26:26 +0000 (14:26 -0300)]
cuda : fix missing const qualifier in casts (#2027)
Howard Su [Wed, 28 Jun 2023 17:13:02 +0000 (10:13 -0700)]
llama : remove shards weight file support (#2000)
* Remove multiple shards
* Remove multiple file loaders
* Remove llama_load_tensor_shard class
* Simplify load logic
* Remove dead code guess_n_parts function
* Remove vocab_only from constructor of llama_model_loader
* Remove alignment_prevents_mmap, which is no longer needed.
* Remove useless check
Johannes Gäßler [Wed, 28 Jun 2023 16:35:54 +0000 (18:35 +0200)]
CUDA GPU acceleration for LoRAs + f16 models (#1970)
ningshanwutuobang [Wed, 28 Jun 2023 15:53:37 +0000 (23:53 +0800)]
llama : support input embeddings directly (#1910)
* add interface for float input
* fixed inpL shape and type
* add examples of input floats
* add test example for embd input
* fixed sampling
* add free for context
* fixed end condition for generating
* add examples for llava.py
* add README for llava.py
* add example of PandaGPT
* refactor the interface and fixed the styles
* add cmake build for embd-input
* add cmake build for embd-input
* Add MiniGPT-4 example
* change the order of the args of llama_eval_internal
* fix ci error
Erik Scholz [Tue, 27 Jun 2023 17:06:33 +0000 (19:06 +0200)]
fix pthreads setaffinity usage on android (#2020)
Howard Su [Tue, 27 Jun 2023 05:07:13 +0000 (13:07 +0800)]
baby-llama : fix build after ggml_rope change (#2016)
Georgi Gerganov [Mon, 26 Jun 2023 21:37:13 +0000 (00:37 +0300)]
llama : fix rope usage after ChatGLM change