git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
slaren [Sun, 23 Jul 2023 13:19:39 +0000 (15:19 +0200)]
fix n_tasks (#2342)
ggml-ci
slaren [Sun, 23 Jul 2023 12:36:02 +0000 (14:36 +0200)]
ggml: move op parameters from tensors to ggml_tensor::op_params (#2333)
* ggml: move op parameters from tensors to ggml_tensor::op_params
* alibi: use memcpy for float params
* remove `src[1] = NULL` in ops
Georgi Gerganov [Sun, 23 Jul 2023 12:09:47 +0000 (15:09 +0300)]
llama : grouped-query attention + LLaMAv2 70B support (#2276)
* CUDA: GQA implementation
* llama : support for GQA and LLaMAv2 70B
ggml-ci
* py : fix hparams parsing (if-else blocks)
ggml-ci
* py : oh boy ..
ggml-ci
* help : fix gqa value for 70B
ggml-ci
---------
Co-authored-by: JohannesGaessler <redacted>
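For reference, grouped-query attention shares each key/value head across a group of query heads (LLaMAv2 70B uses 64 query heads and 8 KV heads). A small stand-alone sketch of the head mapping; the variable names are assumptions for illustration, not the actual llama.cpp identifiers:
    // Grouped-query attention: map each query head to a shared KV head.
    #include <cstdio>

    int main() {
        const int n_head    = 64;                 // query heads (e.g. LLaMAv2 70B)
        const int n_head_kv = 8;                  // shared key/value heads (GQA)
        const int n_gqa     = n_head / n_head_kv; // query heads per KV head

        for (int h = 0; h < n_head; h += n_gqa) {
            // query heads h .. h+n_gqa-1 all attend through KV head h/n_gqa
            printf("query heads %2d..%2d -> kv head %d\n", h, h + n_gqa - 1, h / n_gqa);
        }
        return 0;
    }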
maddes8cht [Sun, 23 Jul 2023 11:59:48 +0000 (13:59 +0200)]
llama : print help to stdout (#2338)
wzy [Sun, 23 Jul 2023 11:57:02 +0000 (19:57 +0800)]
flake : support `nix build '.#opencl'` (#2337)
Christian Demsar [Sun, 23 Jul 2023 11:56:34 +0000 (07:56 -0400)]
llama : print max tensor size to stderr (#2336)
Jose Maldonado [Sun, 23 Jul 2023 11:52:08 +0000 (07:52 -0400)]
make : fix CLBLAST compile support in FreeBSD (#2331)
* Fix Makefile for CLBLAST compile support and instructions for compiling llama.cpp on FreeBSD
* More general use-case for CLBLAST support (Linux and FreeBSD)
AustinMroz [Sun, 23 Jul 2023 11:16:48 +0000 (06:16 -0500)]
examples : simplify vim plugin (#2327)
Uses builtin json_encode and json_decode functions to simplify escaping
Removes the need for temp files
Jiahao Li [Sun, 23 Jul 2023 11:00:37 +0000 (19:00 +0800)]
metal : support bcast add & dup & cont op (#2323)
Kawrakow [Sun, 23 Jul 2023 05:49:20 +0000 (08:49 +0300)]
Speed up Q4_K (#2322)
Co-authored-by: Iwan Kawrakow <redacted>
Johannes Gäßler [Sat, 22 Jul 2023 19:27:34 +0000 (21:27 +0200)]
CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313)
Georgi Gerganov [Sat, 22 Jul 2023 18:17:57 +0000 (21:17 +0300)]
llama : optimize memory buffers (#2325)
klosax [Sat, 22 Jul 2023 12:21:24 +0000 (14:21 +0200)]
Perplexity: Compute scores correlated to HellaSwag (#2312)
* Add parameter --perplexity-lines to perplexity.cpp
whoreson [Sat, 22 Jul 2023 10:34:51 +0000 (12:34 +0200)]
examples : basic VIM plugin
VIM plugin for server exe
Georgi Gerganov [Sat, 22 Jul 2023 09:00:56 +0000 (12:00 +0300)]
ci : fix args
Georgi Gerganov [Sat, 22 Jul 2023 08:48:22 +0000 (11:48 +0300)]
ci : add 7B CUDA tests (#2319)
* ci : add 7B CUDA tests
ggml-ci
* ci : add Q2_K to the tests
* ci : bump CUDA ppl chunks
ggml-ci
* ci : increase CUDA TG len + add --ignore-eos
* ci : reduce CUDA ppl cunks down to 4 to save time
Richard Roberson [Fri, 21 Jul 2023 19:01:10 +0000 (13:01 -0600)]
examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311)
* Resync my fork with new llama.cpp commits
* examples : rename to use dash instead of underscore
---------
Co-authored-by: Georgi Gerganov <redacted>
Kawrakow [Fri, 21 Jul 2023 14:27:51 +0000 (17:27 +0300)]
Custom RoPE + better memory management for CUDA (#2295)
* Custom RoPE + better memory management for CUDA
* Adjusted look ahead in ggml_cuda_pool_malloc to 5%
This is sufficient it seems.
We end up using about 200 MB less VRAM that way when running
the 13B model with context 8192.
---------
Co-authored-by: Iwan Kawrakow <redacted>
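The 5% look-ahead means a cached buffer is reused only when it is at most about 5% larger than the request, which bounds wasted VRAM. A host-side sketch of that reuse heuristic with made-up names (not the actual ggml_cuda_pool_malloc code, which calls cudaMalloc):
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Buffer pool with a 5% look-ahead: reuse a free buffer only if it is big
    // enough but not more than ~5% larger than the request.
    struct pool_buf { void * ptr; size_t size; bool in_use; };
    static std::vector<pool_buf> g_pool;

    void * pool_malloc(size_t size) {
        const size_t max_size = size + size / 20;  // 5% look-ahead
        for (auto & b : g_pool) {
            if (!b.in_use && b.size >= size && b.size <= max_size) {
                b.in_use = true;
                return b.ptr;          // reuse: close enough in size
            }
        }
        void * ptr = malloc(size);     // the real code allocates device memory here
        g_pool.push_back({ptr, size, true});
        return ptr;
    }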
Kawrakow [Fri, 21 Jul 2023 14:05:30 +0000 (17:05 +0300)]
Faster Q3_K implementation on Metal (#2307)
* Faster Q3_K on Metal
* Additional Q3_K speedup on Metal
* Q3_K for QK_K = 64
* Better Q3_K for QK_K = 64
21.6 ms/t -> 21.1 ms/t
---------
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Fri, 21 Jul 2023 12:16:55 +0000 (15:16 +0300)]
Ikko Eltociear Ashimine [Fri, 21 Jul 2023 11:53:07 +0000 (20:53 +0900)]
examples : fix typo in minigpt4.py (#2298)
promt -> prompt
Georgi Gerganov [Fri, 21 Jul 2023 11:51:34 +0000 (14:51 +0300)]
ggml : fix rope args order + assert (#2054)
Georgi Gerganov [Fri, 21 Jul 2023 11:42:41 +0000 (14:42 +0300)]
gitignore : fix final newline
Guillaume "Vermeille" Sanchez [Fri, 21 Jul 2023 10:58:36 +0000 (12:58 +0200)]
llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280)
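One way to see why the smooth factor is redundant (an illustrative derivation; it assumes the smoothing was applied as a linear blend of the guided and original logits, which may differ in detail from the removed code):
    \begin{aligned}
    \ell_{\text{guided}} &= \ell_c + s\,(\ell_c - \ell_u) \\
    \ell' &= (1 - m)\,\ell_c + m\,\ell_{\text{guided}} = \ell_c + m\,s\,(\ell_c - \ell_u)
    \end{aligned}
The blend weight m and the guidance scale s only enter through the product m*s, so a single guidance scale s' = m*s expresses the same family of samplers.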
Jose Maldonado [Fri, 21 Jul 2023 10:53:27 +0000 (06:53 -0400)]
gitignore : changes for Poetry users + chat examples (#2284)
A fix in the Makefile for FreeBSD users: on this platform x86_64 is reported as amd64. This fix resolves compilation using CFLAGS and CXXFLAGS with -march=native and -mtune=native.
Add two examples for interactive mode using Llama2 models (thanks to TheBloke for the models)
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 21 Jul 2023 10:50:55 +0000 (13:50 +0300)]
make : fix indentation
Georgi Gerganov [Fri, 21 Jul 2023 10:48:18 +0000 (13:48 +0300)]
ci : fix MNT realpath usage (#2250)
Sky Yan [Fri, 21 Jul 2023 10:38:57 +0000 (18:38 +0800)]
make : support customized LLAMA_CUDA_NVCC and LLAMA_CUDA_CCBIN (#2275)
In certain environments, nvcc and gcc are installed under a customized path rather than the standard path
Co-authored-by: Yan Lin <redacted>
wzy [Fri, 21 Jul 2023 10:26:34 +0000 (18:26 +0800)]
flake : remove intel mkl from flake.nix due to missing files (#2277)
NixOS's mkl is missing some libraries like mkl-sdl.pc (see #2261).
Currently NixOS doesn't have the Intel C compiler (icx, icpx). See https://discourse.nixos.org/t/packaging-intel-math-kernel-libraries-mkl/975
So remove it from flake.nix
Some minor changes:
- Change pkgs.python310 to pkgs.python3 to track the latest Python
- Add pkgconfig to devShells.default
- Remove installPhase because we have `cmake --install` from #2256
Georgi Gerganov [Fri, 21 Jul 2023 10:10:51 +0000 (13:10 +0300)]
llama : make tensor_split ptr instead of array (#2272)
Jiří Podivín [Fri, 21 Jul 2023 10:09:16 +0000 (12:09 +0200)]
make : add new target for test binaries (#2244)
Programs in the tests directory are now built with the `tests` target
and placed in the same location.
* clean target was expanded to remove new binaries
* test target binaries are listed in a variable
* Locations of binaries were added to the .gitignore
Signed-off-by: Jiri Podivin <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Hatsune Miku [Fri, 21 Jul 2023 08:13:18 +0000 (08:13 +0000)]
MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)
* Miku.sh: Set default model to llama-2-7b-chat
* Miku.sh: Set ctx_size to 4096
* Miku.sh: Add in-prefix/in-suffix opts
* Miku.sh: Switch sampler to mirostat_v2 and tiny prompt improvements
Kawrakow [Fri, 21 Jul 2023 07:44:40 +0000 (10:44 +0300)]
Faster Q2_K on Metal (#2297)
* Faster Q2_K on Metal
* Deleting unnoticed and dangerous trailing white space
* Fixed bug in new metal Q2_K implementation
---------
Co-authored-by: Iwan Kawrakow <redacted>
Przemysław Pawełczyk [Fri, 21 Jul 2023 07:42:21 +0000 (09:42 +0200)]
make : fix embdinput library and server examples building on MSYS2 (#2235)
* make : fix embdinput library and server examples building on MSYS2
* cmake : fix server example building on MSYS2
Kawrakow [Thu, 20 Jul 2023 15:19:45 +0000 (18:19 +0300)]
Faster Q5_K and Q6_K on Metal (#2294)
* Faster Q6_K on Metal
* Faster Q5_K on Metal
* Another Q5_K speedup
---------
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Thu, 20 Jul 2023 12:18:43 +0000 (15:18 +0300)]
Faster Q4_K on Metal (#2290)
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Thu, 20 Jul 2023 10:47:26 +0000 (13:47 +0300)]
llama : fix regression from #2000 - could not load no-mmap models
Shouzheng Liu [Thu, 20 Jul 2023 10:32:22 +0000 (06:32 -0400)]
metal: minor q4 optimization and reduce code size (#2248)
* metal: use uint16_t instead of uint8_t.
Apple GPUs don't like uint8_t: for every operation on a uint8_t, the GPU
needs to copy it into an empty 16-bit register before it can issue
other instructions.
For the matrix-vector multiplication kernel only, we observed a
340~350 GB/s memory read speed on M1 Max after this commit, which is
very close to the reported hardware limit.
* metal: update rms_norm kernel
This commit doubles the speed of rms_norm operations by using 512 threads
per threadgroup, combined with SIMD primitives to minimize the need for
threadgroup barriers.
* metal: use template to reduce size
Revert modifications on block_q4_0 and block_q4_1.
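To illustrate why the wider loads help, two bytes (four 4-bit quants) can be unpacked from a single 16-bit load instead of two 8-bit loads. A CPU-side C++ sketch; the real kernel is Metal shader code, so the layout here is only an assumption:
    #include <cstdint>
    #include <cstdio>

    // Unpack four 4-bit quants from one uint16_t instead of issuing two
    // uint8_t loads.
    int main() {
        const uint8_t  packed[2] = { 0x4A, 0xC3 };  // two bytes = four nibbles
        const uint16_t v = (uint16_t) packed[0] | ((uint16_t) packed[1] << 8);

        const int q0 = (v      ) & 0xF;
        const int q1 = (v >>  4) & 0xF;
        const int q2 = (v >>  8) & 0xF;
        const int q3 = (v >> 12) & 0xF;
        printf("%d %d %d %d\n", q0, q1, q2, q3);    // prints: 10 4 3 12
        return 0;
    }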
Rinne [Wed, 19 Jul 2023 07:06:40 +0000 (15:06 +0800)]
llama : extend API to get max devices at runtime (#2253)
wzy [Wed, 19 Jul 2023 07:01:55 +0000 (15:01 +0800)]
flake : update flake.nix (#2270)
When `isx86_32 || isx86_64`, it will use MKL, otherwise OpenBLAS
According to
https://discourse.nixos.org/t/rpath-of-binary-contains-a-forbidden-reference-to-build/12200/3,
add -DCMAKE_SKIP_BUILD_RPATH=ON
Fixes #2261: Nix doesn't provide mkl-sdl.pc.
When we build with -DBUILD_SHARED_LIBS=ON and -DLLAMA_BLAS_VENDOR=Intel10_lp64,
replace mkl-sdl.pc with mkl-dynamic-lp64-iomp.pc
wzy [Wed, 19 Jul 2023 07:01:11 +0000 (15:01 +0800)]
cmake : install targets (#2256)
fix #2252
Georgi Gerganov [Tue, 18 Jul 2023 11:24:43 +0000 (14:24 +0300)]
ci : integrate with ggml-org/ci (#2250)
* ci : run ctest
ggml-ci
* ci : add open llama 3B-v2 tests
ggml-ci
* ci : disable wget progress output
ggml-ci
* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations
ggml-ci
* tests : try to fix tail free sampling test
ggml-ci
* ci : add K-quants
ggml-ci
* ci : add short perplexity tests
ggml-ci
* ci : add README.md
* ppl : add --chunks argument to limit max number of chunks
ggml-ci
* ci : update README
Georgi Gerganov [Tue, 18 Jul 2023 08:50:49 +0000 (11:50 +0300)]
llama : shorten quantization descriptions
Jiahao Li [Mon, 17 Jul 2023 17:39:29 +0000 (01:39 +0800)]
Support dup & cont ops on CUDA (#2242)
Alex Klinkhamer [Sun, 16 Jul 2023 21:01:45 +0000 (14:01 -0700)]
llama : fix t_start_sample_us initialization warning (#2238)
Qingyou Meng [Sun, 16 Jul 2023 19:57:28 +0000 (03:57 +0800)]
ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG (#2219)
* fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG
* remove ifdef GGML_PERF; update fmt
Jiří Podivín [Sun, 16 Jul 2023 19:54:47 +0000 (21:54 +0200)]
py : turn verify-checksum-models.py into executable (#2245)
README.md was adjusted to reflect the change.
Signed-off-by: Jiri Podivin <redacted>
Xiao-Yong Jin [Sat, 15 Jul 2023 10:34:16 +0000 (06:34 -0400)]
llama : add custom RoPE (#2054)
* Implement customizable RoPE
The original RoPE has pre-defined parameters
theta_i = 10000^(−2(i−1)/d), for i in [1, 2, ..., d/2]
Our customizable RoPE, ggml_rope_custom_inplace, uses
theta_i = scale * base^(−2(i−1)/d), for i in [1, 2, ..., d/2]
with defaults that match the original:
scale = 1.0
base = 10000
The new command line arguments
--rope-freq-base
--rope-freq-scale
set the two new RoPE parameters (see the sketch after this entry).
Recent research shows that changing these two parameters extends the context limit with minimal loss.
1. Extending Context to 8K
kaiokendev
https://kaiokendev.github.io/til#extending-context-to-8k
2. Extending Context Window of Large Language Models via Positional Interpolation
Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
https://arxiv.org/abs/2306.15595
3. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
https://www.reddit.com/user/bloc97
https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
For the bold, try adding the following command line parameters to your favorite model:
-c 16384 --rope-freq-base 80000 --rope-freq-scale 0.5
* ggml-metal: fix custom rope
* common: fix argument names in help
* llama: increase MEM_REQ_EVAL for MODEL_3B
This avoids crashing with quantized weights on the CPU.
A better way to calculate the required buffer size would still be preferable.
* llama: make MEM_REQ_EVAL depend on n_ctx
* server: use proper Content-Type in curl examples
Without the header Content-Type: application/json, curl POSTs with
Content-Type: application/x-www-form-urlencoded.
Though our simple server doesn't care, the bundled httplib.h has a limit of
CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH = 8192.
With Content-Type: application/json, we can send large JSON data.
* style : minor fixes, mostly indentations
* ggml : fix asserts
---------
Co-authored-by: Georgi Gerganov <redacted>
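Following up on the formula quoted at the top of this entry, a minimal sketch of how --rope-freq-base and --rope-freq-scale enter the per-dimension rotation angles; the names and sample values are illustrative, not the ggml implementation (the base/scale values are the "for the bold" suggestion above):
    #include <cmath>
    #include <cstdio>

    // theta_i = scale * base^(-2(i-1)/d) for i = 1..d/2, as quoted in the
    // entry above. The angle applied at position p to the i-th pair is
    // p * theta_i. The loop stride just keeps the output short.
    int main() {
        const int   d     = 128;      // head dimension (illustrative)
        const float base  = 80000.0f; // --rope-freq-base
        const float scale = 0.5f;     // --rope-freq-scale
        const int   p     = 1000;     // token position

        for (int i = 1; i <= d / 2; i += 16) {
            const float theta = scale * std::pow(base, -2.0f * (i - 1) / d);
            printf("i=%3d theta=%.8f angle=%.4f\n", i, theta, p * theta);
        }
        return 0;
    }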
Dave Della Costa [Fri, 14 Jul 2023 19:13:38 +0000 (15:13 -0400)]
flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224)
wzy [Fri, 14 Jul 2023 19:05:08 +0000 (03:05 +0800)]
make : use pkg-config for OpenBLAS (#2222)
Bach Le [Fri, 14 Jul 2023 19:00:58 +0000 (03:00 +0800)]
cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220)
Evan Miller [Fri, 14 Jul 2023 18:55:56 +0000 (14:55 -0400)]
ggml : fix static_assert with older compilers #2024 (#2218)
Bach Le [Fri, 14 Jul 2023 18:55:24 +0000 (02:55 +0800)]
llama : add functions that work directly on model (#2197)
* Remove vocab reference from context
* Add functions that work directly with the model
Ali Chraghi [Fri, 14 Jul 2023 18:50:58 +0000 (11:50 -0700)]
build.zig : install config header (#2216)
Shangning Xu [Fri, 14 Jul 2023 18:40:05 +0000 (02:40 +0800)]
examples : fixed path typos in embd-input (#2214)
Jiahao Li [Fri, 14 Jul 2023 18:38:24 +0000 (02:38 +0800)]
cuda : support broadcast add & mul (#2192)
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Fri, 14 Jul 2023 17:44:08 +0000 (19:44 +0200)]
CUDA: mul_mat_vec_q kernels for k-quants (#2203)
James Reynolds [Fri, 14 Jul 2023 17:34:40 +0000 (11:34 -0600)]
make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)
Fixes https://github.com/ggerganov/llama.cpp/issues/2166 by moving commands after the CFLAGS are changed.
Georgi Gerganov [Fri, 14 Jul 2023 13:36:41 +0000 (16:36 +0300)]
ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)
Kawrakow [Fri, 14 Jul 2023 09:46:21 +0000 (12:46 +0300)]
Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)
* 3-5% faster Q4_0 on Metal
* 7-25% faster Q4_1 on Metal
* Oops, forgot to delete the original Q4_1 kernel
---------
Co-authored-by: Iwan Kawrakow <redacted>
Howard Su [Thu, 13 Jul 2023 13:58:25 +0000 (21:58 +0800)]
Revert "Support using mmap when applying LoRA (#2095)" (#2206)
Has a perf regression when mlock is used.
This reverts commit 2347463201a9f4159ae95b737e1544dd300569c8.
Howard Su [Thu, 13 Jul 2023 13:58:09 +0000 (21:58 +0800)]
Fix compile error on Windows CUDA (#2207)
Bodo Graumann [Thu, 13 Jul 2023 13:49:14 +0000 (15:49 +0200)]
devops : add missing quotes to bash script (#2193)
This prevents accidentally expanding arguments that contain spaces.
Shouzheng Liu [Wed, 12 Jul 2023 20:10:55 +0000 (16:10 -0400)]
metal : new q4_0 matrix-vector kernel (#2188)
Prefetch data to improve GPU utilization. ~48% faster for 33B model.
Georgi Gerganov [Wed, 12 Jul 2023 17:51:29 +0000 (20:51 +0300)]
ggml : broadcast mul_mat + conv batch support (#2199)
* ggml : broadcast mul_mat + conv batch support
* ggml : apply mul_mat broadcast fix by @jploski
Georgi Gerganov [Wed, 12 Jul 2023 17:27:03 +0000 (20:27 +0300)]
ggml : add ggml_pool_1d and ggml_pool_2d
Georgi Gerganov [Wed, 12 Jul 2023 17:26:18 +0000 (20:26 +0300)]
cuda : add gelu support
Howard Su [Wed, 12 Jul 2023 12:18:40 +0000 (20:18 +0800)]
FP16 is supported in CM=6.0 (#2177)
* FP16 is supported in CM=6.0
* Building PTX code for both 60 and 61
Co-authored-by: Johannes Gäßler <redacted>
Johannes Gäßler [Wed, 12 Jul 2023 08:38:52 +0000 (10:38 +0200)]
Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)
Georgi Gerganov [Wed, 12 Jul 2023 07:54:19 +0000 (10:54 +0300)]
ggml : revert CUDA broadcast changes from #2183 (#2191)
Georgi Gerganov [Tue, 11 Jul 2023 19:53:34 +0000 (22:53 +0300)]
ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)
Spencer Sutton [Tue, 11 Jul 2023 16:31:10 +0000 (12:31 -0400)]
ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)
* Add ggml changes
* Update train-text-from-scratch for change
* mpi : adapt to new ggml_tensor->src
---------
Co-authored-by: Georgi Gerganov <redacted>
Bach Le [Tue, 11 Jul 2023 16:18:43 +0000 (00:18 +0800)]
llama : add classifier-free guidance (#2135)
* Initial implementation
* Remove debug print
* Restore signature of llama_init_from_gpt_params
* Free guidance context
* Make freeing of guidance_ctx conditional
* Make Classifier-Free Guidance a sampling function
* Correct typo. CFG already means context-free grammar.
* Record sampling time in llama_sample_classifier_free_guidance
* Shift all values by the max value before applying logsoftmax
* Fix styling based on review
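The max-shift in the last item is the standard trick for a numerically stable log-softmax; a small stand-alone sketch (not the actual sampler code):
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Numerically stable log-softmax: subtract the max before exponentiating,
    // so exp() never overflows even for large logits.
    int main() {
        std::vector<float> logits = { 1000.0f, 998.5f, 997.0f };

        float max_l = logits[0];
        for (float l : logits) max_l = std::max(max_l, l);

        float sum = 0.0f;
        for (float l : logits) sum += std::exp(l - max_l);
        const float log_sum = std::log(sum);

        for (float l : logits) {
            printf("%.4f\n", (l - max_l) - log_sum);  // log-probability
        }
        return 0;
    }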
Jinwoo Jeong [Tue, 11 Jul 2023 16:12:35 +0000 (01:12 +0900)]
docker : add '--server' option (#2174)
Chad Brewbaker [Tue, 11 Jul 2023 16:03:06 +0000 (11:03 -0500)]
readme : fix zig build instructions (#2171)
Howard Su [Tue, 11 Jul 2023 14:37:01 +0000 (22:37 +0800)]
Support using mmap when applying LoRA (#2095)
* Support using mmap when applying LoRA
* Fix Linux
* Update comment to reflect LoRA support with mmap
LostRuins [Tue, 11 Jul 2023 14:01:08 +0000 (22:01 +0800)]
Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)
* This allows LLaMA models that were previously incompatible with K-quants to function mostly as normal. This happens when a model has a vocab != 32000, e.g. 32001, which means it's not divisible by 256 or 64. Since the problematic dimensions only apply to `tok_embeddings.weight` and `output.weight` (dimensions 4096 x n_vocab), we can simply quantize these layers to Q8_0, whereas the majority of the hidden layers are still K-quanted since they have compatible dimensions.
* Fix indentation
Co-authored-by: Georgi Gerganov <redacted>
* As an alternative, to avoid failing on Metal due to lack of Q8_0 support, instead quantize tok_embeddings.weight to Q4_0 and retain output.weight as F16. This results in a net gain of about 55 MB for a 7B model compared to the previous approach, but should minimize adverse impact on model quality (a sketch of this fallback follows this entry).
---------
Co-authored-by: Georgi Gerganov <redacted>
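A sketch of the fallback logic described in this entry, with assumed enum and tensor names; the real quantization path in llama.cpp differs in detail:
    #include <cstdint>
    #include <string>

    // K-quant super-blocks cover 256 values, so a row length that is not a
    // multiple of 256 (e.g. n_vocab = 32001) cannot be K-quantized directly.
    enum qtype { Q4_K, Q8_0, Q4_0, F16 };

    qtype pick_type(const std::string & name, int64_t row_size, bool metal) {
        const bool k_compatible = (row_size % 256) == 0;
        if (k_compatible) {
            return Q4_K;                 // normal K-quant path
        }
        if (name == "tok_embeddings.weight") {
            return metal ? Q4_0 : Q8_0;  // Metal lacked Q8_0 support at the time
        }
        if (name == "output.weight") {
            return metal ? F16  : Q8_0;
        }
        return Q8_0;                     // generic fallback
    }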
Evan Miller [Mon, 10 Jul 2023 15:49:56 +0000 (11:49 -0400)]
mpi : add support for distributed inference via MPI (#2099)
* MPI support, first cut
* fix warnings, update README
* fixes
* wrap includes
* PR comments
* Update CMakeLists.txt
* Add GH workflow, fix test
* Add info to README
* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)
* mpi : add names for layer inputs + prep ggml_mpi_graph_compute()
* mpi : move all MPI logic into ggml-mpi
Not tested yet
* mpi : various fixes - communication now works but results are wrong
* mpi : fix output tensor after MPI compute (still not working)
* mpi : fix inference
* mpi : minor
* Add OpenMPI to GH action
* [mpi] continue-on-error: true
* mpi : fix after master merge
* [mpi] Link MPI C++ libraries to fix OpenMPI
* tests : fix new llama_backend API
* [mpi] use MPI_INT32_T
* mpi : factor out recv / send in functions and reuse
* mpi : extend API to allow usage with outer backends (e.g. Metal)
---------
Co-authored-by: Georgi Gerganov <redacted>
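As a rough illustration of the layer-split approach described above (each rank evaluates a contiguous slice of layers and forwards the hidden state), with made-up names rather than the actual ggml-mpi code:
    #include <mpi.h>
    #include <vector>

    // Pipeline-parallel sketch: rank r evaluates its slice of layers,
    // receiving the hidden state from rank r-1 and sending it to rank r+1.
    void eval_slice(std::vector<float> & h, int rank, int size, int n_layers) {
        const int per_rank = n_layers / size;

        if (rank > 0) {
            MPI_Recv(h.data(), (int) h.size(), MPI_FLOAT, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        for (int l = rank * per_rank; l < (rank + 1) * per_rank; ++l) {
            // ... run layer l on h (omitted) ...
        }
        if (rank < size - 1) {
            MPI_Send(h.data(), (int) h.size(), MPI_FLOAT, rank + 1, 0,
                     MPI_COMM_WORLD);
        }
    }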
oobabooga [Sun, 9 Jul 2023 08:59:53 +0000 (05:59 -0300)]
llama : remove "first token must be BOS" restriction (#2153)
Nigel Bosch [Sun, 9 Jul 2023 08:56:18 +0000 (03:56 -0500)]
main : escape prompt prefix/suffix (#2151)
JackJollimore [Sun, 9 Jul 2023 08:20:43 +0000 (05:20 -0300)]
readme : update Termux instructions (#2147)
The file path matters when running models inside Termux on Android devices: llama.cpp performance improves when loading a .bin from the $HOME directory.
clyang [Sun, 9 Jul 2023 08:12:20 +0000 (16:12 +0800)]
ggml : fix issue where building with Intel MKL asks for "cblas.h" (#2104) (#2115)
* Fix issue where building with Intel MKL asks for "cblas.h"
* Use angle brackets to indicate the system library
rankaiyx [Sun, 9 Jul 2023 07:38:42 +0000 (15:38 +0800)]
readme : add more docs indexes (#2127)
* Update README.md to add more docs indexes
* Update README.md to add more docs indexes
Johannes Gäßler [Sat, 8 Jul 2023 18:01:44 +0000 (20:01 +0200)]
Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)
Johannes Gäßler [Fri, 7 Jul 2023 22:25:15 +0000 (00:25 +0200)]
CUDA: add __restrict__ to mul mat vec kernels (#2140)
dylan [Fri, 7 Jul 2023 18:25:25 +0000 (11:25 -0700)]
docker : add support for CUDA in docker (#1461)
Co-authored-by: canardleteer <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 7 Jul 2023 18:23:57 +0000 (21:23 +0300)]
ci : switch threads to 1 (#2138)
Qingyou Meng [Fri, 7 Jul 2023 16:24:01 +0000 (00:24 +0800)]
ggml : change ggml_graph_compute() API to not require context (#1999)
* ggml_graph_compute: deprecate using ggml_context, try to resolve issue #287
* rewrite: no longer consider backward compatibility; plan and make_plan
* minor: rename ctx as plan; const
* remove ggml_graph_compute from tests/test-grad0.c, but current change breaks backward
* add static ggml_graph_compute_sugar()
* minor: update comments
* reusable buffers
* ggml : more consistent naming + metal fixes
* ggml : fix docs
* tests : disable grad / opt + minor naming changes
* ggml : add ggml_graph_compute_with_ctx()
- backwards compatible API
- deduplicates a lot of copy-paste
* ci : enable test-grad0
* examples : factor out plan allocation into a helper function
* llama : factor out plan stuff into a helper function
* ci : fix env
* llama : fix duplicate symbols + refactor example benchmark
* ggml : remove obsolete assert + refactor n_tasks section
* ggml : fix indentation in switch
* llama : avoid unnecessary bool
* ggml : remove comments from source file and match order in header
---------
Co-authored-by: Georgi Gerganov <redacted>
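A sketch of the resulting plan-based call pattern; the signatures are assumptions based on the names mentioned in this entry (ggml_graph_plan, ggml_graph_compute, ggml_graph_compute_with_ctx), so check ggml.h for the authoritative declarations:
    #include <cstdint>
    #include <vector>
    #include "ggml.h"

    void compute_graph(struct ggml_context * ctx, struct ggml_cgraph * gf, int n_threads) {
        // 1) Ask ggml how much scratch memory the graph needs for n_threads.
        struct ggml_cplan plan = ggml_graph_plan(gf, n_threads);

        // 2) Provide the work buffer ourselves -- no ggml_context required.
        std::vector<uint8_t> work(plan.work_size);
        plan.work_data = work.data();

        // 3) Run the graph with the prepared plan.
        ggml_graph_compute(gf, &plan);

        // Or use the backwards-compatible wrapper that allocates the work
        // buffer inside the given context:
        // ggml_graph_compute_with_ctx(ctx, gf, n_threads);
    }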
Georgi Gerganov [Fri, 7 Jul 2023 15:36:37 +0000 (18:36 +0300)]
ggml : remove sched_yield() call in ggml_graph_compute_thread() (#2134)
Aarni Koskela [Fri, 7 Jul 2023 13:12:49 +0000 (16:12 +0300)]
convert.py: add mapping for safetensors bf16 (#1598)
Fixes #1473
Howard Su [Fri, 7 Jul 2023 03:34:18 +0000 (11:34 +0800)]
Fix OpenCL by wrapping #if-else-endif with \n (#2086)
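The underlying issue: when kernel source is assembled from C string fragments or macros, preprocessor directives inside the kernel must start on their own line, so they have to be wrapped in explicit "\n". A hypothetical fragment for illustration (not the real ggml-opencl source):
    // Newlines around the directives keep them valid once the fragments are
    // concatenated into a single kernel source string.
    static const char * kernel_src =
        "__kernel void scale(__global float * x, const float s) {\n"
        "    const uint i = get_global_id(0);\n"
        "\n#ifdef USE_DOUBLE\n"
        "    x[i] = (float)((double) x[i] * (double) s);\n"
        "\n#else\n"
        "    x[i] = x[i] * s;\n"
        "\n#endif\n"
        "}\n";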
Georgi Gerganov [Thu, 6 Jul 2023 16:41:31 +0000 (19:41 +0300)]
ggml : fix restrict usage
Judd [Thu, 6 Jul 2023 16:23:49 +0000 (00:23 +0800)]
convert : update for baichuan (#2081)
1. guess n_layers;
2. relax warnings on context size;
3. add a note that models derived from it are also supported.
Co-authored-by: Judd <redacted>
tslmy [Thu, 6 Jul 2023 16:17:50 +0000 (09:17 -0700)]
alpaca.sh : update model file name (#2074)
The original file name, `ggml-alpaca-7b-q4.bin`, implied the first-generation GGML. After the breaking changes (mentioned in https://github.com/ggerganov/llama.cpp/issues/382), `llama.cpp` requires GGML V3 now. Those model files are named `*ggmlv3*.bin`. We should change the example to an actually working model file, so that this thing is more likely to run out-of-the-box for more people, and fewer people would waste time downloading the old Alpaca model.
Tobias Lütke [Wed, 5 Jul 2023 20:51:13 +0000 (16:51 -0400)]
Expose generation timings from server & update completions.js (#2116)
* use JavaScript generators as a much cleaner API
Also add ways to access the completion as a promise and an EventSource
* export llama_timings as struct and expose them in server
* update readme, update baked includes
* llama : uniform variable names + struct init
---------
Co-authored-by: Georgi Gerganov <redacted>
Jesse Jojo Johnson [Wed, 5 Jul 2023 18:03:19 +0000 (18:03 +0000)]
Update Server Instructions (#2113)
* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text
---------
Co-authored-by: Jesse Johnson <redacted>
Georgi Gerganov [Wed, 5 Jul 2023 17:44:11 +0000 (20:44 +0300)]
ggml : fix bug introduced in #1237
Georgi Gerganov [Wed, 5 Jul 2023 17:20:05 +0000 (20:20 +0300)]
tests : fix test-grad0
Stephan Walter [Wed, 5 Jul 2023 16:13:06 +0000 (16:13 +0000)]
ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)
* Generalize quantize_fns for simpler FP16 handling
* Remove call to ggml_cuda_mul_mat_get_wsize
* ci : disable FMA for mac os actions
---------
Co-authored-by: Georgi Gerganov <redacted>
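A simplified sketch of the function-table idea behind this change; the field and type names here are assumptions, not the actual definitions in ggml.c:
    #include <cstdint>

    // A per-quantization-type function table: dequantize/quantize/dot-product
    // entry points looked up by type instead of hard-coded per-type switches,
    // which lets FP16 be handled as just another table entry.
    struct quantize_fns {
        void (*dequantize_row)(const void * x, float * y, int k);
        void (*quantize_row)  (const float * x, void * y, int k);
        void (*vec_dot)       (int n, float * s, const void * x, const void * y);
        int  vec_dot_type;    // type of the second dot-product operand
    };

    // One entry per ggml type, e.g. table[Q4_0], table[Q8_0], table[F16], ...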
Jesse Jojo Johnson [Wed, 5 Jul 2023 15:13:35 +0000 (15:13 +0000)]
Update server instructions for web front end (#2103)
Co-authored-by: Jesse Johnson <redacted>