git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 years ago  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)
Hatsune Miku [Fri, 21 Jul 2023 08:13:18 +0000 (08:13 +0000)]
MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)

* Miku.sh: Set default model to llama-2-7b-chat

* Miku.sh: Set ctx_size to 4096

* Miku.sh: Add in-prefix/in-suffix opts

* Miku.sh: Switch sampler to mirostat_v2 and tiny prompt improvements

2 years ago  Faster Q2_K on Metal (#2297)
Kawrakow [Fri, 21 Jul 2023 07:44:40 +0000 (10:44 +0300)]
Faster Q2_K on Metal (#2297)

* Faster Q2_K on Metal

* Deleting unnoticed and dangerous trailing white space

* Fixed bug in new metal Q2_K implementation

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago  make : fix embdinput library and server examples building on MSYS2 (#2235)
Przemysław Pawełczyk [Fri, 21 Jul 2023 07:42:21 +0000 (09:42 +0200)]
make : fix embdinput library and server examples building on MSYS2 (#2235)

* make : fix embdinput library and server examples building on MSYS2

* cmake : fix server example building on MSYS2

2 years ago  Faster Q5_K and Q6_K on Metal (#2294)
Kawrakow [Thu, 20 Jul 2023 15:19:45 +0000 (18:19 +0300)]
Faster Q5_K and Q6_K on Metal (#2294)

* Faster Q6_K on Metal

* Faster Q5_K on Metal

* Another Q5_K speedup

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago  Faster Q4_K on Metal (#2290)
Kawrakow [Thu, 20 Jul 2023 12:18:43 +0000 (15:18 +0300)]
Faster Q4_K on Metal (#2290)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago  llama : fix regression from #2000 - could not load no-mmap models
Georgi Gerganov [Thu, 20 Jul 2023 10:47:26 +0000 (13:47 +0300)]
llama : fix regression from #2000 - could not load no-mmap models

2 years ago  metal: minor q4 optimization and reduce code size (#2248)
Shouzheng Liu [Thu, 20 Jul 2023 10:32:22 +0000 (06:32 -0400)]
metal: minor q4 optimization and reduce code size (#2248)

* metal: use uint16_t instead of uint8_t.

Apple GPUs don't handle uint8_t well: for every operation on a uint8_t, the GPU
first has to copy the value into an empty 16-bit register before it can issue
further instructions (see the small sketch at the end of this commit entry).

For the matrix-vector multiplication kernel only, we observed a
340~350 GB/s memory read speed on M1 Max after this commit, which is
very close to the reported hardware limit.

* metal: update rms_norm kernel

This commit doubles the speed of rms_norm operations by using 512 threads
per threadgroup, combined with SIMD primitives to minimize the need for
threadgroup barriers.

* metal: use template to reduce size

Revert modifications on block_q4_0 and block_q4_1.
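
To make the uint16_t trick above concrete, here is a minimal, hypothetical C++ sketch (not the commit's actual Metal kernel): it unpacks four 4-bit quantized values from a single 16-bit load instead of operating on uint8_t.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical illustration: read quantized nibbles through a 16-bit
    // pointer so no arithmetic is ever done on a uint8_t value.
    static void unpack_nibbles_u16(const uint16_t * qs, int n, float d, float * out) {
        for (int i = 0; i < n; ++i) {
            const uint16_t q = qs[i];              // one 16-bit load = 4 nibbles
            out[4*i + 0] = d * ((q >>  0) & 0xF);
            out[4*i + 1] = d * ((q >>  4) & 0xF);
            out[4*i + 2] = d * ((q >>  8) & 0xF);
            out[4*i + 3] = d * ((q >> 12) & 0xF);
        }
    }

    int main() {
        const uint16_t qs[1] = { 0x4321 };         // nibbles 1, 2, 3, 4
        float out[4];
        unpack_nibbles_u16(qs, 1, 0.5f, out);
        for (float v : out) printf("%g ", v);      // prints: 0.5 1 1.5 2
        printf("\n");
    }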

2 years ago  llama : extend API to get max devices at runtime (#2253)
Rinne [Wed, 19 Jul 2023 07:06:40 +0000 (15:06 +0800)]
llama : extend API to get max devices at runtime (#2253)

2 years ago  flake : update flake.nix (#2270)
wzy [Wed, 19 Jul 2023 07:01:55 +0000 (15:01 +0800)]
flake : update flake.nix (#2270)

When `isx86_32 || isx86_64`, it will use MKL; otherwise OpenBLAS.

According to
https://discourse.nixos.org/t/rpath-of-binary-contains-a-forbidden-reference-to-build/12200/3,
add -DCMAKE_SKIP_BUILD_RPATH=ON.

Fixes #2261: Nix doesn't provide mkl-sdl.pc.
When we build with -DBUILD_SHARED_LIBS=ON and -DLLAMA_BLAS_VENDOR=Intel10_lp64,
replace mkl-sdl.pc with mkl-dynamic-lp64-iomp.pc.

2 years ago  cmake : install targets (#2256)
wzy [Wed, 19 Jul 2023 07:01:11 +0000 (15:01 +0800)]
cmake : install targets (#2256)

fix #2252

2 years ago  ci : integrate with ggml-org/ci (#2250)
Georgi Gerganov [Tue, 18 Jul 2023 11:24:43 +0000 (14:24 +0300)]
ci : integrate with ggml-org/ci (#2250)

* ci : run ctest

ggml-ci

* ci : add open llama 3B-v2 tests

ggml-ci

* ci : disable wget progress output

ggml-ci

* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations

ggml-ci

* tests : try to fix tail free sampling test

ggml-ci

* ci : add K-quants

ggml-ci

* ci : add short perplexity tests

ggml-ci

* ci : add README.md

* ppl : add --chunks argument to limit max number of chunks

ggml-ci

* ci : update README

2 years ago  llama : shorten quantization descriptions
Georgi Gerganov [Tue, 18 Jul 2023 08:50:49 +0000 (11:50 +0300)]
llama : shorten quantization descriptions

2 years ago  Support dup & cont ops on CUDA (#2242)
Jiahao Li [Mon, 17 Jul 2023 17:39:29 +0000 (01:39 +0800)]
Support dup & cont ops on CUDA (#2242)

2 years ago  llama : fix t_start_sample_us initialization warning (#2238)
Alex Klinkhamer [Sun, 16 Jul 2023 21:01:45 +0000 (14:01 -0700)]
llama : fix t_start_sample_us initialization warning (#2238)

2 years ago  ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG...
Qingyou Meng [Sun, 16 Jul 2023 19:57:28 +0000 (03:57 +0800)]
ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG (#2219)

* fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG

* remove ifdef GGML_PERF; update fmt

2 years ago  py : turn verify-checksum-models.py into executable (#2245)
Jiří Podivín [Sun, 16 Jul 2023 19:54:47 +0000 (21:54 +0200)]
py : turn verify-checksum-models.py into executable (#2245)

README.md was adjusted to reflect the change.

Signed-off-by: Jiri Podivin <redacted>
2 years ago  llama : add custom RoPE (#2054)
Xiao-Yong Jin [Sat, 15 Jul 2023 10:34:16 +0000 (06:34 -0400)]
llama : add custom RoPE (#2054)

* Implement customizable RoPE

The original RoPE has pre-defined parameters

theta_i = 10000^(−2(i−1)/d), for i in [1, 2, ..., d/2]

Our customizable RoPE, ggml_rope_custom_inplace, uses

theta_i = scale * base^(−2(i−1)/d), for i in [1, 2, ..., d/2]

where the defaults match the original:

scale = 1.0
base = 10000

The new command line arguments
--rope-freq-base
--rope-freq-scale
set the two new RoPE parameters.

Recent research shows that changing these two parameters extends the context limit with minimal loss (a small numeric sketch of the formula follows this commit entry).

1. Extending Context to 8K
   kaiokendev
   https://kaiokendev.github.io/til#extending-context-to-8k

2. Extending Context Window of Large Language Models via Positional Interpolation
   Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
   https://arxiv.org/abs/2306.15595

3. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
   https://www.reddit.com/user/bloc97
   https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/

For the bold, try adding the following command line parameters to your favorite model:
-c 16384 --rope-freq-base 80000 --rope-freq-scale 0.5

* ggml-metal: fix custom rope

* common: fix argument names in help

* llama: increase MEM_REQ_EVAL for MODEL_3B

This avoids crashing with quantized weights on the CPU.
A better way to calculate the required buffer size would still be preferable.

* llama: make MEM_REQ_EVAL depend on n_ctx

* server: use proper Content-Type in curl examples

Without the header Content-Type: application/json, curl will POST with
Content-Type: application/x-www-form-urlencoded.

Though our simple server doesn't care, the bundled httplib.h limits such
payloads via CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH (8192).

With Content-Type: application/json, we can send large JSON data.

* style : minor fixes, mostly indentations

* ggml : fix asserts

---------

Co-authored-by: Georgi Gerganov <redacted>
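
As an illustration of the theta formula above (an editorial sketch, not code from the commit), a small self-contained C++ program that computes the per-dimension RoPE frequencies for a given base and scale; with base = 10000 and scale = 1.0 it reproduces the original values.

    #include <cmath>
    #include <cstdio>

    // theta_i = scale * base^(-2(i-1)/d), for i = 1..d/2
    static void rope_thetas(float base, float scale, int d, float * theta) {
        for (int i = 1; i <= d/2; ++i) {
            theta[i - 1] = scale * std::pow(base, -2.0f*(i - 1)/d);
        }
    }

    int main() {
        float theta[64];

        rope_thetas(10000.0f, 1.0f, 128, theta);   // defaults: original RoPE
        printf("theta_1 = %g, theta_64 = %g\n", theta[0], theta[63]);

        rope_thetas(80000.0f, 0.5f, 128, theta);   // e.g. --rope-freq-base 80000 --rope-freq-scale 0.5
        printf("theta_1 = %g, theta_64 = %g\n", theta[0], theta[63]);
    }
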
2 years ago  flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224)
Dave Della Costa [Fri, 14 Jul 2023 19:13:38 +0000 (15:13 -0400)]
flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224)

2 years ago  make : use pkg-config for OpenBLAS (#2222)
wzy [Fri, 14 Jul 2023 19:05:08 +0000 (03:05 +0800)]
make : use pkg-config for OpenBLAS (#2222)

2 years ago  cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220)
Bach Le [Fri, 14 Jul 2023 19:00:58 +0000 (03:00 +0800)]
cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220)

2 years ago  ggml : fix static_assert with older compilers #2024 (#2218)
Evan Miller [Fri, 14 Jul 2023 18:55:56 +0000 (14:55 -0400)]
ggml : fix static_assert with older compilers #2024 (#2218)

2 years ago  llama : add functions that work directly on model (#2197)
Bach Le [Fri, 14 Jul 2023 18:55:24 +0000 (02:55 +0800)]
llama : add functions that work directly on model (#2197)

* Remove vocab reference from context

* Add functions that work directly with the model

2 years ago  build.zig : install config header (#2216)
Ali Chraghi [Fri, 14 Jul 2023 18:50:58 +0000 (11:50 -0700)]
build.zig : install config header (#2216)

2 years ago  examples : fixed path typos in embd-input (#2214)
Shangning Xu [Fri, 14 Jul 2023 18:40:05 +0000 (02:40 +0800)]
examples : fixed path typos in embd-input (#2214)

2 years ago  cuda : support broadcast add & mul (#2192)
Jiahao Li [Fri, 14 Jul 2023 18:38:24 +0000 (02:38 +0800)]
cuda : support broadcast add & mul (#2192)

Co-authored-by: Georgi Gerganov <redacted>
2 years ago  CUDA: mul_mat_vec_q kernels for k-quants (#2203)
Johannes Gäßler [Fri, 14 Jul 2023 17:44:08 +0000 (19:44 +0200)]
CUDA: mul_mat_vec_q kernels for k-quants (#2203)

2 years ago  make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)
James Reynolds [Fri, 14 Jul 2023 17:34:40 +0000 (11:34 -0600)]
make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)

Fixes https://github.com/ggerganov/llama.cpp/issues/2166 by moving commands after the CFLAGS are changed.

2 years ago  ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)
Georgi Gerganov [Fri, 14 Jul 2023 13:36:41 +0000 (16:36 +0300)]
ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)

2 years ago  Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)
Kawrakow [Fri, 14 Jul 2023 09:46:21 +0000 (12:46 +0300)]
Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)

* 3-5% faster Q4_0 on Metal

* 7-25% faster Q4_1 on Metal

* Oops, forgot to delete the original Q4_1 kernel

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago  Revert "Support using mmap when applying LoRA (#2095)" (#2206)
Howard Su [Thu, 13 Jul 2023 13:58:25 +0000 (21:58 +0800)]
Revert "Support using mmap when applying LoRA (#2095)" (#2206)

Has a performance regression when mlock is used.

This reverts commit 2347463201a9f4159ae95b737e1544dd300569c8.

2 years ago  Fix compile error on Windows CUDA (#2207)
Howard Su [Thu, 13 Jul 2023 13:58:09 +0000 (21:58 +0800)]
Fix compile error on Windows CUDA (#2207)

2 years ago  devops : add missing quotes to bash script (#2193)
Bodo Graumann [Thu, 13 Jul 2023 13:49:14 +0000 (15:49 +0200)]
devops : add missing quotes to bash script (#2193)

This prevents accidentally expanding arguments that contain spaces.

2 years ago  metal : new q4_0 matrix-vector kernel (#2188)
Shouzheng Liu [Wed, 12 Jul 2023 20:10:55 +0000 (16:10 -0400)]
metal : new q4_0 matrix-vector kernel (#2188)

Prefetch data to improve GPU utilization. ~48% faster for 33B model.

2 years ago  ggml : broadcast mul_mat + conv batch support (#2199)
Georgi Gerganov [Wed, 12 Jul 2023 17:51:29 +0000 (20:51 +0300)]
ggml : broadcast mul_mat + conv batch support (#2199)

* ggml : broadcast mul_mat + conv batch support

* ggml : apply mul_mat broadcast fix by @jploski

2 years ago  ggml : add ggml_pool_1d and ggml_pool_2d
Georgi Gerganov [Wed, 12 Jul 2023 17:27:03 +0000 (20:27 +0300)]
ggml : add ggml_pool_1d and ggml_pool_2d

2 years ago  cuda : add gelu support
Georgi Gerganov [Wed, 12 Jul 2023 17:26:18 +0000 (20:26 +0300)]
cuda : add gelu support

2 years ago  FP16 is supported in CM=6.0 (#2177)
Howard Su [Wed, 12 Jul 2023 12:18:40 +0000 (20:18 +0800)]
FP16 is supported in CM=6.0 (#2177)

* FP16 is supported in CM=6.0

* Building PTX code for both 60 and 61

Co-authored-by: Johannes Gäßler <redacted>
2 years ago  Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)
Johannes Gäßler [Wed, 12 Jul 2023 08:38:52 +0000 (10:38 +0200)]
Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)

2 years ago  ggml : revert CUDA broadcast changes from #2183 (#2191)
Georgi Gerganov [Wed, 12 Jul 2023 07:54:19 +0000 (10:54 +0300)]
ggml : revert CUDA broadcast changes from #2183 (#2191)

2 years ago  ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)
Georgi Gerganov [Tue, 11 Jul 2023 19:53:34 +0000 (22:53 +0300)]
ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)

2 years ago  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)
Spencer Sutton [Tue, 11 Jul 2023 16:31:10 +0000 (12:31 -0400)]
ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)

* Add ggml changes

* Update train-text-from-scratch for the change

* mpi : adapt to new ggml_tensor->src

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago  llama : add classifier-free guidance (#2135)
Bach Le [Tue, 11 Jul 2023 16:18:43 +0000 (00:18 +0800)]
llama : add classifier-free guidance (#2135)

* Initial implementation

* Remove debug print

* Restore signature of llama_init_from_gpt_params

* Free guidance context

* Make freeing of guidance_ctx conditional

* Make Classifier-Free Guidance a sampling function

* Correct typo. CFG already means context-free grammar.

* Record sampling time in llama_sample_classifier_free_guidance

* Shift all values by the max value before applying logsoftmax

* Fix styling based on review
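
The "shift all values by the max value before applying logsoftmax" step above is the standard trick for numerical stability; a minimal C++ sketch of it (illustrative only, not the actual llama.cpp sampling code):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Numerically stable log-softmax: subtract the maximum logit before
    // exponentiating so exp() never overflows.
    static void log_softmax(std::vector<float> & logits) {
        const float max_l = *std::max_element(logits.begin(), logits.end());
        float sum = 0.0f;
        for (float l : logits) sum += std::exp(l - max_l);
        const float log_sum = std::log(sum);
        for (float & l : logits) l = (l - max_l) - log_sum;
    }

    int main() {
        std::vector<float> logits = { 2.0f, 1.0f, 0.1f };
        log_softmax(logits);
        for (float l : logits) printf("%f ", l);   // log-probabilities; their exp() sums to 1
        printf("\n");
    }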

2 years ago  docker : add '--server' option (#2174)
Jinwoo Jeong [Tue, 11 Jul 2023 16:12:35 +0000 (01:12 +0900)]
docker : add '--server' option (#2174)

2 years ago  readme : fix zig build instructions (#2171)
Chad Brewbaker [Tue, 11 Jul 2023 16:03:06 +0000 (11:03 -0500)]
readme : fix zig build instructions (#2171)

2 years ago  Support using mmap when applying LoRA (#2095)
Howard Su [Tue, 11 Jul 2023 14:37:01 +0000 (22:37 +0800)]
Support using mmap when applying LoRA (#2095)

* Support using mmap when applying LoRA

* Fix Linux

* Update comment to reflect support for LoRA with mmap

2 years ago  Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)
LostRuins [Tue, 11 Jul 2023 14:01:08 +0000 (22:01 +0800)]
Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)

* This allows LLaMA models that were previously incompatible with K-quants to function mostly as normal. This happens when a model has a vocab != 32000, e.g. 32001, which means it's not divisible by 256 or 64. Since the problematic dimensions only apply to `tok_embeddings.weight` and `output.weight` (dimensions 4096 x n_vocab), we can simply quantize these layers to Q8_0, whereas the majority of the hidden layers are still K-quanted since they have compatible dimensions.

* Fix indentation

Co-authored-by: Georgi Gerganov <redacted>
* As an alternative, to avoid failing on Metal due to its lack of Q8_0 support, instead quantize tok_embeddings.weight to Q4_0 and retain output.weight as F16. This results in a net gain of about 55 MB for a 7B model compared to the previous approach, but should minimize the adverse impact on model quality.

---------

Co-authored-by: Georgi Gerganov <redacted>
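
A rough C++ sketch of the selection policy described in this commit; the helper and the exact fallback choices are assumptions for illustration, not the real quantize code.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    enum qtype { Q4_K, Q8_0, Q4_0 };

    static const char * qname(qtype t) {
        return t == Q4_K ? "Q4_K" : t == Q8_0 ? "Q8_0" : "Q4_0";
    }

    // If n_vocab (e.g. 32001) is not a multiple of the K-quant super-block
    // size of 256, quantize the two vocab-sized tensors with a non-K format
    // and keep K-quants for everything else.
    static qtype pick_type(const char * name, int64_t n_vocab, qtype wanted_k) {
        const bool vocab_sized = strcmp(name, "tok_embeddings.weight") == 0 ||
                                 strcmp(name, "output.weight")         == 0;
        if (vocab_sized && n_vocab % 256 != 0) {
            return Q8_0;   // or Q4_0 / F16 in the Metal-friendly variant above
        }
        return wanted_k;
    }

    int main() {
        printf("%s\n", qname(pick_type("tok_embeddings.weight", 32001, Q4_K)));           // Q8_0
        printf("%s\n", qname(pick_type("layers.0.feed_forward.w1.weight", 32001, Q4_K))); // Q4_K
    }
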
2 years ago  mpi : add support for distributed inference via MPI (#2099)
Evan Miller [Mon, 10 Jul 2023 15:49:56 +0000 (11:49 -0400)]
mpi : add support for distributed inference via MPI (#2099)

* MPI support, first cut

* fix warnings, update README

* fixes

* wrap includes

* PR comments

* Update CMakeLists.txt

* Add GH workflow, fix test

* Add info to README

* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)

* mpi : add names for layer inputs + prep ggml_mpi_graph_compute()

* mpi : move all MPI logic into ggml-mpi

Not tested yet

* mpi : various fixes - communication now works but results are wrong

* mpi : fix output tensor after MPI compute (still not working)

* mpi : fix inference

* mpi : minor

* Add OpenMPI to GH action

* [mpi] continue-on-error: true

* mpi : fix after master merge

* [mpi] Link MPI C++ libraries to fix OpenMPI

* tests : fix new llama_backend API

* [mpi] use MPI_INT32_T

* mpi : factor out recv / send in functions and reuse

* mpi : extend API to allow usage with outer backends (e.g. Metal)

---------

Co-authored-by: Georgi Gerganov <redacted>
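
Purely as an editorial illustration of the pipeline idea behind this commit (layer activations handed from rank to rank), a minimal sketch using only the standard MPI C API; it is not the actual ggml-mpi code and the buffer size is made up.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    // Rank r receives activations from rank r-1, runs its share of the layers,
    // and forwards the result to rank r+1; the last rank returns it to rank 0.
    int main(int argc, char ** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        std::vector<float> act(4096, 0.0f);   // hypothetical activation buffer

        if (rank > 0) {
            MPI_Recv(act.data(), (int) act.size(), MPI_FLOAT, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        // ... evaluate this rank's transformer layers on `act` here ...
        if (rank < size - 1) {
            MPI_Send(act.data(), (int) act.size(), MPI_FLOAT, rank + 1, 0, MPI_COMM_WORLD);
        } else if (size > 1) {
            MPI_Send(act.data(), (int) act.size(), MPI_FLOAT, 0, 1, MPI_COMM_WORLD);
        }
        if (rank == 0 && size > 1) {
            MPI_Recv(act.data(), (int) act.size(), MPI_FLOAT, size - 1, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        printf("rank %d/%d done\n", rank, size);
        MPI_Finalize();
    }
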
2 years ago  llama : remove "first token must be BOS" restriction (#2153)
oobabooga [Sun, 9 Jul 2023 08:59:53 +0000 (05:59 -0300)]
llama : remove "first token must be BOS" restriction (#2153)

2 years ago  main : escape prompt prefix/suffix (#2151)
Nigel Bosch [Sun, 9 Jul 2023 08:56:18 +0000 (03:56 -0500)]
main : escape prompt prefix/suffix (#2151)

2 years ago  readme : update Termux instructions (#2147)
JackJollimore [Sun, 9 Jul 2023 08:20:43 +0000 (05:20 -0300)]
readme : update Termux instructions (#2147)

The file path matters when running models inside Termux on Android devices: llama.cpp performance improves when loading a .bin from the $HOME directory.

2 years ago  ggml : fix building with Intel MKL asking for "cblas.h" issue (#2104) (#2115)
clyang [Sun, 9 Jul 2023 08:12:20 +0000 (16:12 +0800)]
ggml : fix building with Intel MKL asking for "cblas.h" issue (#2104) (#2115)

* Fix building with Intel MKL asking for "cblas.h" issue

* Use angle brackets to indicate the system library

2 years ago  readme : add more docs indexes (#2127)
rankaiyx [Sun, 9 Jul 2023 07:38:42 +0000 (15:38 +0800)]
readme : add more docs indexes (#2127)

* Update README.md to add more docs indexes

* Update README.md to add more docs indexes

2 years ago  Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)
Johannes Gäßler [Sat, 8 Jul 2023 18:01:44 +0000 (20:01 +0200)]
Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)

2 years ago  CUDA: add __restrict__ to mul mat vec kernels (#2140)
Johannes Gäßler [Fri, 7 Jul 2023 22:25:15 +0000 (00:25 +0200)]
CUDA: add __restrict__ to mul mat vec kernels (#2140)

2 years ago  docker : add support for CUDA in docker (#1461)
dylan [Fri, 7 Jul 2023 18:25:25 +0000 (11:25 -0700)]
docker : add support for CUDA in docker (#1461)

Co-authored-by: canardleteer <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years ago  ci : switch threads to 1 (#2138)
Georgi Gerganov [Fri, 7 Jul 2023 18:23:57 +0000 (21:23 +0300)]
ci : switch threads to 1 (#2138)

2 years ago  ggml : change ggml_graph_compute() API to not require context (#1999)
Qingyou Meng [Fri, 7 Jul 2023 16:24:01 +0000 (00:24 +0800)]
ggml : change ggml_graph_compute() API to not require context (#1999)

* ggml_graph_compute: deprecate using ggml_context, try to resolve issue #287

* rewrite: no longer consider backward compatibility; plan and make_plan

* minor: rename ctx as plan; const

* remove ggml_graph_compute from tests/test-grad0.c, but current change breaks backward

* add static ggml_graph_compute_sugar()

* minor: update comments

* reusable buffers

* ggml : more consistent naming + metal fixes

* ggml : fix docs

* tests : disable grad / opt + minor naming changes

* ggml : add ggml_graph_compute_with_ctx()

- backwards compatible API
- deduplicates a lot of copy-paste

* ci : enable test-grad0

* examples : factor out plan allocation into a helper function

* llama : factor out plan stuff into a helper function

* ci : fix env

* llama : fix duplicate symbols + refactor example benchmark

* ggml : remove obsolete assert + refactor n_tasks section

* ggml : fix indentation in switch

* llama : avoid unnecessary bool

* ggml : remove comments from source file and match order in header

---------

Co-authored-by: Georgi Gerganov <redacted>
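
A hedged usage sketch of the two calling styles after this change; the names (ggml_cplan, ggml_graph_plan, ggml_graph_compute_with_ctx) follow the commit notes and the API as it later stabilized in ggml, so treat the exact signatures as approximate.

    #include "ggml.h"
    #include <cstdint>
    #include <vector>

    static void compute_graph(struct ggml_context * ctx, struct ggml_cgraph * gf, int n_threads) {
        // 1) Explicit plan: the caller owns the work buffer, no context needed.
        struct ggml_cplan plan = ggml_graph_plan(gf, n_threads);
        std::vector<uint8_t> work(plan.work_size);
        plan.work_data = work.data();
        ggml_graph_compute(gf, &plan);

        // 2) Backwards-compatible helper: allocates the work buffer in the context.
        ggml_graph_compute_with_ctx(ctx, gf, n_threads);
    }
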
2 years ago  ggml : remove sched_yield() call in ggml_graph_compute_thread() (#2134)
Georgi Gerganov [Fri, 7 Jul 2023 15:36:37 +0000 (18:36 +0300)]
ggml : remove sched_yield() call in ggml_graph_compute_thread() (#2134)

2 years ago  convert.py: add mapping for safetensors bf16 (#1598)
Aarni Koskela [Fri, 7 Jul 2023 13:12:49 +0000 (16:12 +0300)]
convert.py: add mapping for safetensors bf16 (#1598)

Fixes #1473

2 years ago  Fix OpenCL by wrapping #if-else-endif with \n (#2086)
Howard Su [Fri, 7 Jul 2023 03:34:18 +0000 (11:34 +0800)]
Fix OpenCL by wrapping #if-else-endif with \n (#2086)

2 years ago  ggml : fix restrict usage
Georgi Gerganov [Thu, 6 Jul 2023 16:41:31 +0000 (19:41 +0300)]
ggml : fix restrict usage

2 years ago  convert : update for baichuan (#2081)
Judd [Thu, 6 Jul 2023 16:23:49 +0000 (00:23 +0800)]
convert : update for baichuan (#2081)

1. guess n_layers;
2. relax warnings on context size;
3. add a note that models derived from it are also supported.

Co-authored-by: Judd <redacted>
2 years ago  alpaca.sh : update model file name (#2074)
tslmy [Thu, 6 Jul 2023 16:17:50 +0000 (09:17 -0700)]
alpaca.sh : update model file name (#2074)

The original file name, `ggml-alpaca-7b-q4.bin`, implied the first-generation GGML. After the breaking changes (mentioned in https://github.com/ggerganov/llama.cpp/issues/382), `llama.cpp` now requires GGML V3, and those model files are named `*ggmlv3*.bin`. We should change the example to an actually working model file, so that this is more likely to run out-of-the-box for more people, and fewer people will waste time downloading the old Alpaca model.

2 years ago  Expose generation timings from server & update completions.js (#2116)
Tobias Lütke [Wed, 5 Jul 2023 20:51:13 +0000 (16:51 -0400)]
Expose generation timings from server & update completions.js (#2116)

* use JavaScript generators as a much cleaner API

Also add ways to access the completion as a Promise and via EventSource

* export llama_timings as struct and expose them in server

* update readme, update baked includes

* llama : uniform variable names + struct init

---------

Co-authored-by: Georgi Gerganov <redacted>
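
A hedged example of reading the exposed timings from client code; llama_get_timings() and the llama_timings fields below are based on llama.h of this era, so treat the exact names as approximate.

    #include "llama.h"
    #include <cstdio>

    // Print generation timings for an existing context.
    static void print_timings(struct llama_context * ctx) {
        const struct llama_timings t = llama_get_timings(ctx);
        printf("eval: %d tokens in %.2f ms (%.2f ms per token)\n",
               t.n_eval, t.t_eval_ms, t.n_eval > 0 ? t.t_eval_ms / t.n_eval : 0.0);
    }
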
2 years ago  Update Server Instructions (#2113)
Jesse Jojo Johnson [Wed, 5 Jul 2023 18:03:19 +0000 (18:03 +0000)]
Update Server Instructions (#2113)

* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text

---------

Co-authored-by: Jesse Johnson <redacted>
2 years ago  ggml : fix bug introduced in #1237
Georgi Gerganov [Wed, 5 Jul 2023 17:44:11 +0000 (20:44 +0300)]
ggml : fix bug introduced in #1237

2 years ago  tests : fix test-grad0
Georgi Gerganov [Wed, 5 Jul 2023 17:20:05 +0000 (20:20 +0300)]
tests : fix test-grad0

2 years ago  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)
Stephan Walter [Wed, 5 Jul 2023 16:13:06 +0000 (16:13 +0000)]
ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)

* Generalize quantize_fns for simpler FP16 handling

* Remove call to ggml_cuda_mul_mat_get_wsize

* ci : disable FMA for mac os actions

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago  Update server instructions for web front end (#2103)
Jesse Jojo Johnson [Wed, 5 Jul 2023 15:13:35 +0000 (15:13 +0000)]
Update server instructions for web front end (#2103)

Co-authored-by: Jesse Johnson <redacted>
2 years ago  Quantized dot products for CUDA mul mat vec (#2067)
Johannes Gäßler [Wed, 5 Jul 2023 12:19:42 +0000 (14:19 +0200)]
Quantized dot products for CUDA mul mat vec (#2067)

2 years ago  llama: Don't double count the sampling time (#2107)
Howard Su [Wed, 5 Jul 2023 10:31:23 +0000 (18:31 +0800)]
llama: Don't double count the sampling time (#2107)

2 years ago  Fixed OpenCL offloading prints (#2082)
Johannes Gäßler [Wed, 5 Jul 2023 06:58:05 +0000 (08:58 +0200)]
Fixed OpenCL offloading prints (#2082)

2 years ago  embd-input: Fix input embedding example unsigned int seed (#2105)
Nigel Bosch [Tue, 4 Jul 2023 23:33:33 +0000 (18:33 -0500)]
embd-input: Fix input embedding example unsigned int seed (#2105)

2 years ago  readme : add link web chat PR
Georgi Gerganov [Tue, 4 Jul 2023 19:25:22 +0000 (22:25 +0300)]
readme : add link web chat PR

2 years ago  ggml : sync latest (new ops, macros, refactoring) (#2106)
Georgi Gerganov [Tue, 4 Jul 2023 18:54:11 +0000 (21:54 +0300)]
ggml : sync latest (new ops, macros, refactoring) (#2106)

- add ggml_argmax()
- add ggml_tanh()
- add ggml_elu()
- refactor ggml_conv_1d() and variants
- refactor ggml_conv_2d() and variants
- add helper macros to reduce code duplication in ggml.c

2 years ago  Add an API example using server.cpp similar to OAI. (#2009)
jwj7140 [Tue, 4 Jul 2023 18:06:12 +0000 (03:06 +0900)]
Add an API example using server.cpp similar to OAI. (#2009)

* add api_like_OAI.py
* add evaluated token count to server
* add /v1/ endpoints binding

2 years ago  Simple webchat for server (#1998)
Tobias Lütke [Tue, 4 Jul 2023 14:05:27 +0000 (10:05 -0400)]
Simple webchat for server (#1998)

* expose simple web interface on root domain

* embed index and add --path for choosing static dir

* allow server to multithread

because web browsers send a lot of garbage requests, we want the server
to multithread when serving 404s for favicons etc. To avoid blowing up
llama, we just take a mutex when it is invoked (see the small sketch after this commit entry).

* let's try this with the xxd tool instead and see if msvc is happier with that

* enable server in Makefiles

* add /completion.js file to make it easy to use the server from js

* slightly nicer css

* rework state management into session, expose historyTemplate to settings

---------

Co-authored-by: Georgi Gerganov <redacted>
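
The "take a mutex when llama is invoked" idea above, as a tiny generic C++ sketch with hypothetical handler names (not the server's actual code):

    #include <cstdio>
    #include <mutex>
    #include <string>

    static std::mutex llama_mutex;   // serializes access to the llama context

    // The HTTP server may run handlers on many threads (favicon/404 noise
    // included), but only one thread at a time may touch the model.
    static std::string handle_completion(const std::string & prompt) {
        std::lock_guard<std::mutex> lock(llama_mutex);
        // ... tokenize, evaluate and sample here ...
        return "generated text for: " + prompt;
    }

    int main() {
        printf("%s\n", handle_completion("Hello").c_str());
    }
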
2 years ago  Allow old Make to build server. (#2098)
Henri Vasserman [Tue, 4 Jul 2023 12:38:04 +0000 (15:38 +0300)]
Allow old Make to build server. (#2098)

Also make server build by default.

Tested with Make 3.82

2 years ago  Update Makefile: clean simple (#2097)
ZhouYuChen [Tue, 4 Jul 2023 12:15:16 +0000 (20:15 +0800)]
Update Makefile: clean simple (#2097)

2 years ago  CI: make the brew update temporarily optional. (#2092)
Erik Scholz [Mon, 3 Jul 2023 23:50:12 +0000 (01:50 +0200)]
CI: make the brew update temporarily optional. (#2092)

Until they decide to fix the brew installation in the macOS runners;
see the open issues, e.g. https://github.com/actions/runner-images/pull/7710

2 years ago  [ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088)
Govlzkoy [Mon, 3 Jul 2023 23:50:00 +0000 (07:50 +0800)]
[ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088)

2 years ago  fix server crashes (#2076)
Henri Vasserman [Mon, 3 Jul 2023 21:05:23 +0000 (00:05 +0300)]
fix server crashes (#2076)

2 years ago  Fix crash of test-tokenizer-0 under Debug build (#2064)
Howard Su [Mon, 3 Jul 2023 18:43:55 +0000 (02:43 +0800)]
Fix crash of test-tokenizer-0 under Debug build (#2064)

* Fix crash of test-tokenizer-0 under Debug build

* Change per comment

2 years ago  [llama] No need to check file version when loading vocab score (#2079)
Howard Su [Mon, 3 Jul 2023 11:58:58 +0000 (19:58 +0800)]
[llama] No need to check file version when loading vocab score (#2079)

2 years ago  server: add option to output probabilities for completion (#1962)
WangHaoranRobin [Sun, 2 Jul 2023 21:38:44 +0000 (05:38 +0800)]
server: add option to output probabilities for completion (#1962)

* server: add option to output probabilities for completion
* server: fix issue when handling probability output for incomplete tokens for multibyte character generation
* server: fix llama_sample_top_k order
* examples/common.h: put all bool variables in gpt_params together

2 years ago  ggml : fix build with OpenBLAS (close #2066)
Georgi Gerganov [Sun, 2 Jul 2023 06:46:46 +0000 (09:46 +0300)]
ggml : fix build with OpenBLAS (close #2066)

2 years ago  Better CUDA synchronization logic (#2057)
Johannes Gäßler [Sat, 1 Jul 2023 19:49:44 +0000 (21:49 +0200)]
Better CUDA synchronization logic (#2057)

2 years ago  Test-based VRAM scratch size + context adjustment (#2056)
Johannes Gäßler [Sat, 1 Jul 2023 19:47:26 +0000 (21:47 +0200)]
Test-based VRAM scratch size + context adjustment (#2056)

2 years ago  cmake : don't force -mcpu=native on aarch64 (#2063)
Daniel Drake [Sat, 1 Jul 2023 18:31:44 +0000 (20:31 +0200)]
cmake : don't force -mcpu=native on aarch64 (#2063)

It's currently not possible to cross-compile llama.cpp for aarch64
because CMakeLists.txt forces -mcpu=native for that target.

-mcpu=native doesn't make sense if your build host is not the
target architecture, and clang rejects it for that reason, aborting the
build. This can be easily reproduced using the current Android NDK to build
for aarch64 on an x86_64 host.

If there is not a specific CPU-tuning target for aarch64 then -mcpu
should be omitted completely. I think that makes sense: there is not
enough variance in the aarch64 instruction set to warrant a fixed -mcpu
optimization at this point. And if someone is building natively and wishes
to enable any possible optimizations for the host device, then there is
already the LLAMA_NATIVE option available.

Fixes #495.

2 years ago  metal : release buffers when freeing metal context (#2062)
Aaron Miller [Sat, 1 Jul 2023 18:14:59 +0000 (11:14 -0700)]
metal : release buffers when freeing metal context (#2062)

2 years ago  convert : add support of baichuan-7b (#2055)
Judd [Sat, 1 Jul 2023 17:00:25 +0000 (01:00 +0800)]
convert : add support of baichuan-7b (#2055)

Co-authored-by: Judd <redacted>
2 years ago  llama : fix return value of llama_load_session_file_internal (#2022)
Georgi Gerganov [Sat, 1 Jul 2023 16:05:09 +0000 (19:05 +0300)]
llama : fix return value of llama_load_session_file_internal (#2022)

2 years ago  llama : catch llama_load_session_file_internal exceptions (#2022)
Rand Xie [Sat, 1 Jul 2023 16:02:58 +0000 (00:02 +0800)]
llama : catch llama_load_session_file_internal exceptions (#2022)

* convert checks in llama_load_session_file to throw and handle them

* make llama_load_session_file_internal static

* address feedback to avoid using exceptions

2 years ago  embd-input : fix returning ptr to temporary
Georgi Gerganov [Sat, 1 Jul 2023 15:46:00 +0000 (18:46 +0300)]
embd-input : fix returning ptr to temporary

2 years ago  train : fix compile warning
Georgi Gerganov [Sat, 1 Jul 2023 15:45:44 +0000 (18:45 +0300)]
train : fix compile warning

2 years ago  ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)
Qingyou Meng [Sat, 1 Jul 2023 15:42:43 +0000 (23:42 +0800)]
ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)

Will not be scheduled unless explicitly enabled.

2 years ago  Use unsigned for random seed (#2006)
Howard Su [Thu, 29 Jun 2023 13:15:15 +0000 (21:15 +0800)]
Use unsigned for random seed (#2006)

* Use unsigned for random seed. Keep -1 as the value to use a time-based seed.

Co-authored-by: Georgi Gerganov <redacted>
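
A minimal sketch of the seed convention described above (the helper is an assumption, not the actual llama.cpp code): the seed is stored as an unsigned integer, and -1 selects a time-based seed.

    #include <cstdint>
    #include <cstdio>
    #include <ctime>

    // -1, which becomes UINT32_MAX once parsed into an unsigned value,
    // means "pick a time-based seed"; anything else is used verbatim.
    static uint32_t resolve_seed(uint32_t seed) {
        return seed == (uint32_t) -1 ? (uint32_t) time(nullptr) : seed;
    }

    int main() {
        printf("%u\n", resolve_seed(42));            // 42
        printf("%u\n", resolve_seed((uint32_t) -1)); // time-based
    }
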
2 years ago  Porting the improved K-Quant CUDA kernels to OpenCL (#1966)
LostRuins [Thu, 29 Jun 2023 03:56:43 +0000 (11:56 +0800)]
Porting the improved K-Quant CUDA kernels to OpenCL (#1966)

* Added broken new q4k quant

* xx + ib0

* Fix q2_k fast kernel

* Use preprocessor for QK_K

* Add q6_k fast matmul kernel

* ported q3k speedup successfully

* ported q2k and q5k speedups

* remove old dot kernels and template

* fixed global const struct types

* fixing address spaces

* fixed string too long CI issue

---------

Co-authored-by: 0cc4m <redacted>
2 years ago  llama : replacing auto &kv with const auto &kv (#2041)
m3ndax [Wed, 28 Jun 2023 18:39:08 +0000 (20:39 +0200)]
llama : replacing auto &kv with const auto &kv (#2041)

* Replacing auto &kv with const auto &kv

* Create codacy.yml

* Delete codacy.yml

2 years ago  cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)
Salvador E. Tropea [Wed, 28 Jun 2023 17:27:31 +0000 (14:27 -0300)]
cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)

- Not used