git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Equim [Fri, 11 Aug 2023 22:35:14 +0000 (06:35 +0800)]
server: fixed wrong variable name in timing json (#2579)
* server: fixed wrong variable name in timing json
* remove redundant entry
DannyDaemonic [Thu, 10 Aug 2023 20:11:36 +0000 (13:11 -0700)]
Handle `ENABLE_VIRTUAL_TERMINAL_PROCESSING` more gracefully on earlier versions of Windows.
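A minimal sketch of the graceful-handling pattern, using the documented Win32 console API; this is illustrative, not the PR's exact code. Earlier Windows versions reject the `ENABLE_VIRTUAL_TERMINAL_PROCESSING` flag, so the failure is simply tolerated and the old console mode is kept:

    #include <windows.h>

    // Try to enable VT escape-sequence processing; on earlier Windows
    // versions SetConsoleMode fails and we keep the original mode.
    static bool try_enable_vt(void) {
        HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
        if (h == INVALID_HANDLE_VALUE || h == NULL) return false;
        DWORD mode = 0;
        if (!GetConsoleMode(h, &mode)) return false;
        return SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0;
    }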
Christian Demsar [Thu, 10 Aug 2023 14:28:27 +0000 (10:28 -0400)]
Add --n-predict -2 for stopping generation on full context (#2565)
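The stopping rule amounts to a check like the following sketch (variable names are illustrative, not necessarily the PR's):

    // Sketch: with --n-predict -2, stop once the context window is full
    // instead of shifting the context to keep generating.
    if (params.n_predict == -2 && n_past + (int) embd.size() >= n_ctx) {
        break; // context full -> stop generation
    }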
Martin Krasser [Thu, 10 Aug 2023 10:16:38 +0000 (12:16 +0200)]
Fix grammar-based sampling issue in server (#2566)
Sam Spilsbury [Wed, 9 Aug 2023 20:47:42 +0000 (23:47 +0300)]
ggml-alloc: Don't try to re-use buffers of external tensors (#2562)
* ggml-alloc: Don't try to re-use buffers of external tensors
They might be weights that came from another context, so we
have no control over them (and they might be re-used elsewhere
so writing to them would be a bad idea).
* ggml-alloc: >= when checking for out-of-bounds
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
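The rule above can be sketched as an ownership test, assuming the allocator can recognize its own buffer; this is illustrative, not the actual ggml-alloc code:

    // Sketch: a tensor is only a reuse candidate if its data lies entirely
    // inside the allocator's own buffer; anything else (e.g. weights from
    // another context) is external and must not be written to or recycled.
    static bool ggml_allocr_owns(const struct ggml_tensor * t,
                                 const char * buf, size_t buf_size) {
        const char * p = (const char *) t->data;
        return p >= buf && p + ggml_nbytes(t) <= buf + buf_size;
    }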
grahameth [Wed, 9 Aug 2023 20:46:40 +0000 (22:46 +0200)]
add log_callback to llama_context_params for custom logging. (#2234)
* add log_callback to llama_context_params for custom logging.
* Fix macro expansion on gcc
* Add struct llama_state for global variables and move log_callback there
* Turn log level into enum and some minor changes.
* Remove model_for_logging parameter (not needed anymore)
* Convert remaining fprintf(stderr, ...) calls to use new macros.
* Fix enum and initialize g_state
* Fix log calls after merge
* Fix missing static
* Add back all the new lines in the logging strings
* Add comment for llama_log_callback and replace remaining printf calls
---------
Co-authored-by: grahameth <->
Co-authored-by: Helmut <redacted>
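A minimal usage sketch of the callback API this PR introduces; the exact names (`llama_log_set`, `enum llama_log_level`) should be checked against the llama.h of your revision:

    #include <stdio.h>
    #include "llama.h"

    // Forward all llama.cpp log output to a file instead of stderr.
    // Messages already carry their trailing newlines (see commit above).
    static void my_log(enum llama_log_level level, const char * text, void * user_data) {
        (void) level;
        fputs(text, (FILE *) user_data);
    }

    // during startup:
    //   llama_log_set(my_log, log_file);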
Johannes Gäßler [Wed, 9 Aug 2023 07:42:34 +0000 (09:42 +0200)]
CUDA: tuned mul_mat_q kernels (#2546)
Martin Krasser [Tue, 8 Aug 2023 13:29:19 +0000 (15:29 +0200)]
Allow passing grammar to completion endpoint (#2532)
* Allow passing grammar to completion endpoint
Johannes Gäßler [Tue, 8 Aug 2023 12:38:16 +0000 (14:38 +0200)]
CUDA: tighter VRAM scratch size for 65b/70b (#2551)
chaihahaha [Tue, 8 Aug 2023 12:07:02 +0000 (20:07 +0800)]
llm.vim : multiline autocompletion, get rid of "^@" (#2543)
Georgi Gerganov [Tue, 8 Aug 2023 12:05:30 +0000 (15:05 +0300)]
vim : bring back simple llm.vim example
AustinMroz [Tue, 8 Aug 2023 11:44:48 +0000 (06:44 -0500)]
vim : streaming and more (#2495)
* Update Vim plugin
* Remove getbufoneline usage, Add input bind example.
getbufoneline() appears to be a recently added function and has been
replaced with getbufline for compatibility.
An additional example was added that explains how to add a keybind
that works in insert mode.
klosax [Mon, 7 Aug 2023 17:07:19 +0000 (19:07 +0200)]
Add --rope-scale parameter (#2544)
* common.cpp : Add --rope-scale parameter
* README.md : Add info about using linear rope scaling
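Linear RoPE scaling compresses positions by the scale factor: with `--rope-scale N`, position p behaves like p/N, extending the usable context by roughly N times. A hedged sketch of the math (llama.cpp expresses the flag as a frequency scale; check common.cpp for the exact plumbing):

    #include <math.h>

    // Sketch: rotary angle for position pos and dimension pair i.
    // --rope-scale N corresponds to freq_scale = 1.0f / N.
    static float rope_theta(int pos, int i, int n_dims,
                            float freq_base, float freq_scale) {
        return (pos * freq_scale) * powf(freq_base, -2.0f * i / n_dims);
    }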
Georgi Gerganov [Mon, 7 Aug 2023 11:25:58 +0000 (14:25 +0300)]
ggml : mul mat tweaks (#2372)
* ggml : mul mat wip
ggml-ci
* ggml : alternative thread distribution for mul_mat
ggml-ci
* ggml : mul_mat block tiling attempt
* ggml : mul_mat threads yield
ggml-ci
Georgi Gerganov [Mon, 7 Aug 2023 11:24:42 +0000 (14:24 +0300)]
ggml : pad result of ggml_nbytes()
Georgi Gerganov [Mon, 7 Aug 2023 10:55:18 +0000 (13:55 +0300)]
ggml : change params pointer (style change) (#2539)
ggml-ci
Georgi Gerganov [Mon, 7 Aug 2023 10:20:09 +0000 (13:20 +0300)]
ggml : sync (custom ops) (#2537)
ggml-ci
Johannes Gäßler [Mon, 7 Aug 2023 08:09:40 +0000 (10:09 +0200)]
Fixed mmap prefetch for GPU offloading (#2529)
Georgi Gerganov [Mon, 7 Aug 2023 07:52:57 +0000 (10:52 +0300)]
metal : fix out-of-bounds access + inc concurrency nodes (#2416)
* metal : fix out-of-bounds access + style changes
* metal : increase concurrency nodes to 2*GGML_MAX_NODES
GiviMAD [Mon, 7 Aug 2023 06:21:46 +0000 (23:21 -0700)]
[Makefile] Move ARM CFLAGS before compilation (#2536)
Henri Vasserman [Mon, 7 Aug 2023 05:35:53 +0000 (08:35 +0300)]
[Zig] Rewrite build for Zig 0.11 (#2514)
* zig build fixes
* Disable LTO on Windows.
DannyDaemonic [Sun, 6 Aug 2023 06:49:34 +0000 (23:49 -0700)]
console : fix issue related to Windows 11 PowerShell console mode persistence (#2521)
Keiichi Tabata [Sun, 6 Aug 2023 06:34:05 +0000 (15:34 +0900)]
convert.py : add missing abstract methods for quantized data (#2491)
Johannes Gäßler [Sat, 5 Aug 2023 16:20:44 +0000 (18:20 +0200)]
CUDA: faster k-quant mul_mat_q kernels (#2525)
Jonas Wunderlich [Fri, 4 Aug 2023 20:16:11 +0000 (20:16 +0000)]
fix firefox autoscroll (#2519)
Cebtenzzre [Fri, 4 Aug 2023 19:00:57 +0000 (15:00 -0400)]
server: regenerate completion.js.hpp (#2515)
Cebtenzzre [Fri, 4 Aug 2023 15:35:22 +0000 (11:35 -0400)]
CUDA: use min compute capability of GPUs actually used (#2506)
Cebtenzzre [Fri, 4 Aug 2023 15:34:32 +0000 (11:34 -0400)]
CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)
Fixes #2503
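The fix amounts to a guard like the following sketch (`CUDA_CHECK` is the project's error-checking macro):

    // Waiting on an event that was never created is invalid; skip instead.
    if (event != nullptr) {
        CUDA_CHECK(cudaStreamWaitEvent(stream, event, 0));
    }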
DannyDaemonic [Fri, 4 Aug 2023 15:20:12 +0000 (08:20 -0700)]
Add --simple-io option for subprocesses and break out console.h and cpp (#1558)
Stephen Nichols [Fri, 4 Aug 2023 11:37:24 +0000 (06:37 -0500)]
Fixing race condition in server and partial stream handling in frontend. (#2391)
* Fixing race condition in server.cpp and partial stream handling in completion.js
* Reverting assert edits.
* Adding newline to eof
l3utterfly [Fri, 4 Aug 2023 11:29:52 +0000 (19:29 +0800)]
Stream save llama context data to file instead of allocating entire buffer upfront (#2488)
* added streaming of context data to file to avoid allocating unnecessary amounts of memory
* generalised copying state data to file or buffer
* added comments explaining how copy_state_data works
* fixed trailing whitespaces
* fixed save load state example
* updated save load state to use public function in llama.cpp
* - fixed breakage of the llama_copy_state_data API
- moved new logic for copying llama state data to an internal function
* fixed function declaration order
* restored save load state example
* fixed whitespace
* removed unused llama-util.h include
* Apply suggestions from code review
Co-authored-by: slaren <redacted>
* Apply code review suggestions
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
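The idea, sketched with hypothetical names (the real code routes every state section through an internal data-copying abstraction):

    #include <cstdio>

    // Sketch: emit llama state section by section through one writer, so
    // the peak allocation is a small chunk rather than the whole state.
    struct file_state_writer {
        FILE * f;
        void write(const void * src, size_t n) { fwrite(src, 1, n, f); }
    };
    // rng, logits, embeddings and kv cache are each written in turn, e.g.:
    //   writer.write(&rng_size, sizeof(rng_size));
    //   writer.write(rng_buf, rng_size);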
Borislav Stanimirov [Fri, 4 Aug 2023 10:07:21 +0000 (13:07 +0300)]
build : fix several cast and printf warnings (#2499)
Evan Jones [Thu, 3 Aug 2023 02:05:44 +0000 (22:05 -0400)]
examples : generate JSON according to schema (#1887)
* examples : add JSON schema grammars
* complete JSON grammar
* ensure primitive types can be used as root of schema
* support integer type and adjust usage text
Johannes Gäßler [Wed, 2 Aug 2023 16:04:04 +0000 (18:04 +0200)]
CUDA: faster non k-quant mul_mat_q kernels (#2483)
Johannes Gäßler [Wed, 2 Aug 2023 14:48:10 +0000 (16:48 +0200)]
CUDA: Fix models with output size != 32000 (#2480)
ldwang [Wed, 2 Aug 2023 08:21:11 +0000 (16:21 +0800)]
readme : add Aquila-7B model series to supported models (#2487)
* support bpe tokenizer in convert
Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert
Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <redacted>
* Add Aquila-7B models in README.md
Signed-off-by: ldwang <redacted>
* Up Aquila-7B models in README.md
Signed-off-by: ldwang <redacted>
---------
Signed-off-by: ldwang <redacted>
Co-authored-by: ldwang <redacted>
Eve [Wed, 2 Aug 2023 08:06:19 +0000 (04:06 -0400)]
tests : Fix compilation warnings (Linux/GCC) (#2451)
* fix hellaswag print format, cast away warning in test-double-float
* c++11 cannot use designated initializers
* add static to test-grad0.c internal functions
* use memcpy in test-double-float.c
* port c tests to c++
* use initializer list for ggml_init_params
Yiming Cui [Wed, 2 Aug 2023 06:18:31 +0000 (14:18 +0800)]
readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)
* add support for chinese llama-2 / alpaca-2
* remove white spaces
Bono Lv [Tue, 1 Aug 2023 12:54:28 +0000 (20:54 +0800)]
fix a typo in examples/server/README.md (#2478)
ebraminio [Tue, 1 Aug 2023 08:56:23 +0000 (01:56 -0700)]
server : Support dark mode (#2414)
* server : Support dark mode
So it respects the user's system light/dark setting.
* Update index.html.hpp by running ./deps.sh
Matteo Boschini [Tue, 1 Aug 2023 07:43:12 +0000 (09:43 +0200)]
metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)
* Added gqa8 kernel to allow llama-2-70B on metal
* Update ggml-metal.m
Co-authored-by: Cebtenzzre <redacted>
* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast
* Added ne03==ne13 assertion
---------
Co-authored-by: Cebtenzzre <redacted>
Johannes Gäßler [Mon, 31 Jul 2023 19:02:19 +0000 (21:02 +0200)]
CUDA: fixed LLAMA_FAST compilation option (#2473)
Johannes Gäßler [Mon, 31 Jul 2023 17:52:22 +0000 (19:52 +0200)]
CUDA: fixed cmake F16 option (#2471)
Johannes Gäßler [Mon, 31 Jul 2023 13:44:35 +0000 (15:44 +0200)]
CUDA: mmq CLI option, fixed mmq build issues (#2453)
Johannes Gäßler [Mon, 31 Jul 2023 12:32:30 +0000 (14:32 +0200)]
CUDA: Implemented row flattening for non-glm RoPE (#2468)
Johannes Gäßler [Mon, 31 Jul 2023 11:18:51 +0000 (13:18 +0200)]
CUDA: fewer memory bank conflicts for mul_mat_q (#2458)
slaren [Mon, 31 Jul 2023 09:02:53 +0000 (11:02 +0200)]
Fix Metal backend broken from the allocator changes (#2455)
* fix Metal backend broken from the allocator changes
slaren [Sun, 30 Jul 2023 13:58:01 +0000 (15:58 +0200)]
ggml : add graph tensor allocator (#2411)
* ggml : add graph tensor allocator
* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset
* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
Johannes Gäßler [Sat, 29 Jul 2023 21:04:44 +0000 (23:04 +0200)]
CUDA: Quantized matrix matrix multiplication (#2160)
* mmq implementation for non k-quants
* q6_K
* q2_K
* q3_k
* q4_K
* vdr
* q5_K
* faster q8_1 loading
* loop unrolling
* add __restrict__
* q2_K sc_high
* GGML_CUDA_MMQ_Y
* Updated Makefile
* Update Makefile
* DMMV_F16 -> F16
* Updated README, CMakeLists
* Fix CMakeLists.txt
* Fix CMakeLists.txt
* Fix multi GPU out-of-bounds
Johannes Gäßler [Sat, 29 Jul 2023 21:04:10 +0000 (23:04 +0200)]
CUDA: faster multi GPU synchronization (#2448)
klosax [Fri, 28 Jul 2023 18:25:36 +0000 (20:25 +0200)]
perplexity : add Hellaswag calculation (#2389)
* common.h : add hellaswag / remove perplexity-lines
* common.cpp : add hellaswag / remove perplexity-lines
* perplexity.cpp : add hellaswag scores / remove perplexity-lines
* perplexity.cpp : clean up
* common.h : change default param value
* common.cpp : Change default param
* perplexity.cpp : alter wording
* common.h : alter wording
* common.cpp : alter wording
Lee [Fri, 28 Jul 2023 18:17:45 +0000 (02:17 +0800)]
ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405)
eric8607242 [Fri, 28 Jul 2023 18:10:05 +0000 (02:10 +0800)]
llama : support more diverse tokenizers? (#2420)
* supporting more diverse tokenizers
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 28 Jul 2023 18:05:08 +0000 (21:05 +0300)]
examples : fix whitespace
nhamanasu [Fri, 28 Jul 2023 18:02:10 +0000 (03:02 +0900)]
examples : server chat mode with llama2 (#2400)
* add: server chat mode with llama2
* fix: remove the unnecessary last \n
Weird Constructor [Fri, 28 Jul 2023 08:44:43 +0000 (10:44 +0200)]
readme : fix the description of the Tail free sampling (TFS) method (#2431)
Rand Xie [Fri, 28 Jul 2023 08:42:53 +0000 (01:42 -0700)]
llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)
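With grouped-query attention the K/V projections are narrower than the Q projection, which is what `n_embd_gqa` captures; a sketch of the relationship, where gqa = n_head / n_head_kv:

    // For LLaMA-2 70B: n_embd = 8192, n_head = 64, gqa = 8,
    // so n_head_kv = 8 and the K/V width is n_embd_gqa = 1024, not 8192.
    const int n_head_kv  = n_head / gqa;
    const int n_embd_gqa = n_embd / gqa; // = head_dim * n_head_kv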
niansa/tuxifan [Fri, 28 Jul 2023 01:14:11 +0000 (03:14 +0200)]
Obtaining LLaMA 2 instructions (#2308)
* Obtaining LLaMA 2 instructions
* Removed sharing warning for LLaMA 2
* Linked TheBloke's GGML repos
* Add LLaMA 2 to list of supported models
* Added LLaMA 2 usage instructions
* Added links to LLaMA 2 70B models
mj-shifu [Thu, 27 Jul 2023 20:39:17 +0000 (22:39 +0200)]
convert.py : Update to support 70B HF format model files (#2427)
* convert.py : fix llama 2 70b conversion from Huggingface
Georgi Gerganov [Thu, 27 Jul 2023 08:00:54 +0000 (11:00 +0300)]
metal : disable graph concurrency optimization due to bug (#2413)
slaren [Wed, 26 Jul 2023 21:57:23 +0000 (23:57 +0200)]
ggml : fix assert in ggml_set_unary_op (#2410)
Cebtenzzre [Wed, 26 Jul 2023 18:00:04 +0000 (14:00 -0400)]
make : build with -Wmissing-prototypes (#2394)
slaren [Wed, 26 Jul 2023 13:56:53 +0000 (15:56 +0200)]
ggml : allocate graphs in a context (#2392)
* ggml : graph allocation in contexts
* allocate work buffer as a ggml_object in ggml_graph_compute_with_ctx
* llama.cpp : allocate graph in the context
* add GGML_PAD
---------
Co-authored-by: Georgi Gerganov <redacted>
Kawrakow [Tue, 25 Jul 2023 15:35:53 +0000 (18:35 +0300)]
Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)
Co-authored-by: Iwan Kawrakow <redacted>
slaren [Tue, 25 Jul 2023 14:20:12 +0000 (16:20 +0200)]
ggml : fix ggml_flash_attn to use op_params (#2387)
* ggml : fix ggml_flash_attn to use op_params
ldwang [Tue, 25 Jul 2023 13:22:09 +0000 (21:22 +0800)]
convert.py : support bpe tokenizer (#2228)
* support bpe tokenizer in convert
Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert
Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <redacted>
---------
Signed-off-by: ldwang <redacted>
Co-authored-by: ldwang <redacted>
Jiahao Li [Tue, 25 Jul 2023 12:58:32 +0000 (20:58 +0800)]
ggml : relax contiguous constraints in activation function (#2371)
slaren [Tue, 25 Jul 2023 12:32:20 +0000 (14:32 +0200)]
ggml : improve graph build time via hash table lookup (#2329)
* improve graph build time
* ggml_tensor : use 1 bit per flag
* use a hash table instead
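The speedup replaces a linear "have we visited this tensor yet?" scan with a pointer-keyed hash set; an illustrative open-addressing sketch (table size and names are made up, not ggml's actual values):

    #include <stdbool.h>
    #include <stddef.h>

    // Sketch: linear-probing hash set over tensor pointers.
    #define VISITED_SIZE 16411 /* a prime comfortably above the node count */

    static bool visited_insert(const void * keys[VISITED_SIZE], const void * t) {
        size_t h = (size_t) t % VISITED_SIZE;
        while (keys[h] != NULL) {
            if (keys[h] == t) return true;   // already in the graph
            h = (h + 1) % VISITED_SIZE;      // probe the next slot
        }
        keys[h] = t;                         // first visit
        return false;
    }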
Hesen Peng [Tue, 25 Jul 2023 12:24:09 +0000 (05:24 -0700)]
build : fix line breaking error in build-info.sh (#2349)
* fix line breaking
* build number line break removal
Xiao-Yong Jin [Tue, 25 Jul 2023 12:19:11 +0000 (07:19 -0500)]
main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)
* add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS
The BOS precedes the string specified by `--in-prefix`.
Model-generated EOS is now kept in the context.
This provides a way to strictly follow the prompt format used in
Llama-2-chat.
The EOS handling also benefits some existing finetunes that use
EOS to mark the end of a turn.
* examples/common: move input_prefix_bos to other bools
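A sketch of the input assembly, assuming the zero-argument `llama_token_bos()` of the 2023-era C API (newer revisions take a model or context argument):

    // Sketch: with --in-prefix-bos, the user turn begins with a literal BOS
    // token; the model's EOS from the previous turn stays in the context.
    if (params.input_prefix_bos) {
        embd_inp.push_back(llama_token_bos());
    }
    // ... followed by the --in-prefix tokens and the user's text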
Eve [Tue, 25 Jul 2023 12:16:13 +0000 (08:16 -0400)]
ci : add non-AVX scalar build/test (#2356)
* noavx build and test
* we don't need to remove f16c in windows
katsu560 [Tue, 25 Jul 2023 12:13:41 +0000 (21:13 +0900)]
k_quants : add AVX support to dot functions with QK_K as 64 (#2339)
* add AVX to ggml_vec_dot_q2_K_q8_K()
* add AVX to ggml_vec_dot_q3_K_q8_K()
* add AVX to ggml_vec_dot_q4_K_q8_K()
* add AVX to ggml_vec_dot_q5_K_q8_K()
* add AVX to ggml_vec_dot_q6_K_q8_K()
* refactor AVX code in ggml_vec_dot_q6_K_q8_K()
Shouzheng Liu [Tue, 25 Jul 2023 12:00:19 +0000 (08:00 -0400)]
metal : concurrently dispatch commands (#2358)
* metal: concurrently dispatch commands
When `ggml_metal_graph_compute` is called for the first time,
`ggml_metal_graph_find_concurrency` runs and records the commands that can
be issued concurrently in the Metal context's `concur_list` array.
* metal: don't call find_concurrency automatically.
* metal : code style changes
---------
Co-authored-by: Georgi Gerganov <redacted>
Kawrakow [Tue, 25 Jul 2023 10:48:29 +0000 (13:48 +0300)]
Another speed gain for Q4_0 and Q4_1 on Metal (#2375)
* Another speed gain for Q4_0 and Q4_1 on Metal
* Have N_DST, etc., be template parameters
---------
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Tue, 25 Jul 2023 10:48:04 +0000 (13:48 +0300)]
Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359)
* Fix Q4_K and Q5_K for QK_K = 64
* Very slightly better Q5_K bit fiddling
---------
Co-authored-by: Iwan Kawrakow <redacted>
slaren [Tue, 25 Jul 2023 09:36:17 +0000 (11:36 +0200)]
server: add rms_norm_eps parameter (#2380)
Henri Vasserman [Tue, 25 Jul 2023 07:27:34 +0000 (10:27 +0300)]
[Server] Escape HTML in webchat (#2368)
* escape HTML in webchat
* add amp
slaren [Mon, 24 Jul 2023 15:57:12 +0000 (17:57 +0200)]
make rms_norm_eps a parameter (#2374)
* make rms_norm_eps a parameter
* add rms_norm_eps to command line
* fix baby llama, test-grad0
* use scientific notation for eps param in the help
ggml-ci
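For reference, the epsilon being exposed is the stabilizer inside RMS normalization; LLaMA-1 and LLaMA-2 checkpoints were trained with different values (1e-6 vs 1e-5), which is why a single hard-coded constant stopped being adequate:

    \mathrm{RMSNorm}(x)_i = \frac{x_i}{\sqrt{\frac{1}{n}\sum_{j=1}^{n} x_j^2 + \varepsilon}} \, g_i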
Aarni Koskela [Mon, 24 Jul 2023 14:54:22 +0000 (17:54 +0300)]
Chat UI extras (#2366)
* makefile: correct deps for server
* server: tighten settings layout a little
* server: expose all currently configured generation params in UI
* server: expose remaining generation params, for the adventurous
* server: embetter mirostat fields
Georgi Gerganov [Mon, 24 Jul 2023 11:46:21 +0000 (14:46 +0300)]
ggml : sync (unary ops refactor, static-correctness) (#2370)
* ggml : sync (unary ops, tests)
ggml-ci
* tests : remove unnecessary funcs
Kawrakow [Mon, 24 Jul 2023 09:55:02 +0000 (12:55 +0300)]
Fix scalar version of Q5_K when QK_K = 64 (#2362)
Co-authored-by: Iwan Kawrakow <redacted>
Evan Jones [Mon, 24 Jul 2023 03:58:10 +0000 (23:58 -0400)]
llama : add grammar-based sampling (#1773)
* llama, main : constrain sampling to grammar
* allow loading grammar from file
* fix whitespace errors
* handle & print parser errors
* add comments to grammar syntax and allow newlines where unambiguous
* add missing include
* support alternates in root rule
* fix bugs with empty token and EOS
* adjust JSON grammar
* remove swp file
* rewrite ternary expressions
Co-authored-by: Henri Vasserman <redacted>
* use struct for grammar elements and add Unicode support
* add unicode escapes
* add inverse char ranges
* only sample full tokens (no peeking or truncation)
* llama : minor style changes
blindly applied in online editor - hopefully I didn't break something
* update help text
* add warning message if EOS is disabled
---------
Co-authored-by: Henri Vasserman <redacted>
Co-authored-by: Georgi Gerganov <redacted>
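A small hedged example of the GBNF notation this PR introduces (the repo's grammars/ directory has the authoritative samples); this grammar would constrain sampling to a yes/no verdict plus a short reason:

    # root is where matching starts
    root   ::= answer "." " " reason
    answer ::= "Yes" | "No"
    reason ::= [a-zA-Z0-9 ,]+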
Kawrakow [Sun, 23 Jul 2023 21:19:47 +0000 (00:19 +0300)]
Some more Q4_K and Q5_K speedup on CUDA (#2346)
* Faster Q5_K on CUDA
* Small Q5_K improvement on older GPUs
* Sped up Q4_K on CUDA
GTX1660: 29.5 ms/t -> 25.6 ms/t
RTX4080: 8.40 ms/t -> 8.25 ms/t
* Sped up Q4_K on CUDA
GTX1660: 36.7 ms/t -> 35.6 ms/t
RTX4080: 9.8 ms/t -> 9.5 ms/t
* Address PR comments
* Add some comments to satisfy PR reviewer
---------
Co-authored-by: Iwan Kawrakow <redacted>
IgnacioFDM [Sun, 23 Jul 2023 20:31:17 +0000 (17:31 -0300)]
Add gqa parameter support to the server (#2351)
* Add gqa parameter support to the server
* Change help from stderr to stdout
Johannes Gäßler [Sun, 23 Jul 2023 15:49:06 +0000 (17:49 +0200)]
Fix __dp4a documentation (#2348)
wzy [Sun, 23 Jul 2023 13:33:02 +0000 (21:33 +0800)]
common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)
* Fix #2345, fix incorrect n_threads
* Update examples/common.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
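The change is essentially the following mapping; note that `std::thread::hardware_concurrency()` may itself return 0 on exotic platforms, so a fallback is kept (a sketch, not the exact patch):

    #include <thread>

    // -1 (or any non-positive n_threads) means "use all hardware threads".
    static int resolve_n_threads(int n_threads) {
        if (n_threads <= 0) {
            n_threads = (int) std::thread::hardware_concurrency();
            if (n_threads <= 0) n_threads = 4; // detection failed; sane default
        }
        return n_threads;
    }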
slaren [Sun, 23 Jul 2023 13:19:39 +0000 (15:19 +0200)]
fix n_tasks (#2342)
ggml-ci
slaren [Sun, 23 Jul 2023 12:36:02 +0000 (14:36 +0200)]
ggml: move op parameters from tensors to ggml_tensor::op_params (#2333)
* ggml: move op parameters from tensors to ggml_tensor::op_params
* alibi: use memcpy for float params
* remove `src[1] = NULL` in ops
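The pattern, sketched against ggml's `op_params` field (a small int32_t array on every tensor); `result` and `tensor` stand in for `ggml_tensor *` values, and floats go in and out via memcpy to stay clear of strict-aliasing issues:

    #include <string.h>

    // Sketch: store a float parameter (e.g. alibi's bias max) on the tensor.
    float param = 8.0f;
    memcpy(result->op_params, &param, sizeof(param));

    // ... and inside the op implementation, read it back the same way:
    float param_rt;
    memcpy(&param_rt, tensor->op_params, sizeof(param_rt));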
Georgi Gerganov [Sun, 23 Jul 2023 12:09:47 +0000 (15:09 +0300)]
llama : grouped-query attention + LLaMAv2 70B support (#2276)
* CUDA: GQA implementation
* llama : support for GQA and LLaMAv2 70B
ggml-ci
* py : fix hparams parsing (if-else blocks)
ggml-ci
* py : oh boy ..
ggml-ci
* help : fix gqa value for 70B
ggml-ci
---------
Co-authored-by: JohannesGaessler <redacted>
maddes8cht [Sun, 23 Jul 2023 11:59:48 +0000 (13:59 +0200)]
llama : print help to stdout (#2338)
wzy [Sun, 23 Jul 2023 11:57:02 +0000 (19:57 +0800)]
flake : support `nix build '.#opencl'` (#2337)
Christian Demsar [Sun, 23 Jul 2023 11:56:34 +0000 (07:56 -0400)]
llama : print max tensor size to stderr (#2336)
Jose Maldonado [Sun, 23 Jul 2023 11:52:08 +0000 (07:52 -0400)]
make : fix CLBLAST compile support in FreeBSD (#2331)
* Fix Makefile for CLBLAST compile support and add instructions for compiling llama.cpp on FreeBSD
* More general use-case for CLBLAST support (Linux and FreeBSD)
AustinMroz [Sun, 23 Jul 2023 11:16:48 +0000 (06:16 -0500)]
examples : simplify vim plugin (#2327)
Uses builtin json_encode and json_decode functions to simplify escaping
Removes the need for temp files
Jiahao Li [Sun, 23 Jul 2023 11:00:37 +0000 (19:00 +0800)]
metal : support bcast add & dup & cont op (#2323)
Kawrakow [Sun, 23 Jul 2023 05:49:20 +0000 (08:49 +0300)]
Speed up Q4_K (#2322)
Co-authored-by: Iwan Kawrakow <redacted>
Johannes Gäßler [Sat, 22 Jul 2023 19:27:34 +0000 (21:27 +0200)]
CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313)
Georgi Gerganov [Sat, 22 Jul 2023 18:17:57 +0000 (21:17 +0300)]
llama : optimize memory buffers (#2325)
klosax [Sat, 22 Jul 2023 12:21:24 +0000 (14:21 +0200)]
Perplexity: Compute scores correlated to HellaSwag (#2312)
* Add parameter --perplexity-lines to perplexity.cpp
whoreson [Sat, 22 Jul 2023 10:34:51 +0000 (12:34 +0200)]
examples : basic VIM plugin
Vim plugin for the server executable