git.djapps.eu Git - pkg/ggml/sources/llama.cpp/shortlog
2023-05-01  slaren  cuBLAS: fall back to pageable memory if pinned alloc...
2023-05-01  Alex Klinkhamer  llama : let context be const when accessing const data...
2023-04-30  Georgi Gerganov  ggml : fix UB (int << 31)
2023-04-30  Pavol Rusnak  build: add armv{6,7,8} support to cmake (#1251)
2023-04-30  jon-chuang  common : better default number of threads (#934)
2023-04-30  0cc4m  ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels...
2023-04-30  Georgi Gerganov  ggml : add Q5 WASM SIMD + GGML_FTYPE
2023-04-30  Stephan Walter  Various fixes to mat_mul benchmark (#1253)
2023-04-30  Georgi Gerganov  ggml : fix labels for GGML_OP_ALIBI
2023-04-29  Georgi Gerganov  ggml : fix 32-bit ARM NEON
2023-04-29  Georgi Gerganov  ggml : use vzip instead of vuzp for consistency
2023-04-29  Georgi Gerganov  ggml : fix visibility and unused warnings
2023-04-29  Georgi Gerganov  ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)
2023-04-29  Georgi Gerganov  ggml : adjust mul_mat_f16 work memory (#1226)
2023-04-29  Georgi Gerganov  build : fix reference to old llama_util.h
2023-04-29  Georgi Gerganov  examples : fix save-load-state + rename llama-util.h
2023-04-29  Georgi Gerganov  common : change default parameters to pre-#1126 (#1223)
2023-04-29  Ivan Stepanov  llama : new sampling algorithms (#1126)
2023-04-29  slaren  cuBLAS: use host pinned memory and dequantize while...
2023-04-28  Henri Vasserman  cuBLAS: non-contiguous tensor support (#1215)
2023-04-28  Stephan Walter  Remove Q4_3 which is no better than Q5 (#1218)
2023-04-28  Georgi Gerganov  readme : update hot topics
2023-04-28  Georgi Gerganov  ggml : sync ggml (ggml_alibi)
2023-04-28  CRD716  examples : add Jeopardy example (#1168)
2023-04-28  Evan Jones  llama : add session file format and saved sessions...
2023-04-28  Georgi Gerganov  ggml : add helper debug printf in soft_max
2023-04-28  0cc4m  ggml : add CLBlast support (#1164)
2023-04-28  Folko-Ven  Correcting link to w64devkit (#1214)
2023-04-28  Johannes Gäßler  Add Manjaro CUDA include and lib dirs to Makefile ...
2023-04-28  Yann Follet  add avx2 for dot_q8_0_q8_0, 2x faster than scalar ...
2023-04-26  Stephan Walter  ggml : slightly faster AVX2 implementation for Q5 ...
2023-04-26  Georgi Gerganov  readme : add quantization info
2023-04-26  Georgi Gerganov  ggml : add Q5_0 and Q5_1 quantization (#1187)
2023-04-26  Ásgeir Bjarni...  Allow setting the rng seed after initialization. (...
2023-04-26  DaniAndTheWeb  Updating build instructions to include BLAS support...
2023-04-26  Pavol Rusnak  quantize : use `map` to assign quantization type from...
2023-04-25  Stephan Walter  Update SHA256SUMS after quantization change (#1181)
2023-04-25  ostix360  py : cast lora_alpha to int in convert-lora-to-ggml...
2023-04-25  Pavol Rusnak  nix: use convert.py instead of legacy wrapper convert...
2023-04-25  Georgi Gerganov  ggml : add Q8_0 quantization format (rename the old...
2023-04-25  unbounded  ggml : use full range for Q4_0 and Q4_2 quantization...
2023-04-24  xaedes  ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)
2023-04-24  Georgi Gerganov  ggml : export symbols (#1155)
2023-04-24  xaedes  examples : add save_load_state example (#1150)
2023-04-24  Georgi Gerganov  llama : increase scratch buffer size for 65B (ref ...
2023-04-24  mgroeber9110  examples/main README improvements and some light refact...
2023-04-24  Stephan Walter  Fix build for gcc 8 and test in CI (#1154)
2023-04-24  slaren  Fix cuda compilation (#1128)
2023-04-24  Georgi Gerganov  llama : refactor get / set state + remove redundant...
2023-04-23  slaren  Fix LoRA acronym (#1145)
2023-04-23  Georgi Gerganov  scripts : add helper scripts to synch ggml repo
2023-04-23  DannyDaemonic  Added README.md for main with examples and explanations...
2023-04-23  Georgi Gerganov  ggml : do not print perf ops that have not been used...
2023-04-23  Georgi Gerganov  ggml : better PERF prints + support "LLAMA_PERF=1 make"
2023-04-23  Stephan Walter  Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)
2023-04-23  Pavol Rusnak  readme : update gpt4all instructions (#980)
2023-04-23  Yishuo Wang  A better `packNibbles` and `mul_sum_i8_pairs_float...
2023-04-22  Georgi Gerganov  ggml : fix Q4_3 cuBLAS
2023-04-22  Stephan Walter  ci : trigger CI for drafts, but not most PR actions...
2023-04-22  Stephan Walter  Fix CI: ARM NEON, quantization unit tests, editorconfig...
2023-04-22  unbounded  ggml : unit test for quantization functions (#953)
2023-04-22  wbpxre150  llama : print timings on ctrl+c exit (#1021)
2023-04-22  eiery  llama : have n_batch default to 512 (#1091)
2023-04-22  Howard Su  cmake : fix build under Windows when enable BUILD_SHARE...
2023-04-22  Georgi Gerganov  ggml : fix AVX build + update to new Q8_0 format
2023-04-22  Georgi Gerganov  ggml : alternative Q4_3 implementation using modified...
2023-04-22  Stephan Walter  ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and...
2023-04-22  Clint Herron  examples : Improve Alpaca Default Repeat Penalty: Bette...
2023-04-22  xaedes  llama : add api for getting/setting the complete state...
2023-04-21  slaren  Improve cuBLAS performance by using a memory pool ...
2023-04-21  apaz  llama : fixed rlimit error message (#888)
2023-04-21  源文雨  cmake : link threads publicly to ggml (#1042)
2023-04-21  Alex Klinkhamer  main : evaluate tokens in batches after swapping contex...
2023-04-21  xaedes  llama : remember and restore kv cache data pointers...
2023-04-21  Kawrakow  ggml : a faster version for Q4_1 x Q8_0 dot products...
2023-04-21  slaren  Show perplexity ETA in hours and minutes (#1096)
2023-04-21  Georgi Gerganov  llama : fix comment for "output.weight" tensor
2023-04-20  Stephan Walter  Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B...
2023-04-20  Georgi Gerganov  ggml : sync ggml (add GPT-NeoX RoPE implementation)
2023-04-20  Georgi Gerganov  ggml : fix bug in ggml_compute_forward_dup_f32()
2023-04-20  slaren  Add Q4_3 support to cuBLAS (#1086)
2023-04-20  Georgi Gerganov  ggml : do not break cuBLAS build (Q4_3 is not yet imple...
2023-04-20  Georgi Gerganov  ggml : fix Q4_3 quantization
2023-04-20  Kawrakow  llama : multi-threaded quantization (#1075)
2023-04-20  Georgi Gerganov  ggml : add Q4_3 quantization (#1082)
2023-04-20  Ivan Komarov  ci : remove the LLAMA_ACCELERATE matrix dimension from...
2023-04-20  源文雨  fix: LLAMA_CUBLAS=1 undefined reference 'shm_open'...
2023-04-20  Stephan Walter  AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)
2023-04-20  slaren  Improve cuBLAS performance by dequantizing on the GPU...
2023-04-19  CRD716  Minor: Readme fixed grammar, spelling, and misc updates...
2023-04-19  Kawrakow  Q4_2 quantization with rmse-optimized scale and quants...
2023-04-19  Georgi Gerganov  ggml : use 8-bit precision for Q4_1 intermediate result...
2023-04-19  Georgi Gerganov  readme : add warning about Q4_2 and Q4_3
2023-04-19  Stephan Walter  ggml : Q4 cleanup - remove 4-bit dot product code ...
2023-04-19  slaren  Add NVIDIA cuBLAS support (#1044)
2023-04-18  slaren  Multi-threaded ggml_cpy (#1035)
2023-04-18  Georgi Gerganov  ggml : add new Q4_2 quantization (ARM only) (#1046)
2023-04-18  Georgi Gerganov  ggml : scratch that - vmlaq_n_f32 is always better
2023-04-18  Georgi Gerganov  gitignore : vdot
2023-04-18  Georgi Gerganov  ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectoriz...