git.djapps.eu Git - pkg/ggml/sources/llama.cpp/shortlog
2023-09-01 Georgi Gerganov  llama2c : rename function
2023-09-01 Cebtenzzre  make : use unaligned vector moves on MinGW (#2945)
2023-09-01 m3ndax  minor : add const qualifiers (#2853)
2023-09-01 Konstantin...  docs : add java-llama.cpp to README.md (#2935)
2023-09-01 Cebtenzzre  build : fix most gcc and clang warnings (#2861)
2023-09-01 Ben Siraphob  examples : add C grammar (#2357)
2023-09-01 Tameem  ggml : add RISC-V vector intrinsics support (#2929)
2023-09-01 Georgi Gerganov  metal : slight speed-up for add and mul kernels (#2917)
2023-09-01 staviq  logs : fix mingw-like builds (fixes #2898) (#2911)
2023-09-01 Cebtenzzre  llama2c : fix segfault and alloc-dealloc-mismatch ...
2023-09-01 Kawrakow  metal: somewhat faster f16 x f32 matrix multiply kernel...
2023-09-01 Cebtenzzre  convert : fix another python 3.8 issue (#2949)
2023-08-31 slaren  remove convert-llama-7b-pth-to-gguf.py and convert...
2023-08-31 Kerfuffle  scripts: Use local gguf package when running from repo...
2023-08-31 DannyDaemonic  @vxiiduu's fix for PrefetchVirtualMemory (#2930)
2023-08-31 Cebtenzzre  convert : fix python 3.8 support, modernize type annota...
2023-08-30 Johannes Gäßler  CUDA: mul_mat_q=true llama_context_params default ...
2023-08-30 Henri Vasserman  [Docker] fix tools.sh argument passing. (#2884)
2023-08-30 Georgi Gerganov  convert.py : use dir name to name the llama
2023-08-30 Georgi Gerganov  examples : fix underscore in beam-search + .gitignore...
2023-08-30 M. Yusuf Sarıgöz  gguf : add workflow for Pypi publishing (#2896)
2023-08-30 alonfaraj  make : add test and update CI (#2897)
2023-08-30 Gilad S  docs : add `node-llama-cpp` to `README.md` (#2885)
2023-08-30 Kerfuffle  convert : various script cleanups/fixes + merges and...
2023-08-30 chaihahaha  llm.vim : stop generation at multiple linebreaks, bind...
2023-08-30 staviq  main : log file (#2748)
2023-08-30 Cebtenzzre  tests : add a C compliance test (#2848)
2023-08-29 slaren  ggml : add view_src and view_offs to ggml_tensor for...
2023-08-29 slaren  remove outdated references to -eps and -gqa from README...
2023-08-29 Kawrakow  Tell users attmepting to run perplexity with too few...
2023-08-29 Kawrakow  10X faster BPE tokenizer (#2876)
2023-08-29 maddes8cht  py : fix "usage" messages (#2873)
2023-08-29 jameswu2014  convert.py : fix baichuan7B support (#2870)
2023-08-29 Jhen-Jie Hong  readme : add react-native binding (#2869)
2023-08-29 Cebtenzzre  make : fix clang tests build, add missing examples...
2023-08-29 Georgi Gerganov  metal : add option to disable debug logs (close #2764)
2023-08-29 Georgi Gerganov  scripts : add pipefail
2023-08-29 Marcus Dunn  added `struct` to llama_dump_timing_info_yaml's `llama_...
2023-08-28 xaedes  train : mem usage and other improvements (#2439)
2023-08-28 slaren  llama-bench : set locale to utf8 (#2832)
2023-08-28 Johannes Gäßler  YAML result logging + preset script (#2657)
2023-08-28 alonfaraj  make : fix tests build (#2855)
2023-08-28 grahameth  llama.cpp : fix wrong vsnprintf call in MS compiler...
2023-08-28 Ronny Brendel  ggml : tiny ggml_vec_dot_q4_K_q8_K AVX2 improvement...
2023-08-28 Georgi Gerganov  ggml : sync (mem align to header + conv_transpose_2d...
2023-08-28 Johannes Gäßler  CUDA: fix RoPE asserts, block sizes (#2833)
2023-08-28 igarnier  llama.h : add missing struct keyword for C compat in...
2023-08-28 Georgi Gerganov  metal : fix memory leak (#2762)
2023-08-28 Cebtenzzre  quantize : make output filename optional again (#2823)
2023-08-28 JohnnyB  devops : added systemd units and set versioning to...
2023-08-27 Georgi Gerganov  gguf : fix strings to not be null-terminated (#2839)
2023-08-27 Georgi Gerganov  llama : fix MPI threads (close #2827)
2023-08-27 Olivier Chafik  examples : update llama2.c converter to read vocab...
2023-08-27 Kawrakow  llama : speedup tokenization (#2831)
2023-08-27 Georgi Gerganov  falcon : fix CUDA inference by making K and Q contiguou...
2023-08-27 Georgi Gerganov  readme : fix headings
2023-08-27 Georgi Gerganov  scripts : helper convert script
2023-08-27 Kawrakow  k_quants tuning for Falcon-7b (#2816)
2023-08-27 Georgi Gerganov  readme : update hot topics
2023-08-27 Georgi Gerganov  gguf : add 64-bit support (GGUF v2) (#2821)
2023-08-27 Georgi Gerganov  llama : more tokenizer fixes (#2810)
2023-08-27 Przemysław...  ggml : detect SSSE3 (#2825)
2023-08-27 slaren  ci : add LoRA test to CI (#2650)
2023-08-26 Bruce MacDonald  server : add `/detokenize` endpoint (#2802)
2023-08-26 Kerfuffle  convert.py : advanced option (#2753)
2023-08-26 Tim Miller  llama : use Unicode Escape Sequence to replace encoded...
2023-08-26 Tungsten842  flake.nix : add rocm support and cleanup (#2808)
2023-08-26 Cebtenzzre  llama : move #includes out of _GNU_SOURCE conditional...
2023-08-26 Dr. Tom Murphy...  main : fix bug (penalize_nl=false doesn't work) + suppr...
2023-08-26 Cebtenzzre  llama : use std::abs in llama_sample_tail_free (#2800)
2023-08-26 Georgi Gerganov  k-quants : remove unnecessary tensor shape restrictions...
2023-08-26 Kawrakow  Better perplexity for 2- and 3-bit quantization for...
2023-08-26 Kawrakow  Fix HellaSwag (#2805)
2023-08-26 Volodymyr Vitvitskyi  flake : build llama.cpp on Intel with nix (#2795)
2023-08-26 Nigel Bosch  Handle null rope scaling value (#2793)
2023-08-26 klosax  Fix spm whitespaces (#2806)
2023-08-26 lon  examples : skip unnecessary external lib in server...
2023-08-25 Marcus Dunn  llama : fix struct decl (#2790)
2023-08-25 Kawrakow  Faster perplexity computation (#2786)
2023-08-25 Matt Pulver  llama : add llama_beam_search() (#2267)
2023-08-25 Nigel Bosch  convert.py : Get rope scale from HuggingFace models...
2023-08-25 slaren  llama-bench : add model sizes (#2771)
2023-08-25 slaren  convert.py : export rope freq_base when converting...
2023-08-25 Jhen-Jie Hong  server : display token probabilities in the UI (#2489)
2023-08-25 Georgi Gerganov  ci : pip install gguf in editable mode (#2782)
2023-08-25 M. Yusuf Sarıgöz  gguf : export objects to user code (#2780)
2023-08-25 Henri Vasserman  ROCm Port (#1087)
2023-08-25 Georgi Gerganov  cuda : add RoPE kernel for mode == 2 (NeoX) (#2760)
2023-08-25 M. Yusuf Sarıgöz  gguf : make gguf pip-installable
2023-08-25 Shouzheng Liu  ggml-alloc : enlarge size of parse_seq (#2776)
2023-08-24 Marcus Dunn  Added `enum` to `llama_token_get_type` return type...
2023-08-24 slaren  convert.py : try to determine n_ctx automatically for...
2023-08-24 slaren  gguf : add rope_freq_base parameter for CodeLlama ...
2023-08-24 Georgi Gerganov  falcon : write file type
2023-08-24 Shouzheng Liu  metal : bug-fix when enable ggml-alloc (#2757)
2023-08-24 Georgi Gerganov  convert : auto-determine model name based on dir +...
2023-08-24 Kerfuffle  Fix for main example getting stuck when -n -2 and ...
2023-08-24 slaren  fix convert.py for codellama, add llama 34B to the...
2023-08-24 DannyDaemonic  Tag release with build number (#2732)
2023-08-24 Georgi Gerganov  metal : add Q8_0 support (#2763)