git.djapps.eu Git - pkg/ggml/sources/llama.cpp/shortlog
2024-05-13  k.h.lai  llava-cli: fix base64 prompt (#7248)
2024-05-13  Johannes Gäßler  perplexity: add BF16 vs. FP16 results (#7150)
2024-05-13  Neo Zhang  [SYCL] rm wait() (#7233)
2024-05-13  Joan Fontanals  llama : rename jina tokenizers to v2 (#7249)
2024-05-13  Brian  convert.py: Outfile default name change and additional...
2024-05-13  Benjamin Findley  change default temperature of OAI compat API from 0...
2024-05-13  Neo Zhang  [SYCL] Add oneapi runtime dll files to win release...
2024-05-13  Neo Zhang  [SYCL] update CI with oneapi 2024.1 (#7235)
2024-05-12  Johannes Gäßler  CUDA: add FP32 FlashAttention vector kernel (#7188)
2024-05-12  Georgi Gerganov  cmake : fix version cmp (#7227)
2024-05-12  slaren  remove convert-lora-to-ggml.py (#7204)
2024-05-11  Georgi Gerganov  metal : fix warnings (skipme) (#0)
2024-05-11  Georgi Gerganov  sync : ggml
2024-05-11  Georgi Gerganov  metal : fix indent (ggml/0)
2024-05-11  Georgi Gerganov  ggml : resolve merge (ggml/0)
2024-05-11  Josh Ramer  Scripting & documenting debugging one test without...
2024-05-11  Xuan Son Nguyen  fix system prompt handling (#7153)
2024-05-11  compilade  convert-hf : support bfloat16 conversion (#7158)
2024-05-11  Georgi Gerganov  sync : ggml
2024-05-11  Justina Cho  feat: implemented sigmoid function (ggml/806)
2024-05-11  Borislav Stanimirov  build: fix and ignore msvc warnings (ggml/805)
2024-05-11  CrispStrobe  convert : skip unaccessible HF repos (#7210)
2024-05-11  Steve Grubb  server : free llama_batch on exit (#7212)
2024-05-11  Haoxiang Fei  llama : lookup word in vocab before doing BPE merges...
2024-05-11  Johannes Gäßler  server: fix reported top tokens for temperature 0 ...
2024-05-11  Joan Fontanals  llama : add Jina Embeddings architecture (#6826)
2024-05-11  Georgi Gerganov  ggml : full ALiBi support (#7192)
2024-05-10  slaren  llama-bench : add pp+tg test type (#7199)
2024-05-10  Georgi Gerganov  metal : fix flash attention kernel requirements (#7169)
2024-05-10  Georgi Gerganov  convert : print "ignore_merges" field
2024-05-10  slaren  llama : use n_vocab to differentiate between mistral...
2024-05-10  Justine Tunney  Fix memory bug in grammar parser (#7194)
2024-05-10  HanishKVC  Main+: optionally allow special tokens from user in...
2024-05-10  Andrei  llava : fix moondream support (#7163)
2024-05-10  Ouadie EL FAROUKI  Minor arithmetic improvement to mmvq wrapper kernel...
2024-05-09  slaren  eval-callback : fix conversion to float (#7184)
2024-05-09  0cc4m  Vulkan Bugfixes and Improvements (#7084)
2024-05-09  Georgi Gerganov  readme : add scheduled server workflow status badge
2024-05-09  l3utterfly  readme : add app (#6371)
2024-05-09  jaime-m-p  llama3 custom regex split (#6965)
2024-05-09  Johannes Gäßler  CUDA: generalize FP16 fattn vec kernel (#7061)
2024-05-09  Galunid  Add warning if token is invalid (#7173)
2024-05-09  Daniel Bevenius  llama : update llama_timings.n_p_eval setting (#7160)
2024-05-09  Sigbjørn Skjæret  gguf-py : add special token modification capability...
2024-05-09  Albert Jin  opencl : alignment size converted from bits to bytes...
2024-05-09  Ahmet Zeer  TypoFix (#7162)
2024-05-08  Jared Van Bortel  cmake : fix typo (#7151)
2024-05-08  compilade  convert-hf : save memory with lazy evaluation (#7075)
2024-05-08  agray3  Introduction of CUDA Graphs to LLama.cpp (#6766)
2024-05-08  Johannes Gäßler  JSON: [key] -> .at(key), assert() -> GGML_ASSERT (...
2024-05-08  Georgi Gerganov  Revert "llava : add support for moondream vision langua...
2024-05-08  JohnnyB  server : add themes + favicon (#6848)
2024-05-08  Gilad S  metal : use `vm_allocate` instead of `posix_memalign...
2024-05-08  Dawid Potocki  main : add --conversation / -cnv flag (#7108)
2024-05-08  Eve  sgemm : AVX Q4_0 and Q8_0 (#6891)
2024-05-08  Johan  server : add_special option for tokenize endpoint ...
2024-05-08  20kdc  convert.py : --vocab-only generates false but valid...
2024-05-08  Ren Xuancheng  llama : add BPE pre-tokenization for Qwen2 (#7114)
2024-05-08  Xuan Son Nguyen  clean up json_value & server_log (#7142)
2024-05-08  DAN™  convert : add BPE pre-tokenization for DBRX (#7132)
2024-05-08  Georgi Gerganov  py : also print the normalizers
2024-05-08  Brian  compare-llama-bench.py: add missing basicConfig (#7138)
2024-05-08  Justine Tunney  ggml : introduce bfloat16 support (#6412)
2024-05-08  Georgi Gerganov  metal : fix unused warning
2024-05-08  Jeximo  Further tidy on Android instructions README.md (#7077)
2024-05-08  jukofyork  Fixed save_imatrix to match old behaviour for MoE ...
2024-05-07  Johannes Gäßler  server: fix incorrectly reported token probabilities...
2024-05-07  nopperl  Fix OLMo HF to GGUF conversion (#6910)
2024-05-07  Kyle Mistele  server : update readme with undocumented options (...
2024-05-07  Georgi Gerganov  readme : update hot topics
2024-05-07  RhinoDevel  main : update log text (EOS to EOG) (#7104)
2024-05-07  omahs  docs: fix typos (#7124)
2024-05-07  Georgi Gerganov  ci : add GG_BUILD_EXTRA_TESTS_0 env (#7098)
2024-05-06  William Tambellini  Add an option to build without CUDA VMM (#7067)
2024-05-06  Georgi Gerganov  flake.lock: Update (#7079)
2024-05-06  Georgi Gerganov  minor : fix trailing whitespace
2024-05-05  kunnis  Adding support for the --numa argument for llama-bench...
2024-05-05  Sigbjørn Skjæret  Disable benchmark on forked repo (#7034)
2024-05-05  Lyle Dean  readme : add note that LLaMA 3 is not supported with...
2024-05-05  DAN™  command-r : add BPE pre-tokenization (#7063)
2024-05-05  Brian  py : logging and flake8 suppression refactoring (#7081)
2024-05-04  Xuan Son Nguyen  gguf-split: add --no-tensor-first-split (#7072)
2024-05-04  Jeximo  Tidy Android Instructions README.md (#7016)
2024-05-04  viric  Fix Linux /sys cpu path to guess number of cores (...
2024-05-04  maor-ps  If first token generated from the server is the stop...
2024-05-04  Georgi Gerganov  tests : add test-tokenizer-0.sh + fix some tokenizers...
2024-05-03  Brian  convert.py : add python logging instead of print()...
2024-05-03  Daniel Bevenius  llama : rename ctx to user_data in progress_callback...
2024-05-02  Bartowski  Remove .attention from skipped tensors to match more...
2024-05-02  alwqx  chore: fix typo in llama.cpp (#7032)
2024-05-01  Andrew Downing  Update LOG_IMPL and LOG_TEE_IMPL (#7029)
2024-05-01  l3utterfly  main : fix off by one error for context shift (#6921)
2024-05-01  Johannes Gäßler  Server: add tests for batch size, different seeds ...
2024-05-01  Johannes Gäßler  CUDA: CUDART < 11.7 workaround for __hmax, __hmax2...
2024-05-01  slaren  ci : exempt confirmed bugs from being tagged as stale...
2024-04-30  Johannes Gäßler  perplexity: more statistics, added documentation (...
2024-04-30  Kevin Gibbons  switch to using localizedDescription (#7010)
2024-04-30  Georgi Gerganov  metal : remove deprecated error code (#7008)
2024-04-30  Kevin Gibbons  metal : log more info on error (#6987)
2024-04-30  Georgi Gerganov  ggml : add Flash Attention (#5021)