git.djapps.eu Git - pkg/ggml/sources/llama.cpp - shortlog
2024-08-08  Conrad Kramer          metal : add abort callback (ggml/905)
2024-08-08  Pablo Duboue           make : clean llamafile objects (#8923)
2024-08-07  slaren                 make : use C compiler to build metal embed object ...
2024-08-07  slaren                 ggml-backend : fix async copy from CPU (#8897)
2024-08-07  Ouadie EL FAROUKI      [SYCL] Updated SYCL device filtering (#8901)
2024-08-07  Johannes Gäßler        CUDA/HIP: fix tests/test-backend-ops (#8896)
2024-08-07  Zhenwei Jin            llama-bench : add support for getting cpu info on Windo...
2024-08-06  Daniel Bevenius        quantize : update usage comment in quantize.cpp (#8889)
2024-08-06  Nexes the Old          typo correction (#8891)
2024-08-06  Xuan Son Nguyen        server : add lora hotswap endpoint (WIP) (#8857)
2024-08-06  Johannes Gäßler        CUDA: fix padding logic for FP16/FP32 (#8884)
2024-08-06  Daniel Bevenius        simple : update name of executable to llama-simple...
2024-08-06  Jaeden Amero           cmake : Link vulkan-shaders-gen with pthreads (#8835)
2024-08-06  MaggotHATE             [Vulkan] Fix compilation of `vulkan-shaders-gen` on...
2024-08-06  Georgi Gerganov        contributing : add note about write access
2024-08-06  Molly Sophia           ggml : add epsilon as a parameter for group_norm (...
2024-08-06  Douglas Hanley         convert : add support for XLMRoberta embedding models...
2024-08-06  Mengqing Cao           [CANN]: Fix ggml_backend_cann_buffer_get_tensor (#8871)
2024-08-06  Neo Zhang              [SYCL] correct cmd name (#8877)
2024-08-05  Liu Jia                common : Changed tuple to struct (TODO fix) (#8823)
2024-08-05  wangshuai09            cann: fix buffer_num and runtime speed slowly error...
2024-08-05  Eric Curtin            readme : add ramalama to the availables UI (#8811)
2024-08-05  Justine Tunney         ggml : fix overflows in elu function (#8866)
2024-08-05  Brian                  py: Add more authorship metadata from model card (...
2024-08-05  fairydreaming          Stop the generation when <|eom_id|> token is encountere...
2024-08-05  stduhpf                cmake: fix paths for vulkan shaders compilation on...
2024-08-05  BarfingLemurs          readme : update model list (#8851)
2024-08-05  Georgi Gerganov        llama : better replace_all (#8852)
2024-08-05  0cc4m                  vulkan : fix Qantized Mat-Vec Mul on AMD GPUs for ncols...
2024-08-05  Georgi Gerganov        sync : ggml
2024-08-05  0cc4m                  vulkan : implement Stable Diffusion operators (ggml...
2024-08-05  Daniel Bevenius        ggml : move c parameter comment to ggml_rope_ext (ggml...
2024-08-05  wangshuai09            cann: support q4_0 model (#8822)
2024-08-04  Brandon Squizzato      Install curl in runtime layer (#8693)
2024-08-04  ardfork                Server: Don't ignore llama.cpp params (#8754)
2024-08-04  Brian Cunnie           batched-bench : handle empty `-npl` (#8839)
2024-08-04  Daniel Bevenius        baby-llama : remove duplicate vector include
2024-08-04  Georgi Gerganov        flake.lock: Update (#8847)
2024-08-03  jdomke                 ggml : reading the runtime sve config of the cpu (...
2024-08-02  Sigbjørn Skjæret       Fix conversion of unnormalized BF16->BF16 weights ...
2024-08-02  Mengqing Cao           cann: Fix ggml_cann_im2col for 1D im2col (#8819)
2024-08-02  Ouadie EL FAROUKI      [SYCL] Fixing wrong VDR iq4nl value (#8812)
2024-08-01  matteo                 ggml-cuda: Adding support for unified memory (#8035)
2024-08-01  Alex O'Connell         Build: Only include execinfo.h on linux systems that...
2024-08-01  slaren                 cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X...
2024-08-01  wangshuai09            cann: support q8_0 for Ascend backend (#8805)
2024-07-31  Igor Okulist           server : update llama-server embedding flag documentati...
2024-07-31  Clint Herron           Build: Fix potential race condition (#8781)
2024-07-31  pculliton              Adding Gemma 2 2B configs (#8784)
2024-07-31  Borislav Stanimirov    cmake : fix use of external ggml (#8787)
2024-07-30  Someone                nix: cuda: rely on propagatedBuildInputs (#8772)
2024-07-30  Brian                  py: add_array() will not add to kv store if value is...
2024-07-30  l3utterfly             added android implementation of ggml_print_backtrace_sy...
2024-07-30  Georgi Gerganov        flake.lock: Update (#8729)
2024-07-30  wangshuai09            cann: update cmake (#8765)
2024-07-30  zhentaoyu              [SYCL] Add `TIMESTEP_EMBEDDING` OP (#8707)
2024-07-29  CarterLi999            ggml: bugfix: fix the inactive elements is agnostic...
2024-07-29  R0CKSTAR               cuda : organize vendor-specific headers into vendors...
2024-07-29  Meng, Hengyu           [SYCL] add conv support (#8688)
2024-07-28  Johannes Gäßler        cmake: use 1 more thread for non-ggml in CI (#8740)
2024-07-28  Austin                 chore : Fix vulkan related compiler warnings, add help...
2024-07-28  compilade              llama : refactor session file management (#8699)
2024-07-27  R0CKSTAR               feat: Support Moore Threads GPU (#8383)
2024-07-27  Georgi Gerganov        scripts : sync vulkan-shaders (#0)
2024-07-27  Georgi Gerganov        scripts : sync ggml-aarch64 sources
2024-07-27  Georgi Gerganov        ggml : add missing semicolon (#0)
2024-07-27  Georgi Gerganov        sync : ggml
2024-07-27  Mahesh Madhav          ggml : loop tiling optimizations for scalar path (ggml...
2024-07-27  Ivan Filipov           ggml: add support for float16 input tensors in pooling...
2024-07-27  Tony Wasserka          vulkan : initialize vk_buffer_struct members to VK_NULL...
2024-07-27  Borislav Stanimirov    cmake : only enable GGML_NATIVE and x86 flags if not...
2024-07-27  Daniel Bevenius        ggml : remove unnecessary UNUSED macro call (ggml/880)
2024-07-27  Jeffrey Morgan         llama : add support for llama 3.1 rope scaling factors...
2024-07-27  Georgi Gerganov        llama : add function for model-based max number of...
2024-07-27  Daniel Bevenius        common : add --no-warmup option for main/llama-cli...
2024-07-27  wangshuai09            cann: Fix Multi-NPU execution error (#8710)
2024-07-27  slaren                 ggml : reduce hash table reset cost (#8698)
2024-07-26  Judd                   llama : fix order of parameters (#8706)
2024-07-25  Yaiko                  server : add Speech Recognition & Synthesis to UI ...
2024-07-25  Xuan Son Nguyen        examples : export-lora : fix issue with quantized base...
2024-07-25  DavidKorczynski        ggml: handle ggml_init failure to fix NULL pointer...
2024-07-25  Georgi Gerganov        llama : fix build + fix fabs compile warnings (#8683)
2024-07-25  Andreas (Andi...       ggml : fix build on Windows with Snapdragon X (#8531)
2024-07-25  Georgi Gerganov        tests : fix printfs (#8068)
2024-07-25  Chen Xi                [SYCL] fix multi-gpu issue on sycl (#8554)
2024-07-25  Georgi Gerganov        ggml : add and use ggml_cpu_has_llamafile() (#8664)
2024-07-25  Xuan Son Nguyen        examples : remove `finetune` and `train-text-from-scrat...
2024-07-25  Ujjawal Panchal        docs : Quantum -> Quantized (#8666)
2024-07-25  Fan Shupei             llama: use sliding window for phi3 (#8627)
2024-07-24  MorganRO8              readme : update games list (#8673)
2024-07-24  Joe Todd               Build Llama SYCL Intel with static libs (#8668)
2024-07-24  Thorsten Sommer        readme : update UI list [no ci] (#8505)
2024-07-24  Xuan Son Nguyen        llama : fix `llama_chat_format_single` for mistral...
2024-07-24  Joe Todd               Re-add erroneously removed -fsycl from GGML_EXTRA_LIBS...
2024-07-24  Xuan Son Nguyen        add llama_lora_adapter_clear (#8653)
2024-07-23  Xuan Son Nguyen        examples : Fix `llama-export-lora` example (#8607)
2024-07-23  Vali Malinoiu          server : fix URL.parse in the UI (#8646)
2024-07-23  Joe Todd               sycl : Add support for non-release DPC++ & oneMKL ...
2024-07-23  Georgi Gerganov        llama : move vocab, grammar and sampling into separate...
2024-07-23  0cc4m                  Vulkan IQ4_NL Support (#8613)