git.djapps.eu Git - pkg/ggml/sources/ggml/shortlog
2024-07-29 Daniel Bevenius  ggml : move c parameter comment to ggml_rope_ext (...
2024-07-29 Johannes Gäßler  examples: add TensorFlow to requirements.txt (#902)
2024-07-27 0cc4m  ggml : sync vulkan shaders (#0)
2024-07-27 Georgi Gerganov  ggml : resolve sync conflicts (#0)
2024-07-27 Georgi Gerganov  common : handle new quant types (#0)
2024-07-27 Dibakar Gope  ggml : add ggml-aarch64 (#0)
2024-07-27 wangshuai09  cann: Fix Multi-NPU execution error (llama/8710)
2024-07-27 slaren  ggml : reduce hash table reset cost (llama/8698)
2024-07-27 DavidKorczynski  ggml: handle ggml_init failure to fix NULL pointer...
2024-07-27 Andreas (Andi...  ggml : fix build on Windows with Snapdragon X (llama...
2024-07-27 Chen Xi  fix multi-gpu issue on sycl (llama/8554)
2024-07-27 Georgi Gerganov  ggml : add and use ggml_cpu_has_llamafile() (llama...
2024-07-27 Joe Todd  Re-add erroneously removed -fsycl from GGML_EXTRA_LIBS...
2024-07-27 Joe Todd  sycl : Add support for non-release DPC++ & oneMKL ...
2024-07-27 0cc4m  Vulkan IQ4_NL Support (llama/8613)
2024-07-27 Jeroen Mostert  Allow all RDNA2 archs to use sdot4 intrinsic (llama...
2024-07-27 luoyu-intel  fix scratch size of softmax (llama/8642)
2024-07-27 Mark Zhuang  ggml: fix compile error for RISC-V (llama/8623)
2024-07-27 Johannes Gäßler  CUDA: MMQ code deduplication + iquant support (llama...
2024-07-27 Georgi Gerganov  gguf : handle null name during init (llama/8587)
2024-07-27 slaren  ggml : fix quant dot product with odd number of blocks...
2024-07-27 Clint Herron  ggml : add friendlier error message to fopen errors...
2024-07-27 Johannes Gäßler  CUDA: fix partial offloading for ne0 % 256 != 0 (llama...
2024-07-27 65a  cmake : install all ggml public headers (llama/8480)
2024-07-27 hipudding  Add Ascend NPU backend (llama/6035)
2024-07-27 Johannes Gäßler  make/cmake: add missing force MMQ/cuBLAS for HIP (llama...
2024-07-27 Xuan Son Nguyen  Refactor lora adapter support (llama/8332)
2024-07-27 Daniel Bevenius  ggml : suppress unknown pragma 'GCC' on windows (llama...
2024-07-27 Meng, Hengyu  add concat through dim 1/2 (llama/8483)
2024-07-27 0cc4m  Vulkan MMQ Fix (llama/8479)
2024-07-27 bandoti  vulkan : cmake integration (llama/8119)
2024-07-27 Georgi Gerganov  metal : template-ify some of the kernels (llama/8447)
2024-07-27 Georgi Gerganov  ggml : minor naming changes (llama/8433)
2024-07-27 Chen Xi  fix the mul_mat_id ut issues (llama/8427)
2024-07-27 Nicholai Tukanov  ggml : add NVPL BLAS support (#8329) (llama/8425)
2024-07-27 Daniel Bevenius  cuda : suppress 'noreturn' warn in no_device_code ...
2024-07-27 Johannes Gäßler  CUDA: optimize and refactor MMQ (llama/8416)
2024-07-27 AidanBeltonS  Use multi_ptr to clean up deprecated warnings (llama...
2024-07-27 Georgi Gerganov  ggml : move sgemm sources to llamafile subfolder (llama...
2024-07-27 Dibakar Gope  ggml : add AArch64 optimized GEMV and GEMM Q4 kernels...
2024-07-27 Alberto Cabrera...  sycl : Reenabled mmvq path for the SYCL Nvidia Backend...
2024-07-27 Alberto Cabrera...  sycl : fix powf call in device code (llama/8368)
2024-07-25 Mahesh Madhav  ggml : loop tiling optimizations for scalar path (...
2024-07-22 Ivan Filipov  ggml: add support for float16 input tensors in pooling...
2024-07-22 Brian  gguf.md: naming convention synced to llama.cpp (#896)
2024-07-21 Brian  gguf.md: kv store has new authorship metadata keys...
2024-07-20 Tony Wasserka  vulkan : initialize vk_buffer_struct members to VK_NULL...
2024-07-20 Georgi Gerganov  py : update packages + fix yolo warning
2024-07-12 Borislav Stanimirov  cmake : only enable GGML_NATIVE and x86 flags if not...
2024-07-08 Georgi Gerganov  sync : whisper.cpp
2024-07-08 Georgi Gerganov  examples : fix compile warnings [no ci] (whisper/0)
2024-07-08 Daniel Bevenius  ggml : remove unnecessary UNUSED macro call (#880)
2024-07-08 Georgi Gerganov  sync : llama.cpp
2024-07-08 Georgi Gerganov  tests : fix whitespace (llama/0)
2024-07-08 Natsu  cmake : add GGML_BUILD and GGML_SHARED macro definition...
2024-07-08 Ouadie EL FAROUKI  Enabled more data types for oneMKL gemm_batch (llama...
2024-07-08 Johannes Gäßler  CUDA: MMQ support for iq4_nl, iq4_xs (llama/8278)
2024-07-08 Daniele  CUDA: revert part of the RDNA1 optimizations (llama...
2024-07-08 Johannes Gäßler  CUDA: fix MMQ stream-k rounding if ne00 % 128 != 0...
2024-07-08 luoyu-intel  Fix WARP_SIZE=16 bug of Intel GPU (llama/8266)
2024-07-08 Neo Zhang Jianyu  rm get_work_group_size() by local cache for performance...
2024-07-08 AidanBeltonS  Remove unneeded semicolons (llama/8280)
2024-07-08 Daniele  Define and optimize RDNA1 (llama/8085)
2024-07-08 Judd  fix typo (llama/8267)
2024-07-08 AidanBeltonS  Dequant improvements rebase (llama/8255)
2024-07-08 Clint Herron  Removes multiple newlines at the end of files that...
2024-07-08 slaren  cuda : update supports_op for matrix multiplication...
2024-07-08 luoyu-intel  Fix win build conflict of math library (llama/8230)
2024-07-08 luoyu-intel  Fix the sub group size of Intel (llama/8106)
2024-07-08 Johannes Gäßler  CUDA: refactor and optimize IQ MMVQ (llama/8215)
2024-07-08 zhentaoyu  Update SYCL-Rope op and Refactor (llama/8157)
2024-07-08 Johannes Gäßler  CUDA: fix MMQ stream-k for --split-mode row (llama...
2024-07-02 slaren  fix uses of GGML_USE_CUBLAS in tests and examples ...
2024-07-02 John Balis  feat: cuda implementation for `ggml_conv_transpose_1d...
2024-06-30 Yilong Guo  sycl : add build instruction (#870)
2024-06-30 John Balis  update "Using cuBLAS" to use correct update cuda compil...
2024-06-26 Georgi Gerganov  sync : whisper.cpp
2024-06-26 Georgi Gerganov  whisper : disable CUDA mel + fix FFMPEG
2024-06-26 Georgi Gerganov  sync : llama.cpp
2024-06-26 slaren  ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CU...
2024-06-26 Georgi Gerganov  sync : llama.cpp, whisper.cpp
2024-06-26 Georgi Gerganov  ggml : reorganize source code + improve CMake (#865)
2024-06-21 Georgi Gerganov  files : remove old (#0)
2024-06-18 Georgi Gerganov  sync : whisper.cpp
2024-06-18 Georgi Gerganov  whisper : use ggml_backend_sched (whisper/2239)
2024-06-16 Georgi Gerganov  sync : whisper.cpp
2024-06-16 Georgi Gerganov  cuda : fix bounds check for src0 rows in MMVQ kernel...
2024-06-16 Borislav Stanimirov  whisper : remove `speed_up` and `phase_vocoder*` functi...
2024-06-16 William Tambellini  examples : add support for decoding input with ffmpeg...
2024-06-16 Georgi Gerganov  examples : remove whisper (#860)
2024-06-16 slaren  move BLAS to a separate backend (cont) (llama/6210)
2024-06-16 Georgi Gerganov  scripts : sync ggml-blas
2024-06-16 0cc4m  Vulkan Shader Refactor, Memory Debugging Option (llama...
2024-06-16 Georgi Gerganov  ggml : remove OpenCL (#0)
2024-06-16 Georgi Gerganov  cmake : fix cuda vars (#0)
2024-06-16 Georgi Gerganov  scripts : update sync
2024-06-16 Hong Bo PENG  ggml : fix and optimize ppc64le (#849)
2024-06-16 Daniel Bevenius  ggml : remove duplicate include of ggml-common.h (...
2024-06-16 Yilong Guo  sycl : remove global variables (cont) (llama/7710)
2024-06-16 Yilong Guo  scripts : add ggml-sycl to sync scripts (#857)