git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/shortlog
2025-02-27  Jeff Bolz  vulkan: account for lookup tables when checking shared...
2025-02-27  Karol Kontny  ggml: Fix data race in ggml threadpool (llama/11736)
2025-02-27  Johannes Gäßler  CUDA: fix min. version for movmatrix (llama/11751)
2025-02-27  Jeff Bolz  vulkan: print shared memory size (llama/11719)
2025-02-27  Akarshan Biswas  SYCL: remove XMX info from print devices (llama/11712)
2025-02-27  Jinyang He  ggml : optimize and build warning fix for LoongArch...
2025-02-27  Akarshan Biswas  SYCL: Adjust support condition for norm operators...
2025-02-27  junchao-zhao  ggml : fix LoongArch compile error with 128-bit SIMD...
2025-02-27  Jeff Bolz  vulkan: optimize coopmat2 iq2/iq3 callbacks (llama...
2025-02-27  Rémy O  vulkan: initial support for IQ4_XS quantization (llama...
2025-02-27  Jeff Bolz  vulkan: use smaller combined allocations to avoid fragm...
2025-02-27  Charles Duffy  metal : avoid breaking build when metal API predates...
2025-02-27  Georgi Gerganov  metal : adjust support conditions for norm operators...
2025-02-27  Johannes Gäßler  CUDA: support for mat. mul. with ne03 != ne13 (llama...
2025-02-27  Johannes Gäßler  CUDA: non-contiguous (RMS) norm support (llama/11659)
2025-02-27  fxzjshm  HIP: force max threads per block to be 1024 (llama...
2025-02-27  Jhen-Jie Hong  metal : use residency set for other platforms (llama...
2025-02-27  Patrick Peng  rpc: fix known RCE in rpc-server (ggml/1103)
2025-02-25  masahji  stream : add beam size parameter (#2836)
2025-02-25  Thomas Fitzsimmons  whisper : restore big endian support (#2816)
2025-02-06  Judd  Fixes for Windows (#2790)
2025-02-05  midnight  cmake : fix compile assumptions for power9/etc (#2777)
2025-02-04  Georgi Gerganov  authors : update  upstream/1.7.4+95
2025-02-04  Georgi Gerganov  sync : ggml
2025-02-04  Christian Kastner  cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml...
2025-02-04  Georgi Gerganov  readme : add maintenance roadmap
2025-02-04  Georgi Gerganov  ci : add stalebot
2025-02-03  billyct  node : add max_len params in node addon (#2760)
2025-02-03  Georgi Gerganov  talk-llama : sync llama.cpp
2025-02-03  mgrachten  coreml : always convert to "neuralnetwork" (#2770)
2025-02-03  Georgi Gerganov  ci : more git
2025-02-03  Georgi Gerganov  ci : install git
2025-02-03  Georgi Gerganov  ci : use ubuntu-22.04 instead of ubuntu-latest
2025-02-03  Georgi Gerganov  cmake : sync cmake scripts
2025-02-03  Georgi Gerganov  sync : ggml
2025-02-03  Georgi Gerganov  scripts : fix sync paths
2025-02-03  Johannes Gäßler  CUDA: fix Volta FlashAttention logic (llama/11615)
2025-02-03  Johannes Gäßler  HIP: fix flash_attn_stream_k_fixup warning (llama/11604)
2025-02-03  uvos  CUDA/HIP: add support for selectable warp size to mmv...
2025-02-03  uvos  HIP: add GGML_CUDA_CC_IS_* for amd familys as increasin...
2025-02-03  Johannes Gäßler  CUDA: use mma PTX instructions for FlashAttention...
2025-02-03  Olivier Chafik  `ci`: use sccache on windows instead of ccache (llama...
2025-02-03  uvos  HIP: require at least HIP 5.5
2025-02-03  uvos  HIP: Prepare reduction operators for wave 64
2025-02-03  uvos  CUDA/HIP: add warp_size to cuda_device_info
2025-02-03  Rémy Oudompheng  vulkan: implement initial support for IQ2 and IQ3 quant...
2025-02-03  Jeff Bolz  vulkan: Catch pipeline creation failure and print an...
2025-02-03  uvos  HIP: Supress transformation warning in softmax.cu
2025-02-03  Nikita Sarychev  HIP: Only call rocblas_initialize on rocblas versions...
2025-02-03  someone13574  cmake : don't fail on `GGML_CPU=OFF` (llama/11457)
2025-02-03  Akarshan Biswas  SYCL : SOFTMAX F16 mask support and other fixes (llama...
2025-02-03  Haus1  AMD: parse the architecture as supplied by gcnArchName...
2025-02-03  Ihar Hrachyshka  metal: Handle null returned from MTLCreateSystemDefault...
2025-02-03  Georgi Gerganov  metal : use residency sets (llama/11427)
2025-02-03  bandoti  cmake: add ggml find package (llama/11369)
2025-02-03  Jeff Bolz  vulkan: compile shaders on-demand (llama/11406)
2025-02-03  uvos  Hip: disable VMM on hip as it seams that it dosent...
2025-02-03  uvos  hip : Add hipGraph and VMM support to ROCM (llama/11362)
2025-02-03  Johannes Gäßler  CUDA: fix FP16 cuBLAS GEMM (llama/11396)
2025-02-03  uvos  rocBLAS: Avoid fp32->fp16->fp32 conversion on cdna...
2025-02-03  Johannes Gäßler  CPU/CUDA: fix (GQA) mul mat back, add CUDA support...
2025-02-03  Bernhard M...  cmake : avoid -march=native when reproducible build...
2025-02-03  amd-dwang  Vulkan-run-test: fix mmq_wg_denoms (llama/11343)
2025-02-03  Jeff Bolz  vulkan: sort shaders for more deterministic binary...
2025-02-03  Jeff Bolz  vulkan: fix diag_mask_inf (llama/11323)
2025-02-03  Radoslav Gerganov  rpc : better caching of the base buffer pointer (llama...
2025-02-03  Georgi Gerganov  metal : fix out-of-bounds write (llama/11314)
2025-02-03  Jeff Bolz  vulkan: fix coopmat2 validation failures (llama/11284)
2025-02-03  Nicolò Scipione  SYCL: Introducing memory host pool (llama/11251)
2025-02-03  Georgi Gerganov  cmake : add sanitizer flags for llama.cpp (llama/11279)
2025-02-03  Jeff Bolz  vulkan: fix coopmat2 flash attention for non-contiguous...
2025-02-03  Radoslav Gerganov  rpc : early register backend devices (llama/11262)
2025-02-03  Jeff Bolz  vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1...
2025-02-03  Jeff Bolz  vulkan: optimize coopmat2 q4_k/q5_k dequant functions...
2025-02-03  Jeff Bolz  vulkan: optimize coopmat2 q2_k dequant function (llama...
2025-02-03  Johannes Gäßler  CUDA: backwards pass for misc. ops, add tests (llama...
2025-02-03  fj-y-saito  ggml: aarch64: implement SVE kernels for q4_K_q8_K...
2025-02-03  Eve  vulkan: scale caching for k quants + misc fixes (llama...
2025-02-03  Junil Kim  fix: ggml: fix vulkan-shaders-gen build (llama/10448)
2025-02-03  Johannes Gäßler  RoPE: fix back, CUDA support for back + noncont. (llama...
2025-02-03  Akarshan Biswas  SYCL: Add gated linear attention kernel (llama/11175)
2025-02-03  William Tambellini  ggml : add option to not print stack on abort (ggml...
2025-02-03  issixx  ggml-cpu : fix ggml_graph_compute_thread did not termin...
2025-02-03  Georgi Gerganov  ci : dummy commit to trigger CI
2025-01-21  KITAITI Makoto  ruby : Make context accept initial parameters, API...  upstream/1.7.4+33
2025-01-18  Corey Earwood  whisper.objc : fix build and CI
2025-01-14  Georgi Gerganov  talk-llama : sync llama.cpp
2025-01-14  Georgi Gerganov  sync : ggml
2025-01-14  Johannes Gäßler  GGUF: C++ refactor, backend support, misc fixes (skip...
2025-01-14  lhez  ggml : add opencl backend (skip) (llama/10693)
2025-01-14  Andreas Kieslinger  cuda : CUDA Graph Compute Function Refactor (precursor...
2025-01-14  Radoslav Gerganov  ggml : do not define GGML_USE_CUDA when building with...
2025-01-14  0cc4m  Vulkan: Fix float16 use on devices without float16...
2025-01-14  Molly Sophia  llama: add support for QRWKV6 model architecture (llama...
2025-01-14  Akarshan Biswas  SYCL: Refactor ggml_sycl_compute_forward (llama/11121)
2025-01-14  hydai  fix: add missing msg in static_assert (llama/11143)
2025-01-14  amritahs-ibm  llamafile : ppc64le MMA INT8 implementation (llama...
2025-01-14  Mathieu Baudier  Disable GL_KHR_cooperative_matrix Vulkan extension...
2025-01-14  ag2s20150909  fix: Vulkan shader gen binary path when Cross-compiling...
2025-01-14  Johannes Gäßler  GGUF: C++ refactor, backend support, misc fixes (llama...