git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/shortlog
2025-12-31 Georgi Gerganov  sync : ggml
2025-12-31 Georgi Gerganov  ggml : bump version to 0.9.5 (ggml/1410)
2025-12-31 Georgi Gerganov  talk-llama : sync llama.cpp
2025-12-31 Georgi Gerganov  sync : ggml
2025-12-31 gatbontonpc  metal : add count_equal op (llama/18314)
2025-12-31 Johannes Gäßler  CUDA: fix KQ max calculation (llama/18487)
2025-12-31 Georgi Gerganov  metal : remove BF16 x F16 kernels (llama/18456)
2025-12-31 Aman Gupta  sycl: add newline at the end of CMakeLists.txt (llama...
2025-12-31 Rahul Sathe  Work around broken IntelSYCLConfig.cmake in Intel oneAP...
2025-12-31 Charles Xu  kleidiai: add and integrate SVE 256-bit vector-length...
2025-12-31 Aman Gupta  CUDA: add log line when mxfp4 acceleration is used...
2025-12-31 Johannes Gäßler  CUDA: fix replacement of bad archs in CMake (llama/18457)
2025-12-31 Johannes Gäßler  CUDA: Blackwell features for non-native builds (llama...
2025-12-31 Aman Gupta  cuda: fix race condition in cumsum (llama/18448)
2025-12-31 uvos  HIP: Use mmq on MFMA devices for MUL_MAT_ID in cases...
2025-12-31 Aman Gupta  Revert "ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if...
2025-12-31 o7si  rpc: fix segfault on invalid endpoint format (llama...
2025-12-31 Boian Berberov  cmake: Added more x86_64 CPU backends when building...
2025-12-31 QDelta  ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when...
2025-12-31 lhez  opencl: allow resizing transpose buffers (llama/18384)
2025-12-31 Aman Gupta  ggml-cuda: Use same regex for GGML_NATIVE=OFF (llama...
2025-12-31 Jeff Bolz  vulkan: preprocess mul_mat_id experts and discard workg...
2025-12-31 Jeff Bolz  vulkan: optimize decodeFuncB in coopmat2 mul_mat_id...
2025-12-31 Jeff Bolz  vulkan: Use BK=32 for coopmat2 mul_mat_id (llama/18332)
2025-12-31 Eve  vulkan: small dequantization improvements (llama/18380)
2025-12-31 Jeff Bolz  vulkan: Support UPSCALE w/antialias (llama/18327)
2025-12-31 Jeff Bolz  vulkan: handle rope with large number of rows (llama...
2025-12-31 0Marble  CANN: implement the SSM_CONV operator (llama/17737)
2025-12-31 Aman Gupta  ggml-cuda: fix regex for arch list (llama/18371)
2025-12-31 Aman Gupta  cuda: optimize cumsum cub path (llama/18362)
2025-12-31 Aman Gupta  ggml-cuda: fix blackwell native builds (llama/18361)
2025-12-31 Penglin Cai  CANN: Add support for CONV_TRANSPOSE_1D when kernel...
2025-12-31 Aadeshveer...  ggml : optimize cuda cumsum fallback kernel (llama...
2025-12-31 Aman Gupta  CUDA: experimental native mxfp4 support for blackwell...
2025-12-31 Jeff Bolz  vulkan: fix command buffer corruption in ggml_backend_v...
2025-12-31 Wang Weixuan  CANN : refactor ACL graph cache (llama/17752)
2025-12-31 Ruben Ortlam  vulkan: use fewer FA rows for small cache runs (llama...
2025-12-31 TianHao324  CANN: Uses yarn_ramp cache in ROPE (llama/17725)
2025-12-31 Chris Rohlf  rpc : add check for rpc buffer type (llama/18242)
2025-12-31 nullname  ggml-hexagon: create generalized functions for cpu...
2025-12-31 Shouyu  ggml-hexagon: gelu optimization (llama/18151)
2025-12-31 Taimur Ahmad  llamafile: add rvv support for sgemm kernels (llama...
2025-12-31 lhez  opencl: unpack q4_0 for adreno in get_tensor (llama...
2025-12-31 Jeff Bolz  vulkan: Extend rope fusions to allow mrope (llama/18264)
2025-12-31 Jeff Bolz  vulkan: Implement set_tensor_async and the event interf...
2025-12-31 Johannes Gäßler  llama: fix RPC for -fit on (llama/18233)
2025-12-31 Jeff Bolz  vulkan: fix im2col overflowing maxworkgroupcount (llama...
2025-12-31 Jeff Bolz  vulkan/cuda: fix topk_moe with exp_probs_b (llama/18071)
2025-12-31 Jeff Bolz  vulkan: support GGML_UNARY_OP_XIELU (llama/18062)
2025-12-31 Jeff Bolz  vulkan: in graph_optimize, try to group ADD operations...
2025-12-31 lovedheart  Vulkan: some improvement on mul_mat_iq2_xs (llama/18031)
2025-12-31 Aadeshveer...  Added comments explaining thread block size selection...
2025-12-31 Alfred  ggml-hexagon: Implement true Q8_0 quantization on Hexag...
2025-12-31 Jeff Bolz  vulkan: Add perf logger mode with concurrency (llama...
2025-12-31 Xuan-Son Nguyen  model : add ASR support for LFM2-Audio-1.5B (conformer...
2025-12-31 Taimur Ahmad  ggml-cpu: extend support for RVV floating-point kernels...
2025-12-31 yulo  remove i_major_dual (llama/18157)
2025-12-31 Shouyu  ggml-hexagon: swiglu_oai operation (llama/18114)
2025-12-31 Shouyu  ggml-hexagon: gelu operation (llama/17921)
2025-12-31 Alberto Cabrera...  ggml-cpu: ARM64: repack version of q8_0 (dotprod and...
2025-12-31 yulo  HIP: Refactor mma for RDNA and CDNA (llama/17990)
2025-12-24 KITAITI Makoto  ruby : add Whisper::Token, fix model URI (#3575)
2025-12-18 Georgi Gerganov  talk-llama : sync llama.cpp
2025-12-18 Georgi Gerganov  sync : ggml
2025-12-18 Naco Siren  llama.android : Rewrite Android binding (w/o cpu_featur...
2025-12-18 Aadeshveer...  ggml : use WARP_SIZE/2 for argmax reduction offset...
2025-12-18 Shouyu  ggml-hexagon: mm for mtmd (llama/17894)
2025-12-18 Jeremy Demeule  metal: use shared buffers on eGPU (llama/17866)
2025-12-18 Johannes Gäßler  llama: automatically set parameters not set by the...
2025-12-18 Neo Zhang Jianyu  Support gpt-oss by OPs add-id, mul_mat for mxfp4, swigl...
2025-12-18 Ruben Ortlam  vulkan: fix mul_mat_vec_iq1_s formatting (llama/18026)
2025-12-18 Jeff Bolz  vulkan: Fix data race/hang in scalar/cm1 flash attentio...
2025-12-18 lovedheart  vulkan: improve mul_mat_vec_iq1_s speed (llama/17874)
2025-12-18 Eve  vulkan: faster q6_k matmul (llama/17813)
2025-12-18 Georgi Gerganov  ggml : arm repack fix build (llama/0)
2025-12-18 Jeff Bolz  vulkan: support get_rows for i32 (llama/17941)
2025-12-18 Jeff Bolz  vulkan: support GGML_OP_DIAG (llama/17893)
2025-12-18 Jeff Bolz  vulkan: Multi-pass softmax for large number of cols...
2025-12-18 Jeff Bolz  vulkan: Allow non-pow2 n_experts in topk_moe (llama...
2025-12-18 Johannes Gäßler  CUDA: fix overflow in MMA kernel without stream-k ...
2025-12-18 Sigbjørn Skjæret  cann : fix ops broken by circular padding guard (llama...
2025-12-18 ixgbe  ggml-cpu : fix RISC-V Q4_0 repack select and RVV featur...
2025-12-18 yulo  HIP: enable mmf for RDNA3 (llama/17879)
2025-12-18 Piotr Wilkin...  SOLVE_TRI extension to more dimensions (llama/17793)
2025-12-17 Russ  build: link whisper target against Threads::Threads...
2025-12-13 Marcos Del...  server: allow custom temp directory for ffmpeg (#3564)
2025-12-13 Georgi Gerganov  ggml : arm repack fix build (#0)
2025-12-12 Georgi Gerganov  talk-llama : sync llama.cpp
2025-12-12 Georgi Gerganov  sync : ggml
2025-12-12 Georgi Gerganov  whisper : adjust to ggml changes (#0)
2025-12-12 Congcong Cai  cmake : set `CMAKE_RUNTIME_OUTPUT_DIRECTORY` for non...
2025-12-12 Georgi Gerganov  ggml-alloc : fix reuse-parent logic for misaligned...
2025-12-12 nullname  ggml-hexagon: fix `rope` failure at `test-backend-ops...
2025-12-12 Max Krasnyansky  Fix race conditions in threadpool when dealing with...
2025-12-12 Georgi Gerganov  ggml : remove GGML_KQ_MASK_PAD constant (llama/17910)
2025-12-12 Sigbjørn Skjæret  cuda : add missing support check for xielu (llama/17895)
2025-12-12 Johannes Gäßler  CUDA: fix unpadded strides in MMA FA kernel (llama...
2025-12-12 Neo Zhang Jianyu  fix softmax for iGPU (llama/17838)
2025-12-12 Gabe Goodhart  metal: SSM kernel improvements (llama/17876)
2025-12-12 Piotr Wilkin...  Add DIAG for CUDA (llama/17873)