git.djapps.eu Git - pkg/ggml/sources/ggml/shortlog
2025-09-09  distlibs  gitignore : ignore idea files (#1339)
2025-09-05  Georgi Gerganov  sync : llama.cpp
2025-09-05  Gabe Goodhart  metal : Add template specialization for mul_mm_id w...
2025-09-05  Chenguang Li  CANN: Refactor ND to NZ workspace to be per-device...
2025-09-05  leejet  ggml: add ops for WAN video model (cuda && cpu) (llama...
2025-09-05  hipudding  CANN: Fix precision issue on 310I DUO multi-devices...
2025-09-05  rmatif  opencl: add hs=40 to FA (llama/15758)
2025-09-05  Chenguang Li  CANN: fix acl_rstd allocation size in ggml_cann_rms_nor...
2025-09-05  Ruben Ortlam  vulkan: fix mmv subgroup16 selection (llama/15775)
2025-09-05  Jeff Bolz  vulkan: don't use std::string in load_shaders, to impro...
2025-09-05  Daniel Bevenius  vulkan : update ggml_vk_instance_validation_ext_availab...
2025-09-05  Shin-myoung...  ggml vulkan: add hardsigmoid and hardswish operations...
2025-09-05  Oliver Simons  CUDA: Optimize `rms_norm_f32` kernel and its fused...
2025-09-05  hipudding  CANN: Add RoPE contiguous check for 310I DUP device...
2025-09-05  xctan  ggml-cpu : optimize RVV kernels (llama/15720)
2025-09-05  hipudding  CANN: Mask unsupported TRANSPOSE_1D operator (llama...
2025-09-05  Chenguang Li  CANN: Fix type float_t to float (llama/15736)
2025-09-05  Ruben Ortlam  vulkan: fix shaders gen when no integer dot is availabl...
2025-09-05  hipudding  CANN: Resolve soft_max precision issue (llama/15730)
2025-09-05  Jeff Bolz  vulkan: Fix macro parameter order for f32 matmul shader...
2025-09-05  rmatif  opencl: add attn sinks support for FA kernels (llama...
2025-09-05  Chenguang Li  CANN: Support eager execution mode under ACL graph...
2025-09-05  hipudding  CANN: Support ext_factor in rope (llama/15710)
2025-09-05  Johannes Gäßler  ggml-backend: raise GGML_MAX_SPLIT_INPUTS (llama/15722)
2025-09-05  Gilad S.  vulkan: use memory budget extension to read memory...
2025-09-05  Jeff Bolz  vulkan: add missing clamps in new mul_mat_id paths...
2025-09-05  Ruben Ortlam  vulkan: disable large mmv subgroups on older Nvidia...
2025-09-05  s-goto-11  ggml: SVE support for exponential functions (llama...
2025-09-05  Prashant Vithule  ggml: aarch64: Implement SVE F16 kernels for vector...
2025-09-05  Ruben Ortlam  Vulkan: Add Integer Dot Product mul_mat_vec shader...
2025-09-05  Daniel Bevenius  ggml : WebGPU add TRANSPOSE and RESHAPE to supported...
2025-09-05  Akarshan Biswas  CUDA: fix build error from ambiguous __half conversions...
2025-09-05  hipudding  CANN: Optimize MUL_MAT_ID (llama/15658)
2025-09-05  hipudding  CANN: fix RoPE cache issue on multi-device (llama/15629)
2025-09-05  Georgi Gerganov  metal : fix checks for available FA kernels (llama...
2025-09-05  Diego Devesa  llama : separate compute buffer reserve from fattn...
2025-09-05  Jeff Bolz  vulkan: handle large sizes for get_rows (llama/15686)
2025-09-05  Jeff Bolz  vulkan: mul_mat_id coopmat2 optimizations (llama/15546)
2025-09-05  Daniel Bevenius  vulkan : remove unused portability_enumeration_ext...
2025-09-05  Jeff Bolz  vulkan: Allow fallback to sysmem memory when vidmem...
2025-09-05  Jeff Bolz  vulkan: clamp matmul and FA results to the max finite...
2025-09-05  Charles Xu  ggml: update kleidiai to v1.13.0 (llama/15663)
2025-09-05  Johannes Gäßler  llama: use FA + max. GPU layers by default (llama/15434)
2025-09-05  Johannes Gäßler  CUDA: use FP32 arithmetic for conv2d (llama/15683)
2025-09-05  Jeff Bolz  vulkan: Skip syncing for prealloc_y when it is reused...
2025-09-05  Chenguang Li  CANN: FIx compiler warnings (llama/15661)
2025-09-05  Aman Gupta  CUDA: fix bug in rms_norm fusion (llama/15660)
2025-09-05  Aman Gupta  CUDA: fuse adds, fuse add with rms norm (llama/15631)
2025-09-05  mnehete32  CUDA: add conv2d (llama/15635)
2025-09-05  Aaron Teo  ggml-cpu: fix invalid hsum build in debug s390x (llama...
2025-09-05  compilade  ggml : fix SSM_SCAN for n_groups > 1 (llama/15625)
2025-09-05  Georgi Gerganov  kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)
2025-09-05  matiaslin  cuda: Add cublasLt_static linking when GGML_STATIC...
2025-09-05  uvos  HIP: Enable support for ggml_backend_cuda_register_host...
2025-09-05  Chenguang Li  CANN: refactor mask handling and improve performance...
2025-09-05  xctan  ggml-cpu : add basic RVV support for vector f32 ops...
2025-09-05  rmatif  OpenCL: add fused group_norm/norm, mul, add (llama...
2025-09-05  Diego Devesa  tests : fix test-opt with GGML_BACKEND_DL (llama/15599)
2025-09-05  Akarshan Biswas  SYCL: fix rms_norm_mul_add for tensor dim not a multipl...
2025-09-05  Eve  tests: add performance test for mul mat id (llama/15543)
2025-09-05  shalinib-ibm  llamafile: PowerPC Sgemm Optimization (llama/15558)
2025-09-05  Johannes Gäßler  CUDA: return -1 for nonexistent compiled arch (llama...
2025-09-05  Georgi Gerganov  metal : optimize FA vec for large sequences and BS...
2025-09-05  Georgi Gerganov  metal : improve `MUL_MAT_ID` (llama/15541)
2025-09-05  Sigbjørn Skjæret  metal : remove contiguous assertion for src0 in IM2COL...
2025-09-05  Yoshi_likes_e4  Add a warning for special devices (llama/15563)
2025-09-05  Jeff Bolz  vulkan: Remove splitting for mul_mat_id (llama/15568)
2025-09-05  Qeeweew  CUDA: Accelerate MXFP4 table lookup using `__byte_perm...
2025-09-05  lhez  opencl: fix support ops condition for `rms_norm` (llama...
2025-09-05  Ruben Ortlam  vulkan: fix min subgroup 16 condition for mmid subgroup...
2025-09-05  Jeff Bolz  tests: Generate unique input values for count_equal...
2025-09-05  Ihar Hrachyshka  metal: fix regression when no metal devices are present...
2025-09-05  Johannes Gäßler  CUDA: MoE helper in device code, better tile sizes...
2025-09-05  Georgi Gerganov  metal : add FA kernels for HS=40 (llama/15559)
2025-09-05  Chenguang Li  CANN: ROPE cache sin/cos repeat (llama/15501)
2025-09-05  Ruben Ortlam  vulkan: apply MUL_MAT_ID subgroup optimization to non...
2025-09-05  Jeff Bolz  vulkan: Support FA with any multiple of 8 head sizes...
2025-09-05  Ruben Ortlam  vulkan: enable Conv2D for Apple after MoltenVK fixed...
2025-09-05  Jeff Bolz  vulkan: workaround MoltenVK compile failure in multi_ad...
2025-09-05  Johannes Gäßler  CUDA: fix half2 -> half conversion for HIP (llama/15529)
2025-09-05  Jeff Bolz  vulkan: optimize rms_norm, and allow the work to spread...
2025-09-05  Jeff Bolz  vulkan: Rewrite synchronization to allow some overlap...
2025-09-05  Acly  vulkan : support ggml_mean (llama/15393)
2025-09-05  Jeff Bolz  vulkan: optimize mul_mat_id loading row ids into shared...
2025-09-05  Johannes Gäßler  test-opt: allow slight inprecision (llama/15503)
2025-09-05  Reese Levine  ggml WebGPU: add support for quantization types (llama...
2025-09-05  rmatif  ggml: add `conv3d` op (llama/15182)
2025-09-05  Yavor Ivanov  cuda : add Pad Reflect 1D support (llama/14659)
2025-09-05  Aaron Teo  ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)
2025-09-05  Chenguang Li  CANN: Optimize RMS_NORM using cache (llama/15419)
2025-09-05  Diego Devesa  sched : fix possible use of wrong ids tensor when offlo...
2025-09-05  Acly  vulkan : support conv_2d_dw with f16 weights (llama...
2025-09-05  Dong Won Kim  vulkan: add exp operation (llama/15456)
2025-09-05  Jeff Bolz  vulkan: Reuse conversion results in prealloc_y (llama...
2025-09-05  Xuan-Son Nguyen  ggml : fix condition of im2col on Metal backend (llama...
2025-09-05  R0CKSTAR  musa: add GGML_UNUSED_VARS (llama/15446)
2025-09-05  Diego Devesa  sched : copy only the used experts when offloading...
2025-09-05  Johannes Gäßler  CUDA: refactor FA support/selection code (llama/15454)
2025-09-05  Johannes Gäßler  CUDA: replace GGML_CUDA_F16 with CUDA arch checks ...
2025-09-05  Jeff Bolz  vulkan: shorten pipeline name strings (llama/15431)