git.djapps.eu Git - pkg/ggml/sources/ggml/shortlog
2024-11-04  R0CKSTAR  musa: workaround for Guilty Lockup in cleaning src0...
2024-11-04  Georgi Gerganov  scripts : update sync
2024-11-02  Yuri Khrustalev  cmake : make it possible linking ggml as external lib...
2024-11-01  Plamen Minev  metal : fix minor string leaks (#1004)
2024-11-01  Georgi Gerganov  sync : whisper.cpp
2024-11-01  Georgi Gerganov  ggml : alloc ggml_contexts on the heap (whisper/2525)
2024-10-26  Georgi Gerganov  ggml : remove sync artifacts
2024-10-26  Ma Mingfei  ggml : add AMX backend (llama/8998)
2024-10-26  Georgi Gerganov  sync : llama.cpp
2024-10-26  Georgi Gerganov  metal : support permuted matrix multiplications (llama...
2024-10-26  Johannes Gäßler  CUDA: fix insufficient buffer clearing for MMQ (llama...
2024-10-26  Johannes Gäßler  CUDA: fix MMQ for non-contiguous src0, add tests (llama...
2024-10-26  Georgi Gerganov  scripts : fix sync scripts (amx)
2024-10-23  bssrdf  increase cuda_cpy block size (#996)
2024-10-23  Georgi Gerganov  sync : llama.cpp
2024-10-23  Jun Hee Yoo  metal : add POOL2D and fix IM2COL (llama/9943)
2024-10-23  leo-pony  Adapt to dynamically loadable backends mechanism (llama...
2024-10-23  Georgi Gerganov  ggml : add asserts for type conversion in fattn kernels...
2024-10-23  Radoslav Gerganov  rpc : pack only RPC structs (llama/9959)
2024-10-23  Neo Zhang Jianyu  fix mul_mat_vec_q and *_vec_q error (llama/9939)
2024-10-23  Radoslav Gerganov  rpc : backend refactoring (llama/9912)
2024-10-23  Ouadie EL FAROUKI  Add SYCL Backend registry, device and Event Interfaces...
2024-10-23  Ma Mingfei  add amx kernel for gemm (llama/8998)
2024-10-23  Diego Devesa  vulkan : add backend registry / device interfaces ...
2024-10-23  Gilad S  fix: allocating CPU buffer with size `0` (llama/9917)
2024-10-23  Gilad S  fix: use `vm_allocate` to allocate CPU backend buffer...
2024-10-18  Johannes Gäßler  CUDA: fix 1D im2col, add tests (#993)
2024-10-16  Daniel Bevenius  ggml : remove redundant set of contexts used field...
2024-10-16  Georgi Gerganov  tests : update type traits call (#0)
2024-10-16  Georgi Gerganov  sync : llama.cpp
2024-10-16  leo-pony  Fix cann compilation error (llama/9891)
2024-10-16  agray3  Vectorize load instructions in dmmv f16 CUDA kernel...
2024-10-16  Diego Devesa  ggml : move more prints to the ggml log system (llama...
2024-10-16  Diego Devesa  rpc : add backend registry / device interfaces (llama...
2024-10-16  R0CKSTAR  musa: add docker image support (llama/9685)
2024-10-16  Diego Devesa  ggml : fix BLAS with unsupported types (llama/9775)
2024-10-16  Diego Devesa  ggml : add backend registry / device interfaces to...
2024-10-16  Andrew Minh...  Update building for Android (llama/9672)
2024-10-16  Georgi Gerganov  ggml : add metal backend registry / device (llama/9713)
2024-10-16  Paul Tsochantaris  metal : single allocation of encode_async block (llama...
2024-10-09  Daniel Bevenius  ggml-alloc : remove buffer_id from leaf_alloc (#987)
2024-10-06  Georgi Gerganov  zig : remove obsolete build script
2024-10-06  Georgi Gerganov  sync : whisper.cpp
2024-10-06  SRHMorris  vulkan : retry allocation with fallback flags (whisper...
2024-10-06  Georgi Gerganov  spm : update backend.c -> backend.cpp
2024-10-05  Johannes Gäßler  examples: add dataset, data shuffling to MNIST (#982)
2024-10-05  Georgi Gerganov  sync : whisper.cpp
2024-10-05  Georgi Gerganov  metal : zero-init buffer contexts (whisper/0)
2024-10-04  Georgi Gerganov  sync : llama.cpp
2024-10-04  Daniel Bevenius  ggml : fix typo in example usage ggml_gallocr_new ...
2024-10-04  Diego Devesa  ggml : fixes after sync (#983)
2024-10-03  Georgi Gerganov  sync : whisper.cpp
2024-10-03  Georgi Gerganov  ggml : remove old file (skip) (#0)
2024-10-03  Georgi Gerganov  cont : fixes
2024-10-03  Georgi Gerganov  examples : adapt to new ggml backend interfaces
2024-10-03  Diego Devesa  ggml-backend : add device and backend reg interfaces...
2024-10-03  Georgi Gerganov  sync : llama.cpp
2024-10-03  Ouadie EL FAROUKI  Fixed dequant precision issues in Q4_1 and Q5_1 (llama...
2024-10-03  Diego Devesa  ggml-backend : add device and backend reg interfaces...
2024-10-03  Alberto Cabrera...  Initial cmake support of SYCL for AMD GPUs (llama/9658)
2024-10-03  Radoslav Gerganov  vulkan : do not use tensor->extra (llama/9407)
2024-10-03  Johannes Gäßler  ggml/ex: calculate accuracy in graph, adapt MNIST ...
2024-10-02  Johannes Gäßler  ggml: refactor cross entropy loss CPU impl. (#976)
2024-10-01  Georgi Gerganov  readme : refresh
2024-10-01  Georgi Gerganov  metal : add perf-metal tool + fix build
2024-10-01  Georgi Gerganov  metal : reduce command encoding overhead (llama/9698)
2024-09-30  Johannes Gäßler  test: fix OPT_STEP_ADAMW for test-backend-ops (#974)
2024-09-30  Salvatore Mesoraca  vulkan : mul_mat: fix UB with small warps (#952)
2024-09-30  Borislav Stanimirov  ggml : fix ggml_cast (#973)
2024-09-29  Johannes Gäßler  ggml: fix gradient allocation logic (#966)
2024-09-29  Georgi Gerganov  sync : llama.cpp
2024-09-29  Georgi Gerganov  ggml : define missing HWCAP flags (llama/9684)
2024-09-29  slaren  test-backend-ops : use flops for some performance tests...
2024-09-29  Dan Johansson  ggml : add run-time detection of neon, i8mm and sve...
2024-09-29  Markus Tavenrath  Enable use of the rebar feature to upload buffers to...
2024-09-29  R0CKSTAR  mtgpu: enable VMM (llama/9597)
2024-09-29  Charles Xu  ggml : remove assert for AArch64 GEMV and GEMM Q4 kerne...
2024-09-29  Dou Xinpeng  cann: fix crash when llama-bench is running on multiple...
2024-09-29  Johannes Gäßler  CUDA: remove bad assert (#972)
2024-09-29  Jeff Bolz  vulkan : multithread pipeline creation (#963)
2024-09-27  Jeff Bolz  vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOP...
2024-09-26  Salvatore Mesoraca  vulkan : argsort barriers must be under uniform control...
2024-09-24  Georgi Gerganov  ggml : fix GGML_MAX_N_THREADS + improve formatting...
2024-09-24  Georgi Gerganov  sync : llama.cpp
2024-09-24  Eric Zhang  ggml : add AVX512DQ requirement for AVX512 builds ...
2024-09-24  Georgi Gerganov  log : add CONT level for continuing previous log entry...
2024-09-24  Max Krasnyansky  threads: fix msvc build without openmp (llama/9615)
2024-09-24  Ivan  cuda: add q8_0->f32 cpy operation (llama/9571)
2024-09-24  Max Krasnyansky  threads: improve ggml_barrier scaling with large number...
2024-09-24  Srihari-mcw  ggml : AVX512 gemm for Q4_0_8_8 (llama/9532)
2024-09-24  Georgi Gerganov  metal : use F32 prec for K*Q in vec FA (llama/9595)
2024-09-24  Akarshan Biswas  Revert "[SYCL] fallback mmvq (#9088)" (llama/9579)
2024-09-24  R0CKSTAR  musa: enable building fat binaries, enable unified...
2024-09-24  Molly Sophia  Fix merge error in #9454 (llama/9589)
2024-09-24  Johannes Gäßler  CUDA: enable Gemma FA for HIP/Pascal (llama/9581)
2024-09-24  Molly Sophia  RWKV v6: RWKV_WKV op CUDA implementation (llama/9454)
2024-09-24  slaren  ggml-alloc : fix list of allocated tensors with GGML_AL...
2024-09-24  agray3  Update CUDA graph on scale change plus clear nodes...
2024-09-20  Georgi Gerganov  examples : adapt to ggml.h changes (#0)
2024-09-20  Georgi Gerganov  sync : llama.cpp