git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/shortlog
2024-11-01 Ma Mingfei  add amx kernel for gemm (llama/8998)
2024-11-01 Diego Devesa  vulkan : add backend registry / device interfaces ...
2024-11-01 Gilad S  fix: allocating CPU buffer with size `0` (llama/9917)
2024-11-01 Gilad S  fix: use `vm_allocate` to allocate CPU backend buffer...
2024-11-01 Johannes Gäßler  CUDA: fix 1D im2col, add tests (ggml/993)
2024-11-01 leo-pony  Fix cann compilation error (llama/9891)
2024-11-01 agray3  Vectorize load instructions in dmmv f16 CUDA kernel...
2024-11-01 Diego Devesa  ggml : move more prints to the ggml log system (llama...
2024-11-01 Diego Devesa  rpc : add backend registry / device interfaces (llama...
2024-11-01 R0CKSTAR  musa: add docker image support (llama/9685)
2024-11-01 Diego Devesa  ggml : fix BLAS with unsupported types (llama/9775)
2024-11-01 Diego Devesa  ggml : add backend registry / device interfaces to...
2024-11-01 Andrew Minh...  Update building for Android (llama/9672)
2024-11-01 Georgi Gerganov  ggml : add metal backend registry / device (llama/9713)
2024-11-01 Paul Tsochantaris  metal : single allocation of encode_async block (llama...
2024-11-01 Daniel Bevenius  ggml-alloc : remove buffer_id from leaf_alloc (ggml...
2024-10-31 Georgi Gerganov  scripts : sync amx
2024-10-31 Georgi Gerganov  ggml : alloc ggml_contexts on the heap (#2525)
2024-10-30 Georgi Gerganov  ci : fix openblas build (#2511)
2024-10-29 Georgi Gerganov  scripts : add turbo-q8_0 to the benchmark
2024-10-29 Georgi Gerganov  whisper : minor compile warning
2024-10-29 jettoblack  whisper : move new-segment callback after DTW step...
2024-10-29 KITAITI Makoto  ruby : fix installation test (#2519)
2024-10-28 KITAITI Makoto  ruby : add more APIs (#2518)
2024-10-28 KITAITI Makoto  ruby : support new-segment callback (#2506)
2024-10-28 KITAITI Makoto  ruby : add Metal support (#2516)
2024-10-23 Josscii  whisper : fix index overflow in token-level timestamp...
2024-10-17 toboil-features  readme : update links and make commands (#2489)
2024-10-16 KITAITI Makoto  ruby : fix bindings (#2484)
2024-10-16 toboil-features  readme : add Vulkan notice (#2488)
2024-10-16 Georgi Gerganov  make : fix GGML_VULKAN=1 build (#2485)
2024-10-15 Rotem Dan  whisper : add dtw preset for large-v3-turbo (#2481)
2024-10-14 CrispStrobe  convert : handle max_target_positions (#2477)
2024-10-14 Salman Faroz  readme : update the Quick Start section (#2475)
2024-10-08 Sandro Hanea  whisper : add OpenVINO init with state (#2464)
2024-10-07 Georgi Gerganov  release : v1.7.1
2024-10-06 SRHMorris  vulkan : retry allocation with fallback flags (#2451)
2024-10-05 Georgi Gerganov  release : v1.7.0
2024-10-05 Georgi Gerganov  scripts : bench v3-turbo
2024-10-05 Georgi Gerganov  whisper : remove mel leftover constants (396089f)
2024-10-05 Georgi Gerganov  whisper : zero-out the KV cache upon clear (#2445)
2024-10-05 Georgi Gerganov  objc : fix build
2024-10-05 Georgi Gerganov  metal : zero-init buffer contexts (#0)
2024-10-05 Georgi Gerganov  whisper : revert mel-related changes (#0)
2024-10-05 Georgi Gerganov  whisper : adapt to latest ggml (skip) (#0)
2024-10-05 Daniel Bevenius  ggml : fix typo in example usage ggml_gallocr_new ...
2024-10-05 Diego Devesa  ggml : fixes after sync (ggml/983)
2024-10-05 Diego Devesa  ggml-backend : add device and backend reg interfaces...
2024-10-05 Ouadie EL FAROUKI  Fixed dequant precision issues in Q4_1 and Q5_1 (llama...
2024-10-05 Diego Devesa  ggml-backend : add device and backend reg interfaces...
2024-10-05 Alberto Cabrera...  Initial cmake support of SYCL for AMD GPUs (llama/9658)
2024-10-05 Radoslav Gerganov  vulkan : do not use tensor->extra (llama/9407)
2024-10-05 Johannes Gäßler  ggml/ex: calculate accuracy in graph, adapt MNIST ...
2024-10-05 Johannes Gäßler  ggml: refactor cross entropy loss CPU impl. (ggml/976)
2024-10-05 Georgi Gerganov  scripts : sync ggml-backend.cpp
2024-10-05 Georgi Gerganov  whisper : fix excessive memory usage (#2443)
2024-10-04 Rahul Vadhyar  examples : update dr_wav.h to newer version (#2449)
2024-10-03 Georgi Gerganov  talk-llama : sync llama.cpp
2024-10-03 Georgi Gerganov  metal : reduce command encoding overhead (llama/9698)
2024-10-03 Georgi Gerganov  sync : ggml
2024-10-03 Johannes Gäßler  test: fix OPT_STEP_ADAMW for test-backend-ops (ggml...
2024-10-03 Salvatore Mesoraca  vulkan : mul_mat: fix UB with small warps (ggml/952)
2024-10-03 Borislav Stanimirov  ggml : fix ggml_cast (ggml/973)
2024-10-03 Johannes Gäßler  ggml: fix gradient allocation logic (ggml/966)
2024-10-03 Georgi Gerganov  ggml : define missing HWCAP flags (llama/9684)
2024-10-03 Dan Johansson  ggml : add run-time detection of neon, i8mm and sve...
2024-10-03 Markus Tavenrath  Enable use to the rebar feature to upload buffers to...
2024-10-03 R0CKSTAR  mtgpu: enable VMM (llama/9597)
2024-10-03 Charles Xu  ggml : remove assert for AArch64 GEMV and GEMM Q4 kerne...
2024-10-03 Dou Xinpeng  cann: fix crash when llama-bench is running on multiple...
2024-10-03 Johannes Gäßler  CUDA: remove bad assert (ggml/972)
2024-10-03 Jeff Bolz  vulkan : multithread pipeline creation (ggml/963)
2024-10-03 Jeff Bolz  vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOP...
2024-10-03 Salvatore Mesoraca  vulkan : argsort barriers must be under uniform control...
2024-10-03 Georgi Gerganov  ggml : fix GGML_MAX_N_THREADS + improve formatting...
2024-10-02 gilbertgong  server : ffmpeg overwrite leftover temp file (#2431)
2024-10-01 Georgi Gerganov  whisper : add large-v3-turbo (#2440)
2024-09-27 Georgi Gerganov  tests : remove test-backend-ops (#2434)
2024-09-25 Georgi Gerganov  ci : disable failing CUDA and Java builds
2024-09-24 Hugo  readme : fix references to download-ggml-model.sh ...
2024-09-24 Georgi Gerganov  make : remove "talk" target until updated
2024-09-24 Georgi Gerganov  ggml : add ggml-cpu-impl.h (skip) (#0)
2024-09-24 Georgi Gerganov  sync : ggml
2024-09-24 Georgi Gerganov  talk-llama : sync llama.cpp
2024-09-24 Eric Zhang  ggml : add AVX512DQ requirement for AVX512 builds ...
2024-09-24 Georgi Gerganov  log : add CONT level for continuing previous log entry...
2024-09-24 Max Krasnyansky  threads: fix msvc build without openmp (llama/9615)
2024-09-24 Ivan  cuda: add q8_0->f32 cpy operation (llama/9571)
2024-09-24 Max Krasnyansky  threads: improve ggml_barrier scaling with large number...
2024-09-24 Srihari-mcw  ggml : AVX512 gemm for Q4_0_8_8 (llama/9532)
2024-09-24 Georgi Gerganov  metal : use F32 prec for K*Q in vec FA (llama/9595)
2024-09-24 Akarshan Biswas  Revert "[SYCL] fallback mmvq (ggml/9088)" (llama/9579)
2024-09-24 R0CKSTAR  musa: enable building fat binaries, enable unified...
2024-09-24 Molly Sophia  Fix merge error in #9454 (llama/9589)
2024-09-24 Johannes Gäßler  CUDA: enable Gemma FA for HIP/Pascal (llama/9581)
2024-09-24 Molly Sophia  RWKV v6: RWKV_WKV op CUDA implementation (llama/9454)
2024-09-24 slaren  ggml-alloc : fix list of allocated tensors with GGML_AL...
2024-09-24 agray3  Update CUDA graph on scale change plus clear nodes...
2024-09-24 Georgi Gerganov  examples : adapt to ggml.h changes (ggml/0)
2024-09-24 Georgi Gerganov  ggml : refactoring (llama/#0)