git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/shortlog
2024-03-20 denersc  whisper : token-level timestamps with DTW (#1485)
2024-03-18 Jo Liss  examples : rename --audio-context to --audio-ctx per...
2024-03-16 Georgi Gerganov  whisper : set outputs from conv graph (#1959)
2024-03-16 slaren  alloc : fix allocation data of pre-allocated leafs
2024-03-16 Georgi Gerganov  cmake : copy ggml-common.h to bin
2024-03-16 Georgi Gerganov  gitignore : .vimspector.json
2024-03-15 Georgi Gerganov  talk-llama : sync llama.cpp
2024-03-15 Georgi Gerganov  sync : ggml
2024-03-15 slaren  update examples and tests
2024-03-15 Georgi Gerganov  ggml : add ggml-common.h
2024-03-15 Georgi Gerganov  ggml : designate enum vals for integer types (llama...
2024-03-15 Georgi Gerganov  metal : build metallib + fix embed path (llama/6015)
2024-03-15 slaren  llama : add pipeline parallelism support (llama/6017)
2024-03-15 AidanBeltonS  Update get version (llama/6025)
2024-03-15 Georgi Gerganov  ggml : reuse quantum structs across backends (llama...
2024-03-15 Georgi Gerganov  ggml : fix UB in IQ2_S and IQ3_S (llama/6012)
2024-03-15 Georgi Gerganov  sycl : update IQ1_S kernels (WIP - not working!) (llama...
2024-03-15 Kawrakow  1.5 bit: we can do even better (llama/5999)
2024-03-15 Michael Podvitskiy  ggml, ci : Windows ARM runner and build fixes (llama...
2024-03-15 Kawrakow  Better 1.5 bit quantization (llama/5971)
2024-03-15 Abhilash Majumder  Add q3_s and q1_s (llama/5886)
2024-03-15 Georgi Gerganov  metal : move mm_id indices to shared mem (llama/5982)
2024-03-15 Georgi Gerganov  ggml : fix unnecessary f32 -> f16 -> f32 casts (mmla...
2024-03-15 Georgi Gerganov  ggml : remove old quantization functions (llama/5942)
2024-03-15 Georgi Gerganov  ggml : add ggml-common.h to deduplicate shared code...
2024-03-15 compilade  llama : support Mamba Selective State Space Models...
2024-03-15 Georgi Gerganov  extra : update sync scripts after ggml-common.h
2024-03-10 Josh Bleecher...  whisper : document whisper_batch.n_seq_id (#1942)
2024-03-10 Josh Bleecher...  whisper : improve beam search candidate diversity ...
2024-03-09 Josh Bleecher...  bindings/go : add linker flags to make metal work ...
2024-03-09 Josh Bleecher...  whisper : make beam candidate sort more stable (#1943)
2024-03-08 Georgi Gerganov  ggml : try fix 32-bit arm compat (#1938)
2024-03-08 Georgi Gerganov  talk-llama : use llama_decode instead of llama_eval
2024-03-08 Georgi Gerganov  talk-llama : sync llama.cpp
2024-03-08 Georgi Gerganov  talk-llama : sync llama.cpp
2024-03-08 Georgi Gerganov  sync : ggml
2024-03-08 Neo Zhang Jianyu  Revert "[SYCL] fix error when set main gpu to non-zero...
2024-03-08 Neo Zhang Jianyu  fix error when set main gpu to non-zero (llama/5901)
2024-03-08 Jared Van Bortel  ggml : use SYS_get_cpu if SYS_getcpu is not defined...
2024-03-08 bobqianic  ggml : use `uint8x16_t` return type for `ggml_vqtbl1q_u...
2024-03-08 Neo Zhang Jianyu  add wait() to make code stable (llama/5895)
2024-03-08 Jared Van Bortel  quants : use MM256_SET_M128I consistently to fix gcc...
2024-03-08 0cc4m  Vulkan Improvements (llama/5835)
2024-03-08 Neo Zhang Jianyu  fix mul_mat fault in CI/unit-test (llama/5862)
2024-03-08 Georgi Gerganov  ggml : fix unknown status (llama/0)
2024-03-08 Georgi Gerganov  whisper : fix compute helper return (ggml/750)
2024-03-08 Michael Podvitskiy  ggml : introduce ggml_status (ggml/750)
2024-03-08 slaren  cuda : fix data race in soft max (llama/5853)
2024-03-08 Georgi Gerganov  ggml : fix IQ3_S AVX implementation (llama/5834)
2024-03-08 Kawrakow  ggml : IQ3_S improvements (llama/5829)
2024-03-08 Neo Zhang Jianyu  Support multiple GPUs (split mode) on SYCL backend...
2024-03-08 ddpasa  ggml-vulkan: fix VULKAN_CHECK_RESULTS flag, which was...
2024-03-08 AidanBeltonS  Use batched mul_mat pathway (llama/5591)
2024-03-08 Eve  make portability_enumeration_ext apple only (llama...
2024-03-08 leejet  add some new ops, fix some operators and add batch...
2024-03-06 F1L1P  examples : Auto lowercase language parameter in main...
2024-03-06 zhouwg  examples : fix typo in bench.cpp (#1933)
2024-03-05 zhouwg  whisper : fix typo (#1925)
2024-03-05 zhouwg  whisper.android.java : fix returns in JNI (#1929)
2024-03-04 kennethge  cmake : add library versioning (#1352)
2024-03-04 Gavin Cai  readme : recommend MacOS Sonoma for Core ML (#1917)
2024-02-28 Georgi Gerganov  talk-llama : sync llama.cpp
2024-02-28 Georgi Gerganov  sync : ggml
2024-02-28 Georgi Gerganov  sync : llama.cpp (ggml/0)
2024-02-28 Kawrakow  ggml : make i-quants work with super-blocks of 64 ...
2024-02-28 Kawrakow  Attempt to fix android build (llama/5752)
2024-02-28 Kawrakow  IQ4_XS: a 4.25 bpw quantization (llama/5747)
2024-02-28 Engininja2  cuda : replace remaining shfl_xor with calls to warp_re...
2024-02-28 Engininja2  ggml-quants : fix avx2 iq1_s vec_dot when compiled...
2024-02-28 Kawrakow  Adding IQ2_S and IQ2_M to complete coverage of the...
2024-02-28 Johannes Gäßler  CUDA: fix DEBUG_CUDA_MALLOC (llama/5729)
2024-02-28 AidanBeltonS  Add support for soft_max ALiBi (llama/5639)
2024-02-28 Radosław Gryta  ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compati...
2024-02-28 slaren  add google magika inference example (ggml/748)
2024-02-26 Andrew S  stream.wasm : fix invalid memory access when no segment...
2024-02-25 Georgi Gerganov  talk-llama : sync llama.cpp
2024-02-25 Georgi Gerganov  sync : ggml
2024-02-25 Georgi Gerganov  sync : llama.cpp (ggml/0)
2024-02-25 Georgi Gerganov  code : normalize enum names (llama/5697)
2024-02-25 Kawrakow  IQ3_S: a much better alternative to Q3_K (llama/5676)
2024-02-25 UEXTM.com  Introduce backend GUIDs (ggml/743)
2024-02-24 Tamotsu Takahashi  talk, talk-llama : pass text_to_speak as a file (#1865)
2024-02-23 Abhilash Majumder  whisper : add SYCL support (#1863)
2024-02-22 Georgi Gerganov  talk-llama : sync llama.cpp
2024-02-22 Georgi Gerganov  sync : ggml
2024-02-22 Georgi Gerganov  ggml : always define ggml_fp16_t as uint16_t (llama...
2024-02-22 Georgi Gerganov  ci : fix whitespace
2024-02-22 Georgi Gerganov  ggml : 32-bit arm compat (#1891)
2024-02-22 Georgi Gerganov  sync : ggml
2024-02-22 Georgi Gerganov  sync : llama.cpp (ggml/0)
2024-02-22 Meng, Hengyu  conext add name (llama/5624)
2024-02-22 AidanBeltonS  Update ggml_sycl_op_mul_mat_vec_q (llama/5502)
2024-02-22 0cc4m  Refactor validation and enumeration platform checks...
2024-02-22 0cc4m  Add check for VK_KHR_portability_enumeration for Molten...
2024-02-22 Mathijs de...  Add preprocessor checks for Apple devices.
2024-02-22 Mathijs de...  Resolve ErrorIncompatibleDriver with Vulkan on MacOS.
2024-02-22 Mathijs de...  Allow for Vulkan build with Accelerate.
2024-02-22 slaren  cuda : ignore peer access already enabled errors (llama...
2024-02-22 Siddharth Ramakrishnan  ggml : compute forward no longer pass src tensors ...
2024-02-22 bssrdf  ggml : fix conv_2d batch mode (ggml/737)