git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/commitdiff
readme : add Vulkan notice (#2488)
author: toboil-features <redacted>
Wed, 16 Oct 2024 15:43:26 +0000 (18:43 +0300)
committer: GitHub <redacted>
Wed, 16 Oct 2024 15:43:26 +0000 (18:43 +0300)
* Add Vulkan notice in README.md

* Fix formatting for Vulkan section in README.md

* Fix formatting in README.md

README.md

index 2393fe49636af7217afaf86d1d29aa4a99cc82f7..f87bcf17a8f9d90aae2e3bdea58c680fca424e16 100644 (file)
--- a/README.md
+++ b/README.md
@@ -18,6 +18,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Mixed F16 / F32 precision
 - [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
 - Zero memory allocations at runtime
+- Vulkan support
 - Support for CPU-only inference
 - [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
 - [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
@@ -429,6 +430,16 @@ make clean
 GGML_CUDA=1 make -j
 ```
 
+## Vulkan GPU support
+Vulkan is a cross-vendor solution that lets you accelerate the workload on your GPU.
+First, make sure your graphics card driver provides support for the Vulkan API.
+
+Now build `whisper.cpp` with Vulkan support:
+```
+make clean
+make GGML_VULKAN=1
+```
+
 ## BLAS CPU support via OpenBLAS
 
 Encoder processing can be accelerated on the CPU via OpenBLAS.
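The Vulkan build steps added in this commit can be sketched end to end as below. This is an illustrative sequence, not part of the diff itself: `vulkaninfo` (from the vulkan-tools package) is assumed to be available for checking driver support, and the model/sample paths follow the layout used elsewhere in the whisper.cpp README.

```shell
# Verify the driver actually exposes a Vulkan device before building
# (vulkaninfo is an assumed helper from vulkan-tools, not part of whisper.cpp).
vulkaninfo --summary | grep -i deviceName || echo "no Vulkan device found"

# Build whisper.cpp with the Vulkan backend enabled, as the README describes
make clean
make GGML_VULKAN=1

# Run inference on a sample; model and audio paths are assumptions
# based on the standard whisper.cpp repository layout.
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

If no Vulkan device is reported, the build will still succeed but inference falls back to the CPU path.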