You can download the converted models using the [models/download-ggml-model.sh](models/download-ggml-model.sh) script
or manually from here:
-- https://huggingface.co/datasets/ggerganov/whisper.cpp
+- https://huggingface.co/ggerganov/whisper.cpp
- https://ggml.ggerganov.com
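A manual download from the Hugging Face location above can be sketched like this (the `base.en` model name is only an example; the URL layout assumes the post-migration `ggerganov/whisper.cpp` repo):

```shell
# Sketch of a manual model download, assuming the resolve/main layout
# used elsewhere in this patch. "base.en" is an example model name.
model="base.en"
url="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-${model}.bin"
echo "$url"
# Then fetch it into models/, e.g.:
#   wget --quiet --show-progress -O "models/ggml-${model}.bin" "$url"
```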
For more details, see the conversion script [models/convert-pt-to-ggml.py](models/convert-pt-to-ggml.py) or the README
// CONSTANTS
const (
- srcUrl = "https://huggingface.co/datasets/ggerganov/whisper.cpp/resolve/main" // The location of the models
- srcExt = ".bin" // Filename extension
- bufSize = 1024 * 64 // Size of the buffer used for downloading the model
+ srcUrl = "https://huggingface.co/ggerganov/whisper.cpp/resolve/main" // The location of the models
+ srcExt = ".bin" // Filename extension
+ bufSize = 1024 * 64 // Size of the buffer used for downloading the model
)
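For illustration, the updated constants combine into a per-model download URL roughly like this (a minimal sketch; `modelURL` and the `main` wrapper are hypothetical helpers, not part of the patched source):

```go
package main

import "fmt"

// Constants as updated by this patch.
const (
	srcUrl = "https://huggingface.co/ggerganov/whisper.cpp/resolve/main" // The location of the models
	srcExt = ".bin"                                                      // Filename extension
)

// modelURL is a hypothetical helper showing how the constants
// compose into the final download URL for a given model name.
func modelURL(model string) string {
	return fmt.Sprintf("%s/ggml-%s%s", srcUrl, model, srcExt)
}

func main() {
	fmt.Println(modelURL("base.en"))
	// → https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
}
```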
var (
Alternatively, you can simply download the smallest ggml GPT-2 117M model (240 MB) like this:

```
-wget --quiet --show-progress -O models/ggml-gpt-2-117M.bin https://huggingface.co/datasets/ggerganov/ggml/raw/main/ggml-model-gpt-2-117M.bin
+wget --quiet --show-progress -O models/ggml-gpt-2-117M.bin https://huggingface.co/ggerganov/ggml/raw/main/ggml-model-gpt-2-117M.bin
```

## TTS
the `ggml` files yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh)
script to download the already converted models. Currently, they are hosted on the following locations:
-- https://huggingface.co/datasets/ggerganov/whisper.cpp
+- https://huggingface.co/ggerganov/whisper.cpp
- https://ggml.ggerganov.com
Sample usage:
A third option to obtain the model files is to download them from Hugging Face:
-https://huggingface.co/datasets/ggerganov/whisper.cpp/tree/main
+https://huggingface.co/ggerganov/whisper.cpp/tree/main
## Available models
goto :eof
)
-PowerShell -NoProfile -ExecutionPolicy Bypass -Command "Invoke-WebRequest -Uri https://huggingface.co/datasets/ggerganov/whisper.cpp/resolve/main/ggml-%model%.bin -OutFile ggml-%model%.bin"
+PowerShell -NoProfile -ExecutionPolicy Bypass -Command "Invoke-WebRequest -Uri https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-%model%.bin -OutFile ggml-%model%.bin"
if %ERRORLEVEL% neq 0 (
echo Failed to download ggml model %model%
#src="https://ggml.ggerganov.com"
#pfx="ggml-model-whisper"
-src="https://huggingface.co/datasets/ggerganov/whisper.cpp"
+src="https://huggingface.co/ggerganov/whisper.cpp"
pfx="resolve/main/ggml"
# get the path of this script
int64_t t_load_us = 0;
int64_t t_start_us = 0;
-
ggml_type wtype = ggml_type::GGML_TYPE_F16; // weight type (FP32 or FP16)
whisper_model model;