From: Matthieu Coudron
Date: Wed, 21 Jan 2026 06:52:46 +0000 (+0100)
Subject: gguf: display strerror(errno) when a model can't be loaded (#18884)
X-Git-Tag: upstream/0.0.8067~280
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=37c35f0e1c625831687b146cbb0a57654ef88ca2;p=pkg%2Fggml%2Fsources%2Fllama.cpp

gguf: display strerror(errno) when a model can't be loaded (#18884)

I've had issues loading models with llama-server:

    [44039] E gguf_init_from_file: failed to open GGUF file 'mistral-7b-v0.1.Q8_0.gguf'

even though I was sure it could access the file. It turns out --models-dir and
--models-presets don't interact the way I thought they would, but along the way
I salvaged this snippet, which helps with troubleshooting:

    [44039] E gguf_init_from_file: failed to open GGUF file 'mistral-7b-v0.1.Q8_0.gguf' (errno No such file or directory)

---

diff --git a/ggml/src/gguf.cpp b/ggml/src/gguf.cpp
index b165d8bdc..bfab5c4d6 100644
--- a/ggml/src/gguf.cpp
+++ b/ggml/src/gguf.cpp
@@ -734,7 +734,7 @@ struct gguf_context * gguf_init_from_file(const char * fname, struct gguf_init_p
     FILE * file = ggml_fopen(fname, "rb");
     if (!file) {
-        GGML_LOG_ERROR("%s: failed to open GGUF file '%s'\n", __func__, fname);
+        GGML_LOG_ERROR("%s: failed to open GGUF file '%s' (%s)\n", __func__, fname, strerror(errno));
         return nullptr;
     }