From: 2114L3
Date: Tue, 16 Dec 2025 10:50:43 +0000 (+1000)
Subject: server: Update README.md incorrect argument (#18073)
X-Git-Tag: upstream/0.0.7446~16
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=5f5f9b46376ac14d7f95b0d968c182f522602880;p=pkg%2Fggml%2Fsources%2Fllama.cpp

server: Update README.md incorrect argument (#18073)

`n-gpu-layer` is incorrect; the argument is `n-gpu-layers`, with the 's'.
---

diff --git a/tools/server/README.md b/tools/server/README.md
index 073bcd2c..ef4990fa 100644
--- a/tools/server/README.md
+++ b/tools/server/README.md
@@ -1430,7 +1430,7 @@ Model presets allow advanced users to define custom configurations using an `.in
 llama-server --models-preset ./my-models.ini
 ```
 
-Each section in the file defines a new preset. Keys within a section correspond to command-line arguments (without leading dashes). For example, the argument `--n-gpu-layer 123` is written as `n-gpu-layer = 123`.
+Each section in the file defines a new preset. Keys within a section correspond to command-line arguments (without leading dashes). For example, the argument `--n-gpu-layers 123` is written as `n-gpu-layers = 123`.
 
 Short argument forms (e.g., `c`, `ngl`) and environment variable names (e.g., `LLAMA_ARG_N_GPU_LAYERS`) are also supported as keys.
 
@@ -1445,7 +1445,7 @@ version = 1
 ; string value
 chat-template = chatml
 ; numeric value
-n-gpu-layer = 123
+n-gpu-layers = 123
 ; flag value (for certain flags, you need to use the "no-" prefix for negation)
 jinja = true
 ; shorthand argument (for example, context size)
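
For illustration, a preset file using the corrected key might look like the sketch below. It only uses keys that appear in the README diff (`version`, `chat-template`, `n-gpu-layers`, `jinja`); the section name `my-llama` is a hypothetical preset name, not something defined by the commit:

```ini
; my-models.ini — hypothetical preset file passed via
;   llama-server --models-preset ./my-models.ini
version = 1

[my-llama]
; keys mirror command-line arguments, without the leading dashes
chat-template = chatml
; corrected key from this commit: n-gpu-layers, with the trailing 's'
n-gpu-layers = 123
jinja = true
```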