From: Eric Curtin
Date: Fri, 24 Jan 2025 09:39:24 +0000 (+0000)
Subject: Update llama-run README.md (#11386)
X-Git-Tag: upstream/0.0.4631~90
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=01f37edf1a6fae76fd9e2e02109aae6995a914f0;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Update llama-run README.md (#11386)

For consistency

Signed-off-by: Eric Curtin
---

diff --git a/examples/run/README.md b/examples/run/README.md
index a0680544..89a55207 100644
--- a/examples/run/README.md
+++ b/examples/run/README.md
@@ -3,11 +3,10 @@
 The purpose of this example is to demonstrate a minimal usage of llama.cpp for running models.
 
 ```bash
-llama-run granite-code
+llama-run granite3-moe
 ```
 
 ```bash
-llama-run -h
 Description:
   Runs a llm
 
@@ -17,7 +16,7 @@ Usage:
 Options:
   -c, --context-size
       Context size (default: 2048)
-  -n, --ngl
+  -n, -ngl, --ngl
       Number of GPU layers (default: 0)
   --temp
       Temperature (default: 0.8)