From: Georgi Gerganov
Date: Sun, 6 Oct 2024 10:49:41 +0000 (+0300)
Subject: readme : fix typo [no ci]
X-Git-Tag: upstream/0.0.4488~597
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=f4b2dcdf4992ef11a854abc9b662624490e37b4c;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : fix typo [no ci]
---

diff --git a/examples/main/README.md b/examples/main/README.md
index 6730effd..f0c3031a 100644
--- a/examples/main/README.md
+++ b/examples/main/README.md
@@ -69,7 +69,7 @@ In this section, we cover the most commonly used options for running the `llama-
 - `-c N, --ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
 - `-mli, --multiline-input`: Allows you to write or paste multiple lines without ending each in '\'
 - `-t N, --threads N`: Set the number of threads to use during generation. For optimal performance, it is recommended to set this value to the number of physical CPU cores your system has.
-- - `-ngl N, --n-gpu-layers N`: When compiled with GPU support, this option allows offloading some layers to the GPU for computation. Generally results in increased performance.
+- `-ngl N, --n-gpu-layers N`: When compiled with GPU support, this option allows offloading some layers to the GPU for computation. Generally results in increased performance.

 ## Input Prompts
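
For reference, a minimal sketch of how the options documented in this hunk combine on the command line; the binary name `llama-cli`, the model path, and the specific values are illustrative assumptions, not part of this commit:

    # Hypothetical invocation; model path and values are illustrative only.
    # -c 2048 : prompt context of 2048 tokens
    # -t 8    : eight threads (match your physical core count)
    # -ngl 32 : offload 32 layers to the GPU (requires a GPU-enabled build)
    # -mli    : accept multiline input without a trailing '\'
    ./llama-cli -m models/7B/model.gguf -c 2048 -t 8 -ngl 32 -mli -p "Hello"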