From: Srinivas Billa
Date: Thu, 15 Jun 2023 17:36:38 +0000 (+0100)
Subject: readme : server compile flag (#1874)
X-Git-Tag: gguf-v0.4.0~631
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=9dda13e5e1f70bdfc25fbc0f0378f27c8b67e983;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
---

diff --git a/examples/server/README.md b/examples/server/README.md
index 7dabac9c..3b111655 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -16,6 +16,10 @@ This example allows you to have a llama.cpp http server to interact from a web page
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
 ```
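Once the server is built and running, you can smoke-test it from another terminal. A minimal sketch, assuming the server example's defaults (listening on `localhost:8080` and exposing a `/completion` JSON endpoint); the prompt and `n_predict` values are illustrative and not part of this commit:

```bash
# Assumed defaults: port 8080 and the /completion endpoint of the server example.
# Adjust the URL and JSON fields if your build or launch flags differ.
curl --request POST \
    --url http://localhost:8080/completion \
    --header "Content-Type: application/json" \
    --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'
```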