From: Jesse Jojo Johnson
Date: Wed, 5 Jul 2023 18:03:19 +0000 (+0000)
Subject: Update Server Instructions (#2113)
X-Git-Tag: gguf-v0.4.0~511
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=983b555e9ddb36703cee4d22642afe958de093b7;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Update Server Instructions (#2113)

* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text

---------

Co-authored-by: Jesse Johnson
---

diff --git a/examples/server/README.md b/examples/server/README.md
index 160614ba..037412d7 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -21,7 +21,7 @@ Command line options:
 - `-to N`, `--timeout N`: Server read/write timeout in seconds. Default `600`.
 - `--host`: Set the hostname or ip address to listen. Default `127.0.0.1`.
 - `--port`: Set the port to listen. Default: `8080`.
-- `--public`: path from which to serve static files (default examples/server/public)
+- `--path`: path from which to serve static files (default examples/server/public)
 - `--embedding`: Enable embedding extraction, Default: disabled.
 
 ## Build
@@ -207,3 +207,27 @@
 openai.api_base = "http://<Your api-server IP>:port"
 ```
 Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
+
+### Extending the Web Front End
+
+The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the llamaComplete() method. A simple example is below:
+
+```
+<html>
+  <body>
+    <pre>
+      <script type="module">
+        import { llamaComplete } from '/completion.js'
+
+        // stream a completion and write each chunk into the page as it arrives
+        llamaComplete({
+            prompt: "Building a website can be done in 10 simple steps:",
+          },
+          null,
+          (chunk) => document.write(chunk.data.content)
+        )
+      </script>
+    </pre>
+  </body>
+</html>
+```
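
A brief usage sketch to go with the `--path` option documented above (not part of the commit itself): assuming a locally built `server` binary and placeholder model and directory paths, it might be invoked along these lines:

```
# placeholders: point -m at your model file and --path at your own static files
./server -m models/7B/ggml-model.bin --path ./your-directory
```

An `index.html` placed in `./your-directory` (for example, the page shown in the diff) is then served in place of the default front end and can import `/completion.js` as described.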