From: Georgi Gerganov
Date: Wed, 9 Apr 2025 07:54:42 +0000 (+0300)
Subject: readme : add rpc backend (#12842)
X-Git-Tag: gguf-v0.16.1~67
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=47277d6d1d0d515cff34292a1a78a0d1b7252350;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : add rpc backend (#12842)
---

diff --git a/README.md b/README.md
index 95a05e6e..e56042f1 100644
--- a/README.md
+++ b/README.md
@@ -9,13 +9,6 @@
 
 Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
 
-> [!IMPORTANT]
-> New `llama.cpp` package location: [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/pkgs/container/llama.cpp)
->
-> Update your container URLs to: `ghcr.io/ggml-org/llama.cpp`
->
-> More info: https://github.com/ggml-org/llama.cpp/discussions/11801
-
 ## Recent API changes
 
 - [Changelog for `libllama` API](https://github.com/ggml-org/llama.cpp/issues/9289)
@@ -247,6 +240,7 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo
 | [Vulkan](docs/build.md#vulkan) | GPU |
 | [CANN](docs/build.md#cann) | Ascend NPU |
 | [OpenCL](docs/backend/OPENCL.md) | Adreno GPU |
+| [RPC](https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc) | All |
 
 ## Building the project
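
The RPC backend linked in the new table row distributes inference across machines by running `rpc-server` workers that a main `llama-cli` process connects to. A minimal usage sketch, assuming a CMake build with RPC enabled; the IP addresses, port, and `model.gguf` path below are placeholders, and exact flags should be checked against the linked `examples/rpc` README:

```shell
# Build llama.cpp with the RPC backend enabled (assumed CMake flag).
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On each worker machine, start an RPC server listening for the main host.
# (host/port values are placeholders)
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main host, list the workers via --rpc; layers offload to them
# as if they were local GPU devices.
./build/bin/llama-cli -m model.gguf -ngl 99 \
    --rpc 192.168.1.10:50052,192.168.1.11:50052 -p "Hello"
```

Note that the RPC backend row lists "All" as its supported target because the workers themselves can run any of the other backends (CUDA, Metal, CPU, etc.); the RPC layer only forwards tensor operations over the network.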