llama : offload to RPC in addition to other backends (llama/7640)
author    Radoslav Gerganov <redacted>
          Mon, 3 Jun 2024 17:03:26 +0000 (20:03 +0300)
committer Georgi Gerganov <redacted>
          Sun, 16 Jun 2024 15:19:48 +0000 (18:19 +0300)
commit    6cc3b022eeb968e3763ac3addbe79da2235e21a0
tree      61645c2168d94f8006406b888748f13ae623332c
parent    e5e38d4920a6843944b62bac4e239ba7ee314da3
llama : offload to RPC in addition to other backends (llama/7640)

* llama : offload to RPC in addition to other backends

* fix copy_tensor being called on the src buffer instead of the dst buffer

* always initialize views in the view_src buffer

* add RPC backend to Makefile build

* add endpoint to all RPC object names

* add rpc-server to Makefile

* Update llama.cpp
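
A Makefile fragment for wiring an RPC-enabled build might look like the sketch below. This is an illustrative guess at the shape of the change, not the actual rules added by this commit; the variable names, the `GGML_USE_RPC` define, and the `examples/rpc/rpc-server.cpp` path are all assumptions.

```make
# Hypothetical sketch only; the real Makefile rules in this commit may differ.
GGML_RPC ?= 1
ifeq ($(GGML_RPC),1)
CXXFLAGS += -DGGML_USE_RPC
OBJS     += ggml-rpc.o

# Standalone server that exposes a local backend over the network.
rpc-server: examples/rpc/rpc-server.cpp $(OBJS)
	$(CXX) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)
endif
```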

Co-authored-by: slaren <redacted>
---------

Co-authored-by: slaren <redacted>
ggml-alloc.c
ggml-backend.c
ggml-backend.h
ggml-rpc.cpp