git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
llama : do not crash if there is no CPU backend (#13395)
author Diego Devesa <redacted>
Fri, 9 May 2025 11:02:07 +0000 (13:02 +0200)
committer GitHub <redacted>
Fri, 9 May 2025 11:02:07 +0000 (13:02 +0200)
commit 27ebfcacbaadc6104e2b18acd8f13515cbf63dce
tree 7e3e6da2768f34368a48dd057a03c2f64eaf0e3f
parent 5c86c9ed3ef1cc7307fdce05f0f0e2e45253cf90
llama : do not crash if there is no CPU backend (#13395)

* llama : do not crash if there is no CPU backend

* add checks to examples
src/llama-adapter.cpp
src/llama-model-loader.cpp
src/llama-model.cpp
tools/main/main.cpp
tools/mtmd/clip.cpp
tools/mtmd/llava.cpp
tools/rpc/rpc-server.cpp
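
Below is a minimal sketch of the kind of guard the commit message describes: query the ggml backend registry for a CPU device and report an error instead of crashing when none is registered. This is an illustration under the assumption that the public ggml-backend registry API (ggml_backend_dev_by_type, ggml_backend_dev_name) is used; it is not the actual diff, which touches the files listed above.

// Sketch: fail gracefully when no CPU backend device is registered.
// Assumes the public ggml-backend registry API; not the actual patch.
#include "ggml-backend.h"

#include <cstdio>

static ggml_backend_dev_t get_cpu_dev_or_warn(void) {
    // Look up the CPU device through the backend registry.
    ggml_backend_dev_t cpu_dev = ggml_backend_dev_by_type(GGML_BACKEND_DEVICE_TYPE_CPU);
    if (cpu_dev == nullptr) {
        // Code that assumes a CPU backend is always present would crash here;
        // with the check we can report the problem and let the caller bail out.
        fprintf(stderr, "%s: no CPU backend found\n", __func__);
        return nullptr;
    }
    return cpu_dev;
}

int main(void) {
    ggml_backend_dev_t cpu_dev = get_cpu_dev_or_warn();
    if (cpu_dev == nullptr) {
        return 1;
    }
    printf("CPU backend device: %s\n", ggml_backend_dev_name(cpu_dev));
    return 0;
}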