cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)
author    Meng Zhang <redacted>
          Tue, 7 Nov 2023 06:49:08 +0000 (22:49 -0800)
committer GitHub <redacted>
          Tue, 7 Nov 2023 06:49:08 +0000 (08:49 +0200)
commit    46876d2a2c92e60579dc732cdb8cbd243b06f317
tree      8387e95867f96505ccbc909133eaa189e479db32
parent    381efbf480959bb6d1e247a8b0c2328f22e350f8

* prototyping the idea of supporting CPU-only execution in a GGML_USE_CUBLAS=on build

* doc: add comments to ggml_cublas_loaded()

* fix defined(...)
ggml-cuda.cu
ggml-cuda.h
llama.cpp