llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)
author    Georgi Gerganov <redacted>
Wed, 31 Jan 2024 15:30:17 +0000 (17:30 +0200)
committer GitHub <redacted>
Wed, 31 Jan 2024 15:30:17 +0000 (17:30 +0200)
commit  5cb04dbc16d1da38c8fdcc0111b40e67d00dd1c3
tree    3ef8dc640d5c08466309c09a8ac2963bb760af06
parent  efb7bdbbd061d087c788598b97992c653f992ddd
llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)

* llama : remove LLAMA_MAX_DEVICES from llama.h

ggml-ci

* Update llama.cpp

Co-authored-by: slaren <redacted>
* server : remove LLAMA_MAX_DEVICES

ggml-ci

* llama : remove LLAMA_SUPPORTS_GPU_OFFLOAD

ggml-ci

* train : remove LLAMA_SUPPORTS_GPU_OFFLOAD

* readme : add deprecation notice

* readme : change deprecation notice to "remove" and fix url

* llama : remove gpu includes from llama.h

ggml-ci

---------

Co-authored-by: slaren <redacted>
README.md
common/common.cpp
common/common.h
common/train.cpp
examples/batched-bench/batched-bench.cpp
examples/llama-bench/llama-bench.cpp
examples/server/server.cpp
llama.cpp
llama.h