Add an option to build without CUDA VMM (#7067)
author    William Tambellini <redacted>
          Mon, 6 May 2024 18:12:14 +0000 (11:12 -0700)
committer GitHub <redacted>
          Mon, 6 May 2024 18:12:14 +0000 (20:12 +0200)
commit    858f6b73f6e57a62523d16a955d565254be889b4
tree      f6a64462f2173d18a986ab1b58a8ef746869bfbb
parent    b3a995b416e13ae3123a117a743e11d0ede0ca4c
Add an option to build without CUDA VMM (#7067)

Add an option to build ggml CUDA without CUDA VMM (virtual memory management).

Resolves:
https://github.com/ggerganov/llama.cpp/issues/6889
https://forums.developer.nvidia.com/t/potential-nvshmem-allocated-memory-performance-issue/275416/4
CMakeLists.txt
ggml-cuda.cu