Add an option to build without CUDA VMM (llama/7067)
author    William Tambellini <redacted>
          Mon, 6 May 2024 18:12:14 +0000 (11:12 -0700)
committer Georgi Gerganov <redacted>
          Mon, 13 May 2024 08:02:26 +0000 (11:02 +0300)
commit    b5521fea1988e8110d9fdc1c17f0f98abff0346a
tree      fd1d3d05e205bf6bdc7a9affbd526284b3f55cb8
parent    9b84195225480516e77101830f6abd5b35340f32
Add an option to build without CUDA VMM (llama/7067)

Add an option to build the ggml CUDA backend without CUDA VMM
(virtual memory management).

Resolves:
https://github.com/ggerganov/llama.cpp/issues/6889
https://forums.developer.nvidia.com/t/potential-nvshmem-allocated-memory-performance-issue/275416/4
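A build toggle like this is typically exposed as a cache option that becomes a compile definition the CUDA source can test. A minimal sketch of how such wiring might look in CMake; the option name `GGML_CUDA_NO_VMM`, its default, and the description string are assumptions for illustration, not taken verbatim from this commit:

```cmake
# Hypothetical sketch: expose a user-facing switch and forward it to the
# compiler as a preprocessor definition that ggml-cuda.cu can check.
option(GGML_CUDA_NO_VMM "ggml: do not use the CUDA VMM allocator" OFF)

if (GGML_CUDA_NO_VMM)
    add_compile_definitions(GGML_CUDA_NO_VMM)
endif()
```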
ggml-cuda.cu