author    Shouzheng Liu <redacted>
          Thu, 24 Aug 2023 16:27:25 +0000 (12:27 -0400)
committer GitHub <redacted>
          Thu, 24 Aug 2023 16:27:25 +0000 (19:27 +0300)
commit  38b16dfca6e5032e6cfb90c1653bf1ba4cf647b4
tree    0c85b951d8d62c6d3bc455ed41d0c5435324c032
parent  8f8c28e89cb9531211783da697d6e7c445e2af1d
metal : fix bug when ggml-alloc is enabled (#2757)

* metal: better memory alloc w/ concurrency dispatch

ggml-alloc should free tensors only at memory barriers, since concurrently dispatched ops may still be reading them.

* ggml-alloc: avoid returning silently

In certain cases, the allocate_node() function may silently return
without performing any memory allocation.
ggml-alloc.c
llama.cpp