git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
llama : std::move llm_bigram_bpe from work_queue (#9062)
author Daniel Bevenius <redacted>
Wed, 21 Aug 2024 07:32:58 +0000 (09:32 +0200)
committer GitHub <redacted>
Wed, 21 Aug 2024 07:32:58 +0000 (10:32 +0300)
commit 8455340b874aa90d5f70104c99ed2778d9e31c36
tree 98529efeff478f0e45a99d43b107416967d092c7
parent 2f3c1466ff46a2413b0e363a5005c46538186ee6
llama : std::move llm_bigram_bpe from work_queue (#9062)

* llama : std::move llm_bigram_bpe from work_queue

This commit updates the retrieval of llm_bigram_bpe objects from
work_queue.top() to use std::move.

The motivation for this is to avoid copying the std::string
`text` member of the llm_bigram_bpe struct.

* squash! llama : std::move llm_bigram_bpe from work_queue

Introduced a MovablePriorityQueue class to allow moving elements
out of the priority queue for llm_bigram_bpe.

* squash! llama : std::move llm_bigram_bpe from work_queue

Rename MovablePriorityQueue to lama_priority_queue.

* squash! llama : std::move llm_bigram_bpe from work_queue

Rename lama_priority_queue -> llama_priority_queue.
src/llama-vocab.cpp