Introduce llama-run (#10291)
authorEric Curtin <redacted>
Mon, 25 Nov 2024 21:56:24 +0000 (16:56 -0500)
committerGitHub <redacted>
Mon, 25 Nov 2024 21:56:24 +0000 (22:56 +0100)
commit0cc63754b831d3a6c37bc5d721d12ce9540ffe76
tree2198cf92146af69874dcf270f542f683d7186316
parent50d5cecbda3b0d03344eed326287adc1f6c7f3ef
Introduce llama-run (#10291)

It's like simple-chat, but it uses smart pointers to avoid manual
memory cleanup, so there are fewer memory leaks in the code now. It
avoids printing multiple dots, splits the code into smaller functions,
and uses no exception handling.

Signed-off-by: Eric Curtin <redacted>
CMakeLists.txt
Makefile
examples/CMakeLists.txt
examples/run/CMakeLists.txt [new file with mode: 0644]
examples/run/README.md [new file with mode: 0644]
examples/run/run.cpp [new file with mode: 0644]
include/llama-cpp.h [new file with mode: 0644]
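The commit message describes replacing manual frees with smart pointers, and the new include/llama-cpp.h header listed above presumably provides the wrappers. Below is a minimal sketch of what such wrappers could look like, assuming the llama.h C API free functions (llama_free_model, llama_free, llama_sampler_free); the deleter and typedef names here are illustrative and not necessarily those used by the actual header.

    // Sketch: RAII wrappers over llama.cpp C handles using std::unique_ptr
    // with custom deleters. Names below are assumptions for illustration.
    #include <memory>

    #include "llama.h"

    struct llama_model_deleter {
        void operator()(llama_model * model) { llama_free_model(model); }
    };

    struct llama_context_deleter {
        void operator()(llama_context * ctx) { llama_free(ctx); }
    };

    struct llama_sampler_deleter {
        void operator()(llama_sampler * smpl) { llama_sampler_free(smpl); }
    };

    typedef std::unique_ptr<llama_model,   llama_model_deleter>   llama_model_ptr;
    typedef std::unique_ptr<llama_context, llama_context_deleter> llama_context_ptr;
    typedef std::unique_ptr<llama_sampler, llama_sampler_deleter> llama_sampler_ptr;

    // Usage sketch: the handles are freed automatically when the pointers go
    // out of scope, so early returns on error paths no longer leak them.
    //
    //   llama_model_ptr   model(llama_load_model_from_file(path, llama_model_default_params()));
    //   llama_context_ptr ctx(llama_new_context_with_model(model.get(), llama_context_default_params()));

With this pattern, the run example can split its logic into small helper functions and simply return on failure, which is what removes the need for explicit cleanup or exception handling.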