llama : add benchmark example (#2626)
author slaren <redacted>
Fri, 18 Aug 2023 10:44:58 +0000 (12:44 +0200)
committer GitHub <redacted>
Fri, 18 Aug 2023 10:44:58 +0000 (12:44 +0200)
commit 097e121e2f17ed3541cf02c55ff7e9febc091b19
tree f3bead40b2632be95479e3f9b31baffc6681f572
parent eaf98c2649d7da705de255712f0038ac7e47c610
llama : add benchmark example (#2626)

* llama : add benchmark example

* add to examples CMakeLists.txt

* fix msvc build

* add missing include

* add Bessel's correction to stdev calculation

Co-authored-by: Johannes Gäßler <redacted>
* improve markdown formatting

* add missing include

* print warning if NDEBUG is not defined

* remove n_prompt and n_gen from the matrix, use each value separately instead

* better checks for non-optimized builds

* llama.cpp : fix MEM_REQ_SCRATCH0 reusing the value of n_ctx of the first call

* fix json formatting

* add sql output

* add basic cpu and gpu info (linux/cuda only)

* markdown: also show values that differ from the default

* markdown: add build id

* cleanup

* improve formatting

* formatting

---------

Co-authored-by: Johannes Gäßler <redacted>
.gitignore
Makefile
examples/CMakeLists.txt
examples/llama-bench/CMakeLists.txt [new file with mode: 0644]
examples/llama-bench/llama-bench.cpp [new file with mode: 0755]
ggml-cuda.cu
ggml-cuda.h
llama.cpp
llama.h