author    Kawrakow <redacted>
          Thu, 20 Apr 2023 17:42:27 +0000 (19:42 +0200)
committer GitHub <redacted>
          Thu, 20 Apr 2023 17:42:27 +0000 (20:42 +0300)
commit    38de86a7114c97ecf3644e3a60159f1ed893e1b0
tree      fc6b90dd99825ce4e745304aab484b85903949c0
parent    e0305ead3a072db9c08b35c9600c49273b38a4b5
llama : multi-threaded quantization (#1075)

* Multi-threading quantization.

Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.
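
A minimal sketch of the chunked approach, for illustration only: the
function names and the shared-counter scheme below are assumptions,
not necessarily the code this commit adds to llama.cpp. Worker threads
claim fixed-size chunks from a shared counter until the input is
exhausted:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in for a real quantization kernel (e.g. Q4_0): here it just
// narrows floats to int8 so the example is self-contained.
static void quantize_chunk(const float * src, int8_t * dst,
                           size_t first, size_t count) {
    for (size_t i = first; i < first + count; ++i) {
        dst[i] = static_cast<int8_t>(src[i]);
    }
}

static void quantize_multithreaded(const float * src, int8_t * dst,
                                   size_t n, int n_threads) {
    constexpr size_t chunk_size = 32 * 512;  // elements claimed per grab
    std::atomic<size_t> next{0};             // shared work counter

    auto worker = [&]() {
        for (;;) {
            // claim the next chunk; stop once all input is taken
            const size_t first = next.fetch_add(chunk_size);
            if (first >= n) break;
            quantize_chunk(src, dst, first, std::min(chunk_size, n - first));
        }
    };

    std::vector<std::thread> pool;
    for (int i = 1; i < n_threads; ++i) pool.emplace_back(worker);
    worker();                                // main thread participates too
    for (auto & t : pool) t.join();
}
```

The shared counter gives dynamic load balancing: faster threads simply
claim more chunks, which matters once per-chunk cost varies between
quantization types.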

* Multi-threading for quantize-stats

It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded, it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.

* Reviewer comments

* Avoiding compiler confusion

After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started warning that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So,
I made it constexpr to keep every compiler happy.
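
The disagreement can be reproduced in a standalone snippet (a sketch
of the general issue, not the commit's actual code; the names are
made up):

```cpp
#include <cstdio>

int main() {
    const int chunk_size_const = 32 * 512;
    // clang (and, per the message above, GCC) warns that this capture
    // is unnecessary: a const int used only for its value can be read
    // inside the lambda without being captured.
    auto f = [chunk_size_const]() { return chunk_size_const; };

    // Dropping the capture silences that warning, but MSVC then fails
    // to build. Declaring the constant constexpr sidesteps the
    // disagreement: no capture is needed on any of the compilers.
    constexpr int chunk_size = 32 * 512;
    auto g = []() { return chunk_size; };

    std::printf("%d %d\n", f(), g());
    return 0;
}
```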

* Still fighting with lambda captures in MSVC

---------

Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
examples/quantize-stats/quantize-stats.cpp
examples/quantize/quantize.cpp
ggml.c
ggml.h
llama.cpp
llama.h