arm64: optimize q4_k_q8_k kernel with i8mm (llama/13886)
author    Yibo Cai <redacted>
          Thu, 29 May 2025 11:39:20 +0000 (19:39 +0800)
committer Georgi Gerganov <redacted>
          Sun, 1 Jun 2025 12:14:44 +0000 (15:14 +0300)
commit    ea643c6ae36230b5e852a7cb97607fd837f10a10
tree      ab05805a1ab4b10966ba9df2360e2091c9857aba
parent    1d7b3c79f422dbbcb98586ea1fafb8d53d7dba4e
arm64: optimize q4_k_q8_k kernel with i8mm (llama/13886)

This PR improves the q4_k_q8_k GEMM kernel using the arm64 i8mm (int8 matrix multiply) instructions.

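As a rough illustration (not the actual kernel added by this commit, which also handles Q4_K nibble unpacking and the per-block scales), the core of an i8mm path is the SMMLA instruction, exposed as the vmmlaq_s32 intrinsic: one call multiplies a 2x8 int8 tile by an 8x2 int8 tile and accumulates into a 2x2 int32 tile. A minimal sketch, assuming a compiler targeting armv8.2-a+i8mm:

```c
// Minimal i8mm sketch (not the kernel from this commit).
// Assumed build: gcc -O2 -march=armv8.2-a+i8mm i8mm_sketch.c
#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // A: 2 rows x 8 cols of int8, row-major (16 bytes).
    // B: also stored as 2x8 row-major; vmmlaq_s32 treats it as the
    // transpose, so B's two 8-byte "rows" act as the columns of an
    // 8x2 matrix.
    int8_t A[16], B[16];
    for (int i = 0; i < 16; ++i) {
        A[i] = (int8_t)(i - 8);
        B[i] = (int8_t)(3 - i);
    }

    int8x16_t va  = vld1q_s8(A);
    int8x16_t vb  = vld1q_s8(B);
    int32x4_t acc = vdupq_n_s32(0);

    // One SMMLA: acc += A(2x8) * B^T(8x2), giving a row-major 2x2 tile:
    // lane 0 = dot(A row 0, B row 0), lane 1 = dot(A row 0, B row 1),
    // lane 2 = dot(A row 1, B row 0), lane 3 = dot(A row 1, B row 1).
    acc = vmmlaq_s32(acc, va, vb);

    // Scalar reference for comparison.
    int32_t ref[4] = {0};
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c)
            for (int k = 0; k < 8; ++k)
                ref[2*r + c] += (int32_t)A[8*r + k] * (int32_t)B[8*c + k];

    int32_t out[4];
    vst1q_s32(out, acc);
    for (int i = 0; i < 4; ++i)
        printf("tile[%d] = %d (ref %d)\n", i, out[i], ref[i]);
    return 0;
}
```
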
Tested on a Neoverse N2 with a Llama 3 8B Q4_K_M quantized model:
- 34% to 50% S_PP (prompt processing) uplift across all batch sizes
- 12% to 37% S_TG (text generation) uplift for batch sizes of 4 and above

Perplexity doesn't change with this PR.

```
// tested on neoverse-n2
$ llama-batched-bench \
      -m Meta-Llama-3-8B-Instruct-Q4_K_M.gguf \
      --no-mmap -fa \
      -c 8192 -b 4096 -ub 512 -npp 128 -ntg 128 \
      -npl 1,2,4,8,16,32 \
      -t 64

---------------------------------------------------------------------
|    PP |     TG |    B |       S_PP t/s      |       S_TG t/s      |
|       |        |      | original |  this pr | original |  this pr |
|-------|--------|------|----------|----------|----------|----------|
|   128 |    128 |    1 |   110.12 |   147.83 |    24.36 |    24.28 |
|   128 |    128 |    2 |   121.16 |   172.42 |    46.36 |    47.93 |
|   128 |    128 |    4 |   120.15 |   169.75 |    74.68 |    84.00 |
|   128 |    128 |    8 |   130.97 |   196.81 |    91.04 |   114.74 |
|   128 |    128 |   16 |   131.01 |   196.88 |   101.43 |   135.79 |
|   128 |    128 |   32 |   130.85 |   196.51 |   106.97 |   147.29 |
---------------------------------------------------------------------
```
ggml/src/ggml-cpu/ggml-cpu-quants.c
ggml/src/ggml-cpu/ggml-cpu.c