CANN: Update several operators to support FP16 data format (#16251)
author    hipudding <redacted>
Mon, 13 Oct 2025 00:52:22 +0000 (08:52 +0800)
committer GitHub <redacted>
Mon, 13 Oct 2025 00:52:22 +0000 (08:52 +0800)
commit    f9bc66c3ebcfddb5f09e4b21253623caeb8e414a
tree      73d782b3bce0a61896ca0918b79c4728e440da03
parent    a31cf36ad946a13b3a646bf0dadf2a481e89f944

Many Ascend operators compute internally in FP16. If the input data is
FP32, it must first be cast to FP16 before computation and then cast
back to FP32 afterwards, which introduces unnecessary cast operations.
Moreover, FP16 computation requires significantly less work than FP32,
yielding noticeable efficiency gains.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5B model shows
correct accuracy and about a 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <redacted>
ggml/src/ggml-cann/aclnn_ops.cpp