cuda/cpu: Increase support for fp16 unary operations (ggml/1125)
author    cmdr2 <redacted>
          Fri, 28 Feb 2025 07:04:39 +0000 (12:34 +0530)
committer Georgi Gerganov <redacted>
          Mon, 3 Mar 2025 16:18:11 +0000 (18:18 +0200)
commit    87abb7e903635a2660e89f9336122f374d683e0a
tree      9a97cdca7406b34c844e9de5739728838fe44d0a
parent    6d4c23b81b03c7b089a5f7a21ca73d1385d37191

* Support fp16 unary operations in the CUDA backend

* cpu: increase fp16 support for unary operators in the CPU backend

* cuda: increase fp16 support for unary operators in the CUDA backend

* Add test cases for fp16 unary operators

* metal: update supports_op for unary operators that don't support fp16, to prevent test-backend-ops from failing

* metal: address PR comments on unary op support after the fp16 unary tests
ggml/src/ggml-cpu/ggml-cpu.c
ggml/src/ggml-cuda/clamp.cu
ggml/src/ggml-cuda/ggml-cuda.cu
ggml/src/ggml-cuda/unary.cu
ggml/src/ggml-cuda/unary.cuh
ggml/src/ggml-metal/ggml-metal.m
tests/test-backend-ops.cpp