git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
ggml : automatic selection of best CPU backend (#10606)
author Diego Devesa <redacted>
Sun, 1 Dec 2024 15:12:41 +0000 (16:12 +0100)
committer GitHub <redacted>
Sun, 1 Dec 2024 15:12:41 +0000 (16:12 +0100)
commit 3420909dffa50e70660524797a1e715a717684d2
tree 31e65b811b4225670207c5d07eda44ac56e0301c
parent 86dc11c5bcf34db2749d8bd8d4fa07a542c94f84
ggml : automatic selection of best CPU backend (#10606)

* ggml : automatic selection of best CPU backend

* amx : minor opt

* add GGML_AVX_VNNI to enable avx-vnni, fix checks
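The idea behind the commit (see the new `ggml/src/ggml-cpu/cpu-feats-x86.cpp`) is to probe the host CPU's features at load time and pick the best-matching CPU backend variant, instead of requiring the user to build for one fixed instruction set. The sketch below is illustrative only, not the actual ggml code: the feature enum, variant table, and `select_best` helper are hypothetical names, assuming a simple "highest score among supported variants" policy.

```cpp
// Hedged sketch of score-based CPU backend selection; not the real
// ggml implementation. Each backend variant declares the features it
// was compiled for; at runtime we keep only variants the host
// supports and pick the highest-scoring one.
#include <string>
#include <vector>

// Hypothetical feature bitmask; names echo the commit's build flags
// (e.g. GGML_AVX_VNNI), but this encoding is illustrative only.
enum cpu_feat : unsigned {
    FEAT_AVX2     = 1u << 0,
    FEAT_AVX_VNNI = 1u << 1,
    FEAT_AVX512   = 1u << 2,
    FEAT_AMX_INT8 = 1u << 3,
};

struct backend_variant {
    std::string name;
    unsigned    required; // features this variant needs on the host
    int         score;    // higher = preferred when supported
};

// Pick the highest-scoring variant whose required features are all
// present in the host feature mask; nullptr if none qualifies.
static const backend_variant *select_best(
        const std::vector<backend_variant> &variants,
        unsigned host_feats) {
    const backend_variant *best = nullptr;
    for (const auto &v : variants) {
        if ((host_feats & v.required) != v.required) {
            continue; // host lacks a required feature
        }
        if (!best || v.score > best->score) {
            best = &v;
        }
    }
    return best;
}
```

In a real build, the host feature mask would come from `CPUID` (which is what a file like `cpu-feats-x86.cpp` queries), and each variant would correspond to a separately compiled copy of the CPU backend.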
12 files changed:
.devops/llama-server.Dockerfile
CMakeLists.txt
Package.swift
ggml/CMakeLists.txt
ggml/src/ggml-backend-impl.h
ggml/src/ggml-backend-reg.cpp
ggml/src/ggml-cpu/CMakeLists.txt
ggml/src/ggml-cpu/amx/common.h
ggml/src/ggml-cpu/amx/mmq.cpp
ggml/src/ggml-cpu/cpu-feats-x86.cpp [new file with mode: 0644]
ggml/src/ggml-cpu/ggml-cpu-aarch64.c
scripts/build-cpu.sh [new file with mode: 0755]