From: Georgi Gerganov
Date: Sat, 25 Mar 2023 14:30:32 +0000 (+0200)
Subject: Remove obsolete information from README
X-Git-Tag: gguf-v0.4.0~1112
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=4a7129acd2e939b92d70dd568c746f2fa078232c;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Remove obsolete information from README
---

diff --git a/README.md b/README.md
index 0830074b..8a84324b 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
-- Apple silicon first-class citizen - optimized via ARM NEON
+- Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
 - 4-bit quantization support
@@ -323,14 +323,6 @@ or with light image:
 docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
 ```
 
-## Limitations
-
-- Probably the token sampling can be improved
-- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
-  know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
-  performance will be the same, since no BLAS calls are invoked by the current implementation
-
 ### Contributing
 
 - Contributors can open PRs