From: Georgi Gerganov
Date: Mon, 22 May 2023 14:57:21 +0000 (+0300)
Subject: readme : update Features
X-Git-Tag: upstream/0.0.1642~1448
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=6e064f69aa99e940108dbcfb3c18a3e5f0f964a5;p=pkg%2Fggml%2Fsources%2Fggml

readme : update Features
---

diff --git a/README.md b/README.md
index b578c183..6b190e49 100644
--- a/README.md
+++ b/README.md
@@ -2,8 +2,6 @@
 
 Tensor library for machine learning
 
-**⚠️ The quantization formats Q4 and Q8 have been updated: https://github.com/ggerganov/llama.cpp/pull/1508 - requantize any old models**
-
 ***Note that this project is under development and not ready for production use. \
 Some of the development is currently happening in the [llama.cpp](https://github.com/ggerganov/llama.cpp) and [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repos***
 
@@ -11,11 +9,11 @@ Some of the development is currently happening in the [llama.cpp](https://github
 
 - Written in C
 - 16-bit float support
-- 4-bit integer quantization support
-- Automatic differentiation (WIP in progress)
+- Integer quantization support (4-bit, 5-bit, 8-bit, etc.)
+- Automatic differentiation
 - ADAM and L-BFGS optimizers
-- Optimized for Apple silicon via NEON intrinsics and Accelerate framework
-- On x86 architectures utilzes AVX intrinsics
+- Optimized for Apple Silicon
+- On x86 architectures utilizes AVX / AVX2 intrinsics
 - No third-party dependencies
 - Zero memory allocations during runtime
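
For context (not part of the patch above): a minimal sketch of the kind of usage the updated feature list describes, in particular building a compute graph on top of a single upfront memory pool so that no allocations happen while the graph is evaluated. It assumes the 2023-era ggml C API (`ggml_init`, `ggml_new_tensor_1d`, `ggml_set_param`, `ggml_build_forward`, `ggml_graph_compute`); exact names and signatures may differ between ggml versions, so treat it as an illustration rather than a verbatim quote from the README.

```c
// Sketch: evaluate f(x) = a*x*x + b with ggml.
// All tensors and graph data live in one pre-allocated pool,
// so graph evaluation performs no further memory allocations.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        .mem_size   = 16*1024*1024,   // size of the single upfront pool
        .mem_buffer = NULL,           // let ggml allocate the pool itself
    };

    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);

    ggml_set_param(ctx, x); // mark x as a variable for automatic differentiation

    // define the computation graph: f = a*x*x + b
    struct ggml_tensor * x2 = ggml_mul(ctx, x, x);
    struct ggml_tensor * f  = ggml_add(ctx, ggml_mul(ctx, a, x2), b);

    struct ggml_cgraph gf = ggml_build_forward(f);

    // set input values and run the forward pass
    ggml_set_f32(x, 2.0f);
    ggml_set_f32(a, 3.0f);
    ggml_set_f32(b, 4.0f);

    ggml_graph_compute(ctx, &gf);

    printf("f = %f\n", ggml_get_f32_1d(f, 0)); // expected: 3*2*2 + 4 = 16

    ggml_free(ctx);
    return 0;
}
```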