From: Romain Neutron
Date: Tue, 30 Jan 2024 09:16:38 +0000 (+0100)
Subject: readme : minor (#5204)
X-Git-Tag: upstream/0.0.4488~2476
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=5589921ef84a4fb1c6d1c9c34d626a5a83033db6;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : minor (#5204)

This is about tuning the code formatting of the README file
---

diff --git a/README.md b/README.md
index 15e61baa..b37348a7 100644
--- a/README.md
+++ b/README.md
@@ -290,7 +290,7 @@ In order to build llama.cpp you have three different options.
     sudo pkg install gmake automake autoconf pkgconf llvm15 clinfo clover \
         opencl clblast openblas
 
-    gmake CC=/usr/local/bin/clang15 CXX=/usr/local/bin/clang++15 -j4
+    gmake CC=/usr/local/bin/clang15 CXX=/usr/local/bin/clang++15 -j4
     ```
 
     **Notes:** With this packages you can build llama.cpp with OPENBLAS and
@@ -613,9 +613,9 @@ Building the program with BLAS support may lead to some performance improvements
 # obtain the original LLaMA model weights and place them in ./models
 ls ./models
 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
-  # [Optional] for models using BPE tokenizers
-  ls ./models
-  65B 30B 13B 7B vocab.json
+# [Optional] for models using BPE tokenizers
+ls ./models
+65B 30B 13B 7B vocab.json
 
 # install Python dependencies
 python3 -m pip install -r requirements.txt
@@ -623,8 +623,8 @@ python3 -m pip install -r requirements.txt
 
 # convert the 7B model to ggml FP16 format
 python3 convert.py models/7B/
-  # [Optional] for models using BPE tokenizers
-  python convert.py models/7B/ --vocabtype bpe
+# [Optional] for models using BPE tokenizers
+python convert.py models/7B/ --vocabtype bpe
 
 # quantize the model to 4-bits (using q4_0 method)
 ./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0
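
The quantize step in the last hunk produces `./models/7B/ggml-model-q4_0.gguf`, which the same README section then runs with the `main` example binary. A minimal usage sketch, assuming a default build in the repository root; the prompt text and the `-n 128` token count are illustrative, not part of this commit:

```bash
# run inference on the quantized model (illustrative prompt and token count)
./main -m ./models/7B/ggml-model-q4_0.gguf -p "Once upon a time" -n 128
```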