git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
Somehow '**' got lost (#7663)
author Galunid <redacted>
Fri, 31 May 2024 08:24:41 +0000 (10:24 +0200)
committer GitHub <redacted>
Fri, 31 May 2024 08:24:41 +0000 (18:24 +1000)
README.md

index eeeb64919aeb03d44c02d57b7b97385b06edbf71..89b0fe0b01eb346e8514abd8172421b7846e1677 100644 (file)
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py` https://github.com/ggerganov/llama.cpp/pull/7430
+- **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
 - Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387