Add convert.py removal to hot topics (#7662)
author Galunid <redacted>
Fri, 31 May 2024 08:09:20 +0000 (10:09 +0200)
committer GitHub <redacted>
Fri, 31 May 2024 08:09:20 +0000 (10:09 +0200)
README.md

index 60e7aaf2c899ccaa2d89ad163c8807b7c6421764..eeeb64919aeb03d44c02d57b7b97385b06edbf71 100644 (file)
--- a/README.md
+++ b/README.md
@@ -22,7 +22,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021**
+- **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
+- Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
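For reference, a minimal sketch of the replacement conversion flow the new hot-topic entry points to, assuming a local Hugging Face model directory at `./my-model` (a hypothetical path); `--outfile` and `--outtype` are standard options of `convert-hf-to-gguf.py`:

```bash
# Old flow (deprecated, script moved to examples/convert-legacy-llama.py):
# python convert.py ./my-model

# New flow: convert a Hugging Face model directory to GGUF
python convert-hf-to-gguf.py ./my-model --outfile my-model-f16.gguf --outtype f16
```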