From: Jun Jie
Date: Thu, 4 Apr 2024 17:16:37 +0000 (+0800)
Subject: readme : fix typo (#6481)
X-Git-Tag: upstream/0.0.4488~1877
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=b660a5729e1e7508671d3d0515fd7efaeaeb85b9;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : fix typo (#6481)
---

diff --git a/README.md b/README.md
index 67692dee..76203ba9 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 - **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
-- Multi-GPU pipeline parallelizm support https://github.com/ggerganov/llama.cpp/pull/6017
+- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
 - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
 - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328