readme : update hot topics (#15315)
author Georgi Gerganov <redacted>
Thu, 14 Aug 2025 14:16:03 +0000 (17:16 +0300)
committer GitHub <redacted>
Thu, 14 Aug 2025 14:16:03 +0000 (17:16 +0300)
README.md

index 96e30050d3b8b54cb961e0e90a0e794d7debf7c4..11d92907862adf125fb2ad08f43d55b5c806ce44 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ LLM inference in C/C++
 
 ## Hot topics
 
+- **[[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)**
 - Support for the `gpt-oss` model with native MXFP4 format has been added | [PR](https://github.com/ggml-org/llama.cpp/pull/15091) | [Collaboration with NVIDIA](https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss) | [Comment](https://github.com/ggml-org/llama.cpp/discussions/15095)
 - Hot PRs: [All](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+) | [Open](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+is%3Aopen)
 - Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)