From: Georgi Gerganov
Date: Tue, 5 Aug 2025 17:19:33 +0000 (+0300)
Subject: readme : update hot topics (#15097)
X-Git-Tag: upstream/0.0.6164~70
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=be426425817bc3e6a2d91dae476dba6fa85894be;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : update hot topics (#15097)
---

diff --git a/README.md b/README.md
index 9b2e0f85..954fff83 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ LLM inference in C/C++
 
 ## Hot topics
 
+- Support for the `gpt-oss` model with native MXFP4 format has been added | [PR](https://github.com/ggml-org/llama.cpp/pull/15091) | [Collaboration with NVIDIA](https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss) | [Comment](https://github.com/ggml-org/llama.cpp/discussions/15095)
 - Hot PRs: [All](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+) | [Open](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+label%3Ahot+is%3Aopen)
 - Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)
 - VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
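
For context on the MXFP4 entry added above: MXFP4 is the OCP Microscaling 4-bit float format, in which a block of 32 FP4 (E2M1) values shares a single power-of-two E8M0 scale byte. The sketch below decodes one such block to `float` to illustrate the format; it is a minimal sketch only, the names (`mxfp4_block_to_f32`, `kE2M1`) are hypothetical, and the nibble packing shown (two consecutive elements per byte) is an assumption for illustration, not necessarily ggml's internal block layout.

```c
#include <math.h>
#include <stdint.h>

// The 8 non-negative E2M1 magnitudes (2 exponent bits, 1 mantissa bit);
// the fourth nibble bit is the sign.
static const float kE2M1[8] = {0.0f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f, 6.0f};

// Decode one MXFP4 block: q holds 16 bytes = 32 packed 4-bit codes,
// e is the shared E8M0 exponent byte (scale = 2^(e - 127)).
static void mxfp4_block_to_f32(const uint8_t q[16], uint8_t e, float out[32]) {
    const float scale = ldexpf(1.0f, (int) e - 127);
    for (int i = 0; i < 16; ++i) {
        const uint8_t lo = q[i] & 0x0F;  // assumed: element 2*i in low nibble
        const uint8_t hi = q[i] >> 4;    // assumed: element 2*i+1 in high nibble
        out[2*i + 0] = (lo & 8 ? -1.0f : 1.0f) * kE2M1[lo & 7] * scale;
        out[2*i + 1] = (hi & 8 ? -1.0f : 1.0f) * kE2M1[hi & 7] * scale;
    }
}
```

"Native" support here means weights quantized to this format can be used directly, rather than being converted to one of the pre-existing GGUF quantization types first.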