## Hot topics
+- **HuggingFace cache migration: models downloaded with `-hf` are now stored in the standard HuggingFace cache directory, enabling sharing with other HF tools (see the sketch after this list).**
- **[guide : using the new WebUI of llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/16938)**
- [guide : running gpt-oss with llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/15396)
- [[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)
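The "standard HuggingFace cache directory" follows the same resolution conventions as other HF tooling. Below is a minimal sketch of that resolution order, assuming the usual `HF_HUB_CACHE` and `HF_HOME` environment variables; `resolve_hf_hub_cache` is a hypothetical helper for illustration, not the `get_cache_directory()` used in the diff below, which may differ in details.

```cpp
// Sketch: resolve the HuggingFace hub cache location the way HF tooling does.
#include <cstdlib>
#include <filesystem>

namespace fs = std::filesystem;

static fs::path resolve_hf_hub_cache() { // hypothetical helper, not llama.cpp API
    // HF_HUB_CACHE points directly at the hub cache when set.
    if (const char * hub_cache = std::getenv("HF_HUB_CACHE")) {
        return fs::path(hub_cache);
    }
    // HF_HOME relocates the whole HuggingFace cache; the hub lives under hub/.
    if (const char * hf_home = std::getenv("HF_HOME")) {
        return fs::path(hf_home) / "hub";
    }
    // Default location shared with other HF tools.
    if (const char * home = std::getenv("HOME")) {
        return fs::path(home) / ".cache" / "huggingface" / "hub";
    }
    return {};
}
```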
return; // -hf is not going to work
}
+ bool warned = false; // print the migration notice only once, on the first file
+
for (const auto & entry : fs::directory_iterator(old_cache)) {
if (!entry.is_regular_file()) {
continue;
}
+ if (!warned) {
+ warned = true;
+ LOG_WRN("================================================================================\n"
+ "WARNING: Migrating cache to HuggingFace cache directory\n"
+ " Old cache: %s\n"
+ " New cache: %s\n"
+ "This one-time migration moves models previously downloaded with -hf\n"
+ "from the legacy llama.cpp cache to the standard HuggingFace cache.\n"
+ "Models downloaded with --model-url are not affected.\n"
+ "================================================================================\n",
+ old_cache.string().c_str(), get_cache_directory().string().c_str());
+ }
+
auto repo_id = owner + "/" + repo;
auto files = get_repo_files(repo_id, token);
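For reference, the hub cache lays each repo out as a `models--<owner>--<repo>` directory with files reachable under `snapshots/<revision>/`. A migrated file would end up at a path like the one built in this sketch; `hf_cache_path` is a hypothetical helper, and the PR's actual destination logic (which also has to match local files against the repo listing from `get_repo_files`) may differ.

```cpp
// Sketch: destination path for a migrated file, following the HuggingFace
// hub cache layout. The real layout also uses blobs/ (content-addressed
// files) and refs/, with snapshots/ entries as symlinks; this simplifies.
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

static fs::path hf_cache_path(const fs::path & cache_root,
                              const std::string & owner,
                              const std::string & repo,
                              const std::string & revision,
                              const std::string & filename) {
    const std::string repo_dir = "models--" + owner + "--" + repo;
    return cache_root / repo_dir / "snapshots" / revision / filename;
}
```

For example, `hf_cache_path(cache, "owner", "repo", "main", "model.gguf")` yields `<cache>/models--owner--repo/snapshots/main/model.gguf`, which is the layout other HF tools expect when they scan the shared cache.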