* common: custom hf endpoint support
Add support for custom Hugging Face endpoints via the HF_ENDPOINT environment variable.
You can now specify a custom Hugging Face endpoint with the HF_ENDPOINT environment variable when using the --hf-repo flag; this works similarly to huggingface-cli's endpoint configuration.
Example usage:
HF_ENDPOINT=https://hf-mirror.com/ ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
The trailing slash in the URL is optional:
HF_ENDPOINT=https://hf-mirror.com ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
* Update common/arg.cpp
Readability improvement.
Co-authored-by: Xuan-Son Nguyen <redacted>
* Apply suggestions from code review
---------
Co-authored-by: ベアトリーチェ <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
}
}
- // TODO: allow custom host
- model.url = "https://huggingface.co/" + model.hf_repo + "/resolve/main/" + model.hf_file;
-
+ std::string hf_endpoint = "https://huggingface.co/";
+ const char * hf_endpoint_env = getenv("HF_ENDPOINT");
+ if (hf_endpoint_env) {
+ hf_endpoint = hf_endpoint_env;
+ if (!hf_endpoint.empty() && hf_endpoint.back() != '/') hf_endpoint += '/';
+ }
+ model.url = hf_endpoint + model.hf_repo + "/resolve/main/" + model.hf_file;
// make sure model path is present (for caching purposes)
if (model.path.empty()) {
// this is to avoid different repo having same file name, or same file name in different subdirs