From: Daniel Bevenius
Date: Sun, 23 Jun 2024 13:39:45 +0000 (+0200)
Subject: Fix typo in llama_set_embeddings comment (#8077)
X-Git-Tag: upstream/0.0.4488~1281
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=11318d9aa1f668aa10407a5cb9614371af32f3ce;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Fix typo in llama_set_embeddings comment (#8077)
---

diff --git a/llama.h b/llama.h
index 05d8b092..53e06d9d 100644
--- a/llama.h
+++ b/llama.h
@@ -786,7 +786,7 @@ extern "C" {
     // Get the number of threads used for prompt and batch processing (multiple token).
     LLAMA_API uint32_t llama_n_threads_batch(struct llama_context * ctx);

-    // Set whether the model is in embeddings model or not
+    // Set whether the model is in embeddings mode or not
     // If true, embeddings will be returned but logits will not
     LLAMA_API void llama_set_embeddings(struct llama_context * ctx, bool embeddings);