From: makomk
Date: Sat, 13 Jan 2024 14:16:11 +0000 (+0000)
Subject: server : fix crash with multimodal models without BOS token (#4904)
X-Git-Tag: upstream/0.0.4488~2637
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=ee8243adaa9a9f51ff449213383874e49efe368f;p=pkg%2Fggml%2Fsources%2Fllama.cpp

server : fix crash with multimodal models without BOS token (#4904)
---

diff --git a/examples/server/server.cpp b/examples/server/server.cpp
index c1ab8f9d..7b33aea1 100644
--- a/examples/server/server.cpp
+++ b/examples/server/server.cpp
@@ -1835,7 +1835,7 @@ struct llama_server_context

                     slot.cache_tokens = prompt_tokens;

-                    if (slot.n_past == slot.num_prompt_tokens)
+                    if (slot.n_past == slot.num_prompt_tokens && slot.n_past > 0)
                     {
                         // we have to evaluate at least 1 token to generate logits.
                         LOG_TEE("slot %d : we have to evaluate at least 1 token to generate logits\n", slot.id);