From: Anthony Umfer
Date: Sun, 11 May 2025 15:08:26 +0000 (-0400)
Subject: tools : fix uninitialized llama_batch in server (#13436)
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=9a390c4829cd3058d26a2e2c09d16e3fd12bf1b1;p=pkg%2Fggml%2Fsources%2Fllama.cpp

tools : fix uninitialized llama_batch in server (#13436)

* add constructor to initialize server_context::batch, preventing destructor's call to llama_batch_free from causing an invalid free()

* Update tools/server/server.cpp

Co-authored-by: Xuan-Son Nguyen

* use C++11 initializer syntax

* switch from Copy-list-initialization to Direct-list-initialization

---------

Co-authored-by: Xuan-Son Nguyen
---

diff --git a/tools/server/server.cpp b/tools/server/server.cpp
index de8ded71..7169ffdc 100644
--- a/tools/server/server.cpp
+++ b/tools/server/server.cpp
@@ -1862,7 +1862,7 @@ struct server_context {
 
     llama_context_params cparams_dft;
 
-    llama_batch batch;
+    llama_batch batch {};
 
     bool clean_kv_cache = true;
     bool add_bos_token = true;
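
For context, a minimal sketch of the failure mode this commit addresses. The names `fake_batch`, `fake_batch_free`, and `server_context_sketch` are hypothetical stand-ins for `llama_batch`, `llama_batch_free`, and `server_context`; the point is only to show why value-initializing the member with `{}` (direct-list-initialization) makes the destructor's unconditional free safe even when the batch was never allocated.

```cpp
// Sketch only: hypothetical stand-ins, not the llama.cpp definitions.
#include <cstdlib>

struct fake_batch {            // stand-in for llama_batch: a POD holding raw pointers
    int   n_tokens;
    int * token;
};

static void fake_batch_free(fake_batch & b) {  // stand-in for llama_batch_free
    free(b.token);             // free(nullptr) is a no-op; free(garbage) is undefined behavior
}

struct server_context_sketch {
    // fake_batch batch;       // default-initialized: members hold indeterminate values
    fake_batch batch {};       // value-initialized: n_tokens == 0, token == nullptr

    ~server_context_sketch() {
        fake_batch_free(batch); // safe only because batch was zero-initialized above
    }
};

int main() {
    // If the server tears down before the batch is ever allocated (e.g. an
    // early error path), the destructor still runs; with {} this is harmless.
    server_context_sketch ctx;
    return 0;
}
```

The commit's later tweak from `llama_batch batch = {};` to `llama_batch batch {};` swaps copy-list-initialization for direct-list-initialization; both zero the members here, the latter is simply the preferred C++11 member-initializer form.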