From: Steve Grubb
Date: Sat, 11 May 2024 08:13:02 +0000 (-0400)
Subject: server : free llama_batch on exit (#7212)
X-Git-Tag: upstream/0.0.4488~1639
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=988631335a20d06497f58be0b8ba13adb4323a22;p=pkg%2Fggml%2Fsources%2Fllama.cpp

server : free llama_batch on exit (#7212)

* [server] Clean up a memory leak on exit

There are a couple of memory leaks on exit of the server, and this one
hides the others. After cleaning this up, you can see leaks on slots;
that is another patch, to be sent after this one.

* make tabs into spaces
---

diff --git a/examples/server/server.cpp b/examples/server/server.cpp
index 2bf4026d..55c1d412 100644
--- a/examples/server/server.cpp
+++ b/examples/server/server.cpp
@@ -673,6 +673,8 @@ struct server_context {
             llama_free_model(model);
             model = nullptr;
         }
+
+        llama_batch_free(batch);
     }

     bool load_model(const gpt_params & params_) {
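
For context, a minimal sketch of the lifecycle being fixed here: llama_batch_init()
heap-allocates the token/pos/seq_id arrays inside the batch, so every init must be
paired with a llama_batch_free(), just as llama_free_model() pairs with model
loading in the destructor above. The sizes passed below are illustrative values,
not the ones server.cpp actually uses.

    #include "llama.h"

    int main() {
        // Allocate a batch holding up to 512 tokens, no embeddings,
        // at most 1 sequence id per token (illustrative parameters).
        llama_batch batch = llama_batch_init(512, 0, 1);

        // ... fill batch.token / batch.pos / batch.seq_id and decode ...

        // Without this call, the arrays allocated by llama_batch_init()
        // leak -- the omission this commit fixes in ~server_context().
        llama_batch_free(batch);
        return 0;
    }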