tools : fix uninitialized llama_batch in server (#13436)

* add constructor to initialize server_context::batch, preventing destructor's call to llama_batch_free from causing an invalid free()

* Update tools/server/server.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* use C++11 initializer syntax

* switch from copy-list-initialization to direct-list-initialization

---------

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Author: Anthony Umfer
Date: 2025-05-11 11:08:26 -04:00 (committed by GitHub)
Parent: 09232370fc
Commit: 9a390c4829


```diff
@@ -1862,7 +1862,7 @@ struct server_context {
     llama_context_params cparams_dft;

-    llama_batch batch;
+    llama_batch batch {};

     bool clean_kv_cache = true;
     bool add_bos_token = true;
```