* server: cap n_predict to n_ctx_train if not set
* server: fix infinite loop
* server: infinite loop: move handling into process_token
* server: infinite loop: set stop limit to true
* minor: spaces
* minor: spaces
* server: include prompt tokens in the EOS limit
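
Taken together, these changes bound generation even when a request leaves n_predict unset. A minimal sketch of the two limits, assuming simplified names (`gen_state`, `effective_n_predict`, and `hit_stop_limit` are illustrative, not the actual server code):

```cpp
#include <cstdio>

// Illustrative per-slot state; the field names follow the commit messages
// above, but the struct itself is hypothetical.
struct gen_state {
    int n_predict       = -1; // -1: the client did not set n_predict
    int n_prompt_tokens = 0;  // tokens consumed by the prompt
    int n_decoded       = 0;  // tokens generated so far
};

// Cap an unset n_predict to n_ctx_train so a model that never emits EOS
// cannot generate forever.
int effective_n_predict(const gen_state & st, int n_ctx_train) {
    return st.n_predict < 0 ? n_ctx_train : st.n_predict;
}

// Conceptual stop check run per token (the kind of logic the commits move
// into process_token): with an explicit n_predict only generated tokens
// count; when unset, the cap is n_ctx_train and prompt tokens count
// against it too (the "EOS limit" in the last commit).
bool hit_stop_limit(const gen_state & st, int n_ctx_train) {
    if (st.n_predict >= 0) {
        return st.n_decoded >= st.n_predict;
    }
    return st.n_prompt_tokens + st.n_decoded >= effective_n_predict(st, n_ctx_train);
}

int main() {
    gen_state st;                 // n_predict left unset by the request
    st.n_prompt_tokens = 4000;
    st.n_decoded       = 96;
    // 4000 + 96 >= 4096: the capped limit forces generation to stop.
    printf("stop: %s\n", hit_stop_limit(st, /*n_ctx_train=*/4096) ? "yes" : "no");
    return 0;
}
```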