tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git)
llama.cpp/examples/server/tests/unit at commit e52522b8694ae73abf12feb18d29168674aa1c1b
Latest commit: e52522b869 by Xuan Son Nguyen
server : bring back info of final chunk in stream mode (#10722)

* server : bring back info to final chunk in stream mode
* clarify a bit
* trailing space

2024-12-08 20:38:51 +01:00
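The commit above restores per-request summary information (e.g. timings) on the closing chunk of a streamed completion. As a rough illustration of the behavior the tests in this directory exercise, the sketch below streams a completion from a locally running llama.cpp server and inspects the final SSE chunk. It is not code from this directory; the port, the `n_predict` value, and the `stop`/`timings` field names are assumptions inferred from the commit title and the server's /completion API.

import json
import requests

def last_stream_chunk(prompt: str) -> dict:
    """Return the final SSE chunk of a streamed /completion response."""
    res = requests.post(
        "http://localhost:8080/completion",  # assumes a server already running locally
        json={"prompt": prompt, "n_predict": 16, "stream": True},
        stream=True,
        timeout=60,
    )
    res.raise_for_status()
    last = {}
    for line in res.iter_lines():
        # SSE frames look like: b'data: {"content": ...}'
        if line.startswith(b"data: "):
            last = json.loads(line[len(b"data: "):])
    return last

chunk = last_stream_chunk("Hello")
assert chunk.get("stop") is True  # closing chunk of the stream
assert "timings" in chunk         # summary info restored by #10722 (assumed field name)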
File                      Last commit                                                        Date
test_basic.py             server : (refactor) no more json in server_task input (#10691)    2024-12-07 20:21:09 +01:00
test_chat_completion.py   server : (refactor) no more json in server_task input (#10691)    2024-12-07 20:21:09 +01:00
test_completion.py        server : bring back info of final chunk in stream mode (#10722)   2024-12-08 20:38:51 +01:00
test_ctx_shift.py         …
test_embedding.py         …
test_infill.py            server : add more test cases (#10569)                             2024-11-29 21:48:56 +01:00
test_lora.py              …
test_rerank.py            server : add more test cases (#10569)                             2024-11-29 21:48:56 +01:00
test_security.py          …
test_slot_save.py         …
test_speculative.py       server : fix speculative decoding with context shift (#10641)     2024-12-04 22:38:20 +02:00
test_tokenize.py          …