tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 20:53:00 -04:00
llama.cpp/examples/server/tests/unit
Files at commit e51c47b401f8cb5f21630a05171e2529cde4d186
Latest commit cf8cc856d7 (peidaqi): server : Fixed wrong function name in llamacpp server unit test (#11473)
The test_completion_stream_with_openai_library() function was actually running with stream=False by default, while test_completion_with_openai_library() ran with stream=True; the two test names were swapped.
2025-01-29 00:03:42 +01:00
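For context, here is a minimal sketch of the behavior the renamed tests are meant to cover, using the official openai Python client against the server's OpenAI-compatible /v1/completions endpoint. The base URL, port, model name, and helper name are illustrative assumptions, not the actual test code:

```python
# Minimal sketch (not the actual test file): exercising the llama.cpp server's
# OpenAI-compatible completion endpoint with the openai Python library.
# base_url, port, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")


def completion_with_openai_library(stream: bool) -> str:
    """With stream=True this mirrors what the renamed
    test_completion_stream_with_openai_library() is meant to cover."""
    if stream:
        chunks = client.completions.create(
            model="gpt-3.5-turbo",  # the server typically accepts any model name
            prompt="I believe the meaning of life is",
            max_tokens=8,
            stream=True,
        )
        # Streaming returns chunks; concatenate the text deltas.
        return "".join(c.choices[0].text or "" for c in chunks)
    res = client.completions.create(
        model="gpt-3.5-turbo",
        prompt="I believe the meaning of life is",
        max_tokens=8,
        stream=False,
    )
    return res.choices[0].text
```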
| File | Last commit | Date |
| --- | --- | --- |
| test_basic.py | server : add flag to disable the web-ui (#10762) (#10751) | 2024-12-10 18:22:34 +01:00 |
| test_chat_completion.py | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00 |
| test_completion.py | server : Fixed wrong function name in llamacpp server unit test (#11473) | 2025-01-29 00:03:42 +01:00 |
| test_ctx_shift.py | … | … |
| test_embedding.py | server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967) (example below) | 2024-12-24 21:33:04 +01:00 |
| test_infill.py | server : fix extra BOS in infill endpoint (#11106) | 2025-01-06 15:36:08 +02:00 |
| test_lora.py | server : allow using LoRA adapters per-request (#10994) (example below) | 2025-01-02 15:05:18 +01:00 |
| test_rerank.py | server : fill usage info in embeddings and rerank responses (#10852) | 2024-12-17 18:00:24 +02:00 |
| test_security.py | … | … |
| test_slot_save.py | … | … |
| test_speculative.py | server : allow using LoRA adapters per-request (#10994) | 2025-01-02 15:05:18 +01:00 |
| test_tokenize.py | … | … |
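The test_embedding.py entry references #10967, which lets */embeddings clients request "encoding_format": "base64" so the vector comes back as a packed base64 string rather than a JSON array of numbers. A minimal sketch, assuming the OpenAI-compatible /v1/embeddings endpoint on localhost:8080; the port and model name are illustrative, and newer openai client versions may decode the base64 payload for you:

```python
# Minimal sketch: request a base64-encoded embedding and decode it by hand.
import base64
import struct

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")

res = client.embeddings.create(
    model="whatever",          # the server typically ignores the model name
    input="hello world",
    encoding_format="base64",  # the option covered by #10967
)

# With encoding_format="base64", the embedding arrives as a base64 string of
# little-endian float32 values instead of a JSON array of numbers.
raw = base64.b64decode(res.data[0].embedding)
vector = struct.unpack(f"<{len(raw) // 4}f", raw)
print(len(vector), vector[:4])
```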
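test_lora.py and test_speculative.py both point at #10994, which allows LoRA adapters to be enabled and scaled per request instead of only globally at server startup. A sketch under the assumption that the request body takes a "lora" list of {id, scale} objects on the server's native /completion endpoint, with ids matching the order of --lora flags at startup; the exact field names are inferred from the PR title, not verified here:

```python
# Minimal sketch: per-request LoRA scaling on the native /completion endpoint.
# The "lora" request field and its {id, scale} shape are an assumption based
# on PR #10994, not a verified schema.
import requests

res = requests.post(
    "http://localhost:8080/completion",     # native (non-OpenAI) endpoint
    json={
        "prompt": "I believe the meaning of life is",
        "n_predict": 8,
        "lora": [{"id": 0, "scale": 0.5}],  # apply adapter 0 at half strength
    },
    timeout=60,
)
print(res.json()["content"])
```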