tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-07-05 02:23:54 +00:00
llama.cpp/examples/server/tests/unit (at commit 69804487e0b10f2c5c06316f0ac0eb6ada68433f)
Latest commit: Olivier Chafik 4a2b196d03 server : fix --jinja when there's no tools or schema (typo was forcing JSON) (#11531) (2025-01-31 10:12:40 +02:00)
test_basic.py
    server : add flag to disable the web-ui (#10762) (#10751)  (2024-12-10 18:22:34 +01:00)
test_chat_completion.py
    server : fix --jinja when there's no tools or schema (typo was forcing JSON) (#11531)  (2025-01-31 10:12:40 +02:00)
test_completion.py
    server : Fixed wrong function name in llamacpp server unit test (#11473)  (2025-01-29 00:03:42 +01:00)
test_ctx_shift.py
    …
test_embedding.py
    server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)  (2024-12-24 21:33:04 +01:00)
test_infill.py
    server : fix extra BOS in infill endpoint (#11106)  (2025-01-06 15:36:08 +02:00)
test_lora.py
    server : allow using LoRA adapters per-request (#10994)  (2025-01-02 15:05:18 +01:00)
test_rerank.py
    server : fill usage info in embeddings and rerank responses (#10852)  (2024-12-17 18:00:24 +02:00)
test_security.py
    …
test_slot_save.py
    …
test_speculative.py
    server : allow using LoRA adapters per-request (#10994)  (2025-01-02 15:05:18 +01:00)
test_tokenize.py
    …
test_tool_call.py
    Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639)  (2025-01-30 19:13:58 +00:00)
Powered by Gitea Version: 1.24.1