tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-28 21:23:55 -04:00)
Commit: 2bd1b30f6979235ec67b95c183b9b77baa7ab9ce
Path: llama.cpp/tools/server/tests/unit

History
Latest commit: 9ecf3e66a3 by Xuan-Son Nguyen, "server : support audio input (#13714)", 2025-05-23 11:03:47 +02:00
    * server : support audio input
    * add audio support on webui
test_basic.py             …
test_chat_completion.py   server : fix first message identification (#13634)                             2025-05-21 15:07:57 +02:00
test_completion.py        server : fix cache_tokens bug with no cache_prompt (#13533)                    2025-05-14 13:35:07 +02:00
test_ctx_shift.py         server : do not return error out of context (with ctx shift disabled) (#13577) 2025-05-16 21:50:00 +02:00
test_embedding.py         …
test_infill.py            …
test_lora.py              …
test_rerank.py            …
test_security.py          …
test_slot_save.py         …
test_speculative.py       …
test_template.py          server: inject date_string in llama 3.x template + fix date for firefunction v2 (#12802)  2025-05-15 02:39:51 +01:00
test_tokenize.py          …
test_tool_call.py         server: inject date_string in llama 3.x template + fix date for firefunction v2 (#12802)  2025-05-15 02:39:51 +01:00
test_vision_api.py        server : support audio input (#13714)                                          2025-05-23 11:03:47 +02:00