tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-29 13:43:38 -04:00
llama.cpp/examples/server/tests/unit (at commit a813badbbdf0d38705f249df7a0c99af5cdee678)
Latest commit: 9ba399dfa7 by Reza Kakhki, server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)

* add support for base64
* fix base64 test
* improve test

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-12-24 21:33:04 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| test_basic.py | server : add flag to disable the web-ui (#10762) (#10751) | 2024-12-10 18:22:34 +01:00 |
| test_chat_completion.py | server : add system_fingerprint to chat/completion (#10917) | 2024-12-23 12:02:44 +01:00 |
| test_completion.py | ggml : more perfo with llamafile tinyblas on x86_64 (#10714) | 2024-12-24 18:54:49 +01:00 |
| test_ctx_shift.py | … | … |
| test_embedding.py | server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967) | 2024-12-24 21:33:04 +01:00 |
| test_infill.py | server : fix format_infill (#10724) | 2024-12-08 23:04:29 +01:00 |
| test_lora.py | … | … |
| test_rerank.py | server : fill usage info in embeddings and rerank responses (#10852) | 2024-12-17 18:00:24 +02:00 |
| test_security.py | … | … |
| test_slot_save.py | … | … |
| test_speculative.py | server : fix speculative decoding with context shift (#10641) | 2024-12-04 22:38:20 +02:00 |
| test_tokenize.py | … | … |