tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 12:42:40 -04:00
tools/server/tests/unit at commit d785f9c1fd1a1929c6d0e2a0b12cae5db867908b
Latest commit: d785f9c1fd "server: fix/test add_generation_prompt (#13770)" by Olivier Chafik (Co-authored-by: ochafik <ochafik@google.com>), 2025-05-25 10:45:49 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| test_basic.py | … | |
| test_chat_completion.py | server : streaming of tool calls and thoughts when --jinja is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| test_completion.py | server : fix cache_tokens bug with no cache_prompt (#13533) | 2025-05-14 13:35:07 +02:00 |
| test_ctx_shift.py | server : do not return error out of context (with ctx shift disabled) (#13577) | 2025-05-16 21:50:00 +02:00 |
| test_embedding.py | … | |
| test_infill.py | … | |
| test_lora.py | … | |
| test_rerank.py | … | |
| test_security.py | … | |
| test_slot_save.py | … | |
| test_speculative.py | … | |
| test_template.py | server: fix/test add_generation_prompt (#13770) | 2025-05-25 10:45:49 +01:00 |
| test_tokenize.py | … | |
| test_tool_call.py | server : streaming of tool calls and thoughts when --jinja is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| test_vision_api.py | server : support audio input (#13714) | 2025-05-23 11:03:47 +02:00 |
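The streaming behaviour that test_chat_completion.py and test_tool_call.py cover (#12379) runs against the server's OpenAI-compatible /v1/chat/completions endpoint. Below is a minimal sketch of such a streaming request, not taken from the test suite itself: it assumes a llama-server instance already running at the default 127.0.0.1:8080, and the prompt text is illustrative.

```python
# Minimal sketch (assumption, not from the tests): stream a chat completion
# from a locally running llama-server over its OpenAI-compatible SSE endpoint.
# Assumes the server was started along the lines of:
#   llama-server -m model.gguf --jinja
import json
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # assumed default llama-server address
    json={
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,
    timeout=60,
)
resp.raise_for_status()

# Each SSE line looks like `data: {...chunk...}`; the stream ends with `data: [DONE]`.
for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    delta = json.loads(payload)["choices"][0]["delta"]
    print(delta.get("content") or "", end="", flush=True)
print()
```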