tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-07 23:12:56 +00:00
llama.cpp / examples / server / tests / unit (at commit fd123cfead49eb32e386e26b8ef7a6d41554dda5)
Latest commit: be421fc429 by Olivier Chafik, "tool-call : ensure there's always a non-empty tool call id (#12292)", 2025-03-10 09:45:29 +00:00
File | Last commit | Date
test_basic.py | … |
test_chat_completion.py | server : fix deadly typo in response_format.json_schema.schema handling (#12168) | 2025-03-04 08:24:07 +02:00
test_completion.py | server : Fixed wrong function name in llamacpp server unit test (#11473) | 2025-01-29 00:03:42 +01:00
test_ctx_shift.py | … |
test_embedding.py | … |
test_infill.py | server : fix extra BOS in infill endpoint (#11106) | 2025-01-06 15:36:08 +02:00
test_lora.py | server : allow using LoRA adapters per-request (#10994) | 2025-01-02 15:05:18 +01:00
test_rerank.py | server : add TEI API format for /rerank endpoint (#11942) | 2025-02-18 14:21:41 +01:00
test_security.py | … |
test_slot_save.py | … |
test_speculative.py | server : allow using LoRA adapters per-request (#10994) | 2025-01-02 15:05:18 +01:00
test_tokenize.py | … |
test_tool_call.py | tool-call : ensure there's always a non-empty tool call id (#12292) | 2025-03-10 09:45:29 +00:00