tqcq / llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-26 10:09:41 -04:00)
Path: llama.cpp/examples/server/tests/features
Commit: 83330d8cd6491e53e1aca4c5dfc47e039b3c04ff

Latest commit: 911b3900dd by Johan, "server : add_special option for tokenize endpoint (#7059)", 2024-05-08 15:27:58 +03:00
Name                  Last commit                                                                     Date
..
steps                 server : add_special option for tokenize endpoint (#7059)                       2024-05-08 15:27:58 +03:00
embeddings.feature    …
environment.py        …
issues.feature        server: tests: passkey challenge / self-extend with context shift demo (#5832)  2024-03-02 22:00:14 +01:00
parallel.feature      common: llama_load_model_from_url split support (#6192)                          2024-03-23 18:07:00 +01:00
passkey.feature       …
results.feature       …
security.feature      json-schema-to-grammar improvements (+ added to server) (#5978)                 2024-03-21 11:50:43 +00:00
server.feature        server : add_special option for tokenize endpoint (#7059)                       2024-05-08 15:27:58 +03:00
slotsave.feature      …
wrong_usages.feature  …