tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-30 22:23:31 -04:00
Files at commit 0d2ec438330271d201c2e9224aca23d0d5c908bf: llama.cpp/examples/server/tests/features
Latest commit: Georgi Gerganov, 6262d13e0b, "common : reimplement logging (#9418)", 2024-09-15 20:46:12 +03:00
https://github.com/ggerganov/llama.cpp/pull/9418
..
steps/                 common : reimplement logging (#9418)                                       2024-09-15 20:46:12 +03:00
embeddings.feature     llama : sanitize invalid tokens (#9357)                                    2024-09-08 00:33:13 +03:00
environment.py         …
issues.feature         …
lora.feature           server : add lora hotswap endpoint (WIP) (#8857)                           2024-08-06 17:33:39 +02:00
parallel.feature       server : simplify state machine for slot (#9283)                           2024-09-06 23:21:29 +02:00
passkey.feature        server : simplify state machine for slot (#9283)                           2024-09-06 23:21:29 +02:00
results.feature        …
security.feature       …
server.feature         server : Add option to return token pieces in /tokenize endpoint (#9108)   2024-09-12 22:30:11 +02:00
slotsave.feature       Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)               2024-05-21 14:39:48 +02:00
wrong_usages.feature   server : refactor multitask handling (#9274)                               2024-09-02 17:11:51 +02:00