tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-05 00:25:26 -04:00
Files at commit 99bd4ac28c32cd17c0e337ff5601393b033dc5fc
Path: llama.cpp/examples/server/tests/features
Latest commit: 1bde94dd02 by Georgi Gerganov, 2024-10-12 16:06:31 +03:00
server : remove self-extend features (#9860)
Commit message:
* server : remove self-extend
* server : fix context limit check to use slot.n_past
(ggml-ci)
Name                   Last commit                                                                Date
steps/                 server : better security control for public deployments (#9776)           2024-10-08 13:27:04 +02:00
ctx_shift.feature      server : remove self-extend features (#9860)                              2024-10-12 16:06:31 +03:00
embeddings.feature     llama : add reranking support (#9510)                                     2024-09-28 17:42:03 +03:00
environment.py         …
issues.feature         …
lora.feature           server : add lora hotswap endpoint (WIP) (#8857)                          2024-08-06 17:33:39 +02:00
parallel.feature       server : simplify state machine for slot (#9283)                          2024-09-06 23:21:29 +02:00
passkey.feature        server : simplify state machine for slot (#9283)                          2024-09-06 23:21:29 +02:00
rerank.feature         llama : add reranking support (#9510)                                     2024-09-28 17:42:03 +03:00
results.feature        server : fix temperature + disable some tests (#7409)                     2024-05-20 22:10:03 +10:00
security.feature       server : better security control for public deployments (#9776)           2024-10-08 13:27:04 +02:00
server.feature         server : Add option to return token pieces in /tokenize endpoint (#9108)  2024-09-12 22:30:11 +02:00
slotsave.feature       Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)              2024-05-21 14:39:48 +02:00
wrong_usages.feature   server : refactor multitask handling (#9274)                              2024-09-02 17:11:51 +02:00
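
These .feature files are Gherkin scenarios for the llama.cpp HTTP server test suite; at this point in the repo's history they were driven by the Python behave framework, with step implementations living under steps/. As a rough sketch of what such a scenario looks like (the step phrasings and parameters below are illustrative assumptions, not copied from the repo):

    # Illustrative Gherkin sketch; the actual step wording is defined in steps/
    Feature: llama.cpp server

      Background: Server startup
        Given a server listening on localhost:8080   # hypothetical step
        Then the server is healthy                   # hypothetical step

      Scenario: Completion
        Given a prompt "Once upon a time"
        When the completion request is sent
        Then tokens are predicted

With behave installed, the suite would typically be run from examples/server/tests; the README in that directory documents the exact invocation.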