tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-28 04:15:21 +00:00
Branch: gg/llama-refactor-sampling
Path: llama.cpp/examples/server/tests/features
Latest commit: Xuan Son Nguyen, 48baa61ecc, server : test script : add timeout for all requests (#9282), 2024-09-02 22:08:38 +02:00
steps/
    server : test script : add timeout for all requests (#9282), 2024-09-02 22:08:38 +02:00
embeddings.feature
    Improve usability of --model-url & related flags (#6930), 2024-04-30 00:52:50 +01:00
environment.py
    server tests : more pythonic process management; fix bare except: (#6146), 2024-03-20 06:33:49 +01:00
issues.feature
    server: tests: passkey challenge / self-extend with context shift demo (#5832), 2024-03-02 22:00:14 +01:00
lora.feature
    server : add lora hotswap endpoint (WIP) (#8857), 2024-08-06 17:33:39 +02:00
parallel.feature
    server : test script : add timeout for all requests (#9282), 2024-09-02 22:08:38 +02:00
passkey.feature
    Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258), 2024-07-02 12:18:10 -04:00
results.feature
    server : fix temperature + disable some tests (#7409), 2024-05-20 22:10:03 +10:00
security.feature
    json-schema-to-grammar improvements (+ added to server) (#5978), 2024-03-21 11:50:43 +00:00
server.feature
    json: fix additionalProperties, allow space after enum/const (#7840), 2024-06-26 01:45:58 +01:00
slotsave.feature
    Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425), 2024-05-21 14:39:48 +02:00
wrong_usages.feature
    server : refactor multitask handling (#9274), 2024-09-02 17:11:51 +02:00