tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-07-04 18:16:58 +00:00
Files
83b72cb086ce46a33dececc86bfe4648b6120aa8
llama.cpp/examples/server/tests/features
Latest commit: 28103f4832 by Johannes Gäßler, 2024-04-24 11:08:36 +02:00
Server: fix seed for multiple slots (#6835)

* Server: add tests for consistent results
* sampling: separate rng per sampling context
steps                  Server: fix seed for multiple slots (#6835)                                2024-04-24 11:08:36 +02:00
embeddings.feature     common: llama_load_model_from_url using --model-url (#6098)                2024-03-17 19:12:37 +01:00
environment.py         server tests : more pythonic process management; fix bare except: (#6146)  2024-03-20 06:33:49 +01:00
issues.feature         …
parallel.feature       common: llama_load_model_from_url split support (#6192)                    2024-03-23 18:07:00 +01:00
passkey.feature        …
results.feature        Server: fix seed for multiple slots (#6835)                                2024-04-24 11:08:36 +02:00
security.feature       json-schema-to-grammar improvements (+ added to server) (#5978)            2024-03-21 11:50:43 +00:00
server.feature         common: llama_load_model_from_url split support (#6192)                    2024-03-23 18:07:00 +01:00
slotsave.feature       llama : save and restore kv cache for single seq id (#6341)                2024-04-08 15:43:30 +03:00
wrong_usages.feature   …