tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-13 06:23:34 +00:00
llama.cpp/examples/server/tests/features
(at commit 948f4ec7c5bff92b18e63303f2b2d1645bccd943)
Latest commit: e586ee4259 by Benjamin Findley, 2024-05-13 12:40:08 +10:00
change default temperature of OAI compat API from 0 to 1 (#7226)

* change default temperature of OAI compat API from 0 to 1
* make tests explicitly send temperature to OAI API
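The tests in this directory drive a llama.cpp server over HTTP. A minimal sketch of what "make tests explicitly send temperature" means in practice, assuming a server running locally on port 8080; the URL, port, and model name here are placeholders, not values taken from the tests themselves:

```python
# Sketch: call the OAI-compatible chat endpoint with an explicit temperature.
# After #7226 the server-side default is 1.0 (matching OpenAI), so a test that
# wants deterministic greedy sampling must pass temperature itself.
import requests

payload = {
    "model": "default",  # placeholder; a single-model server ignores this
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.0,  # explicit: do not rely on the new 1.0 default
}
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```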
Name                  Last commit                                                        Date
steps/                change default temperature of OAI compat API from 0 to 1 (#7226)  2024-05-13 12:40:08 +10:00
embeddings.feature    Improve usability of --model-url & related flags (#6930)          2024-04-30 00:52:50 +01:00
environment.py        …
issues.feature        …
parallel.feature      common: llama_load_model_from_url split support (#6192)           2024-03-23 18:07:00 +01:00
passkey.feature       …
results.feature       Server: add tests for batch size, different seeds (#6950)         2024-05-01 17:52:55 +02:00
security.feature      json-schema-to-grammar improvements (+ added to server) (#5978)   2024-03-21 11:50:43 +00:00
server.feature        server : add_special option for tokenize endpoint (#7059)         2024-05-08 15:27:58 +03:00
slotsave.feature      llama : save and restore kv cache for single seq id (#6341)       2024-04-08 15:43:30 +03:00
wrong_usages.feature  …
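Among the entries above, server.feature covers the add_special option that #7059 added to the server's /tokenize endpoint; the flag controls whether special tokens (such as BOS) are included in the result. A minimal sketch of exercising it, again assuming a local server on port 8080:

```python
# Sketch: tokenize text with and without special tokens via /tokenize (#7059).
import requests

def tokenize(text: str, add_special: bool) -> list[int]:
    resp = requests.post(
        "http://localhost:8080/tokenize",
        json={"content": text, "add_special": add_special},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["tokens"]

print(tokenize("Hello world", add_special=False))  # raw tokenization
print(tokenize("Hello world", add_special=True))   # with BOS etc., model permitting
```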