server : tuning tests (#7388)
* server : don't pass temperature as string

* server : increase timeout

* tests : fix the fix 0.8f -> 0.8

ggml-ci

* tests : set explicit temperature
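For context, a minimal sketch of what "don't pass temperature as string" means on the client side of the server tests: sampling parameters go into the request body as JSON numbers rather than quoted strings. The host, port, prompt, and exact field values below are illustrative assumptions, not the actual test code.

```python
import requests

# Sketch of a completion request to the llama.cpp server; host/port and
# prompt are placeholders. The key point is that "temperature" is a JSON
# number, not "0.8" (a string) or "0.8f".
payload = {
    "prompt": "Write a short story about a llama.",
    "n_predict": 128,
    "seed": 42,
    "temperature": 0.8,  # numeric, not a string
}

response = requests.post("http://localhost:8080/completion", json=payload, timeout=30)
response.raise_for_status()
print(response.json().get("content", ""))
```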
@@ -13,6 +13,7 @@ Feature: Results

   Scenario Outline: consistent results with same seed
     Given <n_slots> slots
+    And 0.0 temperature
     Then the server is starting
     Then the server is healthy

@@ -30,6 +31,7 @@ Feature: Results

   Scenario Outline: different results with different seed
     Given <n_slots> slots
+    And 1.0 temperature
     Then the server is starting
     Then the server is healthy

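The added `And 0.0 temperature` / `And 1.0 temperature` lines are Gherkin steps handled by the behave step definitions in the server test suite. Below is a minimal sketch of how such a step could be wired up; the parse pattern and the `context.temperature` attribute name are assumptions for illustration, not a copy of the real steps.py.

```python
# Sketch of a behave step backing the "And <value> temperature" lines above.
from behave import step


@step('{temperature:f} temperature')
def step_temperature(context, temperature):
    # Keep the value as a float so it is later serialized as a JSON number
    # in the completion request (cf. "don't pass temperature as string").
    context.temperature = temperature
```

With a step like this, the scenarios can pin sampling explicitly: 0.0 for the same-seed consistency check and 1.0 for the different-seed check.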