server : passthrough the /models endpoint during loading (#13535)

* server : passthrough the /models endpoint during loading

* server : update readme + return json for "meta" field
Author: Georgi Gerganov
Date: 2025-05-14 15:42:10 +03:00
Committed by: GitHub
Commit: 053174436f (parent: 360a9c98e1)

2 changed files with 12 additions and 3 deletions


@@ -1040,7 +1040,7 @@ To know the `id` of the adapter, use GET `/lora-adapters`
 Returns information about the loaded model. See [OpenAI Models API documentation](https://platform.openai.com/docs/api-reference/models).
-The returned list always has one single element.
+The returned list always has one single element. The `meta` field can be `null` (for example, while the model is still loading).
 By default, the model `id` field is the path to the model file, specified via `-m`. You can set a custom value for the model `id` field via the `--alias` argument. For example, `--alias gpt-4o-mini`.
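For illustration, a minimal sketch of exercising this endpoint follows. The host/port, model path, timestamp, and the exact response fields shown are assumptions based on the description above, not output captured from the server.

```sh
# Query the models endpoint; after this change it responds even while the model is still loading.
curl http://localhost:8080/models

# Illustrative response (assumed shape): "id" is the path passed via -m unless --alias was given,
# and "meta" is null until the model has finished loading.
# {
#   "object": "list",
#   "data": [
#     {
#       "id": "models/ggml-model.gguf",
#       "object": "model",
#       "created": 1715680930,
#       "owned_by": "llamacpp",
#       "meta": null
#     }
#   ]
# }
```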