tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-14 12:19:48 -04:00)
llama.cpp/examples/server/tests/features/steps at commit 76e868821a94072fbc87cb1fcca291694319eae8
Latest commit: 76e868821a by Pierrick Hymbert, 2024-03-08 12:25:04 +01:00
server: metrics: add llamacpp:prompt_seconds_total and llamacpp:tokens_predicted_seconds_total; reset bucket only on /metrics; fix values cast to int; add Process-Start-Time-Unix header. (#5937) Closes #5850
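The counters named in the commit message follow the Prometheus text exposition format served at the /metrics endpoint. As a minimal sketch of how a client might read them, the snippet below parses unlabeled counter samples out of a hand-written sample payload; the metric names come from the commit message, while the sample values and the `parse_counters` helper are illustrative assumptions, not part of llama.cpp.

```python
import re

# Hypothetical /metrics payload; metric names are from the commit message,
# the numeric values are made up for illustration.
SAMPLE = """\
# HELP llamacpp:prompt_seconds_total Prompt process time
# TYPE llamacpp:prompt_seconds_total counter
llamacpp:prompt_seconds_total 12.5
# HELP llamacpp:tokens_predicted_seconds_total Predict process time
# TYPE llamacpp:tokens_predicted_seconds_total counter
llamacpp:tokens_predicted_seconds_total 34.0
"""

def parse_counters(text):
    """Return {metric_name: float_value} for plain (unlabeled) samples,
    skipping Prometheus comment lines (# HELP / # TYPE)."""
    counters = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        m = re.match(r"^(\S+)\s+(\S+)$", line)
        if m:
            counters[m.group(1)] = float(m.group(2))
    return counters

counters = parse_counters(SAMPLE)
print(counters["llamacpp:prompt_seconds_total"])            # → 12.5
print(counters["llamacpp:tokens_predicted_seconds_total"])  # → 34.0
```

In a real deployment the payload would come from an HTTP GET of the server's /metrics endpoint, and the Process-Start-Time-Unix response header added by this commit could be read from that same response.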
Files:
..
steps.py (last changed by the commit above, 2024-03-08 12:25:04 +01:00)