tqcq/llama.cpp, a mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-06-27 03:55:20 +00:00)
requirements/requirements-compare-llama-bench.txt (branch: master, 4 lines, 53 B, plaintext)
History (blame), with the file lines each commit introduced:

py : type-check all Python scripts with Pyright (#8341)  (2024-07-07 15:04:39 -04:00)

* py : type-check all Python scripts with Pyright
* server-tests : use trailing slash in openai base_url
* server-tests : add more type annotations
* server-tests : strip "chat" from base_url in oai_chat_completions
* server-tests : model metadata is a dict
* ci : disable pip cache in type-check workflow
  The cache is not shared between branches, and at 250 MB it would take up
  a sizable share of the repository's 10 GB cache limit.
* py : fix new type errors from master branch
* tests : fix test-tokenizer-random.py
  Apparently, gcc applies optimisations even when pre-processing, which
  confuses pycparser.
* ci : only show warnings and errors in python type-check
  The "information" level otherwise has entries from
  'examples/pydantic_models_to_grammar.py', which could be confusing for
  someone trying to figure out what failed, since these messages can safely
  be ignored even though they look like errors.
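As a rough illustration of the kind of change this commit describes (a hypothetical sketch, not code from PR #8341), annotating model metadata as a dict is exactly the sort of hint that lets Pyright check call sites:

    # Hypothetical sketch, not code from PR #8341: "model metadata is a dict"
    # expressed as a type annotation that Pyright can verify.
    from typing import Any

    def model_supports_embeddings(metadata: dict[str, Any]) -> bool:
        # With the parameter annotated, Pyright can check the .get() call;
        # left unannotated, the parameter's type would be reported as unknown.
        return bool(metadata.get("embeddings", False))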
tabulate~=0.9.0
GitPython~=3.1.43
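For context, tabulate and GitPython are the table-rendering and git-access dependencies of the compare-llama-bench.py script. The following is a hypothetical sketch of tabulate usage with made-up numbers, not code taken from that script:

    # Hypothetical sketch (made-up data, not from compare-llama-bench.py):
    # tabulate renders a baseline-vs-compared benchmark run as a text table.
    from tabulate import tabulate

    rows = [
        ["pp512", 380.2, 415.9, "1.09"],
        ["tg128", 52.1, 54.0, "1.04"],
    ]
    print(tabulate(rows, headers=["test", "t/s baseline", "t/s compared", "speedup"]))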
compare-llama-bench: add option to plot (#14169)  (2025-06-14 16:34:20 +08:00)

* compare llama-bench: add option to plot
* Address review comments: convert case + add type hints
* Add matplotlib to requirements
* fix tests
* Improve comment and fix assert condition for test
* Add back default test_name, add --plot_log_scale
* use log_scale regardless of x_values
matplotlib~=3.10.0
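The plot option added by #14169 is what pulls in matplotlib. A minimal sketch of that kind of comparison plot, with hypothetical data and file names rather than code from the actual script, might look like:

    # Minimal sketch, hypothetical data: a throughput comparison plot with a
    # log-scaled x axis, as the --plot_log_scale flag suggests.
    import matplotlib

    matplotlib.use("Agg")  # render to a file; no display required
    import matplotlib.pyplot as plt

    batch_sizes = [1, 2, 4, 8, 16, 32]
    tps_baseline = [25.0, 48.0, 90.0, 160.0, 260.0, 380.0]
    tps_compared = [27.0, 52.0, 98.0, 175.0, 285.0, 415.0]

    fig, ax = plt.subplots()
    ax.plot(batch_sizes, tps_baseline, marker="o", label="baseline commit")
    ax.plot(batch_sizes, tps_compared, marker="o", label="compared commit")
    ax.set_xscale("log", base=2)  # log scale applied regardless of x values
    ax.set_xlabel("batch size")
    ax.set_ylabel("tokens/s")
    ax.legend()
    fig.savefig("compare-llama-bench.png")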