tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-27 03:33:46 -04:00
llama.cpp / requirements (at commit e4868d16d24dec55e61bcaadaca28feed8f98b13)
Latest commit: 494c5899cb by Johannes Gäßler, 2025-07-14 13:14:30 +02:00
scripts: benchmark for HTTP server throughput (#14668)
* scripts: benchmark for HTTP server throughput
* fix server connection reset
File | Last commit | Date
requirements-all.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-compare-llama-bench.txt | compare-llama-bench: add option to plot (#14169) | 2025-06-14 10:34:20 +02:00
requirements-convert_hf_to_gguf_update.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-convert_hf_to_gguf.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-convert_legacy_llama.txt | py : update transformers version (#9694) | 2024-09-30 18:03:47 +03:00
requirements-convert_llama_ggml_to_gguf.txt | … | …
requirements-convert_lora_to_gguf.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-gguf_editor_gui.txt | gguf-py : add support for sub_type (in arrays) in GGUFWriter add_key_value method (#13561) | 2025-05-29 15:36:05 +02:00
requirements-pydantic.txt | … | …
requirements-server-bench.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-test-tokenizer-random.txt | … | …
requirements-tool_bench.txt | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00