tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-14 12:19:48 -04:00)

llama.cpp/requirements at commit 9515c6131aecaccc955fdedcfe16c3e030aaefcb
Latest commit: 2bf3fbf0b5 by Sigbjørn Skjæret, 2025-08-02 14:39:01 +02:00
ci : check that pre-tokenizer hashes are up-to-date (#15032)
* torch is not required for convert_hf_to_gguf_update
* add --check-missing parameter
* check that pre-tokenizer hashes are up-to-date
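For context on that commit: llama.cpp identifies a model's pre-tokenizer by tokenizing a fixed probe text and hashing the resulting token IDs, and #15032 adds a CI check that the hashes hardcoded in the conversion script are still current. Below is a minimal sketch of the idea in Python; the probe text, the known-hash table, and both function names are illustrative stand-ins, not the real identifiers or values from convert_hf_to_gguf_update.py.

```python
# Hedged sketch of a pre-tokenizer hash check in the spirit of #15032.
# CHK_TXT, KNOWN_HASHES, and the function names below are placeholders.
from hashlib import sha256

from transformers import AutoTokenizer  # see requirements-convert_hf_to_gguf_update.txt

# Probe text whose tokenization distinguishes pre-tokenizer variants
# (stand-in for the script's real checksum text).
CHK_TXT = "Hello World!  \n\n 3.14 ..."

# Mapping of hash -> pre-tokenizer name, analogous to the table the
# update script regenerates (entry is illustrative, not a real hash).
KNOWN_HASHES = {
    "0ef9807a4087ebef...": "llama-bpe",
}

def pre_tokenizer_hash(model_id: str) -> str:
    # Tokenize the fixed text and hash the token IDs; any change in the
    # model's pre-tokenizer changes this digest.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    chktok = tokenizer.encode(CHK_TXT)
    return sha256(str(chktok).encode()).hexdigest()

def check_up_to_date(model_id: str) -> bool:
    # CI-style check: fail when a model's hash is missing from the table,
    # signaling that the hardcoded hashes need regeneration.
    chkhsh = pre_tokenizer_hash(model_id)
    if chkhsh not in KNOWN_HASHES:
        print(f"missing pre-tokenizer hash for {model_id}: {chkhsh}")
        return False
    return True
```

Since only AutoTokenizer is needed to compute the digest, this also illustrates why the commit drops torch as a requirement for the update script.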
File | Last commit | Date
---- | ----------- | ----
requirements-all.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-compare-llama-bench.txt | compare-llama-bench: add option to plot (#14169) | 2025-06-14 10:34:20 +02:00
requirements-convert_hf_to_gguf_update.txt | ci : check that pre-tokenizer hashes are up-to-date (#15032) | 2025-08-02 14:39:01 +02:00
requirements-convert_hf_to_gguf.txt | mtmd : add support for Voxtral (#14862) | 2025-07-28 15:01:48 +02:00
requirements-convert_legacy_llama.txt | … | …
requirements-convert_llama_ggml_to_gguf.txt | … | …
requirements-convert_lora_to_gguf.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-gguf_editor_gui.txt | gguf-py : add support for sub_type (in arrays) in GGUFWriter add_key_value method (#13561) | 2025-05-29 15:36:05 +02:00
requirements-pydantic.txt | mtmd : add support for Voxtral (#14862) | 2025-07-28 15:01:48 +02:00
requirements-server-bench.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-test-tokenizer-random.txt | … | …
requirements-tool_bench.txt | … | …