tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git)
File: llama.cpp/requirements/requirements-convert_hf_to_gguf.txt
Commit: bf5bcd0b857db420235e03639f0a5f218a7f8cf8
8 lines · 308 B · Plaintext

-r ./requirements-convert_legacy_llama.txt
--extra-index-url https://download.pytorch.org/whl/cpu
torch~=2.2.1; platform_machine != "s390x"

# torch s390x packages can only be found from nightly builds
--extra-index-url https://download.pytorch.org/whl/nightly
torch>=0.0.0.dev0; platform_machine == "s390x"

History:
2024-07-05  py : switch to snake_case (#8305): renamed the requirements file for convert_legacy_llama.py, as needed by scripts/check-requirements.sh. Co-authored-by: Francis Couture-Harpin <git@compilade.net>
2024-07-07  py : use cpu-only torch in requirements.txt (#8335)
2025-05-23  common : Include torch package for s390x (#13699): torch packages for s390x are published only as nightly builds, so the pinned CPU wheel is skipped on that architecture. Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
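The two torch entries are mutually exclusive via PEP 508 environment markers: pip evaluates platform_machine on the installing host and keeps only the matching requirement, so s390x machines get the nightly build while every other architecture gets the pinned CPU wheel. A minimal sketch of that evaluation, using the packaging library's Marker class (the marker strings are copied from the file above; the s390x environment dict is an illustrative assumption, not part of the file):

# Sketch: how pip-style environment markers in this requirements file evaluate.
from packaging.markers import Marker

cpu_marker = Marker('platform_machine != "s390x"')      # guards torch~=2.2.1
nightly_marker = Marker('platform_machine == "s390x"')  # guards the nightly torch

# With no argument, a marker evaluates against the current machine.
print(cpu_marker.evaluate())

# Overriding the environment shows what pip would do on a hypothetical s390x host:
s390x_env = {"platform_machine": "s390x"}
print(cpu_marker.evaluate(s390x_env))      # False: the pinned CPU wheel is skipped
print(nightly_marker.evaluate(s390x_env))  # True: the nightly build is installed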