tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 04:33:06 -04:00
llama.cpp/requirements at commit 1d19025909ae3abbc26c50bb8795c2f351fe4ba1
Latest commit: 635f945ed1 by Francis Couture-Harpin, "convert : remove imatrix to gguf python script", 2025-04-15 17:42:26 -04:00
requirements-all.txt                          tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)   2025-03-05 13:05:13 +00:00
requirements-compare-llama-bench.txt          …
requirements-convert_hf_to_gguf_update.txt    …
requirements-convert_hf_to_gguf.txt           …
requirements-convert_legacy_llama.txt         …
requirements-convert_llama_ggml_to_gguf.txt   …
requirements-convert_lora_to_gguf.txt         …
requirements-pydantic.txt                     …
requirements-test-tokenizer-random.txt        …
requirements-tool_bench.txt                   tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)   2025-03-05 13:05:13 +00:00
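These files pin the Python dependencies for the repository's conversion and benchmarking scripts. A minimal sketch of installing one of them before running the matching script, assuming pip is available in the active Python environment and the path is relative to the repository root (the file name is taken from the listing above):

    import subprocess
    import sys

    # Install the dependencies listed for the HF-to-GGUF converter.
    # Adjust the relative path if the repository is checked out elsewhere.
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "-r", "requirements/requirements-convert_hf_to_gguf.txt",
    ])

The same pattern applies to the other requirements files; each one corresponds to a specific script or workflow in the repository.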