tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-19 00:57:41 +00:00
llama.cpp / scripts (at tag b5150)

Latest commit: sync : ggml (ggml-ci) by Georgi Gerganov, 526739b879, 2025-04-14 09:26:15 +03:00
Name                         Last commit message                                                                                             Date
apple                        …
build-info.sh                …
check-requirements.sh        …
ci-run.sh                    …
compare-commits.sh           …
compare-llama-bench.py       …
debug-test.sh                …
fetch_server_test_models.py  tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
gen-authors.sh               …
gen-unicode-data.py          …
get_chat_template.py         …
get-flags.mk                 …
get-hellaswag.sh             …
get-pg.sh                    …
get-wikitext-2.sh            …
get-wikitext-103.sh          …
get-winogrande.sh            …
hf.sh                        …
install-oneapi.bat           …
qnt-all.sh                   …
run-all-perf.sh              …
run-all-ppl.sh               …
sync-ggml-am.sh              scripts : fix sync-ggml-am.sh                                                                                   2025-04-11 00:17:47 +03:00
sync-ggml.last               sync : ggml                                                                                                     2025-04-14 09:26:15 +03:00
sync-ggml.sh                 scripts : update sync + fix cmake merge                                                                         2025-03-27 10:09:29 +02:00
tool_bench.py                tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
tool_bench.sh                tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
verify-checksum-models.py    …
xxd.cmake                    …