tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-19 00:57:41 +00:00
llama.cpp / scripts
Files at commit 815fe72adcea5ec79d358db6a4c479191f396b3c
Latest commit: 815fe72adc "sync : ggml" by Georgi Gerganov, 2024-11-01 10:28:24 +02:00
File                         Last commit message                                        Date
build-info.sh                …
check-requirements.sh        …
ci-run.sh                    …
compare-commits.sh           scripts : verify py deps at the start of compare (#9520)   2024-09-18 18:34:32 +03:00
compare-llama-bench.py       llama : refactor model loader with backend registry        2024-10-30 02:01:23 +01:00
                             (#10026)
debug-test.sh                scripts : fix spelling typo in messages and comments       2024-10-08 09:19:53 +03:00
                             (#9782)
gen-authors.sh               …
gen-unicode-data.py          …
get-flags.mk                 …
get-hellaswag.sh             …
get-pg.sh                    …
get-wikitext-2.sh            …
get-wikitext-103.sh          …
get-winogrande.sh            …
hf.sh                        …
install-oneapi.bat           …
pod-llama.sh                 …
qnt-all.sh                   …
run-all-perf.sh              …
run-all-ppl.sh               …
run-with-preset.py           llama : remove Tail-Free sampling (#10071)                 2024-10-29 10:42:05 +02:00
server-llm.sh                …
sync-ggml-am.sh              scripts : fix amx sync [no ci]                             2024-10-26 10:33:31 +03:00
sync-ggml.last               sync : ggml                                                2024-11-01 10:28:24 +02:00
sync-ggml.sh                 scripts : fix amx sync [no ci]                             2024-10-26 10:33:31 +03:00
verify-checksum-models.py    …
xxd.cmake                    …