tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-29 13:43:38 -04:00
llama.cpp / scripts (ref: b2901)
Latest commit: 29499bb593 "sync : ggml" (Georgi Gerganov, 2024-05-15 13:23:41 +03:00)
File                       Last commit message                                                           Date
-------------------------  ----------------------------------------------------------------------------  ------------------------
build-info.cmake           …
build-info.sh              …
check-requirements.sh      …
ci-run.sh                  …
compare-commits.sh         …
compare-llama-bench.py     llama-bench : add pp+tg test type (#7199)                                     2024-05-10 18:03:54 +02:00
convert-gg.sh              …
debug-test.sh              Scripting & documenting debugging one test without anything else in the loop. (#7096)  2024-05-12 03:26:35 +10:00
gen-authors.sh             …
gen-build-info-cpp.cmake   …
gen-unicode-data.py        llama3 custom regex split (#6965)                                             2024-05-09 23:30:44 +10:00
get-flags.mk               …
get-hellaswag.sh           …
get-pg.sh                  …
get-wikitext-2.sh          …
get-wikitext-103.sh        …
get-winogrande.sh          …
hf.sh                      …
install-oneapi.bat         …
LlamaConfig.cmake.in       …
pod-llama.sh               …
qnt-all.sh                 …
run-all-perf.sh            …
run-all-ppl.sh             …
run-with-preset.py         convert.py : add python logging instead of print() (#6511)                    2024-05-03 22:36:41 +03:00
server-llm.sh              …
sync-ggml-am.sh            script : sync ggml-rpc                                                        2024-05-14 19:14:38 +03:00
sync-ggml.last             sync : ggml                                                                   2024-05-15 13:23:41 +03:00
sync-ggml.sh               script : sync ggml-rpc                                                        2024-05-14 19:14:38 +03:00
verify-checksum-models.py  convert.py : add python logging instead of print() (#6511)                    2024-05-03 22:36:41 +03:00
xxd.cmake                  …