tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 12:42:40 -04:00
llama.cpp / scripts
At commit e6e7c75d94adf4d39e846d30807c531ff22865e7
Latest commit: 78c6785175 "sync : ggml" by Georgi Gerganov, 2025-01-04 16:09:53 +02:00
File                        Last commit                                                                                            Date
build-info.sh               …
check-requirements.sh       py : type-check all Python scripts with Pyright (#8341)                                               2024-07-07 15:04:39 -04:00
ci-run.sh                   …
compare-commits.sh          …
compare-llama-bench.py      ggml : more perfo with llamafile tinyblas on x86_64 (#10714)                                          2024-12-24 18:54:49 +01:00
debug-test.sh               …
gen-authors.sh              license : update copyright notice + add AUTHORS (#6405)                                               2024-04-09 09:23:19 +03:00
gen-unicode-data.py         …
get-flags.mk                build : pass all warning flags to nvcc via -Xcompiler (#5570)                                         2024-02-18 16:21:52 -05:00
get-hellaswag.sh            build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   2024-06-13 00:41:52 +01:00
get-pg.sh                   …
get-wikitext-2.sh           …
get-wikitext-103.sh         …
get-winogrande.sh           build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   2024-06-13 00:41:52 +01:00
hf.sh                       ggml : more perfo with llamafile tinyblas on x86_64 (#10714)                                          2024-12-24 18:54:49 +01:00
install-oneapi.bat          …
qnt-all.sh                  build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   2024-06-13 00:41:52 +01:00
run-all-perf.sh             …
run-all-ppl.sh              build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   2024-06-13 00:41:52 +01:00
sync-ggml-am.sh             …
sync-ggml.last              sync : ggml                                                                                           2025-01-04 16:09:53 +02:00
sync-ggml.sh                …
verify-checksum-models.py   …
xxd.cmake                   build : generate hex dump of server assets during build (#6661)                                       2024-04-21 18:48:53 +01:00