tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 12:42:40 -04:00
llama.cpp / scripts at commit 2e66913e5f56209f4c949f98e431925b78e7e84d

Latest commit: 33a5244806 "compare-llama-bench.py: fix long hexsha args (#6424)" by Johannes Gäßler, 2024-04-01 13:30:43 +02:00
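The latest commit fixes handling of long hexsha arguments in compare-llama-bench.py. As background, a minimal offline sketch of how git's abbreviated ("short") hashes relate to the full 40-character hexsha, using a throwaway repository (the `mktemp` scratch directory and placeholder committer identity are assumptions of this sketch, not part of the repo's scripts):

```shell
# Create a throwaway repo so we can compare hash forms without touching llama.cpp.
dir=$(mktemp -d)
cd "$dir"
git init -q .
git -c user.email=example@example.com -c user.name=example \
    commit -q --allow-empty -m "example commit"

full=$(git rev-parse HEAD)          # full 40-character hexsha
short=$(git rev-parse --short HEAD) # abbreviated form (typically 7+ characters)

# The short hash is always a prefix of the full hexsha.
echo "$full" | grep -q "^$short" && echo ok
```

Both forms name the same commit; tools that accept one generally accept the other, which is why a script parsing them has to cope with either length.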
File                        Last commit                                                                 Date
build-info.cmake            …
build-info.sh               …
check-requirements.sh       …
ci-run.sh                   …
compare-commits.sh          cuda : rename build flag to LLAMA_CUDA (#6299)                              2024-03-26 01:16:01 +01:00
compare-llama-bench.py      compare-llama-bench.py: fix long hexsha args (#6424)                        2024-04-01 13:30:43 +02:00
convert-gg.sh               …
gen-build-info-cpp.cmake    …
get-flags.mk                …
get-hellaswag.sh            …
get-pg.sh                   …
get-wikitext-2.sh           …
get-wikitext-103.sh         lookup: complement data from context with general text statistics (#5479)   2024-03-23 01:24:36 +01:00
get-winogrande.sh           …
hf.sh                       …
install-oneapi.bat          …
LlamaConfig.cmake.in        cuda : rename build flag to LLAMA_CUDA (#6299)                              2024-03-26 01:16:01 +01:00
pod-llama.sh                cuda : rename build flag to LLAMA_CUDA (#6299)                              2024-03-26 01:16:01 +01:00
qnt-all.sh                  …
run-all-perf.sh             …
run-all-ppl.sh              …
run-with-preset.py          …
server-llm.sh               cuda : rename build flag to LLAMA_CUDA (#6299)                              2024-03-26 01:16:01 +01:00
sync-ggml-am.sh             sync : ggml (#6351)                                                         2024-03-29 17:45:46 +02:00
sync-ggml.last              …
sync-ggml.sh                sync : ggml (#6351)                                                         2024-03-29 17:45:46 +02:00
verify-checksum-models.py   …