tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-18 05:56:00 -04:00
llama.cpp / scripts (at commit 2a7c94db5fb67b2f8882d2d16a11bf5d8d12d397)
Latest commit: sync : ggml (64802ec00d) by Georgi Gerganov, 2024-01-11 09:39:08 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| build-info.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| build-info.sh | … | |
| check-requirements.sh | python : add check-requirements.sh and GitHub workflow (#4585) | 2023-12-29 16:50:29 +02:00 |
| compare-llama-bench.py | Python script to compare commits with llama-bench (#4844) | 2024-01-10 01:04:33 +01:00 |
| convert-gg.sh | … | |
| gen-build-info-cpp.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| get-flags.mk | build : detect host compiler and cuda compiler separately (#4414) | 2023-12-13 12:10:10 -05:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | … | |
| LlamaConfig.cmake.in | … | |
| qnt-all.sh | … | |
| run-all-perf.sh | … | |
| run-all-ppl.sh | … | |
| server-llm.sh | … | |
| sync-ggml-am.sh | scripts : fix sync order + metal sed | 2024-01-03 14:38:38 +02:00 |
| sync-ggml.last | sync : ggml | 2024-01-11 09:39:08 +02:00 |
| sync-ggml.sh | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2023-12-07 22:26:54 +02:00 |
| verify-checksum-models.py | … | |