tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-29 11:39:14 -04:00)
Commit: 775ac8712a7b42cfead2585f42cec0dfd56644ab
Path: llama.cpp/.devops
Latest commit: 1e3900ebac "flake.nix: expose full scope in legacyPackages" by Someone Serge (2023-12-31 13:14:58 -08:00)
..
nix                          flake.nix: expose full scope in legacyPackages                  2023-12-31 13:14:58 -08:00
cloud-v-pipeline             …
full-cuda.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)  2023-12-29 16:50:29 +02:00
full-rocm.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)  2023-12-29 16:50:29 +02:00
full.Dockerfile              python : add check-requirements.sh and GitHub workflow (#4585)  2023-12-29 16:50:29 +02:00
llama-cpp-clblast.srpm.spec  …
llama-cpp-cublas.srpm.spec   …
llama-cpp.srpm.spec          …
main-cuda.Dockerfile         …
main-rocm.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)  2023-12-29 16:50:29 +02:00
main.Dockerfile              …
tools.sh                     …