tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-16 13:12:51 -04:00
llama.cpp / .devops at commit 96e80dabc6e73ff68b09b68947b1fc25883c5094

Latest commit: be36bb946a  flake.nix : fix typo (#4700): betwen -> between  (Ikko Eltociear Ashimine, 2024-01-05 18:02:44 +02:00)
Name                          Last commit message                                              Last commit date
nix                           flake.nix : fix typo (#4700)                                     2024-01-05 18:02:44 +02:00
cloud-v-pipeline              ci : Cloud-V for RISC-V builds (#3160)                           2023-09-15 11:06:56 +03:00
full-cuda.Dockerfile          python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
full-rocm.Dockerfile          python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
full.Dockerfile               python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
llama-cpp-clblast.srpm.spec   …
llama-cpp-cublas.srpm.spec    …
llama-cpp.srpm.spec           …
main-cuda.Dockerfile          …
main-rocm.Dockerfile          python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
main.Dockerfile               …
tools.sh                      docker : add finetune option (#4211)                             2023-11-30 23:46:01 +02:00
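
As a rough sketch of how the Dockerfiles in this directory are typically consumed (the image tags below are illustrative, not names defined by this listing), an image can be built from the repository root with docker build -f:

    # Build a CPU-only image from the repository root; the tag "llama-cpp:main" is an assumption.
    docker build -t llama-cpp:main -f .devops/main.Dockerfile .

    # The CUDA variant is built the same way from its corresponding Dockerfile.
    docker build -t llama-cpp:full-cuda -f .devops/full-cuda.Dockerfile .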