tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-09-15 03:51:13 -04:00

llama.cpp/.devops at commit 6db2b41a76ee78d5efdd5c3cddd5d7ad3f646855
Latest commit: c9b316c78f by Michael Hueschen, 2024-01-24 12:39:29 +00:00
  nix-shell: use addToSearchPath
  thx to @SomeoneSerge for the suggestion!

Name                         Last commit                                                      Date
nix                          nix-shell: use addToSearchPath                                   2024-01-24 12:39:29 +00:00
cloud-v-pipeline             ci : Cloud-V for RISC-V builds (#3160)                           2023-09-15 11:06:56 +03:00
full-cuda.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
full-rocm.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
full.Dockerfile              python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
llama-cpp-clblast.srpm.spec  …
llama-cpp-cublas.srpm.spec   …
llama-cpp.srpm.spec          …
main-cuda.Dockerfile         …
main-intel.Dockerfile        devops : add intel oneapi dockerfile (#5068)                     2024-01-23 09:11:39 +02:00
main-rocm.Dockerfile         python : add check-requirements.sh and GitHub workflow (#4585)   2023-12-29 16:50:29 +02:00
main.Dockerfile              …
tools.sh                     docker : add finetune option (#4211)                             2023-11-30 23:46:01 +02:00
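
The Dockerfiles listed above are conventionally built from the repository root with the standard docker CLI. A minimal sketch, assuming a local checkout of this mirror and a working Docker install; the image tag below is illustrative, not something defined in the repo:

  # Build the full CUDA image from the repository root; the tag name is arbitrary.
  docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .

  # Start a container from the resulting image. GPU access requires the NVIDIA
  # container toolkit; the entrypoint and expected arguments depend on the
  # chosen Dockerfile.
  docker run --gpus all -it local/llama.cpp:full-cuda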