tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-21 10:17:58 +00:00
Commit: 511636df0c90826b4dd1fc21ff260c19d69a3b5d

llama.cpp / .devops

Latest commit: slaren, 048de848ee, "docker : fix missing binaries in full-cuda image (#9278)", 2024-09-02 18:11:13 +02:00
nix                             build(nix): Package gguf-py (#5664)                        2024-09-02 14:21:01 +03:00
cloud-v-pipeline                …
full-cuda.Dockerfile            docker : fix missing binaries in full-cuda image (#9278)   2024-09-02 18:11:13 +02:00
full-rocm.Dockerfile            …
full.Dockerfile                 …
llama-cli-cann.Dockerfile       …
llama-cli-cuda.Dockerfile       docker : update CUDA images (#9213)                        2024-08-28 13:20:36 +02:00
llama-cli-intel.Dockerfile      …
llama-cli-rocm.Dockerfile       …
llama-cli-vulkan.Dockerfile     …
llama-cli.Dockerfile            …
llama-cpp-cuda.srpm.spec        …
llama-cpp.srpm.spec             …
llama-server-cuda.Dockerfile    docker : update CUDA images (#9213)                        2024-08-28 13:20:36 +02:00
llama-server-intel.Dockerfile   …
llama-server-rocm.Dockerfile    …
llama-server-vulkan.Dockerfile  …
llama-server.Dockerfile         …
tools.sh                        …