tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-21 10:17:58 +00:00
Commit 94b87f87b502d2ab6c8b28d2b5935cf008ca1505 — llama.cpp/.devops
Latest commit: dbc2ec59b5 by Georgi Gerganov — docker : drop to CUDA 12.4 (#11869), 2025-02-14 14:48:40 +02:00

* docker : drop to CUDA 12.4
* docker : update readme [no ci]
Name                       Last commit                                                        Date
nix                        …
cloud-v-pipeline           …
cpu.Dockerfile             …
cuda.Dockerfile            docker : drop to CUDA 12.4 (#11869)                                2025-02-14 14:48:40 +02:00
intel.Dockerfile           …
llama-cli-cann.Dockerfile  …
llama-cpp-cuda.srpm.spec   …
llama-cpp.srpm.spec        …
musa.Dockerfile            musa: bump MUSA SDK version to rc3.1.1 (#11822)                    2025-02-13 13:28:18 +01:00
rocm.Dockerfile            …
tools.sh                   docker: add perplexity and bench commands to full image (#11438)   2025-01-28 10:42:32 +00:00
vulkan.Dockerfile          ci : fix build CPU arm64 (#11472)                                  2025-01-29 00:02:56 +01:00