tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-18 05:56:00 -04:00)
llama.cpp / .devops
Files at commit 7a0de960452f9a57de7f1d167e57b6f3ee5ac1b6
Latest commit e4e915912c by simevo, 2025-08-14 18:45:27 +03:00:
devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)
fixes #15004
Co-authored-by: Paolo Greppi <paolo.greppi@libpf.com>
..
nix                        …
cann.Dockerfile            docker : add cann build pipline (#14591)                                      2025-08-01 10:02:34 +08:00
cpu.Dockerfile             docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)                        2025-08-14 16:22:58 +02:00
cuda.Dockerfile            devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on
                           Ubuntu 24.04 (#15005)                                                        2025-08-14 18:45:27 +03:00
intel.Dockerfile           …
llama-cli-cann.Dockerfile  …
llama-cpp-cuda.srpm.spec   …
llama-cpp.srpm.spec        …
musa.Dockerfile            musa: upgrade musa sdk to rc4.2.0 (#14498)                                    2025-07-24 20:05:37 +01:00
rocm.Dockerfile            HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624) 2025-07-27 00:28:14 +02:00
tools.sh                   …
vulkan.Dockerfile          …