tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-16 21:22:37 -04:00)
Files: llama.cpp / .devops at commit 646944cfa8961afd914dd6637739b3cda9a72e11
Latest commit: 646944cfa8 docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267), Christian Kastner, 2025-08-14 16:22:58 +02:00
nix/                       nix : use optionalAttrs for env mkDerivation attrset argument (#14726)         2025-07-17 15:18:16 -07:00
cann.Dockerfile            docker : add cann build pipline (#14591)                                       2025-08-01 10:02:34 +08:00
cpu.Dockerfile             docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)                         2025-08-14 16:22:58 +02:00
cuda.Dockerfile            …
intel.Dockerfile           …
llama-cli-cann.Dockerfile  …
llama-cpp-cuda.srpm.spec   …
llama-cpp.srpm.spec        …
musa.Dockerfile            musa: upgrade musa sdk to rc4.2.0 (#14498)                                     2025-07-24 20:05:37 +01:00
rocm.Dockerfile            HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624)  2025-07-27 00:28:14 +02:00
tools.sh                   scripts : make the shell scripts cross-platform (#14341)                       2025-06-30 10:17:18 +02:00
vulkan.Dockerfile          …