tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-24 03:31:31 +00:00)
llama.cpp / .devops at commit 4601a8bb6784d2ab8b4b605354b51979fbeea1d3
Latest commit 59f4db1088 by Diego Devesa (2024-12-04 14:45:40 +01:00):
ggml : add predefined list of CPU backend variants to build (#10626)
* ggml : add predefined list of CPU backend variants to build
* update CPU dockerfiles
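The commit above changes how the CPU backend is compiled, which is why the CPU Dockerfiles below were updated in the same change. A minimal build sketch, assuming the GGML_BACKEND_DL and GGML_CPU_ALL_VARIANTS CMake options associated with this change (names taken from the commit's context; verify against the repository's build documentation):

    # Sketch only: build the CPU backend as dynamically loadable variants,
    # letting the runtime pick the best one for the host CPU.
    cmake -B build -DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON
    cmake --build build --config Release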
File                              Last commit                                                            Last updated
..
nix                               …
cloud-v-pipeline                  …
full-cuda.Dockerfile              …
full-musa.Dockerfile              …
full-rocm.Dockerfile              …
full.Dockerfile                   ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00
llama-cli-cann.Dockerfile         …
llama-cli-cuda.Dockerfile         …
llama-cli-intel.Dockerfile        …
llama-cli-musa.Dockerfile         …
llama-cli-rocm.Dockerfile         …
llama-cli-vulkan.Dockerfile       …
llama-cli.Dockerfile              ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00
llama-cpp-cuda.srpm.spec          …
llama-cpp.srpm.spec               …
llama-server-cuda.Dockerfile      …
llama-server-intel.Dockerfile     …
llama-server-musa.Dockerfile      …
llama-server-rocm.Dockerfile      …
llama-server-vulkan.Dockerfile    …
llama-server.Dockerfile           ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00
tools.sh                          …
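For context, Dockerfiles like these are normally consumed with a plain docker build run from the repository root, selecting the file that matches the target backend. A hedged sketch; the image tags are illustrative, not names published by the project:

    # Illustrative usage only; the local/llama.cpp tags are hypothetical.
    docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
    docker build -t local/llama.cpp:server    -f .devops/llama-server.Dockerfile .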