tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-07-24 03:31:31 +00:00
Files
b4ec1d44294b628a811cc97367bb7ace0a32c9fd
llama.cpp/.devops
Nuno 6f53d8a6b4 docker: add missing vulkan library to base layer and update to 24.04 (#11422)
Signed-off-by: rare-magma <rare-magma@posteo.eu>
2025-01-26 18:22:43 +01:00
nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00
cloud-v-pipeline | …
cpu.Dockerfile | docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419) | 2025-01-25 17:22:41 +01:00
cuda.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00
intel.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00
llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00
llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cpp.srpm.spec | …
musa.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00
rocm.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00
tools.sh | fix: graceful shutdown for Docker images (#10815) | 2024-12-13 18:23:50 +01:00
vulkan.Dockerfile | docker: add missing vulkan library to base layer and update to 24.04 (#11422) | 2025-01-26 18:22:43 +01:00