tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-28 11:08:19 -04:00
Files in llama.cpp/.devops at commit 6c5bc0625fae6909cb40def15bc4bb45db6f7f4d
Latest commit: 59f4db1088 by Diego Devesa, 2024-12-04 14:45:40 +01:00
ggml : add predefined list of CPU backend variants to build (#10626)
* ggml : add predefined list of CPU backend variants to build
* update CPU dockerfiles
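This commit lets one build compile a predefined set of CPU backend variants and pick the best one for the host at runtime, which is what the updated CPU Dockerfiles use. A minimal configure sketch, assuming the GGML_BACKEND_DL and GGML_CPU_ALL_VARIANTS CMake options introduced by this change (flag names may differ in other revisions):

    # Configure a CPU-only build that compiles every predefined CPU variant
    # as a loadable backend instead of targeting only the build machine.
    cmake -B build \
        -DGGML_NATIVE=OFF \
        -DGGML_BACKEND_DL=ON \
        -DGGML_CPU_ALL_VARIANTS=ON
    cmake --build build --config Release -j "$(nproc)"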
nix                              …
cloud-v-pipeline                 …
full-cuda.Dockerfile             …
full-musa.Dockerfile             mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)  2024-11-26 17:00:41 +01:00
full-rocm.Dockerfile             …
full.Dockerfile                  ggml : add predefined list of CPU backend variants to build (#10626)  2024-12-04 14:45:40 +01:00
llama-cli-cann.Dockerfile        …
llama-cli-cuda.Dockerfile        …
llama-cli-intel.Dockerfile       …
llama-cli-musa.Dockerfile        mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)  2024-11-26 17:00:41 +01:00
llama-cli-rocm.Dockerfile        …
llama-cli-vulkan.Dockerfile      …
llama-cli.Dockerfile             ggml : add predefined list of CPU backend variants to build (#10626)  2024-12-04 14:45:40 +01:00
llama-cpp-cuda.srpm.spec         …
llama-cpp.srpm.spec              …
llama-server-cuda.Dockerfile     …
llama-server-intel.Dockerfile    …
llama-server-musa.Dockerfile     mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)  2024-11-26 17:00:41 +01:00
llama-server-rocm.Dockerfile     …
llama-server-vulkan.Dockerfile   …
llama-server.Dockerfile          ggml : add predefined list of CPU backend variants to build (#10626)  2024-12-04 14:45:40 +01:00
tools.sh                         …
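The MUSA Dockerfiles listed above take the MUSA_DOCKER_ARCH build argument added in #10516. A sketch of how such an image might be built from the repository root; the image tag and the architecture value are illustrative placeholders, not defaults taken from the Dockerfile:

    # Build the MUSA-enabled "full" image; MUSA_DOCKER_ARCH selects the target
    # MUSA GPU architecture passed through to the build. "21" is only an
    # example value; use the architecture that matches your GPU.
    docker build -f .devops/full-musa.Dockerfile \
        --build-arg MUSA_DOCKER_ARCH=21 \
        -t local/llama.cpp:full-musa .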