tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-09-15 20:07:56 -04:00)
Commit: dad5c44398b78467943ed0a603ea427fe9f6fc62
Path: llama.cpp/.github/workflows

History
Latest commit: Jeff Bolz, 652b70e667, "vulkan: force device 0 in CI (#14106)", 2025-06-10 10:53:47 -05:00
bench.yml.disabled             …
build-linux-cross.yml          ci: add LoongArch cross-compile build (#13944)                          2025-06-07 10:39:11 -03:00
build.yml                      vulkan: force device 0 in CI (#14106)                                   2025-06-10 10:53:47 -05:00
close-issue.yml                …
docker.yml                     …
editorconfig.yml               …
gguf-publish.yml               …
labeler.yml                    …
python-check-requirements.yml  …
python-lint.yml                …
python-type-check.yml          …
release.yml                    ci : remove cuda 11.7 releases, switch runner to windows 2022 (#13997)  2025-06-04 15:37:40 +02:00
server.yml                     ci : remove cuda 11.7 releases, switch runner to windows 2022 (#13997)  2025-06-04 15:37:40 +02:00
winget.yml                     …