tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-31 06:34:56 -04:00)
Files at commit 9f35e44592a7646a5803620eb6a3f0ed5ac90553
llama.cpp / .github / workflows
Latest commit: Xuan Son Nguyen  92f77a640f  ci : pin nodejs to 22.11.0 (#10779)  2024-12-11 14:59:41 +01:00
File                           Last commit                                                             Date
bench.yml.disabled             ggml-backend : add device and backend reg interfaces (#9707)           2024-10-03 01:49:47 +02:00
build.yml                      llama : use cmake for swift build (#10525)                              2024-12-08 13:14:54 +02:00
close-issue.yml                ci : fine-grant permission (#9710)                                      2024-10-04 11:47:19 +02:00
docker.yml                     ci : publish the docker images created during scheduled runs (#10515)  2024-11-26 13:05:20 +01:00
editorconfig.yml               …
gguf-publish.yml               …
labeler.yml                    …
python-check-requirements.yml  …
python-lint.yml                ci : add ubuntu cuda build, build with one arch on windows (#10456)    2024-11-26 13:05:07 +01:00
python-type-check.yml          ci : reduce severity of unused Pyright ignore comments (#9697)         2024-09-30 14:13:16 -04:00
server.yml                     ci : pin nodejs to 22.11.0 (#10779)                                     2024-12-11 14:59:41 +01:00