tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-09-24 07:56:47 -04:00
llama.cpp/.github/workflows at commit 3791ad219323389106dc3fd80814eb5bbb7b80de
Latest commit: 8cb508d0d5 by slaren: disable publishing the full-rocm docker image (#8083), 2024-06-24 08:36:11 +03:00
File | Last commit | Date
bench.yml | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
build.yml | ci : fix macos x86 build (#7940) | 2024-06-14 20:28:34 +03:00
close-issue.yml | ci : exempt confirmed bugs from being tagged as stale (#7014) | 2024-05-01 08:13:59 +03:00
docker.yml | disable publishing the full-rocm docker image (#8083) | 2024-06-24 08:36:11 +03:00
editorconfig.yml | …
gguf-publish.yml | …
labeler.yml | labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363) | 2024-05-19 20:51:03 +10:00
nix-ci-aarch64.yml | …
nix-ci.yml | …
nix-flake-update.yml | …
nix-publish-flake.yml | …
python-check-requirements.yml | …
python-lint.yml | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00
server.yml | fix CI failures (#8066) | 2024-06-23 13:14:45 +02:00
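For context on the close-issue.yml entry above (#7014, "exempt confirmed bugs from being tagged as stale"): such a workflow is typically built on the actions/stale action. The following is only a minimal sketch, not the repository's actual file; the action version, cron schedule, label names, and day thresholds are assumptions, while the exempt-label idea comes from the commit message.

name: Close inactive issues
on:
  schedule:
    - cron: "42 0 * * *"   # assumed daily schedule

jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          # Assumed label name; the point from #7014 is that confirmed bugs
          # are exempt from being tagged as stale.
          exempt-issue-labels: "bug-confirmed"
          days-before-issue-stale: 30   # assumed thresholds
          days-before-issue-close: 14
          stale-issue-label: "stale"
          # Leave pull requests untouched in this sketch.
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}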