tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-09-19 13:38:34 -04:00
llama.cpp/.github/workflows
Tree: 71a64989a5d2e25c13507efada145f12cf358914
Latest commit: c9b00a70b0 ci : fix cuda releases (#10532), Diego Devesa, 2024-11-26 22:12:10 +01:00
File | Last commit | Date
bench.yml.disabled | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00
build.yml | ci : fix cuda releases (#10532) | 2024-11-26 22:12:10 +01:00
close-issue.yml | ci : fine-grant permission (#9710) | 2024-10-04 11:47:19 +02:00
docker.yml | ci : publish the docker images created during scheduled runs (#10515) | 2024-11-26 13:05:20 +01:00
editorconfig.yml | … | …
gguf-publish.yml | … | …
labeler.yml | … | …
python-check-requirements.yml | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00
python-lint.yml | ci : add ubuntu cuda build, build with one arch on windows (#10456) | 2024-11-26 13:05:07 +01:00
python-type-check.yml | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00
server.yml | server : replace behave with pytest (#10416) | 2024-11-26 16:20:18 +01:00
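
For orientation, the files above are GitHub Actions workflow definitions. The sketch below is a minimal, hypothetical example of the general shape such a workflow takes; the workflow name, trigger branches, and build steps are illustrative assumptions and are not the contents of any file listed above.

```yaml
# Hypothetical sketch only; name, triggers, and steps are assumptions,
# not the actual contents of any workflow in this directory.
name: example-build

on:
  push:
    branches: [master]
  pull_request:

jobs:
  ubuntu-cmake:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository at the triggering commit
      - uses: actions/checkout@v4
      # Configure and build with CMake
      - name: Configure
        run: cmake -B build
      - name: Build
        run: cmake --build build --config Release
```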