tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-30 22:23:31 -04:00)
Path: llama.cpp/.github/workflows (at commit 0a11f8b7b5c39fdf6e91ef9674bc68ff08681af7)
Latest commit: 7b1ec53f56 by Eve, "vulkan: bugfixes for small subgroup size systems + llvmpipe test" (#10809), 2024-12-17 06:52:55 +01:00

Commit message:
* ensure mul mat shaders work on systems with subgroup size less than 32; more fixes; add test
* only s_warptile_mmq needs to be run with 32 threads or more
File                           | Last commit                                                                                  | Date
bench.yml.disabled             | ggml-backend : add device and backend reg interfaces (#9707)                                 | 2024-10-03 01:49:47 +02:00
build.yml                      | vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10809)                    | 2024-12-17 06:52:55 +01:00
close-issue.yml                | ci : fine-grant permission (#9710) (permissions sketch below)                                | 2024-10-04 11:47:19 +02:00
docker.yml                     | ci : publish the docker images created during scheduled runs (#10515) (docker sketch below)  | 2024-11-26 13:05:20 +01:00
editorconfig.yml               | ci: exempt master branch workflows from getting cancelled (#6486) (concurrency sketch below) | 2024-04-04 18:30:53 +02:00
gguf-publish.yml               | …                                                                                            |
labeler.yml                    | labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363)                           | 2024-05-19 20:51:03 +10:00
python-check-requirements.yml  | py : fix requirements check '==' -> '~=' (#8982)                                             | 2024-08-12 11:02:01 +03:00
python-lint.yml                | ci : add ubuntu cuda build, build with one arch on windows (#10456)                          | 2024-11-26 13:05:07 +01:00
python-type-check.yml          | ci : reduce severity of unused Pyright ignore comments (#9697)                               | 2024-09-30 14:13:16 -04:00
server.yml                     | ci : pin nodejs to 22.11.0 (#10779) (setup-node sketch below)                                | 2024-12-11 14:59:41 +01:00