tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-28 13:20:27 -04:00)
llama.cpp/ggml at commit 9c42b1718ca8299f9afeabdc122badeab64c9690

Latest commit: 9c42b1718c "CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ" (#12098) by Johannes Gäßler, 2025-02-28 09:26:43 +01:00
Name             Last commit                                                       Last commit date
cmake            cmake: Fix ggml backend dependencies and installation (#11818)    2025-02-27 09:42:48 +02:00
include          ggml-cpu: Support s390x SIMD Instruction Set (#12019)             2025-02-22 21:39:24 +00:00
src              CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ (#12098)           2025-02-28 09:26:43 +01:00
.gitignore       vulkan : cmake integration (#8119)                                2024-07-13 18:12:39 +02:00
CMakeLists.txt   cmake: Fix ggml backend dependencies and installation (#11818)    2025-02-27 09:42:48 +02:00
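The commit messages above refer to build-time options in ggml's CMake configuration. As a minimal sketch of how those options are set at configure time (assuming a checkout of this repository with the matching toolchain installed; exact option names can differ between revisions):

```
# Configure with the CUDA backend, forcing the MMQ kernels
# (the GGML_CUDA_FORCE_MMQ option referenced in #12098):
cmake -B build -DGGML_CUDA=ON -DGGML_CUDA_FORCE_MMQ=ON

# Or configure with the Vulkan backend (cmake integration from #8119):
# cmake -B build -DGGML_VULKAN=ON

# Build in Release mode:
cmake --build build --config Release
```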