tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-09 02:12:45 -04:00)
Directory listing: llama.cpp/ggml at commit 1f45f2890ef7f365ba0a45e08a8d1f46b8bc6b9e
Latest commit: 613c5095c3 by Kai Pastor, "cmake : Indent ggml-config.cmake (ggml/1310)", 2025-07-28 08:15:01 +03:00
Name            Last commit                                                           Date
cmake           cmake : Indent ggml-config.cmake (ggml/1310)                          2025-07-28 08:15:01 +03:00
include         ggml: Add initial WebGPU backend (#14521)                             2025-07-16 18:18:51 +03:00
src             vulkan : add fp16 support for the conv_2d kernel (#14872)             2025-07-27 12:04:33 +02:00
.gitignore      vulkan : cmake integration (#8119)                                    2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu : disable GGML_NNPA by default due to instability (#14880)   2025-07-25 19:09:03 +02:00
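
The CMakeLists.txt entry above records that the GGML_NNPA feature is now disabled by default due to instability (#14880). A minimal sketch of switching such a CMake option back on at configure time, assuming the option name GGML_NNPA from the commit message still applies at this revision:

    # From the llama.cpp checkout: configure with NNPA explicitly re-enabled,
    # then build. NNPA is an s390x (IBM Z) acceleration feature, so this only
    # has an effect on hardware and toolchains that support it.
    cmake -B build -DGGML_NNPA=ON
    cmake --build build --config Release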