tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-13 11:57:43 -04:00
Commit: 613c5095c33de9b7bbf1097d7c32510f51d58b01
Path: llama.cpp/ggml
Latest commit: Kai Pastor, 613c5095c3, "cmake : Indent ggml-config.cmake (ggml/1310)", 2025-07-28 08:15:01 +03:00
..
cmake           cmake : Indent ggml-config.cmake (ggml/1310)                         2025-07-28 08:15:01 +03:00
include         ggml: Add initial WebGPU backend (#14521)                            2025-07-16 18:18:51 +03:00
src             vulkan : add fp16 support for the conv_2d kernel (#14872)            2025-07-27 12:04:33 +02:00
.gitignore      …
CMakeLists.txt  ggml-cpu : disable GGML_NNPA by default due to instability (#14880)  2025-07-25 19:09:03 +02:00