tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-14 12:19:48 -04:00
5e43f104cca1a14874e980326a506b44fde022b8
llama.cpp/ggml
Akarshan Biswas 5e43f104cc SYCL: Disable f16 Unary OPs as not supported by the kernels (#12201)
2025-03-05 16:58:23 +01:00
..
cmake           cmake: Fix ggml backend dependencies and installation (#11818)          2025-02-27 09:42:48 +02:00
include         ggml : portability fixes for VS 2017 (#12150)                           2025-03-04 18:53:26 +02:00
src             SYCL: Disable f16 Unary OPs as not supported by the kernels (#12201)    2025-03-05 16:58:23 +01:00
.gitignore      vulkan : cmake integration (#8119)                                      2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (#12032)  2025-03-03 22:10:54 +01:00