tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-09-19 21:49:31 -04:00
Path: llama.cpp/ggml
Commit: 5220a16d18563d3ffc509002f0514415fdda4036
Latest commit: CUDA: fix FA logic for PTX 7.0 and CC >= 7.5 (#12222), Johannes Gäßler, 2025-03-06 18:45:09 +01:00
Name            Last commit message                                                         Date
..
cmake           cmake: Fix ggml backend dependencies and installation (#11818)              2025-02-27 09:42:48 +02:00
include         ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154)   2025-03-06 02:26:10 +01:00
src             CUDA: fix FA logic for PTX 7.0 and CC >= 7.5 (#12222)                       2025-03-06 18:45:09 +01:00
.gitignore      …
CMakeLists.txt  ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154)   2025-03-06 02:26:10 +01:00