tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-14 20:29:41 -04:00
llama.cpp/ggml at 9c35706b98ea271858acef4194f526a71b24cdc9

Latest commit: 9c35706b98 "CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)" by Johannes Gäßler, 2025-08-01 20:47:32 +02:00
Name           | Last commit                                                                   | Date
cmake          | cmake : Fix BLAS link interface (ggml/1316)                                   | 2025-07-30 17:33:11 +03:00
include        | ggml: Add initial WebGPU backend (#14521)                                     | 2025-07-16 18:18:51 +03:00
src            | CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014)                      | 2025-08-01 20:47:32 +02:00
.gitignore     | vulkan : cmake integration (#8119)                                            | 2024-07-13 18:12:39 +02:00
CMakeLists.txt | HIP: add GGML_HIP_MMQ_MFMA option to allow disableing the MFMA path. (#14930) | 2025-07-29 17:44:30 +02:00
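The CMakeLists.txt entry above references the GGML_HIP_MMQ_MFMA build option introduced in #14930 for turning off the MFMA path. A minimal configuration sketch, assuming a working ROCm/HIP toolchain and llama.cpp's standard GGML_HIP toggle (the exact flags beyond the option named in the commit are assumptions, not verified against this commit):

    # Configure a HIP build with the MFMA MMQ path disabled
    # (GGML_HIP_MMQ_MFMA is the option added in #14930; GGML_HIP enables the HIP backend)
    cmake -B build -DGGML_HIP=ON -DGGML_HIP_MMQ_MFMA=OFF
    cmake --build build --config Release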