tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-29 13:43:38 -04:00)
llama.cpp/ggml at commit 4d0598e1445a64c99cf2faac72f8d5d023f1e6a1
Latest commit 4d0598e144 by uvos (2025-02-02 22:08:05 +01:00):
HIP: add GGML_CUDA_CC_IS_* for AMD families, as increasing cc architectures for AMD GPUs are not supersets of each other (#11601)
This fixes a bug where RDNA1 GPUs other than gfx1010 were not handled correctly.
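The range-based family checks this commit title describes can be illustrated with a small sketch. The constants, values, and helper names below are illustrative placeholders, not the actual GGML_CUDA_CC_* definitions in ggml/src.

    // Sketch only: shows why a family check needs an upper bound as well as a
    // lower bound when AMD compute-capability numbers are not supersets of each other.
    // The numeric values are hypothetical, not the real GGML_CUDA_CC_* offsets.
    #include <cstdio>

    constexpr int CC_RDNA1 = 1010;  // e.g. gfx1010, gfx1012, ... (hypothetical value)
    constexpr int CC_RDNA2 = 1030;  // e.g. gfx1030, gfx1031, ... (hypothetical value)
    constexpr int CC_RDNA3 = 1100;  // e.g. gfx1100, gfx1101, ... (hypothetical value)

    // A plain ">= CC_RDNA1" test would also match RDNA2/RDNA3 devices,
    // so each family check is bounded on both sides.
    constexpr bool cc_is_rdna1(int cc) { return cc >= CC_RDNA1 && cc < CC_RDNA2; }
    constexpr bool cc_is_rdna2(int cc) { return cc >= CC_RDNA2 && cc < CC_RDNA3; }

    int main() {
        printf("gfx1012 treated as RDNA1: %d\n", cc_is_rdna1(1012)); // 1: the case the fix covers
        printf("gfx1030 treated as RDNA1: %d\n", cc_is_rdna1(1030)); // 0: RDNA2 is excluded
        return 0;
    }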
Files:
  cmake           cmake: add ggml find package (#11369)                                                                                                 2025-01-26 12:07:48 -04:00
  include         CUDA: use mma PTX instructions for FlashAttention (#11583)                                                                            2025-02-02 19:31:09 +01:00
  src             HIP: add GGML_CUDA_CC_IS_* for AMD families, as increasing cc architectures for AMD GPUs are not supersets of each other (#11601)    2025-02-02 22:08:05 +01:00
  .gitignore      vulkan : cmake integration (#8119)                                                                                                    2024-07-13 18:12:39 +02:00
  CMakeLists.txt  cmake: add ggml find package (#11369)                                                                                                 2025-01-26 12:07:48 -04:00
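The cmake and CMakeLists.txt entries refer to the ggml find-package support added in #11369. As a rough usage sketch, a consumer project would call find_package(ggml) in its CMakeLists.txt, link the exported target (commonly named ggml::ggml; that name is an assumption, not confirmed by this listing), and then use the public ggml.h API, for example:

    // Minimal consumer sketch (assumptions noted above): allocate a ggml context,
    // create a tensor through the public ggml.h API, and release the context.
    #include <cstdio>
    #include <ggml.h>

    int main() {
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16 * 1024 * 1024,  // 16 MiB for tensor metadata and data
            /*.mem_buffer =*/ NULL,              // let ggml allocate the buffer itself
            /*.no_alloc   =*/ false,
        };

        struct ggml_context * ctx = ggml_init(params);
        if (ctx == NULL) {
            fprintf(stderr, "ggml_init failed\n");
            return 1;
        }

        // A small 4x3 float tensor, just to show the API shape.
        struct ggml_tensor * t = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3);
        printf("tensor has %lld elements (%zu bytes)\n", (long long) ggml_nelements(t), ggml_nbytes(t));

        ggml_free(ctx);
        return 0;
    }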