tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-27 10:38:56 -04:00)
Tree: 492d7f1ff77e116e44962b832cc3db24cfc46d24
Path: llama.cpp/ggml/src/ggml-cuda/vendors
Latest commit: bd40678df7 by Slobodan Josic, 2025-03-26 23:46:30 +01:00
HIP: Add support for RDNA4 targets (#12372)
cuda.h    CUDA: add BF16 support (#11093)                                            2025-01-06 02:33:52 +01:00
hip.h     HIP: Add support for RDNA4 targets (#12372)                                2025-03-26 23:46:30 +01:00
musa.h    CUDA: Improve flash decoding kernel GPU occupancy for BS=1 case (#12183)   2025-03-19 20:52:06 +01:00