tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-26 11:13:53 -04:00.
Files in llama.cpp/ggml at commit 0208355f42bdab88a08507ead4a6302790a08323
Latest commit 0208355f42 by Johannes Gäßler: "CUDA: fix race conditions FlashAttention kernels (#13438)", 2025-05-10 22:22:48 +02:00

..
cmake            …
include          CUDA: fix bad asserts for partial offload (#13337)          2025-05-06 13:58:51 +02:00
src              CUDA: fix race conditions FlashAttention kernels (#13438)   2025-05-10 22:22:48 +02:00
.gitignore       …
CMakeLists.txt   whisper: remove MSVC warnings pragmas (whisper/3090)        2025-05-07 17:28:36 +03:00
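
As context for the FlashAttention race-condition commit (#13438) listed above: the sketch below is a generic, hypothetical CUDA example of the class of bug such kernel fixes address — shared memory reused across reduction rounds without a barrier, so one thread can overwrite a value another thread has not yet read. The kernel name `block_max` and all data here are invented for illustration; this is not the actual patch from the ggml tree.

```cuda
#include <cmath>
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical illustration (not the actual #13438 patch): a block-wide max
// reduction in shared memory. The __syncthreads() barriers are what make it
// race-free; removing either one lets fast threads overwrite smem entries
// that slower threads are still reading.
__global__ void block_max(const float *in, float *out, int n) {
    __shared__ float smem[256];
    const int tid = threadIdx.x;

    // Each thread loads one element; pad with -inf past the end of the input.
    smem[tid] = (tid < n) ? in[tid] : -INFINITY;
    __syncthreads();  // every load must land before any thread reads smem

    // Tree reduction: halve the active range each round.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) {
            smem[tid] = fmaxf(smem[tid], smem[tid + stride]);
        }
        __syncthreads();  // barrier before the next round reuses smem
    }

    if (tid == 0) {
        *out = smem[0];
    }
}

int main() {
    const int n = 256;
    float h_in[n];
    float h_out = 0.0f;
    for (int i = 0; i < n; ++i) {
        h_in[i] = (float)(i % 97);  // maximum value is 96
    }

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    block_max<<<1, 256>>>(d_in, d_out, n);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);

    printf("block max = %.1f\n", h_out);  // expect 96.0
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Dropping either marked __syncthreads() makes the printed result nondeterministic across runs, which is the typical symptom that race-condition fixes in reduction-style kernels (the block-wide softmax accumulations in FlashAttention included) are meant to remove.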