tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-17 13:40:55 -04:00
llama.cpp/ggml at commit c026ba3c23765a648ca27c7a15ecf179f8e27f26
Latest commit: c026ba3c23 "vulkan: print shared memory size (#11719)" by Jeff Bolz, 2025-02-07 11:26:03 +01:00
Name            Last commit                                                    Date
cmake           …
include         CUDA: use mma PTX instructions for FlashAttention (#11583)    2025-02-02 19:31:09 +01:00
src             vulkan: print shared memory size (#11719)                     2025-02-07 11:26:03 +01:00
.gitignore      …
CMakeLists.txt  cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)    2025-02-04 12:59:15 +02:00
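
The CMakeLists.txt entry above refers to passing GGML_BUILD_NUMBER into the build. A minimal sketch of doing so at configure time, assuming GGML_BUILD_NUMBER is an ordinary CMake cache variable as the commit title suggests; the build directory name and the value 42 are placeholders, not taken from the repository:

    # Configure from the repository root, overriding the build number on the command line.
    # GGML_BUILD_NUMBER is the variable named in the commit title; 42 is a placeholder value.
    cmake -S . -B build -DGGML_BUILD_NUMBER=42
    # Build with the configured value.
    cmake --build build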