tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-18 05:56:00 -04:00
llama.cpp/ggml at commit 8a7e3bf17aa5a8412854787746c92a28623a8925
Latest commit: 8a7e3bf17a, "vulkan: initial support for IQ4_XS quantization" (#11501), Rémy O, 2025-02-06 07:09:59 +01:00
Name             Last commit                                                   Last updated
cmake            cmake: add ggml find package (#11369)                         2025-01-26 12:07:48 -04:00
include          CUDA: use mma PTX instructions for FlashAttention (#11583)    2025-02-02 19:31:09 +01:00
src              vulkan: initial support for IQ4_XS quantization (#11501)      2025-02-06 07:09:59 +01:00
.gitignore       …
CMakeLists.txt   cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)   2025-02-04 12:59:15 +02:00
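
Two entries above touch ggml's CMake packaging: #11369 adds a find-package config for ggml, and ggml/1096 allows a build number to be passed in at configure time. Below is a minimal downstream sketch, assuming the installed package config exports a `ggml::ggml` link target (that target name is an assumption, not stated in this listing):

```cmake
# Hypothetical CMakeLists.txt for a project consuming an installed ggml.
cmake_minimum_required(VERSION 3.14)
project(ggml-consumer C)

# Provided by the package config added in #11369; ggml must be installed
# somewhere CMake can locate it (e.g. via CMAKE_PREFIX_PATH).
find_package(ggml REQUIRED)

add_executable(demo main.c)

# ggml::ggml is an assumed exported target name.
target_link_libraries(demo PRIVATE ggml::ggml)
```

When configuring ggml itself, the ggml/1096 commit subject suggests the build number is injected as a configure-time variable, e.g. `cmake -B build -DGGML_BUILD_NUMBER=42` (the value 42 is illustrative).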