tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, last synced 2025-08-15 04:33:06 -04:00.
llama.cpp / ggml at commit e54d41befcc1575f4c898c5ff4ef43970cead75f
Latest commit: e54d41befc by compilade, 2025-08-08 17:48:26 -04:00
gguf-py : add Numpy MXFP4 de/quantization support (#15111)
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
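The second bullet of this commit refers to guarding the per-block scale computation when a block contains only zeros, where a naive log2(amax) would produce -inf. Below is a minimal Numpy sketch of that guard in an MXFP4-style quantizer; it is not the gguf-py code. The 32-element block, power-of-two shared scale, and E2M1 magnitude table follow the MXFP4 format, but the function names, nearest-value rounding, and unpacked one-code-per-byte layout are assumptions for illustration.

```python
import numpy as np

BLOCK = 32  # MXFP4 block size: one shared power-of-two scale per 32 elements
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # FP4 (E2M1) magnitudes

def quantize_mxfp4(x: np.ndarray):
    """Quantize a float array (length divisible by BLOCK) to (exponents, codes)."""
    x = x.reshape(-1, BLOCK)
    amax = np.abs(x).max(axis=1, keepdims=True)
    # Zero-amax guard: an all-zero block would hit log2(0) = -inf and poison
    # the shared exponent, so force exponent 0 and let every element map to 0.0.
    safe = np.where(amax > 0.0, amax, 1.0)
    e = np.where(amax > 0.0, np.floor(np.log2(safe)) - 2.0, 0.0)
    scaled = x / 2.0 ** e
    idx = np.abs(np.abs(scaled)[..., None] - E2M1).argmin(axis=-1)  # nearest magnitude
    sign = (scaled < 0.0).astype(np.uint8)
    return e.ravel().astype(np.int8), (sign << 3) | idx.astype(np.uint8)

def dequantize_mxfp4(e: np.ndarray, codes: np.ndarray) -> np.ndarray:
    """Reverse the mapping: 4-bit sign|magnitude codes times the block scale."""
    sign = np.where((codes >> 3) & 1, -1.0, 1.0)
    return sign * E2M1[codes & 0x7] * 2.0 ** e.astype(np.float64)[:, None]
```

With this guard, a round trip such as `dequantize_mxfp4(*quantize_mxfp4(np.zeros(32)))` returns all zeros rather than NaNs from an invalid exponent.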
Name | Last commit message | Date
cmake | ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) | 2025-08-07 13:45:41 +02:00
include | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
src | gguf-py : add Numpy MXFP4 de/quantization support (#15111) | 2025-08-08 17:48:26 -04:00
.gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00
CMakeLists.txt | HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103) | 2025-08-07 16:44:14 +02:00