tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-20 22:53:12 -04:00.
Files at commit 55ac3b7aeaf52f19786ed96e885d89521fc0f6c8: llama.cpp / gguf-py / gguf
History
Latest commit: e84b71c2c6 by Georgi Gerganov, 2024-05-23 10:00:21 +03:00
ggml : drop support for QK_K=64 (#7473)
* ggml : drop support for QK_K=64 ggml-ci
* opencl : restore QK_K=256 define
__init__.py
constants.py: ggml : drop support for QK_K=64 (#7473), 2024-05-23 10:00:21 +03:00
gguf_reader.py
gguf_writer.py
gguf.py
lazy.py
py.typed
quants.py
tensor_mapping.py
vocab.py
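The modules listed above make up the gguf Python package, which reads and writes GGUF model files. As a rough sketch of how the package is typically used (the file name "model.gguf" and the printed attributes are illustrative assumptions, not taken from this page), gguf_reader.py provides a GGUFReader class along these lines:

from gguf import GGUFReader

# Open a GGUF file for inspection; "model.gguf" is a placeholder path.
reader = GGUFReader("model.gguf")

# Metadata key/value fields stored in the file header.
for name, field in reader.fields.items():
    print(name, field.types)

# Tensor entries: name, shape, and quantization type of each tensor.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)

gguf_writer.py provides the complementary GGUFWriter used by the conversion scripts, constants.py holds the shared enums and key names, and tensor_mapping.py maps model-specific tensor names onto the GGUF naming scheme.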