tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-14 20:29:41 -04:00
llama.cpp / gguf-py / gguf
Latest commit: a39a8423f7a4951762f68c5e3ded022fd29f43f9 ("merge") by younesbelkada, 2025-07-04 14:48:22 +04:00
File | Last commit | Date
scripts | gguf-py : add support for sub_type (in arrays) in GGUFWriter add_key_value method (#13561) | 2025-05-29 15:36:05 +02:00
__init__.py | … | …
constants.py | more cleaning on python code | 2025-07-03 18:09:30 +04:00
gguf_reader.py | gguf-py : display the invalid gguf type (#13687) | 2025-05-21 16:33:54 +02:00
gguf_writer.py | more cleaning on python code | 2025-07-03 18:09:30 +04:00
gguf.py | … | …
lazy.py | gguf-py : support lazy tensor splitting (#12809) | 2025-04-08 09:03:07 +02:00
metadata.py | … | …
py.typed | … | …
quants.py | … | …
tensor_mapping.py | merge | 2025-07-04 14:48:22 +04:00
utility.py | gguf-py : fix SafetensorRemote return on undefined size (< 0) (#13841) | 2025-05-28 23:50:20 +02:00
vocab.py | gguf-py : add support for chat template jinja files (#14508) | 2025-07-02 21:02:35 +02:00
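
As a rough illustration of the change referenced in the scripts row above (#13561, sub_type support for array values in GGUFWriter.add_key_value), here is a minimal sketch of writing GGUF metadata with gguf-py. The exact add_key_value signature with a sub_type keyword is assumed from that commit message rather than verified against this revision, and the "example.*" key names and file name are made up for illustration.

```python
# Minimal sketch (assumptions noted above): writing GGUF metadata with gguf-py.
from gguf import GGUFWriter, GGUFValueType

writer = GGUFWriter("example.gguf", arch="llama")

# Scalar metadata entry with an explicit value type.
writer.add_key_value("example.block_count", 32, GGUFValueType.UINT32)

# Array metadata entry: vtype is ARRAY, sub_type describes the element type
# (the capability #13561's commit message refers to; keyword assumed).
writer.add_key_value(
    "example.layer_scales",
    [1.0, 0.5, 0.25],
    GGUFValueType.ARRAY,
    sub_type=GGUFValueType.FLOAT32,
)

# Standard GGUFWriter write/close sequence.
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```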