tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-14 04:17:53 -04:00
Files
Commit 7b50d589a863c7631135c1226f6eab65cb406212
Path: llama.cpp/gguf-py/gguf
History

Latest commit: Sigbjørn Skjæret, 238005c2dc: gguf-py : fix SpecialVocab parsing when post_processor is null (#14330), 2025-06-22 19:46:17 +02:00
Name               Last commit message                                                                          Last commit date
scripts            gguf-py : add support for sub_type (in arrays) in GGUFWriter add_key_value method (#13561)   2025-05-29 15:36:05 +02:00
__init__.py        …
constants.py       llama : improve sep token handling (#14272)                                                  2025-06-20 14:04:09 +02:00
gguf_reader.py     gguf-py : display the invalid gguf type (#13687)                                             2025-05-21 16:33:54 +02:00
gguf_writer.py     llama : improve sep token handling (#14272)                                                  2025-06-20 14:04:09 +02:00
gguf.py            …
lazy.py            …
metadata.py        …
py.typed           …
quants.py          …
tensor_mapping.py  model : add NeoBERT (#14164)                                                                 2025-06-16 14:53:41 +02:00
utility.py         gguf-py : fix SafetensorRemote return on undefined size (< 0) (#13841)                       2025-05-28 23:50:20 +02:00
vocab.py           gguf-py : fix SpecialVocab parsing when post_processor is null (#14330)                      2025-06-22 19:46:17 +02:00