tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-30 22:23:31 -04:00
llama.cpp/gguf-py/gguf at commit 749e0d27f0247337869f4698f59dd7fafba94326
Latest commit: acd6cb1c41 by Csaba Kecskemeti, "ggml : model card yaml tab->2xspace (#14819)", 2025-07-22 19:29:43 +03:00
scripts             gguf-py : dump bpw per layer and model in markdown mode (#14703)        2025-07-16 00:04:42 +02:00
__init__.py         …
constants.py        imatrix : use GGUF to store importance matrices (#9400)                 2025-07-19 12:51:22 -04:00
gguf_reader.py      gguf-py : display the invalid gguf type (#13687)                        2025-05-21 16:33:54 +02:00
gguf_writer.py      model : support LiquidAI LFM2 hybrid family (#14620)                    2025-07-11 20:27:01 +02:00
gguf.py             …
lazy.py             …
metadata.py         ggml : model card yaml tab->2xspace (#14819)                            2025-07-22 19:29:43 +03:00
py.typed            …
quants.py           …
tensor_mapping.py   model: add Ernie 4.5 MoE support (#14658)                               2025-07-17 23:15:32 +02:00
utility.py          gguf-py : fix SafetensorRemote return on undefined size (< 0) (#13841)  2025-05-28 23:50:20 +02:00
vocab.py            gguf-py : add support for chat template jinja files (#14508)            2025-07-02 21:02:35 +02:00