tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-30 06:03:37 -04:00)
Files at commit 518a01480eb3a7c80a4951b430db9dee55428310
llama.cpp/gguf-py/gguf

Latest commit 2c3f8b850a by Sigbjørn Skjæret: llama : support BailingMoE (Ling) (#12634), 2025-03-30 22:21:03 +02:00
File                Last commit                                                   Date
scripts             Refactor gguf scripts to improve metadata handling (#11909)   2025-02-26 08:04:48 -05:00
__init__.py         …
constants.py        llama : support BailingMoE (Ling) (#12634)                    2025-03-30 22:21:03 +02:00
gguf_reader.py      Refactor gguf scripts to improve metadata handling (#11909)   2025-02-26 08:04:48 -05:00
gguf_writer.py      llama: Add support for RWKV v7 architecture (#12412)          2025-03-18 07:27:50 +08:00
gguf.py             …
lazy.py             …
metadata.py         convert : fix Norway problem when parsing YAML (#12114)       2025-02-28 17:44:46 +01:00
py.typed            …
quants.py           …
tensor_mapping.py   llama : support BailingMoE (Ling) (#12634)                    2025-03-30 22:21:03 +02:00
utility.py          repo : update links to new url (#11886)                       2025-02-15 16:40:57 +02:00
vocab.py            convert : Support chat_template.json (#12460)                 2025-03-19 08:58:13 +01:00
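The directory above holds the `gguf` Python package, whose `gguf_reader.py` and `gguf_writer.py` implement full reading and writing of GGUF model files. As a minimal sketch of what such a reader starts with, the following parses only the fixed-size GGUF header described in the GGUF specification (magic bytes "GGUF", then a little-endian u32 version, u64 tensor count, and u64 metadata key/value count). The function name `parse_gguf_header` is my own illustration, not an API from this package.

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    # Fixed GGUF header layout per the GGUF spec: 4-byte magic "GGUF",
    # little-endian u32 version, u64 tensor count, u64 metadata kv count.
    # (Illustrative helper only; the package's GGUFReader does far more.)
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

# Synthetic header for illustration: version 3, 2 tensors, 5 metadata pairs.
header = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(parse_gguf_header(header))
```

In a real file the metadata key/value pairs and tensor descriptors follow this header; that variable-length part is what `gguf_reader.py` handles.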