tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-12 19:37:53 -04:00)
llama.cpp/gguf-py/gguf at commit a5165a6ca93a16270deda5feea7a1ae3f876b793
Latest commit: Francis Couture-Harpin 16202d6f96 Merge branch 'master' into compilade/imatrix-batched-chunks (2025-04-13 12:10:02 -04:00)
Name                 Last commit message                                                                        Date
scripts/             Refactor gguf scripts to improve metadata handling (#11909)                                2025-02-26 08:04:48 -05:00
__init__.py          …
constants.py         Merge branch 'master' into compilade/imatrix-batched-chunks                                2025-04-13 12:10:02 -04:00
gguf_reader.py       Refactor gguf scripts to improve metadata handling (#11909)                                2025-02-26 08:04:48 -05:00
gguf_writer.py       llama : Support llama 4 text-only (#12791)                                                 2025-04-07 23:06:44 +02:00
gguf.py              …
lazy.py              gguf-py : support lazy tensor splitting (#12809)                                           2025-04-08 09:03:07 +02:00
metadata.py          convert : fix Norway problem when parsing YAML (#12114)                                    2025-02-28 17:44:46 +01:00
py.typed             …
quants.py            ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                          2024-09-05 21:48:47 -04:00
tensor_mapping.py    llama-model : add Glm4Model implementation for GLM-4-0414 (#12867)                         2025-04-11 12:10:10 +02:00
utility.py           convert : ability to lazy-load safetensors remotely without downloading to disk (#12820)   2025-04-10 17:24:44 +02:00
vocab.py             convert : Support chat_template.json (#12460)                                              2025-03-19 08:58:13 +01:00
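For context on what these modules provide, here is a minimal sketch of inspecting a GGUF file with the gguf Python package contained in this directory. It assumes the GGUFReader class from gguf_reader.py and its fields/tensors attributes; the path model.gguf is a placeholder, and none of this is confirmed by the listing above.

```python
# Minimal sketch (assumed API): dump metadata keys and tensor records from a GGUF file
# using gguf_reader.py's GGUFReader. "model.gguf" is a placeholder path.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")

# Key/value metadata (architecture, tokenizer settings, etc.), as written by gguf_writer.py
for key, field in reader.fields.items():
    print(key, field.types)

# Tensor records: name, shape, and quantization type (see constants.py / quants.py)
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```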