tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-07-30 06:03:37 -04:00
8e186ef0e764c7a620e402d1f76ebad60bf31c49
llama.cpp/gguf-py/gguf
Latest commit: eb0f5c28d3 by Emmanuel Ferdman, 2025-05-21 16:33:54 +02:00
gguf-py : display the invalid gguf type (#13687)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
scripts/           | gguf-py : fix disconnect-before-connect in editor-gui (#13569)                           | 2025-05-15 18:47:10 +02:00
__init__.py        | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)    | 2024-07-18 20:40:15 +10:00
constants.py       | mtmd : add vision support for llama 4 (#13282)                                           | 2025-05-19 13:04:14 +02:00
gguf_reader.py     | gguf-py : display the invalid gguf type (#13687)                                         | 2025-05-21 16:33:54 +02:00
gguf_writer.py     | convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)             | 2025-05-02 17:17:15 +02:00
gguf.py            | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                | 2023-11-11 08:04:50 +03:00
lazy.py            | gguf-py : support lazy tensor splitting (#12809)                                         | 2025-04-08 09:03:07 +02:00
metadata.py        | convert : fix Norway problem when parsing YAML (#12114)                                  | 2025-02-28 17:44:46 +01:00
py.typed           | …                                                                                        |
quants.py          | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                        | 2024-09-05 21:48:47 -04:00
tensor_mapping.py  | mtmd : add vision support for llama 4 (#13282)                                           | 2025-05-19 13:04:14 +02:00
utility.py         | convert : ability to lazy-load safetensors remotely without downloading to disk (#12820) | 2025-04-10 17:24:44 +02:00
vocab.py           | convert : Support chat_template.json (#12460)                                            | 2025-03-19 08:58:13 +01:00