Mirror of https://github.com/ggml-org/llama.cpp.git
llama.cpp/gguf-py/gguf
Commit 8960fe86ae075c846c5df8848230d1904ba8877f

Latest commit: c1386c936e gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761) by pmysl, 2024-04-21 15:49:30 +03:00
File                Last commit                                                                           Date
__init__.py         gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
constants.py        gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761)                                       2024-04-21 15:49:30 +03:00
gguf_reader.py      gguf : add support for I64 and F64 arrays (#6062)                                     2024-03-15 10:46:51 +02:00
gguf_writer.py      convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
gguf.py             gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
py.typed            convert : various script cleanups/fixes + merges and special token handling (#2842)  2023-08-30 11:25:50 +03:00
tensor_mapping.py   llama : add qwen2moe (#6074)                                                          2024-04-16 18:40:48 +03:00
vocab.py            convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
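
The read side of the package lives in gguf_reader.py (added by the #3981 refactor, which enabled reading and modifying existing GGUF files). A minimal sketch of inspecting a file with GGUFReader; the path model.gguf is a made-up example:

# Minimal sketch, assuming the gguf-py package from this directory is
# installed (pip install gguf); "model.gguf" is a hypothetical local file.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # hypothetical path

# Key/value metadata parsed from the file header.
for name, field in reader.fields.items():
    print(name, field.types)

# Tensor records: name, shape, and quantization type.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)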
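
The write side is gguf_writer.py, used by the convert scripts. A minimal sketch, assuming the GGUFWriter API from this package; the output path, architecture name, and tensor contents below are invented for illustration:

# Minimal sketch; "out.gguf", the "example" architecture, and the
# tensor contents are placeholders, not a real conversion.
import numpy as np
from gguf import GGUFWriter

writer = GGUFWriter("out.gguf", "example")
writer.add_name("tiny-demo")  # standard key/value metadata
writer.add_tensor("weights", np.ones((4, 4), dtype=np.float32))

# Header, key/value section, and tensor data are written in that order.
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()

The three write_* calls mirror the GGUF on-disk layout: file header first, then the metadata key/value section, then the tensor data.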