tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-27 12:05:03 +00:00
gg/build-linux-static
llama.cpp/gguf-py/gguf
Last commit: 08f10f69c3 "llama : remove notion of CLS token (#11064)" (ggml-ci) by Georgi Gerganov, 2025-01-12 12:15:53 +02:00
scripts              gguf-py: fixed local detection of gguf package (#11180)                                2025-01-11 11:42:31 +02:00
__init__.py          convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)  2024-07-18 20:40:15 +10:00
constants.py         llama : remove notion of CLS token (#11064)                                            2025-01-12 12:15:53 +02:00
gguf_reader.py       gguf-py : numpy 2 newbyteorder fix (#9772)                                             2024-12-13 16:48:44 +02:00
gguf_writer.py       llama : remove notion of CLS token (#11064)                                            2025-01-12 12:15:53 +02:00
gguf.py              gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)              2023-11-11 08:04:50 +03:00
lazy.py              gguf-py : simplify support for quant types (#8838)                                     2024-08-08 13:33:09 -04:00
metadata.py          fix gguf-py: Conversion error when multiple licenses are configured (#9807)            2024-11-24 01:09:22 +01:00
py.typed             convert : various script cleanups/fixes + merges and special token handling (#2842)   2023-08-30 11:25:50 +03:00
quants.py            ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                      2024-09-05 21:48:47 -04:00
tensor_mapping.py    llama: add support for QRWKV6 model architecture (#11001)                              2025-01-10 09:58:08 +08:00
utility.py           gguf-py : fix some metadata name extraction edge cases (#8591)                         2024-07-20 21:58:49 -04:00
vocab.py             convert : handle tokenizer merges format from transformers 4.45 (#9696)                2024-10-03 17:22:15 +03:00