tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-26 10:09:41 -04:00
Files: llama.cpp/gguf-py/gguf at commit 17b363afd3575f8f9d025a35d2abb75f528a64c2
Latest commit: Georgi Gerganov 08f10f69c3 llama : remove notion of CLS token (#11064) (ggml-ci) 2025-01-12 12:15:53 +02:00
..
scripts              gguf-py: fixed local detection of gguf package (#11180)                       2025-01-11 11:42:31 +02:00
__init__.py          …
constants.py         llama : remove notion of CLS token (#11064)                                   2025-01-12 12:15:53 +02:00
gguf_reader.py       gguf-py : numpy 2 newbyteorder fix (#9772)                                    2024-12-13 16:48:44 +02:00
gguf_writer.py       llama : remove notion of CLS token (#11064)                                   2025-01-12 12:15:53 +02:00
gguf.py              …
lazy.py              …
metadata.py          fix gguf-py: Conversion error when multiple licenses are configured (#9807)   2024-11-24 01:09:22 +01:00
py.typed             …
quants.py            ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)              2024-09-05 21:48:47 -04:00
tensor_mapping.py    llama: add support for QRWKV6 model architecture (#11001)                      2025-01-10 09:58:08 +08:00
utility.py           …
vocab.py             convert : handle tokenizer merges format from transformers 4.45 (#9696)       2024-10-03 17:22:15 +03:00