tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-04 18:16:58 +00:00
llama.cpp / gguf-py / gguf (at commit d0c08040b6c8bebeade7b8d5764df6cf901678d5)
Latest commit: 08f10f69c3 by Georgi Gerganov, "llama : remove notion of CLS token (#11064)", ggml-ci, 2025-01-12 12:15:53 +02:00
..
scripts              gguf-py: fixed local detection of gguf package (#11180)                        2025-01-11 11:42:31 +02:00
__init__.py          …
constants.py         llama : remove notion of CLS token (#11064)                                    2025-01-12 12:15:53 +02:00
gguf_reader.py       gguf-py : numpy 2 newbyteorder fix (#9772)                                     2024-12-13 16:48:44 +02:00
gguf_writer.py       llama : remove notion of CLS token (#11064)                                    2025-01-12 12:15:53 +02:00
gguf.py              …
lazy.py              …
metadata.py          fix gguf-py: Conversion error when multiple licenses are configured (#9807)    2024-11-24 01:09:22 +01:00
py.typed             …
quants.py            ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)              2024-09-05 21:48:47 -04:00
tensor_mapping.py    llama: add support for QRWKV6 model architecture (#11001)                      2025-01-10 09:58:08 +08:00
utility.py           …
vocab.py             convert : handle tokenizer merges format from transformers 4.45 (#9696)        2024-10-03 17:22:15 +03:00