tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-16 21:22:37 -04:00
Files: llama.cpp/gguf-py/gguf at commit 1e35d619a6fb0b9c5e3dc955345980ff056ddbaf
History
Latest commit: 87c2e8b279 by Nindaleth, gguf-dump : support i-quants (#5841), Co-authored-by: Black_Fox <radekliska@gmail.com>, 2024-03-03 10:43:42 +02:00
__init__.py        gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                            2023-11-11 08:04:50 +03:00
constants.py       gguf-dump : support i-quants (#5841)                                                                 2024-03-03 10:43:42 +02:00
gguf_reader.py     gguf : fix "general.alignment" type in gguf_reader.py (#5136)                                        2024-01-26 11:10:28 +02:00
gguf_writer.py     convert-hf : make model class definitions self-contained (#5825)                                     2024-03-02 12:21:47 -05:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                            2023-11-11 08:04:50 +03:00
py.typed           …
tensor_mapping.py  llama : add StarCoder2 support (#5795)                                                               2024-03-01 21:30:46 +02:00
vocab.py           fix(gguf-py): special tokens are no longer skipped when add_<token>_token is set to false (#5487)   2024-02-15 14:14:37 +01:00