tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-11 11:05:39 -04:00
llama.cpp / gguf-py / gguf
Commit: 8da46278e1a57107591653275f8e03a281de94f0
Latest commit 36eed0c42c by Galunid (2023-11-14 11:17:12 +01:00): stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| constants.py | stablelm : StableLM support (#3586) | 2023-11-14 11:17:12 +01:00 |
| gguf_reader.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| gguf_writer.py | gguf-py: gguf_writer: Use bytearray to build metadata (#4051) | 2023-11-12 16:39:37 -07:00 |
| gguf.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| py.typed | convert : various script cleanups/fixes + merges and special token handling (#2842) | 2023-08-30 11:25:50 +03:00 |
| tensor_mapping.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| vocab.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |