tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-09-22 15:08:48 -04:00)
Files at b5280: llama.cpp/gguf-py/gguf
Latest commit: 2f567611c0 by Jared Van Bortel — llama-model : support Qwen2 embedding models and pooling_mode_lasttoken (#13245), 2025-05-02 11:42:30 -04:00
scripts/             gguf-py : GGUF Editor GUI - Python + Qt6 (#12930)                                      2025-04-18 20:30:41 +02:00
__init__.py          …
constants.py         llama-model : support Qwen2 embedding models and pooling_mode_lasttoken (#13245)       2025-05-02 11:42:30 -04:00
gguf_reader.py       …
gguf_writer.py       convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)           2025-05-02 17:17:15 +02:00
gguf.py              …
lazy.py              …
metadata.py          …
py.typed             …
quants.py            …
tensor_mapping.py    convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)           2025-05-02 17:17:15 +02:00
utility.py           …
vocab.py             …