tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-07-28 13:20:27 -04:00
Commit: 8dbbd75754d43ec7b4bbe42fb287cc2553fdf0e9
Path: llama.cpp/gguf-py/gguf
Latest commit: 4524290e87 by Douglas Hanley, "Use correct type of pooling for embedding models (#5500)", 2024-02-15 12:21:49 -05:00
File                 Last commit                                                                                          Date
__init__.py          gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                            2023-11-11 08:04:50 +03:00
constants.py         Use correct type of pooling for embedding models (#5500)                                             2024-02-15 12:21:49 -05:00
gguf_reader.py       gguf : fix "general.alignment" type in gguf_reader.py (#5136)                                        2024-01-26 11:10:28 +02:00
gguf_writer.py       Use correct type of pooling for embedding models (#5500)                                             2024-02-15 12:21:49 -05:00
gguf.py              gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                            2023-11-11 08:04:50 +03:00
py.typed             convert : various script cleanups/fixes + merges and special token handling (#2842)                  2023-08-30 11:25:50 +03:00
tensor_mapping.py    llama : add support for Nomic Embed (#5468)                                                          2024-02-13 12:03:53 -05:00
vocab.py             fix(gguf-py): special tokens are no longer skipped when add_<token>_token is set to false (#5487)    2024-02-15 14:14:37 +01:00