tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-19 22:36:13 -04:00
2ec70c964b4c4ce4b265c3c95cf2f9475747259c
llama.cpp/tools
Latest commit: 19f68fa5a4 by compilade, 2025-08-04 23:26:52 +02:00
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
* imatrix : add warning when suffix is not .gguf for GGUF imatrix
* imatrix : only warn about suffix when output format is unspecified
batched-bench      | llama : add high-throughput mode (#14363)                              | 2025-07-16 16:35:42 +03:00
cvector-generator  | …                                                                      |
export-lora        | mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (#14503) | 2025-07-25 13:08:04 +02:00
gguf-split         | scripts : make the shell scripts cross-platform (#14341)               | 2025-06-30 10:17:18 +02:00
imatrix            | imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076) | 2025-08-04 23:26:52 +02:00
llama-bench        | llama-bench: rename DB table name from test to llama_bench (#15003)    | 2025-08-02 17:20:40 +08:00
main               | llama : fix --reverse-prompt crashing issue (#14794)                   | 2025-07-21 17:38:36 +08:00
mtmd               | mtmd : support MiniCPM-V 4.0 (#14983)                                  | 2025-07-31 17:22:17 +02:00
perplexity         | llama : deprecate llama_kv_self_ API (#14030)                          | 2025-06-06 14:11:15 +03:00
quantize           | quantize : fix confusing error message if ftype is invalid (#15071)    | 2025-08-04 18:11:02 +02:00
rpc                | …                                                                      |
run                | cmake : do not search for curl libraries by ourselves (#14613)         | 2025-07-10 15:29:05 +03:00
server             | server: enable token array inputs for OAI API (#15001)                 | 2025-08-02 10:12:41 +02:00
tokenize           | …                                                                      |
tts                | …                                                                      |
CMakeLists.txt     | …                                                                      |