tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-31 22:53:52 -04:00
llama.cpp/tools at commit f078c79865f3b047bd6d8c4925bdef40d6cdff56
Latest commit: batched-bench : fix pp batch contents (Georgi Gerganov, 2025-05-13 07:55:30 +03:00)
Name                Last commit                                                                          Date
batched-bench       batched-bench : fix pp batch contents                                                2025-05-13 07:55:30 +03:00
cvector-generator   …
export-lora         …
gguf-split          …
imatrix             imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389)   2025-05-09 11:53:58 +02:00
llama-bench         llama-bench : add defrag-thold, check for invalid ranges (#13487)                    2025-05-13 00:31:37 +02:00
main                llama : do not crash if there is no CPU backend (#13395)                             2025-05-09 13:02:07 +02:00
mtmd                clip : cap max image size 1024 for qwen vl model (#13478)                            2025-05-12 15:06:51 +02:00
perplexity          …
quantize            …
rpc                 llama : do not crash if there is no CPU backend (#13395)                             2025-05-09 13:02:07 +02:00
run                 …
server              server : allow content to be null in oaicompat_completion_params_parse (#13477)      2025-05-12 13:56:42 +02:00
tokenize            …
tts                 …
CMakeLists.txt      …