tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-16 13:12:51 -04:00)
llama.cpp / tools (at commit 15e6125a397f6086c1dfdf7584acdb7c730313dc)
Latest commit 15e6125a39 by Xuan-Son Nguyen (2025-05-10 19:57:54 +02:00): mtmd : add hard limit on image resolution for qwen2vl / qwen2.5vl (#13434) * fix typo
| Name | Last commit | Date |
| --- | --- | --- |
| batched-bench | … | |
| cvector-generator | … | |
| export-lora | … | |
| gguf-split | … | |
| imatrix | imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389) | 2025-05-09 11:53:58 +02:00 |
| llama-bench | … | |
| main | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| mtmd | mtmd : add hard limit on image resolution for qwen2vl / qwen2.5vl (#13434) | 2025-05-10 19:57:54 +02:00 |
| perplexity | context : remove logits_all flag (#13284) | 2025-05-08 14:26:50 +03:00 |
| quantize | … | |
| rpc | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| run | llama-run: add support for downloading models from ModelScope (#13370) | 2025-05-09 10:25:50 +01:00 |
| server | server : update docs (#13432) | 2025-05-10 18:44:49 +02:00 |
| tokenize | … | |
| tts | … | |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |