tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-14 20:29:41 -04:00)
llama.cpp / tools (at commit 0ccc1213549e39ef4c1affb1bf5f49651ef4ce48)
Latest commit: 0ccc121354 by welix, "mtmd : fix the calculation of n_tokens for smolvlm (#13381)" (co-authored by Taichi Nishimura <Taichi.A.Nishimura@sony.com>), 2025-05-08 15:03:53 +02:00
Name               Last commit                                                  Date
batched-bench      …
cvector-generator  …
export-lora        …
gguf-split         …
imatrix            context : remove logits_all flag (#13284)                    2025-05-08 14:26:50 +03:00
llama-bench        …
main               context : remove logits_all flag (#13284)                    2025-05-08 14:26:50 +03:00
mtmd               mtmd : fix the calculation of n_tokens for smolvlm (#13381)  2025-05-08 15:03:53 +02:00
perplexity         context : remove logits_all flag (#13284)                    2025-05-08 14:26:50 +03:00
quantize           …
rpc                rpc : use backend registry, support dl backends (#13304)     2025-05-04 21:25:43 +02:00
run                …
server             context : allow cache-less context for embeddings (#13108)   2025-05-08 14:28:33 +03:00
tokenize           …
tts                …
CMakeLists.txt     mtmd : rename llava directory to mtmd (#13311)               2025-05-05 16:02:55 +02:00