tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-12 19:37:53 -04:00
Branch: xsn/graph_ffn_gate_fix
Path: llama.cpp / tools
Latest commit: 8681d3ddb3, Revert "fix build on windows" (reverts commit fc420d3c7e), Xuan Son Nguyen, 2025-05-06 13:41:55 +02:00
Entry              Last commit                                                                              Date
batched-bench      …
cvector-generator  …
export-lora        …
gguf-split         …
imatrix            imatrix: fix oob writes if src1 is not contiguous (#13286)                               2025-05-04 00:50:37 +02:00
llama-bench        …
main               …
mtmd               Revert "fix build on windows"                                                            2025-05-06 13:41:55 +02:00
perplexity         …
quantize           …
rpc                rpc : use backend registry, support dl backends (#13304)                                 2025-05-04 21:25:43 +02:00
run                llama : move end-user examples to tools directory (#13249)                               2025-05-02 20:27:13 +02:00
server             sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (#13264)  2025-05-05 22:12:19 +02:00
tokenize           …
tts                …
CMakeLists.txt     mtmd : rename llava directory to mtmd (#13311)                                           2025-05-05 16:02:55 +02:00