tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-08 18:04:54 -04:00)
llama.cpp / docs (b5560)
Latest commit: b3a89c3d9e by Jiří Podivín, 2025-05-31 18:58:35 +02:00
docs : Note about necessity of having libcurl installed for standard build. (#13945)
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
backend/             CANN: Add the basic supports of Flash Attention kernel (#13627)                                                    2025-05-26 10:20:18 +08:00
development/         llama : move end-user examples to tools directory (#13249)                                                         2025-05-02 20:27:13 +02:00
multimodal/          mtmd : rename llava directory to mtmd (#13311)                                                                     2025-05-05 16:02:55 +02:00
android.md           …
build.md             docs : Note about necessity of having libcurl installed for standard build. (#13945)                               2025-05-31 18:58:35 +02:00
docker.md            musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)  2025-05-21 09:58:49 +08:00
function-calling.md  docs: remove link for llama-cli function calling (#13810)                                                          2025-05-27 08:52:40 -03:00
install.md           install : add macports (#12518)                                                                                    2025-03-23 10:21:48 +02:00
llguidance.md        …
multimodal.md        mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784)                                        2025-05-27 14:06:10 +02:00