tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-12 19:37:53 -04:00
Files
llama.cpp/docs at commit 1caae7fc6c77551cb1066515e0f414713eebb367
History

Latest commit: ea1431b0fa by Xuan-Son Nguyen, 2025-06-03 13:09:36 +02:00
docs : add "Quick start" section for new users (#13862)

* docs : add "Quick start" section for non-technical users
* rm flox
* Update README.md
| Name | Last commit | Date |
|------|-------------|------|
| backend/ | CANN: Add the basic supports of Flash Attention kernel (#13627) | 2025-05-26 10:20:18 +08:00 |
| development/ | … | |
| multimodal/ | … | |
| android.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| build.md | docs : add "Quick start" section for new users (#13862) | 2025-06-03 13:09:36 +02:00 |
| docker.md | musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647) | 2025-05-21 09:58:49 +08:00 |
| function-calling.md | docs: remove link for llama-cli function calling (#13810) | 2025-05-27 08:52:40 -03:00 |
| install.md | docs : add "Quick start" section for new users (#13862) | 2025-06-03 13:09:36 +02:00 |
| llguidance.md | … | |
| multimodal.md | mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784) | 2025-05-27 14:06:10 +02:00 |