tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-10 18:54:09 -04:00
llama.cpp / docs (at commit 482548716f664f76e325ded58c9e8b7563e5e23a)
Latest commit: ea1431b0fa by Xuan-Son Nguyen, 2025-06-03 13:09:36 +02:00
docs : add "Quick start" section for new users (#13862)
* docs : add "Quick start" section for non-technical users
* rm flox
* Update README.md
backend/             CANN: Add the basic supports of Flash Attention kernel (#13627)               2025-05-26 10:20:18 +08:00
development/         …
multimodal/          …
android.md           …
build.md             docs : add "Quick start" section for new users (#13862)                       2025-06-03 13:09:36 +02:00
docker.md            …
function-calling.md  docs: remove link for llama-cli function calling (#13810)                     2025-05-27 08:52:40 -03:00
install.md           docs : add "Quick start" section for new users (#13862)                       2025-06-03 13:09:36 +02:00
llguidance.md        …
multimodal.md        mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784)   2025-05-27 14:06:10 +02:00