tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-26 19:55:04 +00:00
llama.cpp/docs at commit 5a8ae3053ced350ed300ba91600519fcad1c6ba7

Latest commit: Xuan-Son Nguyen, ea1431b0fa (2025-06-03 13:09:36 +02:00)
docs : add "Quick start" section for new users (#13862)
* docs : add "Quick start" section for non-technical users
* rm flox
* Update README.md
Name                  Last commit                                                                    Date
backend/              CANN: Add the basic supports of Flash Attention kernel (#13627)                2025-05-26 10:20:18 +08:00
development/          llama : move end-user examples to tools directory (#13249)                     2025-05-02 20:27:13 +02:00
multimodal/           mtmd : rename llava directory to mtmd (#13311)                                 2025-05-05 16:02:55 +02:00
android.md            repo : update links to new url (#11886)                                        2025-02-15 16:40:57 +02:00
build.md              docs : add "Quick start" section for new users (#13862)                        2025-06-03 13:09:36 +02:00
docker.md             musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)  2025-05-21 09:58:49 +08:00
function-calling.md   docs: remove link for llama-cli function calling (#13810)                      2025-05-27 08:52:40 -03:00
install.md            docs : add "Quick start" section for new users (#13862)                        2025-06-03 13:09:36 +02:00
llguidance.md         llguidance build fixes for Windows (#11664)                                    2025-02-14 12:46:08 -08:00
multimodal.md         mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784)    2025-05-27 14:06:10 +02:00