tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-29 04:35:05 +00:00
Files: llama.cpp/docs (at commit 363757628848a27a435bbf22ff9476e9aeda5f40)
Latest commit: b3a89c3d9e by Jiří Podivín: docs : Note about necessity of having libcurl installed for standard build. (#13945)
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
2025-05-31 18:58:35 +02:00
Name | Last commit | Date
backend | CANN: Add the basic supports of Flash Attention kernel (#13627) | 2025-05-26 10:20:18 +08:00
development | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
multimodal | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00
android.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
build.md | docs : Note about necessity of having libcurl installed for standard build. (#13945) | 2025-05-31 18:58:35 +02:00
docker.md | musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647) | 2025-05-21 09:58:49 +08:00
function-calling.md | docs: remove link for llama-cli function calling (#13810) | 2025-05-27 08:52:40 -03:00
install.md | install : add macports (#12518) | 2025-03-23 10:21:48 +02:00
llguidance.md | llguidance build fixes for Windows (#11664) | 2025-02-14 12:46:08 -08:00
multimodal.md | mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784) | 2025-05-27 14:06:10 +02:00