tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-04 00:08:38 -04:00)
llama.cpp/docs at commit 5d46babdc2d4675d96ebcf23cac098a02f0d30cc
Latest commit: 1b2aaf28ac by Grzegorz Grasza, 2025-07-01 15:44:11 +02:00: Add Vulkan images to docker.md (#14472) … Right now it's not easy to find those.
| Name | Last commit | Date |
|------|-------------|------|
| backend | sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973) | 2025-06-25 18:09:55 +02:00 |
| development | … | |
| multimodal | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |
| android.md | … | |
| build-s390x.md | docs: update s390x documentation + add faq (#14389) | 2025-06-26 12:41:41 +02:00 |
| build.md | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317) | 2025-06-25 23:49:04 +02:00 |
| docker.md | Add Vulkan images to docker.md (#14472) | 2025-07-01 15:44:11 +02:00 |
| function-calling.md | docs : remove WIP since PR has been merged (#13912) | 2025-06-15 08:06:37 +02:00 |
| install.md | docs : add "Quick start" section for new users (#13862) | 2025-06-03 13:09:36 +02:00 |
| llguidance.md | … | |
| multimodal.md | docs : Update multimodal.md (#14122) | 2025-06-13 15:17:53 +02:00 |