tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-19 09:08:04 +00:00
Path: llama.cpp/docs at commit bee28421be25fd447f61cb6db64d556cbfce32ec
Latest commit: 1b2aaf28ac — Add Vulkan images to docker.md (#14472), Grzegorz Grasza, 2025-07-01 15:44:11 +02:00
Commit message body: "Right now it's not easy to find those."
| Name                | Last commit                                                           | Date                      |
|---------------------|-----------------------------------------------------------------------|---------------------------|
| backend/            | sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973) | 2025-06-25 18:09:55 +02:00 |
| development/        | …                                                                     | …                         |
| multimodal/         | …                                                                     | …                         |
| android.md          | repo : update links to new url (#11886)                               | 2025-02-15 16:40:57 +02:00 |
| build-s390x.md      | docs: update s390x documentation + add faq (#14389)                   | 2025-06-26 12:41:41 +02:00 |
| build.md            | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)                  | 2025-06-25 23:49:04 +02:00 |
| docker.md           | Add Vulkan images to docker.md (#14472)                               | 2025-07-01 15:44:11 +02:00 |
| function-calling.md | …                                                                     | …                         |
| install.md          | docs : add "Quick start" section for new users (#13862)               | 2025-06-03 13:09:36 +02:00 |
| llguidance.md       | …                                                                     | …                         |
| multimodal.md       | …                                                                     | …                         |