tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-16 07:38:28 +00:00
llama.cpp / ggml at commit 8875523eb311cac832bfda0c581e852292185ae9

Latest commit: 8875523eb3 by Jeff Bolz, "vulkan: support softmax/FA batch and broadcast" (#14449), 2025-07-02 15:48:33 +03:00
Name            Last commit                                                           Date
cmake           ggml-cpu : rework weak alias on apple targets (#14146)                2025-06-16 13:54:15 +08:00
include         ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (#14435)  2025-07-02 15:48:33 +03:00
src             vulkan: support softmax/FA batch and broadcast (#14449)               2025-07-02 15:48:33 +03:00
.gitignore      vulkan : cmake integration (#8119)                                    2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)                  2025-06-25 23:49:04 +02:00
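
The include and src entries above track the broadcast support added to ggml_soft_max_ext and ggml_flash_attn_ext (#14435) and its Vulkan implementation (#14449): the softmax/attention mask may now have singleton head or batch dimensions that broadcast against the input. The sketch below illustrates the idea against the public ggml CPU API; the tensor shapes, fill values, and scale are illustrative assumptions, not taken from the repository's tests.

    /*
     * Minimal sketch (not from the repository): call ggml_soft_max_ext with a
     * mask whose head dimension broadcasts against the input, the capability
     * referenced by #14435/#14449. Shapes and values are assumptions.
     */
    #include <stdio.h>
    #include "ggml.h"
    #include "ggml-cpu.h"

    int main(void) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16*1024*1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // attention scores: ne = [n_kv, n_tokens, n_head, n_batch] = [8, 4, 2, 2]
        struct ggml_tensor * scores = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 8, 4, 2, 2);
        // mask with ne[2] = 1: one mask broadcast across both heads
        struct ggml_tensor * mask   = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 8, 4, 1, 2);

        for (int64_t i = 0; i < ggml_nelements(scores); ++i) {
            ((float *) scores->data)[i] = 0.1f*(float)(i % 7); // arbitrary logits
        }
        for (int64_t i = 0; i < ggml_nelements(mask); ++i) {
            ((float *) mask->data)[i] = 0.0f;                  // 0 = nothing masked
        }

        // scale would be 1/sqrt(head_dim) in attention; max_bias = 0 disables ALiBi
        struct ggml_tensor * probs = ggml_soft_max_ext(ctx, scores, mask, 0.125f, 0.0f);

        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, probs);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

        printf("probs[0..1] = %f %f\n", ((float *) probs->data)[0],
                                        ((float *) probs->data)[1]);
        ggml_free(ctx);
        return 0;
    }

Before #14435 the mask's higher dimensions had to match the input exactly; the broadcast form lets a single per-batch mask serve every attention head without materializing copies.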