tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-08 18:04:54 -04:00
llama.cpp/ggml at commit a9aedf46b4930d94cd6b79860af9700a58373023
Latest commit a9aedf46b4 by Akarshan: "SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate" (2025-06-22 10:37:26 +05:30)
cmake           ggml-cpu : rework weak alias on apple targets (#14146)                    2025-06-16 13:54:15 +08:00
include         implement swapped variants (cpu/cuda)                                     2025-06-22 10:37:25 +05:30
src             SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate   2025-06-22 10:37:26 +05:30
.gitignore      …
CMakeLists.txt  ggml : disable warnings for tests when using MSVC (ggml/1273)             2025-06-18 09:59:21 +03:00
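The latest commits above mention fused GEGLU, SWIGLU, and REGLU kernels for a single up+gate projection. As background only (not taken from this repository's code), a gated linear unit applies an activation to the "gate" half of a projection and multiplies it elementwise with the "up" half; the three variants differ only in the activation used. A minimal Python sketch of that math, with all function names chosen here for illustration:

```python
import math

def gelu(x):
    # tanh-approximation GELU, a common formulation (assumed, for illustration)
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def silu(x):
    # SiLU / swish: x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def glu_variant(act, gate, up):
    # Elementwise act(gate) * up. A fused kernel computes this in one pass
    # instead of materializing act(gate) first; this sketch shows only the math.
    return [act(g) * u for g, u in zip(gate, up)]

gate = [1.0, -2.0, 0.5]
up = [2.0, 3.0, -1.0]
print(glu_variant(relu, gate, up))   # REGLU → [2.0, 0.0, -0.5]
```

Swapping which half of the projection acts as gate versus up is presumably what the "swapped variants" commit refers to; in this sketch that is just `glu_variant(act, up, gate)`.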