tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-29 13:43:38 -04:00
llama.cpp / ggml (at commit 1e55890e4059fc9ff183af92cd1b021e0dfa7b41)

Latest commit: 1e55890e40 "CUDA: add fused rms norm (#14800)" by Aman Gupta, 2025-07-25 21:24:50 +08:00
Name            Last commit message                                       Last commit date
cmake           ggml-cpu : rework weak alias on apple targets (#14146)    2025-06-16 13:54:15 +08:00
include         ggml: Add initial WebGPU backend (#14521)                 2025-07-16 18:18:51 +03:00
src             CUDA: add fused rms norm (#14800)                         2025-07-25 21:24:50 +08:00
.gitignore      …
CMakeLists.txt  ggml: Add initial WebGPU backend (#14521)                 2025-07-16 18:18:51 +03:00