tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-26 11:13:53 -04:00.
llama.cpp/ggml at commit 207c22ec2d6d793fc70830138617d1e016c5151c

Latest commit: 207c22ec2d "ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (#12970)" by Alan Gray, 2025-04-17 15:19:42 +02:00
Name            Last commit message                                                     Date
..
cmake           scripts : update sync + fix cmake merge                                 2025-03-27 10:09:29 +02:00
include         ggml : add bilinear upscale support (ggml/1185)                         2025-04-11 00:17:47 +03:00
src             ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (#12970)  2025-04-17 15:19:42 +02:00
.gitignore      …
CMakeLists.txt  CUDA/HIP: Share the same unified memory allocation logic. (#12934)      2025-04-15 11:20:38 +02:00
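
To inspect this exact tree locally, one can clone the upstream repository this mirror tracks and check out the commit listed above. A minimal sketch, assuming git is installed and the upstream repository is publicly cloneable:

    # Clone the upstream repository (the mirror URL above should also work)
    git clone https://github.com/ggml-org/llama.cpp.git
    cd llama.cpp

    # Check out the commit this directory listing reflects
    git checkout 207c22ec2d6d793fc70830138617d1e016c5151c

    # List the ggml directory shown above
    ls ggml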