tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-26 11:13:53 -04:00
llama.cpp / ggml at commit 3ee6382d48b07b31e64983969c16019490e19740
Latest commit: 3ee6382d48 "cuda : fix CUDA_FLAGS not being applied" (#10403) by Diego Devesa, 2024-11-19 14:29:38 +01:00
Name            Last commit message                                                        Last commit date
include         ggml: new optimization interface (ggml/988)                                2024-11-17 08:30:29 +02:00
src             cuda : fix CUDA_FLAGS not being applied (#10403)                           2024-11-19 14:29:38 +01:00
.gitignore      vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  sycl : Add option to set the SYCL architecture for all targets (#10266)    2024-11-19 08:02:23 +00:00