tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-18 00:27:31 +00:00
Files in llama.cpp/ggml at commit fd123cfead49eb32e386e26b8ef7a6d41554dda5
Latest commit fd123cfead by 0cc4m: "Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues" (#12434), 2025-03-18 07:21:40 +01:00
cmake           cmake : enable building llama.cpp using system libggml (#12321)                                        2025-03-17 11:05:23 +02:00
include         llama: Add support for RWKV v7 architecture (#12412)                                                   2025-03-18 07:27:50 +08:00
src             Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)    2025-03-18 07:21:40 +01:00
.gitignore      …
CMakeLists.txt  opencl: use OpenCL C standard supported by the device (#12221)                                         2025-03-10 09:57:00 -07:00