tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-06-29 20:45:04 +00:00)
Branch: 0cc4m/vulkan-device-architecture
Path: llama.cpp / ggml / src / ggml-vulkan
Latest commit: 25840747e6 by 0cc4m, "Vulkan: Add device architecture enum and logic to recognize AMD generations" (2025-03-08 08:04:45 +00:00)
cmake              fix: ggml: fix vulkan-shaders-gen build (#10448)                              2025-01-15 14:17:42 +01:00
vulkan-shaders     vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)                          2025-02-06 07:15:30 +01:00
CMakeLists.txt     fix: ggml: fix vulkan-shaders-gen build (#10448)                              2025-01-15 14:17:42 +01:00
ggml-vulkan.cpp    Vulkan: Add device architecture enum and logic to recognize AMD generations   2025-03-08 08:04:45 +00:00