tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, last synced 2025-08-14 20:29:41 -04:00.
Contents of llama.cpp/ggml at commit 891c63956dbfbdf7ed2ecd0b5882cff49dbfe90f.
Latest commit: 891c63956d, "vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking" (#12273), Jeff Bolz, 2025-03-17 10:41:59 +01:00.
cmake/           cmake : enable building llama.cpp using system libggml (#12321)                            2025-03-17 11:05:23 +02:00
include/         ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154)                  2025-03-06 02:26:10 +01:00
src/             vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (#12273)   2025-03-17 10:41:59 +01:00
.gitignore       vulkan : cmake integration (#8119)                                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt   opencl: use OpenCL C standard supported by the device (#12221)                             2025-03-10 09:57:00 -07:00
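Several of the commits above name concrete techniques; the hedged sketches below illustrate them without claiming to reproduce the ggml code.

The cmake/ change (#12321) lets llama.cpp link against a system-installed libggml instead of the vendored copy. A minimal usage sketch, assuming the option is named LLAMA_USE_SYSTEM_GGML as suggested by the PR; verify the exact flag against your checkout:

```sh
# Assumed workflow for #12321; the option name is taken from the PR
# and may differ in your checkout. Install ggml system-wide first,
# then configure llama.cpp to use it instead of the bundled copy:
cmake -B build -DLLAMA_USE_SYSTEM_GGML=ON
cmake --build build --config Release
```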
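The IQ1 speedup under include/ (#12154) relies on BMI2 bit-manipulation instructions. The sketch below is not the ggml-cpu kernel; it only shows the core trick, using _pdep_u64 to scatter eight packed bits into eight byte lanes in one instruction, where a portable version would loop with shifts and masks when unpacking 1-bit weights:

```cpp
// Illustrative sketch only, not the actual ggml-cpu kernel from #12154.
// _pdep_u64 deposits the low bits of its first operand at the set-bit
// positions of the mask, so eight packed bits land in eight byte lanes.
// Requires BMI2; compile with -mbmi2 on GCC/Clang.
#include <cstdint>
#include <immintrin.h>

// Expand 8 packed bits into 8 bytes, each holding 0x00 or 0x01.
static inline uint64_t expand_bits_to_bytes(uint8_t packed) {
    return _pdep_u64(packed, 0x0101010101010101ULL);
}
```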
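The latest commit (#12273, under src/) pads the N dimension of the B matrix so coopmat2 matmul tiles never straddle the matrix edge, letting the Vulkan kernel drop per-tile bounds checks. A host-side sketch of the padding idea, with an assumed tile size and illustrative names rather than the actual Vulkan/coopmat2 shader path:

```cpp
// Minimal sketch of the padding idea in #12273; TILE_N and all names
// here are illustrative, and this is host-side C++ rather than the
// actual ggml Vulkan code.
#include <cstddef>
#include <cstring>
#include <vector>

constexpr size_t TILE_N = 16; // assumed coopmat tile width

// Copy a k-row by n-column matrix into a buffer whose row length is
// rounded up to a multiple of TILE_N, zero-filling the tail so tiles
// can be loaded without bounds checks.
std::vector<float> pad_n_dim(const float *b, size_t k, size_t n) {
    const size_t n_pad = (n + TILE_N - 1) / TILE_N * TILE_N;
    std::vector<float> out(k * n_pad, 0.0f);
    for (size_t row = 0; row < k; ++row) {
        std::memcpy(&out[row * n_pad], &b[row * n], n * sizeof(float));
    }
    return out;
}
```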
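The CMakeLists.txt entry (#12221) has the OpenCL backend compile kernels with whatever OpenCL C standard the device reports, rather than a hard-coded one. A sketch of that approach; pick_cl_std is an illustrative helper, not a ggml function, and error handling is omitted:

```cpp
// Sketch of the device-driven approach named in #12221. The returned
// string would be passed in the options argument of clBuildProgram.
#include <cstring>
#include <CL/cl.h>

// Choose a -cl-std= build option matching what the device supports,
// based on the CL_DEVICE_OPENCL_C_VERSION string (e.g. "OpenCL C 3.0").
const char *pick_cl_std(cl_device_id dev) {
    char ver[64] = {0};
    clGetDeviceInfo(dev, CL_DEVICE_OPENCL_C_VERSION, sizeof(ver), ver, nullptr);
    if (std::strstr(ver, "3.0")) return "-cl-std=CL3.0";
    if (std::strstr(ver, "2.0")) return "-cl-std=CL2.0";
    return "-cl-std=CL1.2"; // conservative fallback
}
```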