tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-28 13:20:27 -04:00
llama.cpp / ggml
Commit: eb1776b15a32d832f1266deeeab75b9d255c5849
Latest commit: 658987cfc9 by Johannes Gäßler, 2025-04-22 21:27:40 +02:00
CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014)
* CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID
* fix logic for RoPE support, CUDA graphs
..
cmake           scripts : update sync + fix cmake merge                                2025-03-27 10:09:29 +02:00
include         rpc : add RPC_CMD_HELLO (#12955)                                       2025-04-18 10:13:42 +03:00
src             CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014)                   2025-04-22 21:27:40 +02:00
.gitignore      …
CMakeLists.txt  ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (#12871)  2025-04-21 18:13:51 +02:00