tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-29 05:33:37 -04:00
llama.cpp/ggml at commit e4dd31ff892627665d3c53c5d1a03c0d282d9d45
Latest commit: 5b0b8d8cfb by Alberto Cabrera Pérez, 2024-07-09 22:03:15 +08:00
sycl : Reenabled mmvq path for the SYCL Nvidia Backend (#8372)
  * SYCL : Reenabled mmvq path for the SYCL Nvidia Backend
  * Reduced verbosity of comment
cmake                         …
include                       Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258)  2024-07-02 12:18:10 -04:00
src                           sycl : Reenabled mmvq path for the SYCL Nvidia Backend (#8372)  2024-07-09 22:03:15 +08:00
CMakeLists.txt                ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (#8140)  2024-06-26 21:34:14 +02:00
ggml_vk_generate_shaders.py   py : type-check all Python scripts with Pyright (#8341)  2024-07-07 15:04:39 -04:00
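
The CMakeLists.txt entry above refers to build-time toggles for ggml's CUDA backend. As a rough sketch of how those options might be passed when configuring a build: only GGML_CUDA_USE_GRAPHS and GGML_CUDA_FORCE_CUBLAS come from the commit message itself; the GGML_CUDA flag and the rest of the invocation are assumptions about a typical cmake workflow.

    # Sketch: configure and build with the CUDA backend enabled.
    # GGML_CUDA_USE_GRAPHS / GGML_CUDA_FORCE_CUBLAS are the options named in #8140;
    # everything else here is an assumed, conventional cmake invocation.
    cmake -B build \
        -DGGML_CUDA=ON \
        -DGGML_CUDA_USE_GRAPHS=ON \
        -DGGML_CUDA_FORCE_CUBLAS=OFF
    cmake --build build --config Release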