tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-06-26 19:55:04 +00:00
Directory: llama.cpp/ggml at commit f09b7cb609d80b8031803f89255991dc8b35db69
Latest commit: f09b7cb609, "rm get_work_group_size() by local cache for performance (#8286)"
Author: Neo Zhang Jianyu
Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
Date: 2024-07-05 10:32:29 +08:00
Name                          Last commit                                                                                            Date
cmake                         llama : reorganize source code + improve CMake (#8006)                                                2024-06-26 18:33:02 +03:00
include                       Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258)   2024-07-02 12:18:10 -04:00
src                           rm get_work_group_size() by local cache for performance (#8286)                                       2024-07-05 10:32:29 +08:00
CMakeLists.txt                ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (#8140)                2024-06-26 21:34:14 +02:00
ggml_vk_generate_shaders.py   llama : reorganize source code + improve CMake (#8006)                                                2024-06-26 18:33:02 +03:00