tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-08-01 15:09:32 -04:00)
llama.cpp/ggml/include at commit b7c11d36e605b35206901d0e21905f1b99508e33
History: latest commit a15ef8f8a0 by Johannes Gäßler, "CUDA: fix partial offloading for ne0 % 256 != 0 (#8572)", 2024-07-18 23:48:47 +02:00
ggml-alloc.h    …
ggml-backend.h  CUDA: fix partial offloading for ne0 % 256 != 0 (#8572)                                               2024-07-18 23:48:47 +02:00
ggml-blas.h     …
ggml-cann.h     [CANN] Add Ascend NPU backend (#6035)                                                                 2024-07-17 14:23:50 +03:00
ggml-cuda.h     …
ggml-kompute.h  …
ggml-metal.h    Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258)   2024-07-02 12:18:10 -04:00
ggml-rpc.h      …
ggml-sycl.h     …
ggml-vulkan.h   …
ggml.h          [CANN] Add Ascend NPU backend (#6035)                                                                 2024-07-17 14:23:50 +03:00
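
For orientation, these headers form ggml's public C API: ggml.h declares the core context and tensor types, ggml-alloc.h and ggml-backend.h cover graph allocation and backend dispatch, and the remaining ggml-*.h headers expose individual backends (CUDA, Metal, Vulkan, SYCL, CANN, BLAS, Kompute, RPC). A minimal sketch of the core header in use, assuming the C API as declared in ggml.h at this commit; the arena size and tensor shape below are illustrative, not prescribed:

```c
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // Reserve a small memory arena; ggml allocates tensor metadata
    // and (with no_alloc = false) tensor data inside this buffer.
    struct ggml_init_params params = {
        .mem_size   = 16 * 1024 * 1024,  // 16 MiB, arbitrary for this sketch
        .mem_buffer = NULL,              // let ggml allocate the arena itself
        .no_alloc   = false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Create a 1-D float32 tensor with 8 elements inside the context.
    struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8);
    printf("tensor elements: %lld\n", (long long) ggml_nelements(t));

    // Frees the context and everything allocated within it.
    ggml_free(ctx);
    return 0;
}
```

Such a program would be linked against the ggml library built from this tree (for example via the project's CMake targets).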